
Archive for 2020

Barefoot Abstraction


What is abstraction?

Abstraction is about simplifying things – identifying what’s important without worrying too much about detail.

A school timetable is an abstraction of what happens in a typical week. It shows key information about classes, teachers, rooms and times but ignores further layers of detail such as learning objectives and activities.

An example of a school timetable.

A class timetable is an abstraction of the school day for one class.
Much is omitted, to provide a simplified summary.

Why is abstraction important?

Abstraction allows us to think about things to different degrees of detail. It’s a powerful tool in computer science, where it’s used to manage the complexity in much of what’s designed and created.

We can think of abstractions as layers, or boxes within boxes, allowing us to disregard what’s going on inside each of them. Software comprises layers of code, each hiding the complexity of the next. We can see items of hardware as “black boxes”, disregarding their internal workings unless we choose to look deeper.
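These layers can be illustrated with a short sketch (Python is used here purely as an illustration; the function names are our own):

```python
# Each function is a layer that hides the detail of the layer below it.

def greet_user(name):
    """Top layer: the caller sees only a simple action."""
    display(format_greeting(name))

def format_greeting(name):
    """Middle layer: hides how the message is built."""
    return "Hello, " + name + "!"

def display(message):
    """Bottom layer: hides how output actually happens."""
    print(message)

greet_user("Sam")  # prints "Hello, Sam!"
```

A caller of `greet_user` can treat the lower layers as black boxes, looking inside only if they choose to.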

What does abstraction look like in the curriculum?

Working with word problems in maths often involves identifying key information and thinking how to represent it in the more abstract language of arithmetic. A book can be abstracted to a story plan. Music is abstracted to notation. Models are abstractions. In geography, a map can be considered an abstraction of the complexity of the environment, and maps of different scales provide a sense of the layered nature of abstraction in computing.

Pupils can also gain experience of abstraction when playing computer games, appreciating that these interactive simulations are based on real life but are simpler.

A class storyboard for an episode of Dr Who.

A story plan summarises a story, showing just its key features – an abstraction.

Musical notation for a song called "First Pop Song".

Music is abstracted to musical notation.

More resources on abstraction by BBC

Barefoot Patterns / Generalisation


What are patterns?

Patterns are everywhere. By identifying patterns, we can create rules and solve more-general problems.

Children notice patterns in how teachers react to their behaviour. Weather patterns feed into our forecasts. In maths, pupils can measure the area of a rectangle drawn on graph paper by counting the number of unit squares within it, but this could be difficult or long-winded for rectangles which are very small or very large. A more elegant solution is to multiply the length of the rectangle by the width – and it works for all rectangles. Once pupils know this formula, it’s much faster than counting squares.

In computing, the method of looking for a general approach to a class of problems is called generalisation.

A diagram on graph paper, showing how to calculate the area of a rectangle.

Pupils learn mathematical formulae: these are generalisations.
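The length-times-width rule is itself a small generalisation, which can be sketched in code (a minimal Python illustration; the function name is our own):

```python
def rectangle_area(length, width):
    """One general rule that works for every rectangle,
    however large - no counting of unit squares needed."""
    return length * width

print(rectangle_area(3, 4))      # 12 unit squares
print(rectangle_area(250, 180))  # 45000 - impractical to count by hand
```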

Why are patterns important?

Computer scientists strive to solve problems quickly and efficiently, and they seek methods applicable elsewhere. If they see a pattern across an algorithm, they’ll look to create a single module of repeatable code, sometimes called a function or procedure – many programming languages have shared libraries of common functions. The recognition of patterns in input data plays an essential role in machine learning. This is an important application of computer science which plays a part in systems for, amongst many other things, algorithmic stock-market trading and the recognition of faces and vehicle number plates.
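Turning a repeated pattern into a single module of repeatable code might look like this (an illustrative Python sketch; the greeting task is invented for the example):

```python
def greet(name):
    """A single module of repeatable code - a function.
    Without it, the same greeting logic would be written
    out once for every pupil."""
    return "Hello, " + name + "!"

for pupil in ["Asha", "Ben", "Cara"]:
    print(greet(pupil))
```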

What do patterns look like in the curriculum?

From an early age, children become familiar with repeated phrases in nursery rhymes. Later, they notice repeated structures in stories. We ask pupils to look for and learn from patterns to help them better understand the world. They might recognise common rules (and exceptions) for spellings, and repeating lines in many musical forms. In maths, pupils typically undertake investigations in which they spot patterns and deduce generalised results.

The sequence of numbers 16, 35, 73, 149 with a blank space for the next number.

Can pupils spot the pattern to reveal the number-sequence rule?
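One rule that fits this sequence – double the previous term and add three – can be checked with a short sketch (Python used as an illustration):

```python
def next_term(n):
    """Double the previous term and add three."""
    return 2 * n + 3

sequence = [16]
for _ in range(4):
    sequence.append(next_term(sequence[-1]))

print(sequence)  # [16, 35, 73, 149, 301]
```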

More resources on pattern by BBC

Barefoot Decomposition


What is decomposition?

In computing, decomposition is the process of breaking down a task into smaller, more-manageable parts. It has many advantages. It helps us manage large projects and makes the process of solving a complex problem less daunting and much easier to take on.

With decomposition, a task can be tackled by several people working together as a team, each member contributing their own insights and skills to particular aspects of the project.

As a simple example, making breakfast can be decomposed into a number of smaller tasks, as below. Two people could make this breakfast at the same time, one making tea and one toast.

A branching diagram showing how a breakfast of tea and toast is prepared.

Two people could make this breakfast at the same time: one could make the tea and one the toast
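The same decomposition can be written as separate sub-tasks in code (a Python sketch; the individual steps are our own reading of the diagram):

```python
def make_tea():
    return ["boil water", "add tea bag to pot", "pour water", "add milk"]

def make_toast():
    return ["slice bread", "toast bread", "butter toast"]

def make_breakfast():
    # The two sub-tasks are independent, so two people
    # could carry them out at the same time.
    return {"tea": make_tea(), "toast": make_toast()}

for task, steps in make_breakfast().items():
    print(task, "->", steps)
```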

An illustration of a plant can be decomposed into its constituent parts, and each of those can be further decomposed for extra detail:

A labelled diagram of a flowering plant with progressive close-ups of parts, to add detail.

A labelled diagram of a flowering plant. We find out more as we decompose.

Why is decomposition important?

Decomposing problems into their smaller parts is not unique to computing: it’s quite standard in engineering, in design and in project management.

Software development is a complex process, and so the ability to break down a large project into its component parts is essential: think of all the different elements that need to be combined to produce a program like PowerPoint.

The same principle is true of computer hardware: smartphones and laptop computers are each composed of many components, produced independently by separate manufacturers before being assembled into the finished product.

A photo of the parts of a tablet computer laid out.

A tablet can be broken down (decomposed) into smaller components. (With thanks to iFixit.com.)

What does decomposition look like in the curriculum?

Putting on a school play or a cake sale, creating a news report, tackling a maths problem, making a sandwich: any task or project will need to be decomposed into smaller, more-manageable parts. Decomposition is everywhere in school practice.

Pupils are always being asked to find out more; whenever they’re labelling drawings, adding detail to concept maps, creating instructions, sketching lifecycles or marking out timelines, they’re breaking down something and thinking about detail, developing their decomposition skills.

The concept map below is a decomposition of children’s knowledge of the ancient Egyptians.

A concept map of facts about the ancient Egyptians, with further detail on pharaohs.

In a concept map, pupils can add detail to their knowledge and understanding of the ancient Egyptians.

Pupils’ decompositions might include just the things they currently know about. They should learn to check that they haven’t neglected aspects of a topic. They can consider further decomposing each aspect into sub-parts.

A computer game might be decomposed into plot, characters and setting. Next, the characters could be decomposed into actions and appearance, and the setting into background, obstacles and scoring objects. In developing a robotic toy, pupils will consider the hardware components – both individually and as a system – and the algorithms and code required. In computing, technology = hardware + algorithms + code.
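That layered decomposition of a game could be recorded as a nested structure (a Python sketch; the example values are invented):

```python
game = {
    "plot": "collect all the stars",  # invented example value
    "characters": {
        "actions": ["move", "jump"],
        "appearance": ["sprite", "colour"],
    },
    "setting": {
        "background": "forest",
        "obstacles": ["river", "wall"],
        "scoring objects": ["coins", "stars"],
    },
}

# Each part can be decomposed further into sub-parts.
for part in game:
    print(part)
```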

More resources on decomposition by BBC

Barefoot Algorithms


What are algorithms?

An algorithm is a sequence of instructions or a set of rules to get something done.

You’ll favour a particular route home from school – you can think of it as an algorithm. There are plenty of alternative routes home, and there’ll be an algorithm to describe each one of those too. There are even algorithms for deciding the shortest or fastest route, such as those that form the basis of satnav systems.

Algorithms are written for a human, rather than for a computer to understand. In this way, algorithms differ from programs.

A sequence of instructions – an algorithm – for making toast.


A set of rules – an algorithm – for multiplying a number by ten.
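The usual classroom rule – move each digit one place to the left and put a zero in the units column – can be written directly as code (a Python sketch for whole numbers; the function name is our own):

```python
def multiply_by_ten(n):
    """Append a zero in the units column - the digit-shifting
    rule, for whole numbers only."""
    return int(str(n) + "0")

print(multiply_by_ten(47))  # 470
```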

Why are algorithms important?

Computer scientists strive for algorithms which solve problems in the most-effective and efficient ways – getting the most-accurate results, in the quickest time, using the fewest resources (such as memory and processing power).

Search engines such as Bing or Google use algorithms to order a list of search results so that, more often than not, the result we want is at the top of the front page of results. Your Facebook newsfeed is derived from your friends’ status updates and other activity, but it only shows that activity which the EdgeRank algorithm assesses to be of most interest to you. Your recommendations from Amazon, Netflix and eBay are algorithmically generated, based in part on what other people are interested in.

What do algorithms look like in the curriculum?

Helping pupils get an idea of what an algorithm is needn’t be confined to computing lessons. A recipe in cookery, instructional writing in English, the method for a science experiment: each can be considered an algorithm.

For various activities, pupils will already follow a sequence of steps – e.g. getting ready for lunch or going to PE. In maths, one’s approach to mental arithmetic might be an implementation of a simple algorithm.

A drawing of some flowers with four steps to planting a seed.

A sequence of instructions – an algorithm – for how to plant a seed.

A chart showing the different spellings of the "or" phoneme, and where they are used.

Spelling rules for the “or” phoneme.

More resources on algorithms by BBC

Barefoot Logic


What is logic?

Logical reasoning helps us explain why something happens.

If you set up two computers in the same way, giving them the same instructions (the program) and the same input, you can pretty much guarantee the same output. This is because computers don’t make things up as they go along or work differently depending on how they feel – they are predictable. Because of this, we can use logical reasoning to work out exactly what a program or computer system will do.
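This predictability is easy to demonstrate: run the same instructions on the same input twice and the result never changes (a trivial Python illustration):

```python
def program(x):
    """A fixed set of instructions: no randomness, no moods."""
    return x * 2 + 1

# Same program, same input -> same output, every time.
print(program(10) == program(10))  # True
```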

Children pick up logic quickly. By watching others and experimenting, even very young children soon develop a mental model of how technology works. A child learns that clicking a certain button brings up, for example, a list of different games to play, and that tapping a certain part of a screen produces a reliably predictable response.

At its heart, logical reasoning is about being able to explain why something is the way it is.

Why is logic important?

Deep inside a computer’s central processing unit, every action performed is reduced to logical operations based on electrical signals – everything a computer does is controlled by logic.

Software engineers use logical reasoning all the time. In developing new, effective code, they draw on mental models of the workings of computer hardware, operating systems and programming languages. They’ll also rely on logical reasoning when testing the new software, searching for mistakes (“bugs”) and fixing them (debugging).

What does logic look like in the curriculum?

There are many ways that children draw on logical reasoning in their computing lessons and across the wider curriculum. In English, pupils might use it to explain a character’s actions in a story so far, and to predict what the character will do next. In science, pupils should be able to explain how they’ve arrived at certain conclusions from the results of experiments. In history, pupils should understand how our knowledge is constructed from a variety of sources, and they should be able to discuss the logical connections between cause and effect. In design and technology, pupils need to reason about which material is best suited to each part of a project. In philosophy sessions, they’ll use logical reasoning to analyse arguments.

A sketch of a model truck, labelled with notes about the materials to use.

Pupils explain reasons for their choice of materials in design and technology projects.

Two statements and an incorrect conclusion drawn from them.

In sessions on philosophy for children, pupils use logical reasoning to analyse arguments.

BBC Evaluating

Evaluating solutions

Before a solution can be programmed, it is important to make sure that it properly solves the problem, and that it does so efficiently. This is done through evaluation.

What is evaluation?

Once a solution has been designed using computational thinking, it is important to make sure that the solution is fit for purpose.
Evaluation is the process that allows us to make sure our solution does the job it has been designed to do and to think about how it could be improved.
Once written, an algorithm should be checked to make sure it:

  • is easily understood – is it fully decomposed?
  • is complete – does it solve every aspect of the problem?
  • is efficient – does it solve the problem, making best use of the available resources (eg as quickly as possible/using least space)?
  • meets any design criteria we have been given

If an algorithm meets these four criteria it is likely to work well. The algorithm can then be programmed.

Failure to evaluate can make it difficult to write a program. Evaluation helps to make sure that as few difficulties as possible are faced when programming the solution.

Why do we need to evaluate our solutions?

Computational thinking helps to solve problems and design a solution – an algorithm – that can be used to program a computer. However, if the solution is faulty, it may be difficult to write the program. Even worse, the finished program might not solve the problem correctly.
Evaluation allows us to consider the solution to a problem, make sure that it meets the original design criteria, produces the correct solution and is fit for purpose – before programming begins.

What happens if we don’t evaluate our solutions?

Once a solution has been decided and the algorithm designed, it can be tempting to miss out the evaluating stage and to start programming immediately. However, without evaluation any faults in the algorithm will not be picked up, and the program may not correctly solve the problem, or may not solve it in the best way.

Faults may be minor and not very important. For example, if a solution to the question ‘how to draw a cat?’ was created and this had faults, all that would be wrong is that the cat drawn might not look like a cat. However, faults can have huge – and terrible – effects, eg if the solution for an aeroplane autopilot had faults.

Ways that solutions can be faulty

A solution may fail because:

  • it is not fully understood – we may not have properly decomposed the problem
  • it is incomplete – some parts of the problem may have been left out accidentally
  • it is inefficient – it may be too complicated or too long
  • it does not meet the original design criteria – so it is not fit for purpose

A faulty solution may include one or more of these errors.

More resources on evaluation by Barefoot 

BBC Algorithms test

Visit the BBC site to take the Algorithms test.

More tests on Computational Thinking by BBC 

Computational Thinking Techniques

 

LTTA 1 Meeting Agenda Rhodes, 27-31 JAN 2020


BBC Representing an algorithm: Pseudocode and flowcharts

There are two main ways that algorithms can be represented – pseudocode and flowcharts.

Representing an algorithm: Pseudocode

Most programs are developed using programming languages. These languages have specific syntax that must be used so that the program will run properly. Pseudocode is not a programming language; it is a simple way of describing a set of instructions that does not have to use specific syntax.

Writing in pseudocode is similar to writing in a programming language. Each step of the algorithm is written on a line of its own in sequence. Usually, instructions are written in uppercase, variables in lowercase and messages in sentence case.

In pseudocode, INPUT asks a question. OUTPUT prints a message on screen.

A simple program could be created to ask someone their name and age, and to make a comment based on these. This program represented in pseudocode would look like this:

OUTPUT 'What is your name?'
INPUT user inputs their name
STORE the user's input in the name variable
OUTPUT 'Hello' + name
OUTPUT 'How old are you?'
INPUT user inputs their age
STORE the user's input in the age variable
IF age >= 70 THEN
  OUTPUT 'You are aged to perfection!'
ELSE
  OUTPUT 'You are a spring chicken!'

In programming, > means ‘greater than’, < means ‘less than’, ≥ means ‘greater than or equal to’ and ≤ means ‘less than or equal to’.
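For comparison, the same algorithm in a real programming language must follow exact syntax. A sketch in Python (the function name and sample inputs are our own; the interactive INPUT steps are simulated with parameters so the logic is easy to follow):

```python
def run_program(name, age):
    """The pseudocode translated step by step: OUTPUT becomes a
    collected message, STORE becomes a variable, and
    IF ... THEN ... ELSE becomes Python's if/else."""
    messages = ["Hello " + name]
    if age >= 70:
        messages.append("You are aged to perfection!")
    else:
        messages.append("You are a spring chicken!")
    return messages

# Simulated inputs in place of the interactive INPUT steps.
for line in run_program("Ada", 72):
    print(line)
```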

Representing an algorithm: Flowcharts

A flowchart is a diagram that represents a set of instructions. Flowcharts normally use standard symbols to represent the different instructions. There are few real rules about the level of detail needed in a flowchart. Sometimes flowcharts are broken down into many steps to provide a lot of detail about exactly what is happening. Sometimes they are simplified so that a number of steps occur in just one step.

Flowchart symbols

A chart of the standard flowchart symbols.

A simple program could be created to ask someone their name and age, and to make a comment based on these. This program represented as a flowchart would look like this:

The name-and-age program represented as a flowchart.
