Real-World Problems for Secondary School Mathematics Students

Case Studies

Edited by

Juergen Maasz University of Linz, Austria

John O’Donoghue University of Limerick, Ireland

SENSE PUBLISHERS ROTTERDAM/BOSTON/TAIPEI

A C.I.P. record for this book is available from the Library of Congress.

ISBN: 978-94-6091-541-3 (paperback)
ISBN: 978-94-6091-542-0 (hardback)
ISBN: 978-94-6091-543-7 (e-book)

Published by: Sense Publishers, P.O. Box 21858, 3001 AW Rotterdam, The Netherlands www.sensepublishers.com

Printed on acid-free paper

Image – The Living Bridge, University of Limerick. © Patrick Johnson, 2008. “The Living Bridge – An Droichead Beo”. The Living Bridge is the longest pedestrian bridge in Ireland and links both sides of the University of Limerick’s campus across the river Shannon. The bridge is constructed of 6 equal spans and follows a 350 metre long curved alignment on a 300 metre radius.

All Rights Reserved © 2011 Sense Publishers No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

TABLE OF CONTENTS

Preface ........ vii
1. Modelling in Probability and Statistics: Key Ideas and Innovative Examples ........ 1
   Manfred Borovcnik and Ramesh Kapadia
2. Problems for the Secondary Mathematics Classrooms on the Topic of Future Energy Issues ........ 45
   Astrid Brinkmann and Klaus Brinkmann
3. Coding Theory ........ 67
   Tim Brophy
4. Travelling to Mars: A Very Long Journey: Mathematical Modelling in Space Travelling ........ 87
   Jean Charpin
5. Modelling the Storage Capacity of 2D Pixel Mosaics ........ 99
   Simone Göttlich and Thorsten Sickenberger
6. Mathematics for Problems in the Everyday World ........ 113
   Günter Graumann
7. Political Polls and Surveys: The Statistics Behind the Headlines ........ 123
   Ailish Hannigan
8. Correlations between Reality and Modelling: “Dirk Nowitzki Playing for Dallas in the NBA (U.S.A.)” ........ 137
   Herbert Henning and Benjamin John
9. Exploring the Final Frontier: Using Space Related Problems to Assist in the Teaching of Mathematics ........ 155
   Patrick Johnson
10. What are the Odds? ........ 173
    Patrick Johnson and John O’Donoghue
11. Models for Logistic Growth Processes (e.g. Fish Population in a Pond, Number of Mobile Phones within a Given Population) ........ 187
    Astrid Kubicek
12. Teaching Aspects of School Geometry Using the Popular Games Rugby and Snooker ........ 209
    Jim Leahy
13. Increasing Turnover? Streamlining Working Conditions? A Possible Way to Optimize Production Processes as a Topic in Mathematics Lessons ........ 221
    Juergen Maasz
14. Mathematics and Eggs: Does this Topic Make Sense in Education? ........ 239
    Juergen Maasz and Hans-Stefan Siller
15. Digital Images: Filters and Edge Detection ........ 257
    Thomas Schiller
16. Modelling and Technology: Modelling in Mathematics Education Meets New Challenges ........ 273
    Hans-Stefan Siller
List of Contributors ........ 281

PREFACE

We should start by pointing out that this is not a mathematics textbook – this is an ideas book, full of ideas for teaching real-world problems to older students (15 years and older, upper secondary level). These contributions by no means exhaust all the possibilities for working with real-world problems in mathematics classrooms, but taken as a whole they do provide a rich resource for mathematics teachers that is readily available in a single volume. While many papers offer specific, well worked out lesson-type ideas, others concentrate on the teacher knowledge needed to introduce real-world applications of mathematics into the classroom. We are confident that mathematics teachers who read the book will find a myriad of ways to introduce the material into their classrooms, whether in ways suggested by the contributing authors or in their own ways, perhaps through mini-projects, extended projects, practical sessions or enquiry-based learning. We are happy if they do!

Why did we collect and edit them for you, the mathematics teachers? In fact we did not collect them for you but rather for your students! They will enjoy working with them at school. Having fun learning mathematics is a novel idea for many students. Since many students do not enjoy mathematics at school, they often ask: “Why should we learn mathematics?” Solving real-world problems is one good answer (and not the only one!) to this question. If your students enjoy learning mathematics by solving real-world problems, you will enjoy your job as a mathematics teacher more. So in a real sense the collection of examples in this book is for you too.

Using real-world problems in mathematics classrooms places extra demands on teachers and students that need to be addressed. We need to consider at least two dimensions of classroom teaching when we teach real-world problems. One is the complexity (intensity or grade) of reality teachers think is appropriate to import into the classroom; the other concerns the methods used to learn and work with real problems. Papers in this collection offer a practical perspective on each dimension, and more.

Solving real-world problems often leads to a typical decision situation where you (we hope together with your students) will ask: Should we stop working on our problem now? Do we have enough information to solve the real-world problem? These are not typical questions asked in mathematics lessons. What students should learn when they solve real-world problems is that an exact calculation is not enough for a good solution. They should learn the whole process of modelling, from the first step of abstracting important information from the complex real-world situation to the subsequent steps of the mathematical modelling process. For example, they should learn to write down equations to describe the situation; do calculations; interpret the results of the calculations; improve the quality of the model; calculate again (several times if needed); and discuss the results with others. Last but not least, they should reflect on the solution process in order to learn for the future.



How real should real-world problems be? More realistic problems are generally more complex, and more complex problems demand more time to work out. On the other hand, a very simplified reality will not motivate students intrinsically to work towards a solution (and intrinsic motivation is much better for sustained learning). Experience suggests starting with simple problems and simple open questions and moving to more complex problems. We think it is an impossible task for students without any experience of solving complex real problems to start by solving difficult real problems. It is better to start with a simpler question and add complexity step by step.

The second dimension of classroom teaching is concerned with methods of teaching real-world problems. We are convinced that learning and teaching are more successful if you use open methods like group work, project planning, enquiry learning, practical work, and reflection. A lot of real-world problems have more than one correct solution, and may in fact have several that are good from different points of view. The different solutions need to be discussed and considered carefully, and this is good for achieving general education aims like “students should become critical citizens”. Students are better prepared for life if they learn how to decide which solution is better in relation to the question and the people who are concerned.

Finally, we would like to counter a typical “No, thank you” argument against teaching real-world problems. Yes, you will need more time for this kind of teaching than you need for a typical lesson training students in mathematical skills and operations. Yes, you will need to prepare more intensively for these lessons and be prepared for a lot of activity in your classroom. You will need to change your role from that of a typical teacher at the centre of the classroom, knowing and telling everything, to that of a manager of the learning process who knows how to solve the problem. But you need help to get started!

We hope you will use this book as your starter pack. We don’t expect you to teach like this every day but only on occasions during the year. It should be one of your teaching approaches but not the only one. Try it and you will be happy, because the results will be great for the students and for you!

ACKNOWLEDGEMENTS

We would like to thank all those who made this book possible, especially the many authors who so generously contributed papers. This collaboration and sharing of insights, expertise and resources benefits all who engage in an enterprise such as this, and offers potential benefits to many others who may have access to this volume. We are especially pleased to bring a wealth of material and expertise to an English-speaking audience which might otherwise have remained unseen and untapped. The editors would like also to record their thanks to their respective organizations which have supported this endeavour, viz. the Institut fur Didaktik der Mathematik,


Johannes Kepler University, Linz, and the National Centre for Excellence in Mathematics and Science Teaching and Learning (NCE-MSTL) at the University of Limerick.

Juergen Maasz
University of Linz, Austria

John O’Donoghue
NCE-MSTL, University of Limerick

Linz and Limerick, Autumn 2010


MANFRED BOROVCNIK AND RAMESH KAPADIA

1. MODELLING IN PROBABILITY AND STATISTICS

Key Ideas and Innovative Examples

This chapter explains why modelling in probability is a worthwhile goal to pursue in teaching statistics. The approach will depend on the stage one aims at: secondary schools, or introductory courses at university level in various applied disciplines which cover substantial content in probability and statistics, as this field of mathematics is the key to understanding empirical research. It also depends on the depth to which one wants to explore the mathematical details. Such details may be handled more informally, supported by simulation of properties and animated visualizations to convey the concepts involved. In such a way, teaching can focus on the underlying ideas rather than technicalities, and focus on applications.

There are various uses of probability. One is to model random phenomena. Such models have become more and more important as, for example, modern physicists build their theory completely on randomness; risk also occurs everywhere, and not only since the financial crisis of 2008. It is thus important to understand what probabilities really mean and the assumptions behind the various distributions – the following sections deal with genuine probabilistic modelling. Another use of probability is to prepare for statistical inference, which has become the standard method of generalising conclusions from limited data; the whole area of empirical research builds on a sound understanding of statistical conclusions going beyond the simple representation of data – sections 6 and 7 will cover ideas behind statistical inference and the role probability plays therein.

We start with innovative examples of probabilistic modelling to whet the appetite of the reader. Several examples are analysed to illustrate the value of probabilistic models; the models are used to choose between several actions to improve the situation according to a goal criterion (e.g., reduce cost).
Part of this modelling approach is to search for crucial parameters which strongly influence the result. We then explain the usual approach towards probability – and the sparse role that modelling plays therein – by typical examples, ending with a famous and rather controversial example which led to some heated exchanges between professionals. Indeed, we look at this example (Nowitzki) in some depth as a leitmotiv for the whole chapter; readers may wish to focus on some aspects or omit sections which deal with technical details such as the complete solution. In the third and fourth sections, basic properties of Bernoulli experiments are discussed in order to model and solve the Nowitzki task from the context of sports.

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 1–43.
© 2011 Sense Publishers. All rights reserved.


The approach uses fundamental properties of the models, which are not always highlighted as they should be in teaching probability. In the fifth section, the fundamental underlying ideas for a number of probability distributions are developed; this stresses the crucial assumptions for any situation in which the distribution might be applied. A key property is discussed for some important distributions: waiting times, for example, may or may not depend on the time already spent waiting. If independent, this sheds a special light on the phenomenon which is to be modelled. In a modelling approach, more concepts than usual have to be developed (with the focus on informal mathematical treatment), but the effort is worthwhile as these concepts allow students to gain more direct insight into the inherent assumptions required of the situation to be modelled.

In the sixth section, the statistical question – is Nowitzki weaker in away than in home matches? – is dealt with thoroughly. This gives rise to various ways to tackle this question within an inferential framework. We deal informally with the methods that comprise much of what students should know about the statistical comparison of two groups, which forms the core of any introductory course at university for all fields in which data is used to enrich research. While the assumptions should be checked in any case of application, such a crucial test of the assumptions might be difficult. It will be argued that the perspective of probabilistic and statistical applications is different, and linking heuristic arguments might be more attractive in the case of statistical inference.

Probability distributions are used to make a probability statement about an event, or to calculate expected values to make a decision between different options. Or they may be used to describe the ‘internal structure’ of a situation by the model’s inherent structure and assumptions. Statistical applications focus on generalizing facts beyond the available data. For that purpose they interpret the given data by probabilistic models. A typical question is whether the data is compatible with a specific hypothesized model. This model, as well as the answer, is interpreted within the context. For example, can we assume that a medical treatment is – according to some rules – better than a placebo treatment?

The final two sections resume the discussion of teaching probability and statistics, some inherent difficulties, and the significance of modelling. Conclusions are drawn and pedagogical suggestions are made.

INNOVATIVE EXAMPLES OF PROBABILISTIC MODELLING

As we shall see in a later section, a key assumption of independence is not justified in many exercises set in probability. Indeed, the key question in any modelling is the extent to which the underlying assumptions are or are not justified. Rather than the usual approach of mechanistic applications of probability, a more realistic picture of potential applications will be developed in this section by some selected innovative examples. They describe a situation from the ‘real world’ and state a target to improve or optimize. A spectrum of actions or interventions is open for use to improve a criterion such as reduction of costs. A probability distribution is chosen to model the situation, even though the inherent assumptions might not be perfectly fulfilled.


A solution is derived and analysed: how does it change due to changes in the parameters involved in the model, and how does it change due to violations of the assumptions? Sensitive parameters are identified; this approach offers ways of making the best use of the derived solutions and corroborating the best actions, to initiate further investigations to improve the available information. The examples deal with novel applications – blood samples, twin light bulbs, telephone call times, and spam mail. Simulation, spreadsheets and other methods are used and illustrate the wide range of ideas where probability helps to model and understand a wide and diverse range of situations.

Blood Samples Modelled with Binomial Probabilities

The following example uses the binomial model for answering a typical question which might be studied. The ‘outcome’ might be improved even if the model has some drawbacks: the cost situation is improved and hints for action are drawn from the model used, though the actual cost improvement cannot be directly read off the solution. Crucial parameters that strongly influence the solution are identified, for which one may strive to get more information in the future.

Example 1. Donations of blood have to be examined as to whether they are suitable for further processing or not. This is done in a special laboratory after the simple determination of the blood groups. Each donation is judged – independently of the others – ‘contaminated’ with a probability p = 0.1 and suitable with the complementary probability of q = 0.9.
a. Determine the distribution of the number of non-suitable donations if 3 are drawn at random.
b. 3 units of different donations are taken and mixed. Only then are they examined jointly as to whether they are suitable or not. If one of those mixed was already contaminated then the others will be contaminated and become useless. One unit has a value of € 50.-. Determine the loss for the various numbers of non-suitable units among those which are drawn and mixed. Remember: if exactly one is non-suitable then the two others are ‘destroyed’.
c. Determine the distribution of the loss as a random variable.
d. Calculate the expected loss if 3 units of blood are mixed in a ‘pool’.
e. Testing blood for suitability costs € 25.- per unit tested; the price is independent of the quantity tested. By mixing 3 units in a pool, a sum of € 50.- is saved. Given the expected loss from d., does it pay to mix 3 different units, or should one apply the test for suitability separately to each of the blood units?

A solution to this example is presented in a spreadsheet (Figure 1); the criterion for the decision is based on the comparison of potential loss and benefit by pooling; pooling is favourable if and only if:

Expected loss by pooling < Reduction of cost of lab testing    (1)

All blood donations are assumed to be contaminated with a probability of 0.1, independently of each other. That means we model the selection of blood donations,


which should be combined in a pool and analysed in the laboratory jointly, as if it were ‘coin tossing with a success probability of p = 0.1’. While the benefit is a fixed number, the loss has to be modelled by expected values. Comparison (1) yields 25.65 < 50; hence mixing 3 blood units and testing them jointly saves costs.

Modelling of pooling and calculation of expected cost (coding: single unit suitable = 0, NOT suitable = 1; size of pool n = 3; probability that a single unit is suitable 1 – p = 0.9; probability that it is NOT suitable p = 0.1; value of a unit € 50; cost of testing a unit in the laboratory € 25):

a. Model for the number X of NOT suitable single units in the pool (drawing with replacement): X ~ Bin(n, p).
b. Loss as a function of the number of not suitable units in the pool: if exactly one unit is not suitable, the two others are destroyed.
c./d. Distribution of the loss Y and expected loss:

  Not suitable X = i   Probability P(X = i)   Destroyed by pool   Loss Y = yi   yi · pi
  0                    0.729                  0                   0             0
  1                    0.243                  2                   100           24.30
  2                    0.027                  1                   50            1.35
  3                    0.001                  0                   0             0
                                                        Expected loss E(Y) =   25.65

e. Cost of testing: separate 75, pooled 25, reduction 50 – this reduction has to be compared with the expected loss of 25.65.

Figure 1. Spreadsheet with a solution to Example 1.
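The pooling calculation of Figure 1 can be retraced in a short Python sketch (added here for illustration; the function and variable names are our own, the numbers are those stated in Example 1):

```python
# Expected loss when pooling n = 3 blood units (Example 1).
# Assumptions from the text: units are contaminated independently with
# p = 0.1; a unit is worth EUR 50; a laboratory test costs EUR 25.
from math import comb

p, n, unit_value, test_cost = 0.1, 3, 50, 25

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def loss(k: int) -> int:
    # If k units are contaminated (0 < k < n), the n - k suitable units
    # in the pool are destroyed; k = 0 or k = n destroys nothing extra.
    return (n - k) * unit_value if 0 < k < n else 0

expected_loss = sum(binom_pmf(k, n, p) * loss(k) for k in range(n + 1))
saving_in_testing = n * test_cost - test_cost   # one pooled test replaces n

print(round(expected_loss, 2))            # → 25.65
print(expected_loss < saving_in_testing)  # → True: pooling pays
```

The final comparison reproduces criterion (1): 25.65 < 50.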

Example 2. Explore the parameters in Example 1.
a. For a probability of p = 0.1 of being contaminated, what is a sensible recommendation for the number of blood units being mixed?
b. How does the recommendation in a. depend on the probability of suitable blood donations?

While a spreadsheet is effective in assisting to solve Example 1, it becomes vital for tackling the open questions in Example 2. Such an analysis of the input parameters gives much more rationale to the questions posed. Of course, there are doubts on whether the suitability of blood donations might be modelled by a Bernoulli process. Even if so, how is one to get trustworthy information about the ‘success’ probability for suitable blood units? As this is a crucial input, it should be varied and the consequences studied. Also, just to mix a fixed number of blood units gives no clue why such a size of pool should be chosen. It also gives no feeling about how the size makes sense relative to the expected monetary gain or loss connected to the size of the pool which is examined jointly. In a spreadsheet, a slide control for the input parameter p and the size n is easily constructed and allows an effective interactive investigation of the consequences (or of the laboratory cost and the value of a unit). From a spreadsheet as in Figure 2, one may read off (q = 1 – p):

Expected net saving (q = 0.9, n = 4) = 26.22    (2)


This yields an even better expected net saving than in (1). Interactively changing the size n of the pool shows that n = 4 yields the highest expected value in (2), so combining 4 units is ‘optimal’ if the proportion of suitable blood units is as high as q = 0.9. With a smaller probability of good blood units, the best size of the pool declines drastically; with q = 0.8, size 2 is the most favourable in cost reduction; the cost reduction is as low as 9 € per pool of 2 as compared to 26.22 € per each 4 units combined to a pool. The reduction of cost per unit is 4.5 € with q = 0.8 and 6.6 € with q = 0.9. There is much more to explore, which will be left to the reader.

Modelling of pooling – exploring the effect of parameter changes (lab cost of testing 25, value of a unit 50, proportion q of suitable units 0.90, size of pool n = 4; slide controls in the spreadsheet vary q and n):

  k suitable   P(k)     Reduction of test   Cost of destroyed   Net       xi · pi
                        cost by pooling     units by pooling    saving
  0            0.0001   75                  0                   75        0.0075
  1            0.0036   75                  –50                 25        0.0900
  2            0.0486   75                  –100                –25       –1.2150
  3            0.2916   75                  –150                –75       –21.8700
  4            0.6561   75                  0                   75        49.2075
                                            Expected net saving:          26.22

Figure 2. Spreadsheet to Example 2 – with slide controls for an interactive search.
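The interactive search of Figure 2 can also be sketched programmatically (our sketch; the search range n = 2, …, 10 is an assumption of this sketch, while the cost figures are those of the examples):

```python
# Expected net saving per pool as a function of the pool size n and the
# proportion q of suitable units (Example 2).
from math import comb

def expected_net_saving(n: int, q: float, unit_value=50, test_cost=25) -> float:
    p = 1 - q
    saving = 0.0
    for k in range(n + 1):                        # k suitable units in the pool
        prob = comb(n, k) * q**k * p**(n - k)
        destroyed = unit_value * k if k < n else 0  # a clean pool loses nothing
        saving += prob * (test_cost * (n - 1) - destroyed)
    return saving

best_n = max(range(2, 11), key=lambda n: expected_net_saving(n, q=0.9))
print(best_n, round(expected_net_saving(4, 0.9), 2))                   # → 4 26.22
print(max(range(2, 11), key=lambda n: expected_net_saving(n, q=0.8)))  # → 2
```

This confirms the values read off the spreadsheet: n = 4 is optimal for q = 0.9, with an expected net saving of 26.22 € per pool, while for q = 0.8 pools of size 2 are best.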

It is merely an assumption that the units are independent and have the same probability of being contaminated. Nevertheless, the crucial parameter is still the size of such a probability, as the other measurements (of success) are highly sensitive to it. If it were a bit higher than given in the details of the example, pooling would not lead to decreasing cost of testing. If it were considerably smaller, then pools of even bigger size than suggested would lead to considerable saving of money. The monotonic influence becomes clear even if the exact size of the decreasing cost cannot be given. Closer monitoring of the quality of the incoming blood donations is wise, to clarify circumstances under which the required assumptions are less justified than usual.

Lifetime of Bulbs Modelled with Normal Distribution

Example 3. Bulbs are used in a tunnel to brighten it and increase the security of traffic; the bulbs have to be replaced from time to time. The first question is when a single bulb has to be replaced. The second is whether it is possible to find a time when all bulbs may be replaced jointly. The reason for such an action is that one has to send a special repair team into the tunnel and block traffic for the time of the replacement. While cost may be reduced by a complete replacement, the lifetime of the still functioning lamps is lost. The time for replacement is, however, mainly determined by security arguments: with what percentage of bulbs still working is the tunnel still sufficiently lit? Here we assume that the tunnel is no longer secure if over two thirds of the bulbs have failed.


Two systems are compared for their relative costs: single bulbs and twin bulbs, which consist of two single bulbs – the second is switched on when the first fails. Which system is less costly to install? The lifetime of bulbs in hours is modelled by a normal distribution with mean lifetime μ = 1900 and standard deviation σ = 200.
a. What is the probability that a single bulb fails within the 2300 hours in service?
b. Determine the time when two thirds of single bulbs have failed.
c. Determine an adequate model for the lifetime of twin bulbs and – based upon it – the time when two thirds of the twin bulbs have failed. Remark: if independence of failure is assumed, then the lifetime of twin bulbs is also normally distributed with parameters: mean = sum of the single means, variance = sum of the single variances.
d. Assume that at least one third of the lamps in the system have to function for security reasons. The cost of one twin bulb is 2.5 €, a single lamp costs 1 €. The cost of replacing all lamps in the tunnel is 1000 €. For the whole tunnel, 2000 units have to be used to light it sufficiently at the beginning. Relative to such conditions, is it cheaper to install single or twin bulbs?

The solution can be read off from a spreadsheet like Figure 3. The probability in a. to fail before 2300 hours may be found by standardizing this value by the parameters to (2300 – 1900)/200 = 2 and calculating the value of the standard normal Φ(2). Parts b. and c. require us to calculate the ⅔ quantile of the normal distribution, which traditionally is done by using probability tables, or which may be directly solved by a standard function. As a feature of spreadsheets, such a quantile may be determined by a sliding control, which allows us to vary x in the formula P(X ≤ x) until the probability ⅔ is reached in the box. For twin bulbs, first the parameters have to be calculated: μT = 1900 + 1900 and σT = √(200² + 200²).

Such a relation between the parameters of single and twin bulbs may be corroborated by simulation – a proof is quite complex. For the final comparison of costs in d., the calculation yields that single bulbs will be exchanged after 1986.1 hours, twin bulbs after 3921.8 hours. The overall costs of the two systems have to be related to unit time. Of course, the lifetime of the bulbs is not really normally distributed. The estimation of the parameters (mean and standard deviation of the lifetime) is surrounded with imprecision. As a perfect model, the normal distribution might not serve. However, perceived as a scenario in order to investigate “what happens if …?”, it helps to analyse the situation and detect crucial parameters. Thus, it might yield clues for the decision on which system to install. On the basis of such a scenario, we have a clear goal, namely to compare the expected costs per unit time E[C]/h of the two systems (see Figure 3):

E[C]/h (single bulbs) – E[C]/h (twin bulbs) = 1.51 – 1.53 < 0    (3)

The comparison of expected costs gives 1.51 € per unit time for single as opposed to 1.53 € for twin bulbs. A crucial component is the price of twin bulbs: a reduction from 2.5 to 2.4 € per twin bulb (which amounts to a reduction of 4%)


changes the decision in favour of twin bulbs. Hence this approach allows students to investigate assumptions, the actual situation and relative costs. It encourages them to use their own prior knowledge in the situation by varying parameters and costs. This gives practice in applying the normal distribution and then investigating consequences from the situation, some of which may be hard to quantify. Single bulbs

μ

σ

1900

x

200

z = (x - μ)/σ

2300

p

zp

x p = μ + z p *σ

0.6667

0.4307

1986.1

Lifetime of a single bulb

f(x)

Φ (z)

2

0.002

0.9772

a. survive 2300 hours

2/3 fail until then

0.001

b.

x

0.000

Twin bulbs

0

500

1000

1500

2000

2500

3000

Lifetime of single bulbs - - and twin bulbs ___

μT

σT

3800

p 0.6667

x

z = (x - μ)/σ

282.84

4600

2.83

zp

x p = μ + z p *σ

0.4307

3921.8

Φ (z)

0.002

0.9977 0.001

c. x

0.000 0

Comparison of cost per unit time D

e

v

i

c

e

s

C

o

s

t

1000

2000

T i m e

Cost per

€ per unit

Number

Light system

Exchange

total

working

unit time

Single

1.0

2000

2000

1000

3000

1986.1

1.51

Twin bulbs

2.5

2000

5000

1000

6000

3921.8

1.53

Variant

2.4

2000

4800

1000

5800

3921.8

1.48

3000

4000

5000

} d.

Figure 3. Spreadsheet with a solution to Example 3.
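The spreadsheet values can be reproduced with `statistics.NormalDist` from the Python standard library (a sketch; the helper `cost_per_hour` and the closing break-even price are our additions, though the latter matches the text’s observation that reducing the twin-bulb price to 2.4 € flips the decision):

```python
# Example 3: normal lifetimes of single and twin bulbs, cost per hour.
from statistics import NormalDist

single = NormalDist(mu=1900, sigma=200)
twin = NormalDist(mu=1900 + 1900, sigma=(200**2 + 200**2) ** 0.5)

p_fail_2300 = single.cdf(2300)      # a. Phi(2)
t_single = single.inv_cdf(2 / 3)    # b. two thirds failed: ~1986.1 h
t_twin = twin.inv_cdf(2 / 3)        # c. ~3921.8 h

def cost_per_hour(unit_price, lifetime, n_units=2000, exchange=1000):
    # d. all units replaced together after `lifetime` hours of service
    return (n_units * unit_price + exchange) / lifetime

print(round(p_fail_2300, 4))                    # → 0.9772
print(round(cost_per_hour(1.0, t_single), 2))   # → 1.51 (single bulbs)
print(round(cost_per_hour(2.5, t_twin), 2))     # → 1.53 (twin bulbs)
print(round(cost_per_hour(2.4, t_twin), 2))     # → 1.48 (cheaper twins)

# Sensitivity: the twin-bulb price at which both systems cost the same
break_even = (cost_per_hour(1.0, t_single) * t_twin - 1000) / 2000
print(round(break_even, 2))                     # → 2.46
```

Below roughly 2.46 € per twin bulb the twin system wins, consistent with the 4% price reduction discussed above.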

In fact, there is a systematic error in using the normal distribution, as for a lifetime no negative values are possible. However, for the model used, the probability of values less than 0 amounts to 10⁻²¹, which is negligible for all practical purposes.

We will illustrate key ideas behind distributions in section 5; the hazard rate is one of them, which does not belong to the standard repertoire of concepts in probability at lower levels. Hazard is a tendency to fail. However, it is different from propensity (to fail), which relates the tendency to fail to new items only. To look at an analogy: it is folklore that the tendency to fail (die) for human beings differs with age. The older we get, the higher the age-related tendency to die. Again, such a relation is complex to prove. However, a simulation study can be analysed accordingly: the percentage of failures can be calculated in two ways, one based on all items in the experiment, the other based only on those which are still functioning at a specific point of time. This enhances the idea of an age-related (conditional) risk of failing, as compared to a general risk of failing, which implicitly is related to all items exposed to failure in the experiment.

The normal distribution may be shown to follow an increasing hazard. Such an increasing risk of failing with increasing age is reasonable for the lifetime of bulbs – as engineers know. Thus, even if the normal model may not be interpreted directly by relative frequencies of failures, it may count as a reasonable model for the situation. The exact amount of net saving by installing the optimal system cannot be quantified

BOROVCNIK AND KAPADIA

but the size of it may well be read from the scenario. These are the sort of modelling issues to discuss with students in order to enhance their understanding and interest.

Call Times and Cost – the Exponential and Poisson Distributions Combined

Example 4. The length Y of a telephone call is a random variable and can be modelled as exponentially distributed with parameter λ = 0.5, which corresponds to a mean call duration of 1/λ = 2. The cost k of a call is a function of its duration y: a fixed amount of 10 for y ≤ 5, and 2y for y > 5.
a. Determine the expected cost of a telephone call.
b. Calculate the standard deviation (s.d.) of the cost of a telephone call.
c. Determine an interval for the cost of a single call with probability of 99%.
The number of calls during a month will be modelled by a Poisson distribution with μ = 100; this corresponds to an expected count of 100 calls. Determine
d. the probability that the number of phone calls does not exceed 130 per month;
e. a reasonable upper constraint for the maximum cost per month and a reasonable number depicting the risk that such a constraint does not hold – describe how you would proceed to determine such a limit more accurately.

Cost of single calls and bills of 130 calls – a simulation study

[Figure 4 shows an EXCEL spreadsheet: random numbers Zi from (0, 1) are transformed into exponential call lengths Xi and costs c(Xi); bills for periods of 130 calls are simulated with data tables. The key readings are:
a. Expected cost of a single call: E = 10.3283 (tricky integrals, or simulation; mean of the simulated costs 10.25).
b. S.d. of the cost of a single call: σ = 1.5871 (s.d. of the simulated costs 1.41).
c. The risk is 0.010 that a single call costs more than 18.25, and 0.010 that a call lasts longer than 9.13 (99% quantiles of the simulated data).
d. The risk that the number of calls (Poisson with μ = 100) exceeds 130 per month is 0.002 – the risk seems ‘negligible’.
e. The risk is 0.020 that a bill of 130 calls costs more than 1387.50 (percentage rank in the simulated bills), and 0.010 that the bill for the period is higher than 1393.78 (99% quantile of the simulated bills). The normal approximation with En = n·E = 1342.68 and σn = √n·σ = 18.10 yields the p = 0.99 quantile xp = E130 + zp·σ130 = 1384.78 (zp = 2.326).
The simulation study is based on only 714 random numbers; the results still fluctuate, but would stabilize if more data were generated. While the cost of single calls cannot be approximated by a normal distribution (exponential distributions are heavily skewed), the cost of 130 calls, as a sum, may be approximated by the normal distribution (CLT!), though it is still slightly skewed.]

Figure 4. Spreadsheet with a solution to Example 4.

MODELLING IN PROBABILITY AND STATISTICS

The models suggested are quite reasonable. However, the analytic difficulties are considerable – even at university level. A solution to part e. may be found from a simulation scenario of the assumptions. The message of such examples is that not all models can be handled rigorously. The key idea here is to understand the assumptions implicit in the model, judge whether they form a plausible framework for the real situation to be modelled, and interpret the result accordingly. In section 5, the model of the exponential distribution will be related to the idea of ‘pure’ random length and to the memory-less property, according to which the further duration of a call has the same distribution regardless of how long it has already been going on. If such a condition is rejected for the phoning behaviour of a person, the result derived here would not be relevant. The number of phone calls is modelled here by a Poisson distribution. The key idea behind this model is that events (phone calls) occur completely randomly in time with a specific rate per unit time; see section 5 for more details. The simulation is based on the following mathematical relation between distributions (which itself may be investigated by simulation):

If Z is a random number in the interval (0, 1), then Y = –ln(1 – Z)/λ is exponentially distributed with parameter λ.   (4)
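Relation (4) is exactly what drives the spreadsheet simulation; a minimal sketch (the column layout of the EXCEL sheet is not reproduced):

```python
import math
import random

LAM = 0.5  # parameter of the exponential call length, as in Example 4

def call_length(rng):
    # inverse transform (4): Y = -ln(1 - Z)/lambda with Z uniform on (0, 1)
    return -math.log(1.0 - rng.random()) / LAM

def call_cost(y):
    # cost rule of Example 4: flat 10 for y <= 5, otherwise 2y
    return 10.0 if y <= 5 else 2.0 * y

rng = random.Random(1)
costs = [call_cost(call_length(rng)) for _ in range(100_000)]
mean_cost = sum(costs) / len(costs)
sd_cost = math.sqrt(sum((c - mean_cost) ** 2 for c in costs) / len(costs))
q99 = sorted(costs)[int(0.99 * len(costs))]
# parts a.-c.: close to E = 10.33 and sigma = 1.59; the exact 99% cost
# quantile is 2*ln(100)/0.5/2 = 18.42 -- the sheet's 18.25 rests on 714 values only
print(round(mean_cost, 2), round(sd_cost, 2), round(q99, 2))

# part d.: P(number of calls > 130) for a Poisson count with mu = 100
def poisson_tail(k, mu):
    pmf = math.exp(-mu)
    cdf = pmf
    for j in range(1, k + 1):
        pmf *= mu / j
        cdf += pmf
    return 1.0 - cdf

print(round(poisson_tail(130, 100.0), 4))  # the 'negligible' risk, about 0.002
```

With many more simulated calls than the sheet's 714, the estimates settle close to the analytic values E = 10.3283 and σ = 1.5871 quoted in the solution.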

A further key idea used in this example is the normal approximation for the cost of the bill of 130 calls for a period, as it is the sum of the ‘independent’ single-call costs. The histogram of simulated bills ‘confirms’ this to some extent but still shows some skewness, which originates from the highly skewed exponential distribution. To apply the approximation, the mean and s.d. of the sum of independent and identically distributed (called iid, a property analogous to (9) later) variables have to be known. An estimate of the single-call cost may again be gained from a simulation study, as the integrals here are not easy to solve. To pass from single calls Xi to bills of n calls, the following mathematical relation is needed:

Xi ~ X (iid), Tn := X1 + ... + Xn, then E(Tn) = n ⋅ E(X), σ(Tn) = √n ⋅ σ(X).   (5)
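As the text suggests, relation (5) – in particular the √n in the s.d. – can be made plausible by a small simulation study; here sketched with the exponential call lengths of Example 4 as the iid summands:

```python
import random
import statistics

rng = random.Random(2)
LAM, N, REPS = 0.5, 130, 2000  # Exp(0.5) summands, bills of n = 130 calls each

# draw single values X and sums T_n = X_1 + ... + X_n
singles = [rng.expovariate(LAM) for _ in range(20_000)]
sums = [sum(rng.expovariate(LAM) for _ in range(N)) for _ in range(REPS)]

# E(T_n)/E(X) should be close to n = 130 ...
print(round(statistics.mean(sums) / statistics.mean(singles), 1))
# ... while sigma(T_n)/sigma(X) should be close to sqrt(130) = 11.4, not 130
print(round(statistics.pstdev(sums) / statistics.pstdev(singles), 1))
```

The second ratio settling near √n rather than n is exactly the point that "has to be investigated further" in the text.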

The first part is intuitive – if E(X) is interpreted as the fair price of a game, then Tn describes the winnings from n repetitions of it; the fair price of those should be n times the price of a single game. The second part is harder and has to be investigated further. As it is fundamental, a simulation study can be used to clarify matters.

Spam Mail – Revising Probabilities with Bayes’ Formula

Example 5. Mails may be ‘ham’ or ‘spam’. To recognize this and build a filter into the mail system, one might scan for words that are contained in hams and in spams; e.g., 30% of all spams contain the word ‘free’, which occurs in hams with a frequency of only 1%. Such a word might therefore be used to discriminate between ham and spam mails. Assume that the basic frequency of spams in a specific mailbox is 10 (30)%.


a. If a mail arrives with the word ‘free’ in it, what is the probability that it is spam?
b. If a message passes such a mail filter, what is the probability that it is actually spam?
c. Suggest improvements to such a simple spam filter.

The easiest way to find the solution is to re-read all the probabilities as expected frequencies of a suitable number of mails. If we base our thought on 1000 mails, we expect 100 (300) to be spam. Of these, we expect 30 (90) to contain the word ‘free’. The imaginary data – natural frequencies in the jargon of Gigerenzer (2002) – are in Table 1, from which it is easy to derive answers to the questions: if the mail contains ‘free’, its conditional probability of being spam is 0.7692 (30/39 with 10% spam overall) or 0.9278 (90/97 with 30% spam). The filter, however, has limited power to discriminate between spam and ham, as a mail which does not contain the word ‘free’ still has a probability of being spam of 0.0728 (70/961) or 0.2326 (210/903), depending on the overall rate of spam mails.

Table 1. Probabilities derived from fictional numbers – natural frequencies

Spam overall:         10%                                30%
                 ‘free’   no ‘free’   all           ‘free’   no ‘free’   all
Ham              9        891         900           7        693         700
Spam             30       70          100           90       210         300
all              39       961         1000          97       903         1000

This shows that the filter is not suitable for practical use. Furthermore, the conditional probabilities are different for each person. The direction for further investigation seems clear: to find more words that separate ham and spam mails, and to let the filter learn from the user, who classifies several mails into these categories. The table with natural (expected) frequencies – sometimes also called the ‘statistical village’ – delivers the same probabilities as the Bayesian formula, but is easier to understand and is much better accepted by laypeople. For further development, the inherent key concept of conditional probability should be made explicit. Updating probabilities according to new incoming information is a basic activity. It has a wide range of applications, such as in medical diagnosis or in court, when indicative evidence has to be evaluated. The Bayesian formula to solve this example is

P(spam | ‘free’) = P(spam) ⋅ P(‘free’ | spam) / [P(spam) ⋅ P(‘free’ | spam) + P(ham) ⋅ P(‘free’ | ham)]   (6)

Gigerenzer (2002) gives several examples of how badly that formula is used by practitioners who apply it quite frequently (without knowing the details, of course). There are many issues to clarify, amongst them the tendency to confuse P(spam|'free') and P('free'|spam). Starting with two-way tables is advisable, as advocated by Gigerenzer (2002); later, the structure of the process should be made explicit to promote the idea of continuously updating knowledge with new evidence.
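Formula (6) – and the natural-frequency table it summarizes – is easy to check; a short sketch reproducing the numbers quoted above:

```python
def p_spam_given(word_in_mail, p_spam, p_word_spam=0.30, p_word_ham=0.01):
    """Bayes' formula (6) for P(spam | evidence about the word 'free')."""
    p_ham = 1.0 - p_spam
    if word_in_mail:
        like_spam, like_ham = p_word_spam, p_word_ham
    else:  # the mail passed the filter: 'free' is absent
        like_spam, like_ham = 1.0 - p_word_spam, 1.0 - p_word_ham
    num = p_spam * like_spam
    return num / (num + p_ham * like_ham)

# part a.: mail contains 'free', with 10% and 30% spam overall
print(round(p_spam_given(True, 0.10), 4), round(p_spam_given(True, 0.30), 4))
# -> 0.7692 0.9278, the values read from Table 1 (30/39 and 90/97)

# part b.: mail passed the filter ('free' absent)
print(round(p_spam_given(False, 0.10), 4), round(p_spam_given(False, 0.30), 4))
# -> 0.0728 0.2326 (70/961 and 210/903)
```

That both routes – the two-way table of natural frequencies and formula (6) – give the same answers is the didactical point made in the text.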


Summary

Overall, these four examples show the interplay between modelling and probability distributions, with simplifying assumptions made but then analysed in order for students to develop a deeper understanding of the role probability may play in making decisions more transparent. Probability models are not only part of glass bead games but may partially model situations from reality and help to compare alternatives for action rationally. In each case, there is an interesting and key question to explore within a real-life context. Simplifying assumptions are made in order to apply a probability distribution. The results are then explored in order to check the validity of the assumptions and find out the sensitivity of the parameters used. We have only given an outline of the process in each example; a longer time would be needed with students in the classroom or lecture hall.

THE USUAL APPROACH TOWARDS TEACHING PROBABILITY

The usual approach towards probability is dominated by games of chance, which per se is not wrong, as the concepts stem from this context or from early insurance agreements, which are essentially loaded games of chance – with the notable exception of symmetry arguments: probabilities that would otherwise arise from symmetry considerations in games are replaced by estimates of the basic (subjective) probabilities. The axiomatic rules of probability are usually discussed cursorily and merely used as basic rules to obey when dealing with probabilities. Thus, no detailed proofs of simple properties are done, and if done, they are simplified and backed up by didactical methods like tree diagrams, which may be applied as one deals primarily with finite or countably infinite probability spaces. The link from an axiomatically ‘determined’ probability to distributions is not really established1 – so the many probability distributions develop their own ‘life’. Part of their complexity arises from their multitude. Normally, only a few are dealt with ‘paradigmatically’ in teaching. At the secondary stage these are mainly the binomial and normal distributions; in introductory courses at universities the hypergeometric, Poisson, or exponential distributions are added. The various distributions are dealt with rather mechanistically. A short description of an artificial problem is followed by the question for the probability of an event like ‘the number of successes (in a binomial situation) does not exceed a specified value’. Here, the context plays only a shallow role. Questions of modelling – e.g., what assumptions are necessary to apply the distribution in question, or in what respect such requirements are fulfilled in the context – are rarely addressed. The following examples illustrate the usual ‘approach’. Example 6 illustrates an attitude towards modelling which is not rare.

Example 6. For the production of specific screws, it is known that ‘on average’ 5% are defective. If 10 screws are packed in a box, what is the probability that one finds two (or, not more than two) defective pieces in a box?

The screws could be electric bulbs, or electronic devices, etc.; defective may be defined as ‘lasting less than 2000 hours’. The silent assumption in all these examples


is: ‘model’ the selected items as a random sample from the production. Sometimes the model is embodied by the paradigmatic situation of random selection from an urn with two sorts of marbles – marked 1 and 0 – predominantly with (sometimes without) replacement of the drawn marbles. The context is used as a fig leaf to tell different stories for very similar tasks, namely to drill skills in calculating probabilities from the right distribution. Neither a true question to solve, nor alternative models, nor a discussion of the validity of assumptions is involved. No clear view is given of why probabilistic modelling helps to improve one’s understanding of the context. The full potential of probability to serve as a means of modelling is missed by such an attitude; see Borovcnik (2011).

Modelling from a Sporting Context – the Nowitzki Task

The following example shows that such restricted views on probability modelling are not bound to single teachers, textbooks, or researchers in educational statistics. The example is taken from a centrally organized final exam in a federal state in Germany, but could have been taken from anywhere else. The required assumptions to solve the problems posed are clarified. This is – at the same time – a fundamental topic in modelling. The question, as we shall see later, aroused fierce controversy.

Example 7 (Nowitzki task). The German professional basketball player Dirk Nowitzki plays in the American professional league NBA. In the season 2006–07 he achieved a success rate of 90.4% in free throws. (For the original task, which was administered in 2008, see Schulministerium NRW, n.d.2)

Probabilistic part. Calculate the probability that he
a. scores exactly 8 points with 10 trials;
b. scores at most 8 points with 10 trials;
c. is successful in free throws at most 4 times in a series.

Statistical part. In home matches he scored 267 points with 288 free throws; in away matches the success rate was 231/263. A sports reporter [claimed that] Nowitzki has a considerably lower success rate away. At a significance level of 5%, analyse whether the number of scores in free throws away
a. lies significantly below the ‘expected value’ for home and away matches;
b. lies significantly below the ‘expected value’3 for home matches.

This example will be referred to as the Nowitzki task and will be used extensively below to illustrate the various modelling aspects both in probability and statistics. From the discussion it will become clearer what assumptions we have to rely on and how these are ‘fulfilled’ differently in probabilistic and statistical applications.

MODELLING THE NOWITZKI TASK

The Nowitzki task (Example 7) has a special history in Germany, as it was an item in a centrally administered exam. The public debate provoked a harsh critique: its probabilistic part was criticized as unsolvable in the form it was posed; its statistical


part was disputed as difficult, and the ministerial solution was ‘attacked’ for blurring the fundamental difference between ‘true’ values of parameters of models and estimates thereof. The probabilistic task shows essentially the same features as Example 6; the context could be taken from anywhere – it is arbitrary. The statistical part, however, allows for more discussion of the fundamental problem of empirical research, which has to deal with generalizing results from samples to populations. Various models are compared with respect to their quality in modelling the situation. Here we focus on modelling and then solving the probability part.

Basic Assumptions of Bernoulli Processes

This task is designed to be an exemplar of an application of the binomial distribution. It is worthwhile to repeat the basic features of the model involved. This distribution allows one to model experiments with a fixed number of trials and two outcomes for each repetition of the experiment (trial); one is typically named ‘success’ and the other ‘failure’. The basic assumptions are that
– the probability p of success is the same for all trials, and
– single trials do not ‘influence’ each other, which means – probabilistically speaking – the trials are independent.
Such assumptions are usually subsumed under the name of Bernoulli experiments (Bernoulli process of experiments). If the results of the single trials are denoted by random variables X1, ..., Xn with

Xi = 1 (success) or Xi = 0 (failure),   (7)

the random variables have to be independent, i.e.,

P(Xi = xi, Xj = xj) = P(Xi = xi) P(Xj = xj) if i ≠ j.   (8)

Such a product rule also holds for more than two variables Xi of the process. The assumption is usually denoted by

Xi ~ X ~ B(1, p) (iid),   (9)

where ‘iid’ refers to independent, identically distributed random variables Xi. The model, however, is not uniquely determined by such a Bernoulli process. Information about the value of p is still missing. From the perspective of modelling, the decision on which distribution to apply is only one step towards a model for the situation in question. Usually, the model consists of a family of distributions, which differ in the values of one (or more) parameters. The next step is to find values for the parameters to fix one of the distributions as the model to use for further consideration. How to support the modeller in choosing such a family, like the binomial or normal distributions, is discussed from


a modelling perspective in section 5. The step of modelling to get values for the parameters of an already chosen model is outlined here. The process of getting numbers for the parameters (p here) is quite similar for all models. In this section, matters are discussed for Bernoulli processes. Some features arise from the possibility of modelling the data as coming from a finite or an infinite population. The parameter p will be called the ‘strength’ of Nowitzki. The Nowitzki task was meant to go beyond an application within games of chance. The probability of success of a single trial of Nowitzki is not determined by a combinatorial consideration (justified by the symmetry of an experiment). The key idea for getting numbers for the parameters is to incorporate some ‘knowledge’ or information. The reader is reminded that the problem involves 10 trials; the task will be treated as an open task to explore rather than just an examination question. So one might, for example, form a hypothesis on the success rate subjectively.

Model 1. The probability p could be determined by a hypothesis like: Nowitzki is equally good as in the last two years, when his success rate was (e.g.) 0.910. Supposing that such a value also holds for the present season, the model is fixed. This value could well be a mere fiction – just to ‘develop a scenario’ and determine what its consequences would be. On the basis of this model, the random variable

Tn = X1 + ... + Xn ~ B(n = 10, p = 0.910),   (10)

i.e., the (overall) number of successes follows a binomial distribution with parameters 10 and 0.910.

Model 2. The probability p is estimated from the given data, i.e., p̂ = 0.904. In this case, the further calculations are usually based on

Tn ~ B(n = 10, p = 0.904).   (11)

This is in some ways the ‘best’ model, as the estimation procedure leading to p̂ = 0.904 fulfils certain optimality criteria like unbiasedness, minimum variance, efficiency, and asymptotic normality.

Model 2 (variant). The approach in model 2 misses the fact that the estimate p̂ is not identical to the ‘true’4 value of p. There is some inaccuracy attached to the estimation of p. Thus, it might be better to first derive a (95%) confidence interval for the unknown parameter p, leading to an interval

[pL, pU] = [0.8792, 0.9284],   (12)

and only then calculate the required probabilities in Example 7 with the worst case pL and the best case pU. This procedure leads to a confidence interval for the required probability, reflecting the inaccuracy of the estimate p̂.
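The interval in (12) can be reproduced from the season data quoted later in the text (498 successes in 551 free throws), presumably computed as the standard approximate (Wald) 95% confidence interval; a sketch:

```python
import math

# Season data quoted in the text: 498 successes in 551 free throws
successes, n = 498, 551
p_hat = successes / n

# approximate (Wald) 95% interval: p_hat +/- 1.96 * sqrt(p_hat*(1-p_hat)/n)
half_width = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
p_low, p_high = p_hat - half_width, p_hat + half_width
print(round(p_hat, 4), round(p_low, 4), round(p_high, 4))
# -> 0.9038 0.8792 0.9284, matching the interval quoted in (12)
```

Re-running the sketch with a smaller n shows how quickly the interval widens, which is exactly the point made below about the influence of the sample size.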


Model 3. The value of p is equated (not estimated) to the success rate of the whole season, i.e., p := 0.904. This leads to the same model as in (11), but with a completely different connotation. The probability of success in a free throw may – after the end of the season – be viewed as factually known: the number of successes (498) divided by the number of trials (551). There is no need to view it any longer as unknown and to treat the success rate of 0.904 as an estimate p̂ of an unknown p resulting from an imagined infinite series of Bernoulli trials.

Investigating and Modelling the Unknown Value of p

Three different ways may be pursued to provide the required information for the unknown parameter. With the binomial distribution, one needs information about the value of p, which may be interpreted as the success rate in the Bernoulli process in the background. The information used to fix the parameter has a different connotation in each case, as described here. In due consequence, the models applied inherit part of this meaning. The cases to differentiate are:
i. p is known
ii. p is estimated from data
iii. p is hypothesized from similar situations
With games of chance, symmetries of the random experiment involved allow one to derive a value for p; e.g., ½ for heads in tossing a ‘fair’ coin – case i. Most applications, however, lack such considerations, and one has to invoke ii. or iii.

i. The assumption of equiprobability for all possible cases (of which one is called ‘success’) is – beyond games of chance – sometimes more a way of putting it, just to fix a model to work with. For coin tossing, this paves the way to avoid tedious data production (actually tossing the coin quite often) and to work with such a model to derive some consequences on the basis of ‘what would be if we suppose the coin to be fair (symmetric)’. Normally, for coins, closer scrutiny would not deviate too much from the presupposition of p = ½ and would yield quite similar results. With the basketball task, the value of p could be known from the past (model 1), which also relies on the further assumption that ‘things remain the same’5, i.e., there was no change from past to present. This is an assumption for which a substantial justification is usually lacking – those who do not rely too heavily on information from the past might react more intelligently in the present.6

ii. Closer to the present situation in Example 7 is to use the latest data available (from the present season). The disadvantage of such a procedure is that the difference between the ‘true’ value of p and an estimate p̂ might be blurred, thereby forgetting that the estimate is inaccurate. One possibility for dealing with this is the variant of model 2. The inaccuracy of estimates is best illustrated by confidence intervals. Varying the size of the underlying samples from which the data stem gives a clearer picture of the influence of this lack of information. An important assumption about the data base from which the estimate is calculated is: it has to be a random sample of the underlying Bernoulli process, which is


essentially the same as the parent Bernoulli process. Clearly, the assumption that a sample is random is rarely fulfilled and often beyond scrutiny. Usually there are qualitative arguments to back up such an assumption. It is to be noted that the estimation of the probability is interwoven with the two key assumptions of Bernoulli experiments – the same success probability in all trials, and independence of the trials. Otherwise, probabilities, such as the probability of success with a free throw, have no meaning.

iii. A hypothesis about the success probability could be corroborated by knowledge about the past, as in i. However, the season is completed and, as a matter of fact, the success rate of Nowitzki was 0.904. Applying this factual knowledge about all games yields a value for the unknown parameter as

p := 0.904,   (13)

which amounts to much more concrete information than the estimation procedure leading to p ≈ p̂ = 0.904. The success probability in (13) may be justified and clarified as follows: as the season is completed, one knows all the data; there will be no more. A new season will have irretrievably different constellations. The success rate in 2006–07 is – as a matter of fact – 0.904. There could well be the question of how to interpret this number and whether it is possible to interpret it as a success probability. The key question is whether it makes sense to interpret this 0.904 as a success probability. This interpretation is bound to the assumption that the data stem from independent repetitions of the same Bernoulli experiment. This requires – taken literally – that for each free throw of Nowitzki the conditions have been exactly the same, independently of each other and independent of the actual score and the previous course of the match, etc. From this point of view one might question whether the data really are compatible with the prerequisites of a Bernoulli process. One could, e.g., inspect the number of ‘runs’ (points or failures in a series) and evaluate whether they lie above or below the expected value for a Bernoulli process, in order to check the plausibility of its underlying assumptions. The way in which information is used to get numerical values for the unknown parameter influences the character of the model which is fixed by it. From a modelling perspective, this has deep consequences, as any interpretation of results from the model has to take such ‘restrictions’ into consideration.
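The runs check mentioned above can be sketched as follows: in a Bernoulli process, a new run starts at every position where the outcome differs from its predecessor, so the expected number of runs in n trials is 1 + (n − 1)·2p(1 − p). Comparing this with the observed count gives a crude plausibility check. The simulated ‘season’ below is an illustration only – the task does not provide Nowitzki’s actual throw-by-throw data:

```python
import random

def count_runs(outcomes):
    # a run is a maximal block of equal consecutive outcomes
    runs = 1
    for prev, cur in zip(outcomes, outcomes[1:]):
        if cur != prev:
            runs += 1
    return runs

p, n = 0.904, 551                    # strength and number of throws from the text
expected_runs = 1 + (n - 1) * 2 * p * (1 - p)
print(round(expected_runs, 1))       # about 96.5 under the Bernoulli model

# a hypothetical season generated under the Bernoulli assumptions themselves
rng = random.Random(3)
season = [1 if rng.random() < p else 0 for _ in range(n)]
print(count_runs(season))            # should lie near the expected value
```

A real sequence with markedly fewer runs would hint at streakiness (dependence between throws), markedly more runs at alternation – either way, a violation of the Bernoulli assumptions.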
If the value of p is thought to be known – either by reference to a symmetric experiment, or by an unambiguous statement like ‘from long-standing experience in the past we know p to be 0.910’ in the wording of Example 7 – the probabilistic part becomes trivial from a modelling perspective, and a direct application of binomial probabilities is required. The solution may be found by means of a hand-held calculator, a spreadsheet, or even old-fashioned probability tables – the answer is straightforward and undisputed. The discussion about the various ways of dealing with information about the success rate p might lead to the didactical conclusion that such questions have to be


excluded from a final exam, especially if it is set centrally. The information in such tasks has to be conveyed clearly; the models have to be precisely and explicitly determined by the very text (not the context) of the task. The question remains – under such circumstances – whether it would still be worthwhile to teach probability if it were reduced to a mere mechanistic application of formulae in such exams. What is interesting is how the process of modelling allows for an answer to the problem, and in what respect such a model misses important features of the situation involved. To choose between various possible models and to critically appreciate the model finally chosen is a worthwhile standard to reach in studying probability. Only in rare cases is there one distinct answer to the problem in question. The assumptions of a Bernoulli process are not well fulfilled in sports, nor in many other areas where such methods are ‘blindly’ applied. Such assumptions establish (more or less well) a scenario (as opposed to a model that fits the real situation very well), which allows an inspection of the situation on the basis of ‘what would be – if we assume …’. Then, of course, situations have to be set out where such scenarios may deliver suitable orientation despite their lack of fit to the situation (for the idea of a scenario instead of a model, see Borovcnik, 2006). If p is not known directly, there are various ways to fill the gap of information – the scale ranges from hypotheses of differing credibility to estimates from statistical data of differing relevance (depending on the ‘grade of randomness’ of the sample). Clearly, a true value of p has to be distinguished from an estimate p̂ of p. The whole of inferential statistics is based on a careful discrimination between true parameters and estimates thereof. However, again, matters are not so easy and clear-cut.
What may be viewed as a true parameter in one model may be viewed as an estimate in another model – see the ideas developed subsequently. If the option of a statistical estimate of the unknown parameter is chosen, as in (11), then the data have to fulfil the assumption of a random sample – an independent repetition of the same basic experiment yielding each item of data. The accuracy linked to a specific sample is best judged by a confidence interval, as in (12). It might be tempting to reduce the length of such a confidence interval and to increase the precision of information about the unknown parameter by increasing the sample size. However, in practice, obtaining more data usually means a lower quality of data; i.e., the data no longer fulfil the fundamental property of being a random sample, which introduces a bias in the data with no obvious way to repair it. If the option of hypothesizing values for the unknown parameter is chosen, as in (10) or in (13), one might have trouble justifying such a hypothesis. In some cases, however, good arguments might be given. For the statistical part of Example 7, when it comes to evaluating whether Nowitzki is better in home than in away matches, a natural hypothesis emerges from the following modelling: split the Bernoulli process for home and away matches with different values of p, pH and pA. The assumption of equal strength (home and away) leads to the hypothesis

pA = pH, or pA – pH = 0.   (14)



Analysis of the data is then done under the auspices ‘as if the difference of the success probabilities home and away were zero’. However, it is not straightforward to derive the distribution of the test statistic p̂A – p̂H.

More about Assumptions – A Homogenizing Idea ‘Behind’ the Binomial Distribution

In the context of sports, it is dubious to interpret relative frequencies as a probability, and – vice versa – it is difficult to justify estimating an unknown probability by relative frequencies. What is different in the sports context from games of chance, where the idea of relative frequencies emerged? It is the comparability of single trials – their non-dependence – that is the hinge for transferring the ideas from games to other contexts. For probabilistic considerations such a transfer seems more crucial than for a statistical purpose, which focuses on a summary viewpoint. In theory, the estimation of the success parameter p improves with increasing sample size. Here, this requires combining several seasons. However, the longer the series – especially in sports – the less plausible the assumptions of a Bernoulli process. And, if relative frequencies are used to estimate the underlying probabilities, condition (9) of a Bernoulli process has to be met. Only then do the estimates gain in precision with increasing sample size. However, for Nowitzki’s scores, the assumptions have to be questioned. People and the sport change over time, making the assumptions of random, independent trials as the basic modelling approach less tenable. Take the value of 0.904 as an estimate of his current competency to make a point with a free throw – formalized as a probability p.
This requires an unrealistic assumption of ‘homogenizing’: Nowitzki’s capacity was constant for the whole season and independent of any accompanying circumstances – not even influenced by the fact that in one game everything is virtually decided, with his team leading or trailing by a big gap close to the end of the match, while in another there is a draw and this free throw – the last event in the match – will decide about winning or not. For statements related to the whole entity of free throws, such a homogenization might be a suitable working hypothesis. Perhaps the deviations from the assumptions balance out over a longer series. To apply results derived on such a basis for the whole entity to a sequence of 10 specific throws and calculate the probability of at most 8 points, however, makes hardly any sense – and even less so if it concerns the last 10 throws of an all-deciding match. To imbue p with a probabilistic sense and to apply the binomial distribution sensibly, one has to invent a scenario like the following: all free throws of the whole season have been recorded by video cameras. Now we randomly select 10 clips and ask: how often does Nowitzki make the point?

‘SOLUTION’ OF THE PROBABILISTIC PART OF THE NOWITZKI TASK

MODELLING IN PROBABILITY AND STATISTICS

In this section, the solutions are derived from the various models and critically appraised. As the choice of model depends not only on assumptions but also on an estimation of unknown parameters, the question arises which of the available models to choose – a fundamental issue in modelling. Several models are dealt with in the Nowitzki task and their relative merits are made clear. The methods of determining a good choice for the parameters also convey key features of pedagogy – some knowledge is taken from the context, some is added by statistical estimation. While for parts a. and b. the results seem straightforward, part c. gives ‘new’ insights. This task was fiercely rejected by some experts as unsolvable. However, by highlighting a fundamental property of Bernoulli series, a solution of part c. becomes easier. If the chosen model is taken seriously, then the modeller is in the same situation as in any game of chance. In such games, the player can start at any time – it does not matter. The player can also eliminate any randomly chosen games without changing the result in general. That is the key idea involved.

Calculation of the Probability of Simple Events – Parts a. and b.

With model 1 and the assumption that p = 0.910, the distribution is given in (10). Using a spreadsheet gives the solution with this specific binomial distribution as set out in Table 2. Model 3 is handled in the same way.


Figure 5. Probability distributions for the number of hits under the various models.

With model 2 and the estimate p̂ = 0.904, the distribution is fixed in (11). Using model 2 (variant), one derives the confidence interval (12) for Nowitzki’s ‘strength’ and uses the binomial distribution with parameters corresponding to the worst and best cases for the playing strength. The distributions for the number of hits in 10 trials are depicted in Figure 5. While models 1 and 2 are similar, there is a huge difference between the best and worst cases in model 2 (variant).

Table 2. Probabilities calculated under the various models

              Model 1     Model 2 (3)7   Model 2 (variant)
              p = 0.910   p̂ = 0.904      Worst case pL   Best case pU
P(T10 = 8)    0.1714      0.1854         0.2345          0.1273
P(T10 ≤ 8)    0.2254      0.2492         0.3449          0.1573
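The binomial probabilities in Table 2 can be recomputed directly. The following short script is a sketch, not the authors’ spreadsheet; it checks the model 1 column (p = 0.910 is the assumption named in the task):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_cdf(k, n, p):
    """Probability of at most k successes in n trials."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

n, p = 10, 0.910                      # model 1
print(round(binom_pmf(8, n, p), 4))   # 0.1714, as in Table 2
print(round(binom_cdf(8, n, p), 4))   # 0.2254
```

The other columns are obtained in the same way by substituting the estimated or worst-/best-case values for p.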


BOROVCNIK AND KAPADIA

It is remarkable, and worthy of pedagogical discussion in the classroom, that the solutions differ so much when the assumptions seem very similar. The inaccuracy conveyed by the confidence interval (12) on p only reflects a margin of just under 5 percentage points. Nevertheless, the variant of model 2 gives a margin of 0.1573 to 0.3449 for the probability in question b.8 Thus, it is crucial to remember that one has only estimates of the unknown parameter and imperfect knowledge.

The Question ‘Nowitzki Scores at Most Four Times in a Series’

Task c. was disputed in a public discussion, in which statisticians were also involved. It was claimed that it cannot be solved without an explicit number of observations given. Suggestions to fix the task are reported and a correct solution using a key property of Bernoulli processes is given below. The following passage is taken from an open letter to the ministry of education (Davies et al, 2008): “The media reported that one part of the [Nowitzki task] was not solvable because the number of trials is missing. This – in fact – is true and therefore several interpretations of the task are admissible, which lead to differing solutions.” Davies (2009, p. 4) illustrates three ways to cope with the missing number of trials: The first is to take n = 10 as it was used in the first part of the task. This suggestion comes out of ‘student logic’ but leads to an almost intractable combinatorial problem. One has to inspect all 2^10 = 1024 series of 0’s and 1’s (for failures and successes) as to whether they contain a segment of five or more 1’s (indicating the complement of the event in question), or not. A second possibility is to take n = 5, which makes the problem very simple: The only sequence not favourable to the event in question is 1 1 1 1 1. Thus the probability of the complementary series, which one is looking for here, equals

1 − p^5   (15)

and the result is dependent on the chosen model (see Table 3). Again it is surprising that the change from model 2 to its variant reveals such a high degree of imprecision implicit in the task, as the required probability is known only within a range of 0.31 to 0.47 if one refers to an estimate of the unknown probability of Nowitzki’s strength. But the reader is reminded that model 3 does not have this problem. Its result coincides with model 2 (without the variant) as the parameter is set to be known by (13).

Table 3. Probabilities of the series ‘less than 5’ with n = 5 under the various models

           Model 1     Model 2 (3)   Model 2 (variant)
           p = 0.910   p̂ = 0.904     Worst case pL   Best case pU
p^5        0.6240      0.6031        0.5253          0.6898
1 − p^5    0.3760      0.3969        0.4747          0.3102
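The entries of Table 3 all follow from the single formula (15); a quick check for the model 1 column (the other columns use the estimated and interval values of p in the same way):

```python
p = 0.910                 # model 1
print(round(p**5, 4))     # ≈ 0.6240: five hits in a row
print(round(1 - p**5, 4)) # ≈ 0.3760: at most four in a series
```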


The third and final remedy for the gap of the missing number of trials, which Davies (2009) offers, refers to an artificial new gaming situation. “We imagine a game where two players perform free throws. One of the players begins and continues to throw as long as his ball passes correctly through the basket and he scores. If Nowitzki starts this game, what is his probability that he scores at the most four times in a series in his first try?” Now, the possible sequences of this game are

0,  10,  110,  1110,  11110   (16)

The solution emerging from (16) coincides exactly with that where the number of trials is fixed at 5, which numerically was the officially accepted solution (though it was derived without the assumption of n = 5 trials). Factoring out the common factor 1 − p in the single probabilities involved in (16), we get the solution by:

(1 + p + p^2 + p^3 + p^4)(1 − p) = 1 − p^5.   (17)

The third attempt to solve task c. without the missing number of trials yields the solution. However, it implies an artificial new gaming situation, which makes things unnecessarily complicated. In fact, the task is solvable without supplementing the missing number of trials and without this artificial game. One only has to remind oneself of what really constitutes a Bernoulli process, and which properties are fundamental to such processes. The property in question will – once recalled – lead to a deeper understanding of what Bernoulli processes are. The next section will illustrate this idea. If one agrees with the assumption (9) of a Bernoulli process for the trials of Nowitzki, then part c. of the probabilistic task is trivial. If the conditions are always the same throughout, then it does not matter when one starts to collect the data. Mathematically speaking: If X1, X2, ... is a Bernoulli process with relation (9), then the following two sub-processes have essentially the same probabilistic features, i.e., they also follow property (9):

– random start of data collection at i0:  Xi0, Xi0+1, ... are iid ~ X ~ B(1, p);   (18a)

– random selection i0, i1, … of all the data:  Xi0, Xi1, ... are iid ~ X ~ B(1, p)   (18b)

This is a fundamental property of Bernoulli processes in particular and of random samples in general. One may start the data collection whenever one chooses; therefore, (18a) applies. One can also eliminate some data if the elimination is


undertaken randomly as in (18b). While this key property of Bernoulli processes should be explained intuitively to students, it could also be supported by simulation studies to address ‘intuitive resistance’ from students. Statisticians use the term iid variables. One needs to explain to students that each single reading comes from a process that – at each stage (for each single reading) – has a distribution independent of the other stages and which follows an identical (i.e., the same) distribution throughout (hence iid); this deceptively complex idea takes time for students to absorb. It has already been mentioned that in sports such an assumption is doubtful, but the example was put forward with this assumption for modelling, which therefore will not be challenged at this stage. If it does not matter when we (randomly) start the data collection, we just go to the sports hall and wait for the next free throws. We note whether
– Nowitzki does not score a point more than four times in a series – event A, or,
– he succeeds in scoring more than four times in a series – event Ā.
Clearly it holds:

P(Ā) = p^5 · 1 and P(A) = 1 − p^5.   (19)

The term p^5 in (19) stands for the first five 1’s in the complementary event; this probability has to be multiplied by 1, as from the sixth trial onwards the outcome does not matter (and therefore constitutes the certain event). The result coincides with solution (16). However, there is no need to develop this imaginary game with an opponent as Davies (2009) does in his attempt to ‘fix’ the task. The task is easily solved using the fundamental property (18a) of Bernoulli processes. If one just goes to a match (maybe one is late – it would not matter!) and observes whether Nowitzki scores more than four times in a series right from the start, then everything beyond the fifth trial is redundant and the solution coincides with solution (15), where the number of observations is fixed at n = 5.

The Solution is Dependent on the Number of Trials Observed!

If the number n of trials is pre-determined, the probability of having at most four successes in a series changes. If one observes the player only up to four times, he cannot have more than four successes, whence it holds: P(A) = 1. The longer one observes the player, the greater the chance of finally seeing him score more than four times in a series. It holds:

P(A | n) → 0 as n → ∞.   (20)
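The simulation studies suggested above for addressing ‘intuitive resistance’ could look like the following sketch (the sample sizes and replication count are illustrative): for n = 5 the Monte Carlo estimate approaches 1 − p^5, and letting n grow illustrates the limit (20).

```python
import random

def has_run_of_five(seq):
    """True if seq contains at least five successive 1's."""
    run = 0
    for x in seq:
        run = run + 1 if x == 1 else 0
        if run >= 5:
            return True
    return False

def prob_no_run(p, n, reps=50_000, seed=1):
    """Monte Carlo estimate of P(at most four hits in a series) in n trials."""
    rng = random.Random(seed)
    count = 0
    for _ in range(reps):
        seq = [1 if rng.random() < p else 0 for _ in range(n)]
        if not has_run_of_five(seq):
            count += 1
    return count / reps

p = 0.910
print(prob_no_run(p, 5))    # close to 1 - p**5 = 0.3760
print(prob_no_run(p, 10))   # noticeably smaller
print(prob_no_run(p, 50))   # smaller still, illustrating (20)
```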

KEY IDEA BEHIND VARIOUS DISTRIBUTIONS

In this section, we explain the underlying key pedagogical ideas of the following seven distributions:
a. Binomial distribution: repeated, independent trials, called Bernoulli process;
b. Hypergeometric distribution: repeated dependent trials;
c. Poisson distribution: completely random events in time – Poisson process;
d. Geometric distribution: waiting times in the Bernoulli process;
e. Exponential distribution: Poisson and memory-less waiting;
f. Weibull distribution: conditional failure rates or hazards;
g. Normal distribution: the hypothesis of independent elementary errors.

For students’ understanding, and also for good modelling reasons, it is advantageous to have a key idea behind each of the distributions. Otherwise, it is hard to justify a specific distribution as a suitable model for the phenomenon under scrutiny. Why a Poisson distribution, or why a normal? The key idea of a distribution should convey a direct way of judging whether such a distribution could model the phenomenon in question. It allows one to check the necessary assumptions in the data-generating process and whether they are plausible. Such a fundamental idea behind a specific distribution is sometimes hidden; it is difficult to recognise it from discrete probabilities or density functions, which might also have complicated mathematical terms. Other concepts related to a random variable might help to reveal to students ‘the’ idea behind a distribution. For example, a feature like the memory-less property is important for the phenomenon which is described by a distribution. However, this property is a mathematical consequence of the distribution and cannot directly be recognized from its shape or mathematical term. In the context of waiting time, the memory-less property means that the ongoing waiting time until the ‘event’ occurs has the same distribution throughout – regardless of the time already spent waiting for this event. Or, technical units might show (as human beings do) a phenomenon of wearing-out, i.e., the future lifetime has, amongst others, an expected value decreasing with the age of the unit (or the human being). To describe such behaviour, further mathematical concepts have to be introduced, like the so-called hazard (see below). In technical applications, continuous service of units might postpone wearing-out. For human beings, insurance companies charge higher premiums for a life insurance policy to older people.
While further mathematical concepts might be – at first sight – an obstacle for teaching, they help to shed light on key ideas for a distribution, reveal ‘internal mechanisms’ lurking in the background, and also help to understand the phenomena better. By contrast, the usual examination of whether a specific distribution is an adequate model for a situation is performed by a statistical test of whether the data is compatible with what is to be ‘expected’ from a random sample of this model. Such tests focus on the ‘external’ phenomenon of frequencies as observed in data.

a. Binomial Distribution – Repeated Independent Trials

A Bernoulli process may be represented by drawing balls from an urn in which there is a fixed proportion p of balls marked with a 1 (success) and the rest marked with a 0 (failure). If one draws a ball from that urn n times, always replacing the drawn ball, the number of successes follows a binomial distribution with parameters n and p.


There are characteristic features inherent to the depicted situation: repeated experiments with the same success probability p, and independent trials (mixing the balls before each draw), so that the success probability remains the same throughout. One could equally spin a wheel repeatedly with one sector marked as 1 and another marked as 0. This distribution was discussed at length with the Nowitzki task. The binomial distribution is intimately related to the Bernoulli process (9), which may also be analysed from the perspective of continuously observing its outcomes until the first event occurs – see the geometric distribution below.

b. Hypergeometric Distribution – Repeated Dependent Trials

This distribution, too, is best explained by the artificial but paradigmatic context of drawing balls from an urn with a fixed number of marked (by 1’s) and non-marked (the 0’s) balls as in the binomial situation; however, now the drawn balls are not replaced. Under this assumption, the number of marked balls among the n drawn follows a hypergeometric distribution. The characteristics are repeated experiments with the same success probability p, but dependent trials, so that the success probability remains the same only if one does not know the history of the process; otherwise there is a distinct dependence. The context of drawing balls also explains why – under special circumstances – the hypergeometric may be well approximated by the binomial distribution: if the number n of balls drawn from the urn is small compared to the number N of all balls in the urn, then the dependence between successive draws is weaker and the conditions (9) of a Bernoulli process are nearly met.

c. Poisson Distribution – Pure Random Events in Time

It is customary to introduce the Poisson distribution as the – approximate – distribution of rare events in a Bernoulli process (p small); it is also advantageous to refer this distribution to the Poisson process even if this is lengthy and more complex.
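The binomial approximation of the hypergeometric distribution described under b. can be checked numerically. The urn numbers below (N = 1000 balls, K = 300 marked, n = 10 draws) are illustrative choices, not from the chapter:

```python
from math import comb

def hyper_pmf(k, N, K, n):
    """Probability of k marked balls among n drawn without replacement."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def binom_pmf(k, n, p):
    """Probability of k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

N, K, n = 1000, 300, 10               # n small compared to N
p = K / N
max_diff = max(abs(hyper_pmf(k, N, K, n) - binom_pmf(k, n, p))
               for k in range(n + 1))
print(max_diff < 0.01)                # True: the two models nearly coincide
```

Increasing n relative to N makes the dependence between draws stronger and the approximation visibly worse.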
The process of generating ‘events’ (e.g., emitted radioactive particles), which occur in the course of time (or in space), should intuitively obey some laws comparable to the Bernoulli process:
– The start of the observations is not relevant for the probability to observe any event; see the fundamental property (18a) – this leads to A1 in (21) below.
– If one observes the process in non-overlapping intervals, the pertinent random variables have to be independent, which corresponds to the independence of the single observations Xi in the Bernoulli process – this leads to A4.
– The main difference between the processes lies in the fundamental frequency pulsing: in the Bernoulli process, there is definitely a new experiment Xi at ‘time’ i, whereas in the Poisson process time flows continuously with no apparent performing of an experiment (leading to an event or not) – events just occur.


– It remains to fix the probability of an event. As there is no distinct experiment with the outcome of the event (or its non-occurrence), we can speak only of an intensity λ of the process to bear events. This intensity has to be related to unit time; its mathematical treatment in A2 involves infinitesimal concepts.
– Paradoxically, a further requirement has to be demanded: even if two or more events may occur in an interval of time which is not too small, such a probability of coincidences should become negligible if the length of the observation interval becomes small – that leads to A3 below.

Mathematically speaking (compare, e.g., the classic text of Meyer, 1970, p. 166), a Poisson process has to meet the following conditions (the random variable Xt counts the number of events in the interval (0, t)):

A1  If Yt counts the events in (t0, t0 + t), then Yt ~ Xt
A2  P(X_Δt = 1) = λ · Δt + o(Δt)
A3  P(X_Δt ≥ 2) = o(Δt)
A4  Xt and Yt are independent random variables if they count events in non-overlapping time intervals.   (21)

Assumptions (21) represent pure randomness; they imply that such a process has no preference for any time sequence, has no coincidences as would occur by ‘intention’, and shows no dependencies on other events observed. The assumptions may also be represented locally by a grid square, as is done in Example 8. The main difference between the Poisson and the Bernoulli process lies in the fact that there is no definite unit of time, linked to trials 1, 2, 3, etc., which may lead to the event (1) or not. Here, the events just occur at a specific point of time, but one cannot trace when an ‘experiment’ is performed. The success probability p of the Bernoulli process associated with single experiments becomes an intensity λ per unit time. The independence of trials now becomes an independence of counting events in mutually exclusive intervals in the sense of A4. The Poisson process has a further analogue to the Bernoulli process in terms of waiting for the first event – the geometric and the exponential distribution (which describe waiting times in the pertinent situations) have similar properties (see below). We present one example here to illustrate a modelling approach to the Poisson. This shows how discussions can be initiated with students on the theoretical ideas presented above, and helps students to understand how and when to apply the Poisson distribution.

Example 8. Are the bomb attacks on London during World War II the result of a planned bombardment, or may they be explained by pure random hitting? To compare the data to the scenario of a Poisson process, the area of South London is divided into square grids of ¼ square kilometre each. The statistics in Table 4 show, e.g., 93 squares with 2 impacts each, which amounts to 186 bomb hits. In sum, 537 impacts have been observed.


Table 4. Number of squares in South London with various numbers of bomb hits – Comparison to the frequencies under the assumption of a Poisson process with λ = 0.9323

No. of hits in a square                       0        1        2       3       4      5 and more   all
No. of grid squares with such a no. of hits   229      211      93      35      7      1            576
Expected numbers under Poisson process        226.74   211.39   98.54   30.62   7.14   1.57

If targeting is completely random, it follows the rules of a Poisson process (21) and the number of ‘events’ per grid square then follows a Poisson distribution. The parameter λ is estimated from the data to fix the model by

λ = 537 / 576 = 0.9323 hits per grid square.   (22)
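The expected numbers in Table 4 follow from (22) and the Poisson probability function; a minimal sketch:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events."""
    return exp(-lam) * lam**k / factorial(k)

lam = 537 / 576            # estimated intensity (22), about 0.9323 hits per square
n_squares = 576

# Expected number of squares with 0, 1, ..., 4 hits, as in Table 4
expected = [n_squares * poisson_pmf(k, lam) for k in range(5)]
print([round(e, 2) for e in expected])   # starts 226.74, 211.39, 98.54, ...

# '5 and more' via the complement of the first five probabilities
tail = n_squares * (1 - sum(poisson_pmf(k, lam) for k in range(5)))
print(round(tail, 2))
```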

As seen from Table 4, the fit of the Poisson distribution to the data is extremely good. Feller (1968, p. 161) highlights the basic property of the Poisson distribution as modelling pure randomness and contrasts it with wide-spread misconceptions: “[The outcome] indicates perfect randomness and homogeneity of the area; we have here an instructive illustration of the established fact that to the untrained eye randomness appears as regularity or tendency to cluster.” In any case of an application, one might inspect whether the process of data generation fulfils such conditions – which could justify or rule out this distribution as a candidate for modelling. The set of conditions, however, also structures thinking about phenomena which may be modelled by a Poisson distribution. All phenomena following internal rules which come close to the basic requirements of a Poisson process are open to such modelling.

d. Geometric Distribution – Memory-Less Waiting for an Event

Here, a Bernoulli process with success parameter p is observed. In contrast to the binomial distribution, the number of trials is not fixed. Instead, one counts the number of trials until the first event (which corresponds to the event ‘marked’ by a 1) occurs. The resulting distribution obeys the following ‘memory-less’ property:

P(T > k0 + k | T > k0) = P(T > k).   (23)

This feature implies that the remaining waiting time for the first event is independent of the time k0 one has already waited for it – waiting gives no bonus. Such a characterization of the Bernoulli process helps in clarifying some basic misconceptions. The following example can be used to motivate students on the underlying features.


Example 9. Young children remember long waiting times for the six on a die to come. As waiting times of 12 and longer still have a probability of 0.1346 (see also Figure 6), this induces them to ‘think’ that a six has less probability than the other numbers on the die, for which such a ‘painful’ experience is not internalised.

Figure 6. Waiting for the first six of a die – Bernoulli process with p = 1/6.


Figure 7. Exponential distribution has the same shape as the geometric distribution.
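Both the value 0.1346 from Example 9 and the memory-less property (23) can be verified in a few lines (a sketch; k0 = 7 is an arbitrary illustration):

```python
# Waiting time T for the first six of a fair die: geometric with p = 1/6.
p = 1 / 6

def prob_wait_longer_than(k, p):
    """P(T > k): the first k trials are all failures."""
    return (1 - p) ** k

# Example 9: waiting times of 12 and longer, i.e. P(T > 11)
print(round(prob_wait_longer_than(11, p), 4))   # 0.1346

# Memory-less property (23): P(T > k0 + k | T > k0) equals P(T > k)
k0, k = 7, 11
lhs = prob_wait_longer_than(k0 + k, p) / prob_wait_longer_than(k0, p)
print(abs(lhs - prob_wait_longer_than(k, p)) < 1e-12)   # True
```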

e. Exponential Distribution – Memory-Less Waiting for Events in Time

The exponential distribution is connected to two key ideas: one links it to the Poisson process; the other uses the concept of a conditional failure rate. In a Poisson process, if one is waiting for the next event to occur and the data are subsequent waiting times between the events, then the exponential distribution is the model of choice. This is due to a mathematical theorem (see Meyer 1970, p. 191). It can also be illustrated by simulation studies. An important feature of the exponential distribution is its memory-less property:

P(t0 < T ≤ t0 + Δt | T > t0) = P(t0 < T ≤ t0 + Δt) / P(T > t0) = P(0 < T ≤ Δt).   (24)

Due to the memory-less property, the conditional probability to fail within Δt units of time for a device that has reached age t0 is the same as within the first Δt units for a new device. This implies that the future lifetime (or waiting time) is independent of the age reached (or the time already spent waiting), i.e., t0, which amounts to a further characterization of ‘pure’ randomness. Exponential and geometric distributions share the memory-less property. This explains why the models have the same shape. If this conditional failure probability is calculated per unit time and the time length Δt is made smaller, one gets the conditional failure rate, or hazard h(t):

h(t) = lim_{Δt → 0} P(t0 < T ≤ t0 + Δt | T > t0) / Δt.   (25)

A hazard (rate) is just a different description of a distribution. Now it is possible to express the other key idea behind the exponential distribution, namely that its related conditional failure rate (or hazard) is constant over the lifetime. If a (technical) unit’s lifetime is analysed and the internal structure supports that the remaining lifetime is independent of the unit’s age, then it may be argued that an exponential distribution is the model of choice. While such a property might seem paradoxical (old units are as good as new units), it is in fact well fulfilled for electronic devices for a long part of their ordinary lifetime. Mechanical units, on the contrary, do show a wearing effect, so that their conditional lifetime gets worse with age. Similarly with human beings, with the exception of infant mortality, when – at the youngest ages – humans’ lifetime as a probability distribution improves.

f. Weibull Distribution – Age-Related Hazards

Lifetimes are an important issue in technical applications (reliability issues and quality assurance); waiting times are important in describing the behaviour of systems. There are some families of distributions which may serve as suitable models. The drawback with these is that they require more mathematics to describe their density functions. Furthermore, the shape of their density gives no clue why they should yield a good model for the problem to be analysed. Viewing lifetimes (or waiting times) from the perspective of units that have already reached some specific age (have waited some specific time) sheds much more light on such phenomena than analysing the behaviour of new items (with no time spent waiting in the system). One would, of course, simulate such models first, explore the simulated data, and draw preliminary conclusions before delving deeper into mathematical issues. It may pay to learn – informally – about hazards and use this concept instead of probability densities to study probability models. Hazards directly reflect the basic assumptions which have to be fulfilled in applications.
With the key idea of hazard or conditional failure rate, the discussion can relate to infant mortality (decreasing), purely random failures due to exponential lifetime (constant), and wearing-out effects (increasing). The power function is the simplest model to describe all these different types of hazard:

h(t) = β (t/α)^(β−1),  α, β > 0.   (26)
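The three hazard regimes can be made tangible by evaluating (26) for a few shape parameters. A sketch, assuming the power-function form of the hazard given above (the scale α = 10 and the chosen β values are illustrative):

```python
def weibull_hazard(t, alpha, beta):
    """Power-function hazard h(t) = beta * (t / alpha) ** (beta - 1), cf. (26)."""
    return beta * (t / alpha) ** (beta - 1)

alpha = 10.0
for beta, label in [(0.5, "infant mortality: decreasing"),
                    (1.0, "purely random failures: constant"),
                    (2.0, "wearing-out: increasing")]:
    values = [round(weibull_hazard(t, alpha, beta), 3) for t in (1, 5, 20)]
    print(label, values)
```

For β = 1 the hazard is constant, recovering the exponential case discussed under e.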

The parameter α is interpreted as the scale of time, while β influences the shape and thus the quality of the change of hazard over the lifetime.

g. Normal Distribution – the Hypothesis of Independent Elementary Errors

Any random variable that might be split into a sum of other (hidden) variables is – according to the central limit theorem (CLT) – approximately normally distributed. This explains the key underlying idea and the ubiquity of the normal distribution. In the history of probability, the CLT prompted the ‘hypothesis of elementary errors’ (Gauss and earlier), where any measurement error was hypothesized to be the result (sum) of other, elementary errors. This supported the use of the normal distribution for modelling measurement errors in astronomy and geodesy. A generalization to the normal ‘law’ of distribution by Quételet and Galton is straightforward: it is an expression of God’s will (or Nature) that any biometric measurement of human beings and animals is normally distributed, as it emerges from a superposition of elementary ‘errors of nature’ (Borovcnik, 2006)9. An interesting article about the history and myth of the normal law is Goertzel (n.d.). The mathematics was first proved by de Moivre and Laplace; the single summands Xi had then been restricted to a Bernoulli process (9). In this way, the binomial distribution is approximately normally distributed, and the approximation is good enough if there are enough elements in the sum:

Tn = X1 + X2 + ... + Xn.   (27)
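The approach of the sum (27) to the normal shape can be illustrated with a short simulation of the kind the chapter describes; this sketch mimics a symmetric Galton board (the numbers of rows and marbles are illustrative):

```python
import random
from statistics import mean, stdev

def simulate_sums(n, p=0.5, reps=10_000, seed=2):
    """Sums of n Bernoulli(p) trials - the bin counts a Galton board produces."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n)) for _ in range(reps)]

sums = simulate_sums(n=20)
m, s = mean(sums), stdev(sums)
# For B(20, 0.5): expected value 10, standard deviation sqrt(20 * 0.25) = 2.236
print(round(m, 1), round(s, 1))

# Share of outcomes within one standard deviation of the mean; for a normal
# distribution this is about 68% (discreteness makes it somewhat larger here)
share = sum(abs(x - m) <= s for x in sums) / len(sums)
print(round(share, 2))
```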

To illustrate matters, the Galton board or an electronic quincunx (see, e.g., Pierce, R., n.d.) may be used in teaching. Such a board has several rows of pegs arranged in a shape similar to Pascal’s triangle. Marbles are dropped from the top and then bounce their way down. At the bottom they are collected in little bins. Each time a marble hits one of the pegs, it may bounce either left or right. If the board is set up symmetrically, the chances of bouncing either way are equal and the marbles in the bins follow the ‘bell-shaped’ curve of the normal distribution. If it is inclined, a skewed distribution emerges, which also normalizes if enough rows are taken. Theoretically, one has to standardize the value of the sum Tn according to

Un = (Tn − E(Tn)) / √var(Tn)   (28)

and the CLT in its crudest form becomes:

If Xi ~ X are an iid process with finite variance var(X) < ∞, then lim_{n→∞} P(Un ≤ u) = Φ(u).   (29)

Here Φ(u) stands for the cumulative distribution function of the standard normal distribution, i.e., with parameters 0 and 1. Despite such mathematical intricacies, the result is so important that it has to be motivated in teaching. The method of simulation is again suitable not only to clarify the limiting behaviour of the sum (the distribution of its standardized form converges to the normal distribution), but also to get an orientation about the speed of convergence. Furthermore, this convergence behaviour is highly influenced by the shape of the distribution of the single Xi’s. A scenario of simulating 1000 different samples of size n = 20 and then n = 40 from two different distributions (see Figure 8) may be seen from Figure 9. The graph shows the frequency distributions of the mean of the single items of data


Figure 8. A symmetric and a skewed distribution for the single summands in the scenario.


Figure 9. Scenario of 1000 samples: distribution of the mean compared to the normal curve – left drawing from an equi-distribution, right drawing from a skewed distribution.

instead of the sum (27) – this is necessary to preserve scales, as the sums simply diverge. For the limit, the calculation of the mean still does not suffice, as the mean converges weakly to one number (the expected value of Xi if all have the same distribution) – thus in the limit there would be no distribution at all. The simulation scenarios in Figure 9 illustrate that the calculated means of the repeated samples have a frequency distribution which comes quite close to a normal distribution for only 20 summands. If the items of data are skewed, the approximation is slightly worse, but with calculated means of 40 items of data in each sample the fit is sufficiently good again. The influence of the single summands (like those in Figure 8) on the convergence behaviour of the sum may be studied interactively: With a spreadsheet with slide


controls for the number of values in the equi-distribution for the single data, one could easily see that more values give a faster convergence to the fit. With a slide control for the skewness, e.g., to move the two highest values further away from the bulk of the data, one may illustrate a negative effect on convergence, as the fit becomes worse this way. By changing the slide controls, the effect on the distribution for an item of data Xi is seen from the bar graphs in Figure 8, and the effect on the ‘normalizing’ of the distribution of the mean of the repeatedly drawn samples may be studied from Figure 9 interactively.

Distributions Connected to the Normal Distribution

There are quite a few distributions which are intimately connected to the normal distribution. Their main usage is to describe the theoretical behaviour of certain test statistics based on a sample from a normal distribution. Amongst them are the t, the χ2 and the F distribution. They are mainly used for coping with the mathematical problems of statistical inference and not for modelling phenomena. The χ2 distribution is somewhat an exception to this, as the so-called Maxwell and Rayleigh distributions (the square root of a χ2) are also used by physicists to model the velocity of particles (like molecules) in two- or three-dimensional space (with 2 or 3 degrees of freedom); see also Meyer (1970, pp. 220).

SOLUTIONS TO THE STATISTICAL PART OF THE NOWITZKI TASK

This section returns to the statistical questions of the Nowitzki task. Is Nowitzki weaker away than at home? The usual way to answer such questions is a statistical test of significance. Such a modelling approach includes several steps to transfer the question from the context into a statistical framework, in which a null hypothesis reflects the situation of ‘no differences’ and alternative distributions depict situations of various degrees of difference. As always in empirical research, there is no unique way to arrive at a conclusion.
The chosen model might fit well for one expert and less well for another. Already the two questions posed in the formulation of the example (away weaker than at home, or away weaker than in all matches) give rise to disputes. The logic of a statistical test does not make things easier, as one always has to refer to fictional situations in the sense of ‘what would be if …’. Errors of type I and II, or p values, give rise to many misinterpretations by students. And there are many different test statistics which use information that could discriminate between the null and alternative hypotheses differently (not only in the sense of less precise and more precise, but simply differently, with no direct way for a comparison). Moreover, one has to estimate parameters, or use other information, to fix the hypotheses. If a Bernoulli process with success probability p is observed n times, then for the expected value of the number of successes Tn = X1 + ... + Xn it holds:

E(Tn) = n · p .    (30)
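A short simulation makes relation (30) tangible; the following Python sketch (the helper name and the values, taken from the Nowitzki example, are illustrative only) averages the number of successes over many simulated runs:

```python
import random

def simulate_Tn(n, p, runs, seed=0):
    """Average number of successes in a Bernoulli process of length n,
    estimated over many simulated runs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        total += sum(1 for _ in range(n) if rng.random() < p)
    return total / runs

# With n = 263 trials and p = 0.927, the average should be
# close to the expected value n * p = 243.8 from (30).
print(simulate_Tn(263, 0.927, 2000))
```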

BOROVCNIK AND KAPADIA

Here, the value of p usually is not known and is called the true value for the (success) probability.

Solutions to the First Statistical Part – Nowitzki Away Weaker than Home & Away?

Part a. of the statistical questions in Example 7 is ill-posed¹⁰ insofar as the comparison of away matches against all matches does not reflect what one really should be interested in: Is Nowitzki away weaker than in home matches? A reference to a comparison including all matches blurs the differences. Therefore, this question is not pursued here; only its inherent problems are discussed. With the three different success probabilities for home, away and all matches, it holds:

p0 = (nH / n) · pH + (nA / n) · pA .    (31)
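Equation (31) can be checked with the season totals used in the example (a short Python sketch; the numbers are the ones from the chapter):

```python
nH, nA = 288, 263          # trials in home and away matches
n = nH + nA                # all trials of the season
pH, pA = 267 / nH, 231 / nA  # home and away success rates

# Weighted average (31): the overall strength is a mixture of
# the home and away strengths, weighted by the number of trials.
p0 = (nH / n) * pH + (nA / n) * pA
print(round(p0, 3))  # → 0.904, i.e. 498/551
```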

If the season is regarded as a self-contained entity, all success rates are known. If they are perceived as probabilities, the next question to discuss is whether there is one common process, or two or more with different probabilities; a covariate like the ‘location’ of the free throw (home, away) would explain the differences. If p0 is set as known, the problem may be handled in this way: ‘May the away free throws be modelled by a Bernoulli process with p0 from the overall strength?’ If the data is seen as a sample from an infinite (Bernoulli) process, p0 has to be estimated from it. However, there are drawbacks in question a. and its modelling. Firstly, by common sense, no one would compare away scores to all scores in order to find differences between the two groups of trials away and home. Secondly, as the overall strength is estimated, it could also be estimated from the separate scores of away and home matches using equation (31): p̂A and p̂H are combined to an estimate p̂0 of p0. And the test would be performed on the data from away matches, which coincide with 231 = nA · p̂A. Confusing here is that pA is dealt with as unknown (a test is performed whether it is lower than the overall strength), an estimate of it is used to get an estimate of the overall strength, and it is used as known data to perform the test.

Solution to the Second Statistical Part – Nowitzki Weaker Away than at Home?

In this subsection, the away scores are compared to the home matches only (part b). Various steps are required to transform a question from the context to the statistical level. It is illustrated how these steps lead to hypotheses at the theoretical level, which correspond to and ‘answer’ the question at the level of context. Three different Bernoulli processes are considered: home, away, and all (home and away combined). After the end of the season, the related success probabilities are (factually) known from the statistics (see Table 5). Or, one could at least estimate some of these probabilities from the data.


Table 5. Playing strength as success probabilities from hits and trials of the season

Matches   Hits       Trials     Strength ‘known’ or estimated
Home      TH = 267   nH = 288   pH = 267/288 = 0.927
Away      TA = 231   nA = 263   pA = 231/263 = 0.878
All       T  = 498   n  = 551   p0 = 498/551 = 0.904

The basis for reference to compare the away results is the success in home matches. For home matches the success probability is estimated as in model 2, or ‘known’ from the completed season as in model 3:

pH = 0.927 .    (32)

Formally, the question from the context can be transferred to a (statistical) test problem in several steps of choosing the statistical model and hypotheses:

TA ~ B(nA, π) ,    (33)

i.e., the number of hits (successes) in away matches is binomially distributed with unknown parameter π (despite reservations, this model is used subsequently). As null hypothesis,

H0: π = pH ,    (34)

will be chosen. This corresponds to ‘no difference of away to home matches’ from the context, with pH designating the success probability in home matches. For the alternative, a one-sided hypothesis

H1: π < pH    (35)

is suggested. As the advantage of the home team is strong folklore in sports in general, a one-sided¹¹ hypothesis makes more sense than a two-sided alternative of π ≠ pH. The information about pH comes from the data; therefore it will be estimated by 0.927 to form the basis of the ‘model’ for the away matches. No other information about the ‘strength’ in home matches is available. Thus, the reference distribution for the number of successes in away matches is the following:

TA | H0 ~ B(nA, 0.927) .    (36)

Relation (36) corresponds to the probabilistic modelling of the null effect that ‘away matches do not differ from home matches’. The alternative is chosen to be


one-sided as in (35). The question is whether the observed score of TA = 231 in away matches amounts to an event which is significantly too low for a Bernoulli process with success probability 0.927, which is in the background of (36). Under this assumption, the expected value of successes in away matches is 263 · 0.927 = 243.8. The p value of the observed number of 231 successes is as small as 0.0033! Consequently, away matches differ significantly from home matches (at the 5% level of significance).
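The p value quoted here can be checked with an exact binomial tail sum; a minimal Python sketch (the helper name is illustrative, not from the chapter):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ B(n, p), summed exactly term by term."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# One-sided p value of the observed away score under H0: TA ~ B(263, 0.927)
p_value = binom_cdf(231, 263, 0.927)
print(round(p_value, 4))  # ≈ 0.003, the 0.3% reported in the text
```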

[Figure: bar chart of the number of hits in 263 trials with the home strength p = 0.927; markers show the upper limit of the smallest 5% of scores and the observed score.]

Figure 10. Results of 2000 fictitious seasons with nA = 263 trials – based on an assumed strength of p = 0.927 (corresponding to home matches).
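A simulation like the one behind Figure 10 can be sketched in a few lines of Python (the seed and the helper name are illustrative, not from the chapter):

```python
import random

def simulate_seasons(n=263, p=0.927, seasons=2000, seed=1):
    """Scores of many fictitious 'seasons' of n free throws with strength p."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(n) if rng.random() < p)
            for _ in range(seasons)]

scores = sorted(simulate_seasons())
critical = scores[len(scores) // 20]   # empirical lower 5% quantile
print(critical)        # rejection limit obtained from the simulated data
print(231 < critical)  # the observed away score falls below it
```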

In Figure 10, the scores of Nowitzki in 2000 fictitious seasons are analysed. The scenario is based on his home strength with 263 free throws (the number of trials in away matches in the season 2006–07), i.e., on the distribution of the null hypothesis in (36). From the bar chart it is easily seen that the observed score of 231 is far out in the distribution; it belongs to the smallest results of these fictitious seasons. In fact, the p value of the observation is 0.3%. A simulation study gives concrete data; the lower 5% quantile of the artificial data separates the 5% extremely low values from the rest. It is easy to understand that if actual scores are smaller than this threshold, they may be judged as not ‘compatible’ with the underlying assumptions (of the simulated data, especially the strength of 0.927). Handling a rejection limit (‘critical value’) obtained from simulated data is easier than deriving a 5% quantile from an ‘abstract’ probability distribution.

Validity of Assumptions – Contrasting Probabilistic and Statistical Points of View

The scenario of a Bernoulli process is more compelling for evaluating the question of whether Nowitzki is weaker in away matches than for the calculation of single probabilities of specific short sequences. In fact, whole blocks of trials are compared. This is not to ask for the probability of a number of successes in short periods of the process, but to ask whether there are differences in large blocks on the whole. Homogenization means a balancing-out of short-term dependencies, or of fluctuation of the scoring probability over the season due to changes in the form of the player,


or due to social conditions like quarrels in the team over an unlucky loss. In this way, the homogenization idea seems more convincing. The compensation of effects across single elements of a universe is essentially a fundamental constituent of a statistical point of view. For a statistical evaluation of the problem from the context, the scenario of a Bernoulli process – even though it does not apply really well – might allow for relevant results. For whole blocks of data, which are to be compared against each other, a homogenization argument is much more compelling, as the violations of the assumptions might balance out ‘equally’. The situation seems to be different from the probabilistic part of the Nowitzki problem, where it was doomed to failure to find suitable situations for which this scenario could reasonably be applied.

Alternative Solutions – Nowitzki Weaker Away than at Home?

Some alternatives to deal with this question are discussed; they avoid the ‘confusion’ arising from the different treatment of the parameters (some are estimated from the data and some are not). Not all of them are in the school curricula.

Table 6. Number of hits and trials

Matches       Hits   Failures   Trials
Home          267    21         288
Away          231    32         263
All matches   498    53         551

Fisher’s exact test is based on the hypergeometric distribution. It is remarkable that it relies on fewer assumptions than the Bernoulli series, and it uses nearly all information about the difference between away and home matches. The (one-sided) test problem is now depicted by the pair of hypotheses:

H0: pA − pH = 0   against   H1: pA − pH < 0 .    (37)

‘No difference’ between away and home matches is modelled by an urn problem relative to the data in Table 6: all N = 551 trials are represented by balls in an urn; A = 498 are white and depict the hits, 53 are black and model the failures. The balls are well mixed and then one draws n = 263 balls for the away matches (without replacement). The test is based on the number of white balls NW among the drawn balls. Here, a hypergeometric distribution serves as reference distribution:

NW | H0 ~ Hyp(N = 551, A = 498, n = 263) .    (38)

Small values of NW indicate that the alternative hypothesis H1 might hold. The observation of 231 white balls has a p value of 3.6%. Therefore, at a level of 5% (one-sided), Nowitzki is significantly weaker away than in home matches.
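The p value of Fisher’s exact test can be reproduced from the hypergeometric distribution in (38); a minimal Python sketch (exact tail sum, helper name illustrative):

```python
from math import comb

def hypergeom_cdf(k, N, A, n):
    """P(NW <= k) when drawing n balls without replacement from an urn
    containing A white balls among N in total (exact tail sum)."""
    lo = max(0, n - (N - A))   # at least this many white balls are drawn
    return sum(comb(A, i) * comb(N - A, n - i)
               for i in range(lo, k + 1)) / comb(N, n)

# One-sided p value of Fisher's exact test for the data in Table 6
p_value = hypergeom_cdf(231, N=551, A=498, n=263)
print(round(p_value, 3))  # ≈ 0.036, the 3.6% reported in the text
```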


With this approach, neither the success probability for away nor for home matches was estimated. The hypergeometric distribution need not be tackled with all the mathematical details. It is sufficient to describe the situation structurally and get (estimates of) the probabilities by a simulation study.

Test for the difference of two proportions. This test treats the two proportions for the success in away and home matches in the same manner, as both are estimated from the data, which are modelled as separate Bernoulli processes according to (9):

p̂A estimates the away strength pA;  p̂H estimates the home strength pH .    (39)

The (one-sided) test problem is again depicted by (37). However, the test statistic now is directly based on the estimated difference in success rates p̂A − p̂H. By the central limit theorem, this difference (as a random variable), normalized by its standard error, is approximately normally distributed; it holds:

U := (p̂A − p̂H) / √( p̂A (1 − p̂A) / nA + p̂H (1 − p̂H) / nH )  ~approx.  N(0, 1) .    (40)
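Equation (40) is easy to evaluate directly with the data of Table 6; a minimal Python sketch (the function name is illustrative, not from the chapter):

```python
from math import erf, sqrt

def two_proportion_u(hits_a, n_a, hits_h, n_h):
    """Test statistic U of (40) and its one-sided p value P(Z <= U)."""
    pa, ph = hits_a / n_a, hits_h / n_h
    se = sqrt(pa * (1 - pa) / n_a + ph * (1 - ph) / n_h)  # standard error
    u = (pa - ph) / se
    p_value = 0.5 * (1 + erf(u / sqrt(2)))  # standard normal CDF at u
    return u, p_value

u, p = two_proportion_u(231, 263, 267, 288)
print(round(u, 3), round(p, 3))  # U ≈ -1.926, p ≈ 0.027
```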

There is now a direct way to derive rejection values to decide whether the observed difference between the success rates of the two Bernoulli processes is significant or not: the estimated difference of the success rates of −0.04876 gives rise to a U of −1.9257, which amounts to a p value of 2.7%. In this setting a simulation of the conditions under the null hypothesis is not straightforward. If one tests mean values from two different samples for a difference, one can use the so-called Welch test. Both situations – testing two proportions or two means for significant differences – are too complex for introductory probability courses at university. One could motivate the distributions and focus on the problem of judging the difference between the two samples. However, the simpler Fisher test may be seen as the better alternative for proportions.

Some Conclusions on the Statistical Modelling of the Nowitzki Task

Inferential statistics means evaluating hypotheses by data (and mathematical techniques). Is the strength of Nowitzki away equal to p = 0.927? An answer to this question depends on whether we search for deviations from this hypothesized value in both directions (two-tailed) or only in the direction of lower values (one-tailed). From the context, the focus may well be on lower values of p for the alternative, as the advantage of the home team is a well-known matter in sport. The null hypothesis forms the reference basis for the given data. For its formulation, further knowledge is required, either from the context or from the data. Such knowledge should never be mixed with data which is used in the subsequent test procedure; this is a crucial problem of task a., which asks to compare the away


scores to the score in all matches. A value for all matches – if estimated from the data – also contains the away matches. However, this should be avoided, not only for methodological reasons but by common sense too. If the season is seen as self-contained, the value of p = 0.927 is known. A test of 0.927 against alternative values of the strength less than 0.927 corresponds to the question ‘Is Nowitzki away weaker than at home?’ 0.927 might as well be seen as an estimate from a larger imaginary season. An evaluation of its accuracy (as in section 3) is usually not pursued. A drawback might be seen in the unequal treatment of the scores: home scores are used to estimate a parameter pH, while the away scores are treated as a random variable. Note that in this test situation no ‘overlap’ occurs between data used to fix the null hypothesis and data used to perform the test. The alternative tests discussed here treat the home and away probabilities in a symmetric manner: both are assumed to be unknown; either both are estimated from the data, or estimation is avoided for both. These tests express the correspondence between the question from the context and their test statistics differently. They view the situation as a two-sample setting. Such a view paves the way to more general types of questions in empirical research, which will be dealt with below.

STATISTICAL ASPECTS OF PROBABILITY MODELLING

Data have to be seen in a setting of model and context. If two groups are compared – be it a treatment group receiving some medical treatment and a control group receiving only a placebo (a pretended treatment) – two success rates might be judged for difference as in the Nowitzki task. Is the treatment more effective than the placebo? The assumption of Bernoulli processes ‘remains’ in the background (at least if we measure success only on a 0–1 scale). However, such an assumption requires a heuristic argument like the homogenization of data in larger blocks. The independence assumption for the Bernoulli model is not really open to scrutiny, as it leads to methodological problems (a null hypothesis cannot be statistically confirmed). The idea of searching for covariates serves as a strategy to make the two groups as equal as they could be. Data may be interpreted sensibly and used for statistical inference – in order to generalize findings from the concrete data – only by carefully checking whether the groups are homogeneous. Only then do the models lead to relevant conclusions beyond the concrete data. If success is measured on a continuous scale, the mathematics becomes more complicated but the general gist of this heuristic still applies.

Dealing with the Inherent Assumptions

A further example illustrates the role of confounders.

Example 10. Table 7 shows the proportions of girl births in three hospitals. Can they be interpreted as estimates of the same probability for a female birth?

Table 7. Proportion of girls among new-borns

Hospital        Births   Proportion of girl births
A               514      0.492
B               358      0.450
C                        0.508
‘World stats’            0.489

Assume such a proportion equals 0.489 worldwide; with hospital B (and 358 births) the proportion of girls would lie between 0.437 and 0.541 (with a probability of approximately 95%). The observed value of 0.450 is quite close to the lower end of this interval. This consideration sheds some doubt on the claim that the data have been ‘produced’ by mere randomness, i.e., by a Bernoulli process with p = 0.489. It may well be that there are three different processes hidden in the background and the data do not emerge from one and the same source. Such ‘phenomena’ are quite frequent in practice. However, it is not as simple as saying that one could go to hospital B if one wants to give birth to a boy. One may explain the big variation of girl births between hospitals by reference to a so-called covariate: one speculation refers to location; hospital A in Germany, C in Turkey, and B in China. In cases where covariates are not open to scrutiny, as information is missing about them, they might blur the results – in such cases these variables are called confounders. For the probabilistic part of Nowitzki, it might be better to search for confounders (whether he is in good form, or had a quarrel with his trainer or with his partner) in order to arrive at a probability that he will at most score 8 times out of 10 trials, instead of modelling the problem by a Bernoulli process with the seasonal strength as success probability. Such an approach cannot be part of a formal examination but should feature in classroom discussion.

Empirical Research – Generalizing Results from Limited Data

Data always has to be interpreted by the use of models and by the knowledge of context, which influences not only the thinking about potential confounders but also guides the evaluation of the practical relevance of conclusions drawn. A homogenization idea was used to support a probabilistic model. For the Bernoulli process the differing success probabilities should even out, and the afflictions of independence should play less of a role when series of data are observed which form a greater entity – as is done from a statistical perspective.
This may be the case for the series of away matches as a block and – in comparison to it – for the home matches as a block. One might also be tempted to reject such a homogenization idea for the statistical part of the Nowitzki task. However, a slight twist of the context, leaving the data unchanged, brings us into the midst of empirical research; see the data in Table 8, which are identical to Table 5.

Table 8. Success probabilities for treatment and control group

Group       Success   Size       Success probability
Treatment   267       nT = 288   pT = 0.927
Control     231       nC = 263   pC = 0.878
All         498       N = 551    p0 = 0.904


This is a two-sample problem: we are faced with judging a (medical) treatment for effectiveness (only with a 0, 1 outcome, not with a continuous response). The Nowitzki question now reads: Was the actual treatment more effective than the placebo treatment applied to the persons in the control group? How can we justify the statistical inference point of view here? We have to model success in the two groups by different Bernoulli processes. This modelling includes the same success probability throughout for all people included in the treatment group, as well as the independence of success between different people. Usually, such a random model is introduced by the design of the study. Of course, the people are not selected randomly from a larger population but are chosen by convenience – they are primarily patients of the doctors who are involved in the study. However, they are randomly attributed to one of the groups, i.e., a random experiment like coin tossing decides whether they are treated by the medical treatment under scrutiny or receive a placebo, which looks the same from the outside but has no expected treatment effect, except for the person’s psychological expectation that it could have an effect. Neither the patient, nor the doctors, nor the persons who measure the effect of the treatment should know to which group a person is attributed – the gold standard of empirical research is the so-called double-blind randomized treatment and control group design. The random attribution of persons to either group should make the two groups as comparable as they could be – it should balance all known covariates and all unknown confounders which might interfere with the effect of treatment. Despite all precautions, patients will differ by age, gender, stage of the disease, etc. Thus, they do not have a common success probability that the treatment is effective.
All one can say is that one has undertaken the usual precautions and hopes that the groups are now homogeneous enough to apply the model in the manner of a scenario: ‘What does the data tell us if we think that the groups meet a ceteris paribus condition?’ A homogenization argument is generally applied to justify drawing conclusions from empirical data. It is backed by the random attribution of persons to the groups which are to be compared. The goal of randomizing is to get two homogeneous groups that differ only with respect to what has really been administered to them: medication or placebo.

CONCLUSIONS

The two main roles for probability are to serve as a genuine tool for modelling and to prepare for and understand statistical inference.
– Probability provides an important set of concepts for modelling phenomena from the real world. Uncertainty or risk, which combines uncertainty with impact (win or loss, as measured by utility), is either implicitly inherent to reality or emerges from our partial knowledge about it.
– Probability is the key to understanding much empirical research and how to generalize findings from samples to populations. Random samples play an eminent role in that process. The Bernoulli process is a special case of random sampling. Moreover, inferential statistical methods draw heavily on a sound understanding of conditional probabilities.


The present trend in teaching is towards simpler concepts, focusing on a (barely) adequate understanding thereof. In line with this, a problem posed to the students has to be clear-cut, with no ambiguities involved – neither about the context nor about the questions. Such a trend runs counter to any sensible modelling approach. The discussion about the huge role intuitions play in the perception of randomness was initiated by Fischbein (1975). Kapadia and Borovcnik (1991) focused their deliberations on ‘chance encounters’ towards the interplay between intuitions and mathematical concepts, which might influence and enhance each other. Various psychologically informed approaches have been seen in the pertinent research. Kahneman and Tversky (1972) showed the persistent bias of popular heuristics people use in random situations; Falk and Konold (1992) engage with causal strategies and the so-called outcome approach, a tendency to re-formulate probability statements into a direct, clear prediction. Borovcnik and Peard (1996) have described some specificities peculiar to probability and not to other mathematical concepts, which might account for the special position of probability within the historic development of mathematics. The research on understanding probability is still ongoing, as may be seen from Borovcnik and Kapadia (2009). Lysø (2008) makes some suggestions to take up the challenge of intuitions right from the beginning of teaching. All these endeavours to understand probability more deeply, however, seem to have had limited success. On the contrary, the more the educational community became aware of the difficulties, the more it tried to cut out critical passages, which means that probability is slowly but silently disappearing from the content being taught.
It is somehow a solution that resembles that of the mathematicians when they teach probability courses at university: they hurry to reach sound mathematical concepts and leave all ambiguities behind. The approach of modelling offers a striking opportunity to counterbalance this trend. Arguments that probability should be reduced in curricula at schools and at universities in favour of more data-handling and statistical inference might be met by the examples of this chapter; they connect approaches towards context and applications like that of Kapadia and Andersson (1987). Probability serves to model reality, to impose a specific structure upon it. In such an approach, key ideas for understanding probability distributions turn out to be a fundamental tool to convey the specific implicit thinking about the models used and the reality modelled thereby. Contrary to the current trend, the position of probability within mathematics curricula should be reinforced instead of being reduced. We may have to develop innovative ways to deal with the mathematics involved. To learn more about the mathematics of probability might not serve the purpose, as we may see from studies of the understanding of probabilistic concepts by Díaz and Batanero (2009). The perspective of modelling seems more promising: a modeller never understands all mathematical relations between the concepts. However, a modeller ‘knows’ about the inherent assumptions of the models and the restrictions they impose upon a real situation. Indirectly, modelling was also supported by Chaput, Girard, and Henry (2008, p. 6). They suggest the use of simulation to construct mental images of randomness.


Real applications are suggested by various authors to overcome the magic ingredients in randomness, e.g. Garuti, Orlandoni, and Ricci (2008, p. 5). Personal conceptions about probability are characterized by an overlap between objective and subjective conceptions. In teaching, subjective views are usually precluded; Carranza and Kuzniak (2008, p. 3) note the resulting consequences: “Thus the concept […] is truncated: the frequentist definition is the only one approach taught, while the students are confronted with frequentist and Bayesian problem situations.” In modelling real problems, the two aspects of probability are always present; it is not possible to reduce them to one of these aspects, as the situation might lose its sense. Modelling thus might lead to a more balanced way of teaching probability. From the perspective of modelling, the overlap between probabilistic and deterministic reasoning is a further source of complications. Ottaviani (2008, p. 1) stresses that probability and statistics belong to a line of thought which is essentially different from deterministic reasoning: “It is not enough to show random phenomena. […] it is necessary to draw the distinction between what is random and what is chaos.” Simulation or interactive animations may be used to reduce the need for mathematical sophistication. The idea of a scenario helps to explore a real situation, as shown in section 1. The case of taking out an insurance policy for a car is analysed in some detail in Borovcnik (2006). There is a need for a reference concept wider than the frequentist approach. The perspective of modelling will help to explore such issues. In modelling, it is rarely the case that one already knows all relevant facts from the context. Several models have to be used in parallel until one may compare the results and their inherent assumptions. A final overview of the results might help to solve some of the questions posed, but it raises some new questions.
Modelling is an iterative cycle, which leads to more insights step by step. Of course, such a modelling approach is not easy to teach, and it is not easy for the students to acquire the flexibility in applying the basic concepts to explore various contexts. Examinations are a further hindrance. What can and should be examined, and how should the results of such an exam be marked? Problems are multiplied by the need for centrally set examinations. Such examination procedures are intended to achieve comparability of the results of final exams throughout a country. They also form the basis for interventions in the school system: if a high percentage fail such an exam in a class, the teacher might be blamed, while if such low achievement is found in a greater region, the exam papers have to be revised, etc. However, higher-order attainment is hard to assess. While control over the result of schooling via central examinations ‘guarantees’ standards, such a procedure also has a levelling effect in the end. The difficult questions about a comparison of different probability models, evaluating the relative advantages of these models, and giving justifications for the choice of one or two of these models – genuine modelling aspects involve ambiguity – might not leave enough scope in the future classroom of probability.


As a consequence of such trends, teaching will focus even more on developing basic competencies. From applying probabilistic models in the sense of modelling contextual problems, only remnants may remain – mainly in the sense of mechanistic ‘application’ of rules or ready-made models. To use one single model at a time does not clarify what a modelling approach can achieve. When one model is finally used, there is still much room for further modelling activities, like tuning the model’s parameters to improve a specific outcome, which corresponds to one’s benefit in the context. A wider perspective on modelling presents much more potential for students to really understand probability.

NOTES

1. Kolmogorov’s axioms are rarely well-connected to the concept of distribution functions.
2. All texts from German are translated by the authors.
3. ’What is to be expected?’ would be less misleading than ‘significantly below the expected value’.
4. A ‘true’ value is nothing more than a façon de parler.
5. This is quite similar to the ‘ceteris paribus’ condition in economic models.
6. In fact, the challenge is to detect a break between past and present; see the recent financial crisis.
7. Model 3 yields identical solutions to model 2. However, its connotation is completely different.
8. The final probability need not be monotonically related to the input probability p, as in this case.
9. Quetelet coined the idea of ‘l’homme moyen’. Small errors superimposed on the (ideal value of) l’homme moyen ‘lead directly’ to the normal distribution.
10. Beyond common sense issues, the ill-posed comparison of scores in away matches against all matches has several statistical and methodological drawbacks.
11. The use of a one-sided alternative has to be considered very carefully. An unjustified use could lead to a rejection of the null hypothesis and to a statistical ‘proof’ of this pre-assumption.

REFERENCES Borovcnik, M. (2006). Probabilistic and statistical thinking. In M. Bosch (Ed.), European research in mathematics education IV (pp. 484–506). Barcelona: ERME. Online: ermeweb.free.fr/CERME4/ Borovcnik, M. (2011). Strengthening the role of probability within statistics curricula. In C. Batanero, G. Burrill, C. Reading, & A. Rossman, (Eds). Teaching statistics in school mathematics. Challenges for teaching and teacher education: A joint ICMI/IASE study. New York: Springer. Borovcnik, M., & Kapadia, R. (2009). Special issue on “Research and Developments in Probability Education”. International Electronic Journal of Mathematics Education, 4(3). Borovcnik, M., & Peard, R. (1996). Probability. In A. Bishop, K. Clements, C. Keitel, J. Kilpatrick, & C. Laborde (Eds.), International handbook of mathematics education (pp. 239–288). Dordrecht: Kluwer. Carranza, P., & Kuzniak, A. (2008). Duality of probability and statistics teaching in French education. In C. Batanero, G. Burrill, C. Reading, & A. Rossman. Chaput, B., Girard, J. C., & Henry, M. (2008). Modeling and simulations in statistics education. In C. Batanero, G. Burrill, C. Reading, & A. Rossman. Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications Davies, P. L. (2009). Einige grundsätzliche Überlegungen zu zwei Abituraufgaben (Some basic considerations to two tasks of the final exam). Stochastik in der Schule, 29(2), 2–7. Davies, L., Dette, H., Diepenbrock, F. R., & Krämer, W. (2008). Ministerium bei der Erstellung von Mathe-Aufgaben im Zentralabitur überfordert? (Ministry of Education overcharged with preparing the exam paper in mathematics for the centrally administered final exam?) Bildungsklick. Online: http://bildungsklick.de/a/61216/ministerium-bei-der-erstellung-von-mathe-aufgaben-im-zentralabiturueberfordert/ 42

Díaz, C., & Batanero, C. (2009). University students’ knowledge and biases in conditional probability reasoning. International Electronic Journal of Mathematics Education, 4(3), 131–162. Online: www.iejme.com/
Falk, R., & Konold, C. (1992). The psychology of learning probability. In F. Gordon & S. Gordon (Eds.), Statistics for the twenty-first century, MAA Notes 26 (pp. 151–164). Washington DC: The Mathematical Association of America.
Feller, W. (1968). An introduction to probability theory and its applications (Vol. 1, 3rd ed.). New York: J. Wiley.
Fischbein, E. (1975). The intuitive sources of probabilistic thinking in children. Dordrecht: D. Reidel.
Garuti, R., Orlandoni, A., & Ricci, R. (2008). Which probability do we have to meet? A case study about statistical and classical approach to probability in students’ behaviour. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. New York: Simon & Schuster.
Girard, J. C. (2008). The interplay of probability and statistics in teaching and in training the teachers in France. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Goertzel, T. (n.d.). The myth of the Bell curve. Online: crab.rutgers.edu/~goertzel/normalcurve.htm
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgement of representativeness. Cognitive Psychology, 3, 430–454.
Kapadia, R., & Andersson, G. (1987). Statistics explained. Basic concepts and methods. Chichester: Ellis Horwood.
Kapadia, R., & Borovcnik, M. (1991). Chance encounters: Probability in education. Dordrecht: Kluwer.
Lysø, K. (2008). Strengths and limitations of informal conceptions in introductory probability courses for future lower secondary teachers. In Eleventh International Congress on Mathematics Education, Topic Study Group 13 “Research and development in the teaching and learning of probability”. Monterrey, México. Online: tsg.icme11.org/tsg/show/14
Meyer, P. L. (1970). Introductory probability and statistical applications. Reading, Massachusetts: Addison-Wesley.
Ottaviani, M. G. (2008). The interplay of probability and statistics in teaching and in training the teachers. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE.
Pierce, R. (n.d.). Quincunx. In Math is Fun – Maths Resources. Online: www.mathsisfun.com/data/quincunx.html
Schulministerium NRW. (n.d.). Zentralabitur NRW. Online: www.standardsicherung.nrw.de/abitur-gost/fach.php?fach=2. Ex 7: http://www.standardsicherung.nrw.de/abitur-gost/getfile.php?file=1800

Manfred Borovcnik
Institute of Statistics
Alps-Adria University Klagenfurt
Austria

Ramesh Kapadia
Institute of Education
University of London
London WC1H 0AL
United Kingdom


ASTRID BRINKMANN AND KLAUS BRINKMANN

2. PROBLEMS FOR THE SECONDARY MATHEMATICS CLASSROOMS ON THE TOPIC OF FUTURE ENERGY ISSUES

INTRODUCTION

The students’ interest and motivation towards mathematics as a whole may be increased by using and applying mathematics. “The application of mathematics in contexts which have relevance and interest is an important means of developing students’ understanding and appreciation of the subject and of those contexts.” (National Curriculum Council 1989, para. F1.4). Such contexts might be, for example, environmental issues that are of general interest to everyone. Hudson (1995) states that “it seems quite clear that the consideration of environmental issues is desirable, necessary and also very relevant to the motivation of effective learning in the mathematics classroom”. One of the most important environmental impacts is that of energy conversion systems. Unfortunately this theme is hardly treated in mathematics education. Dealing with this subject may not only offer advantages for the mathematics classroom, but also provide a valuable contribution to the education of our children. The younger generation especially will be confronted with the environmental consequences of the extensive use of fossil fuels, and thus a sustainable change from our currently existing power supply system to a system based on renewable energy conversion has to be achieved. The decentralised character of this future kind of energy supply will surely require more personal effort from everyone, and thus it is indispensable for young people to become familiar with renewable energies. However, at the beginning of the 21st century there was a great lack of suitable school mathematics problems concerning environmental issues, especially ones strongly connected with future energy issues. An added problem is that the development of such mathematical problems requires the co-operation of experts in future energy matters, with their specialist knowledge, and mathematics educators, with their pedagogical content knowledge.
The authors, working in such a collaboration, have developed a special didactical concept to open up the field of future energy issues for students as well as for their teachers, and this is presented below. On the basis of this didactical concept we have created several series of problems for the secondary mathematics classroom on the topics of rational usage of energy, photovoltaics, thermal solar energy, biomass, traffic and transport, wind energy and hydro power. The collection of worked-out problems, with an extensive solution to each problem, has been published as a book in the German language (Brinkmann & Brinkmann, 2005). Further problems dealing with so-called energy hybrid systems, i.e. combinations of several energy types, will be developed (see Brinkmann & Brinkmann, 2009). Some problem examples are presented later in this article.

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 45–66. © 2011 Sense Publishers. All rights reserved.

DIDACTICAL CONCEPT

The cornerstones of the didactical concept developed by the authors in order to promote renewable energy issues in mathematics classrooms are:
– The problems are chosen in such a way that the mathematical contents needed to solve them are part of school mathematics curricula.
– Ideally, every problem should concentrate on a special mathematical topic so that it can be integrated into an existing teaching unit, as project-oriented problems referring to several mathematical topics are seldom taken up by teachers.
– The problems should be more extensive than the usual textbook problems, in order to enable the students, and also their teachers, to engage more intensively with the subject.
– The problems should not require teachers to have special knowledge of future energy issues, and especially of physical matters. For this reason all non-mathematical information and explanations concerning a problem’s foundations are included in separate text frames.
– In this way information about future energy issues is provided for both teachers and students, helping them to concentrate on the topic. Thus, a basis for interdisciplinary discussion, argumentation and interpretation is given.

EXAMPLES OF MATHEMATICAL PROBLEMS

The Problem of CO2 Emission
This is an inter-disciplinary problem linked to the subjects of mathematics as well as chemistry, physics, biology, geography and social sciences. Nevertheless, it may be treated in lower secondary classrooms. With respect to mathematics, the conversion of quantities is practised, and knowledge of the rule of three and percentage calculation is required. The amount of CO2 produced annually in Germany, especially by transport and traffic, is illustrated vividly so that students become aware of it.
Information: In Germany, each inhabitant produces an annual average of nearly 13 t of CO2 (carbon dioxide). Combustion processes (for example in power plants or vehicle combustion engines) are responsible for this emission into the atmosphere. Assume now that this CO2 builds up a gaseous layer which stays directly above the ground.
a) What height would this CO2 layer reach in Germany after one year?


Hints:
– Knowledge from chemistry lessons is helpful for your calculations. There you learned that amounts of material can be measured with the help of the unit ‘mole’. 1 mole of CO2 weighs 44 g and occupies a volume of 22.4 l under standard conditions (pressure 1013 hPa and temperature 0 °C). With these values you can calculate approximately.
– You will find the surface area and the number of inhabitants of Germany in an encyclopedia.
Help: Find the answers to the following partial questions in the given order.
i) How many tons of CO2 are produced in total in Germany every year?
ii) What volume in l (litres) does this amount of CO2 occupy? (Regard the hint!)
iii) How many m³ of CO2 are therefore produced annually in Germany? Express this in km³!
iv) Assuming the CO2 produced annually in Germany forms a low layer of gas directly above the ground, what height would it have?
Information:
– In Germany the amount of waste is nearly 1 t for each inhabitant (private households as well as industry) every year; the average amount of CO2 produced per inhabitant is therefore 13 times this.
– The CO2 which is produced during combustion processes and emitted into the atmosphere distributes itself in the air. One part is absorbed by plants with the help of photosynthesis; a much greater part goes into solution in the oceans’ waters. But the potential for CO2 absorption is limited.
– In the 1990s, 20% of the total CO2 emissions in Germany came from the combustion engines engaged in traffic activities alone.
b) What height would the CO2 layer over Germany have, if this layer resulted only from the annual emissions of individual vehicles? How many km³ of CO2 is this?
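A short numerical sketch of parts a) and b). The surface area (357,000 km²) and population (82 million) of Germany are assumed look-up values, not given in the problem text:

```python
# Sketch for parts a) and b); area and population of Germany are
# assumed look-up values, as the problem asks students to find them.
MOLAR_MASS_CO2_G = 44.0   # g per mole (from the hint)
MOLAR_VOLUME_L = 22.4     # litres per mole at standard conditions

population = 82_000_000   # assumed
area_km2 = 357_000        # assumed
co2_per_person_t = 13.0   # from the problem text

# i) total mass of CO2 per year, in grams
total_g = co2_per_person_t * 1e6 * population
# ii) volume in litres via the mole
total_l = total_g / MOLAR_MASS_CO2_G * MOLAR_VOLUME_L
# iii) litres -> m^3 -> km^3  (1 m^3 = 1000 l, 1 km^3 = 1e9 m^3)
total_m3 = total_l / 1000.0
total_km3 = total_m3 / 1e9
# iv) height of a uniform layer over the whole area, in metres
height_m = total_m3 / (area_km2 * 1e6)

print(f"a) {total_km3:.0f} km^3, layer height {height_m:.2f} m")
# b) traffic alone accounts for 20% of the total
print(f"b) {total_km3 * 0.2:.0f} km^3, height {height_m * 0.2:.2f} m")
```

With these assumed inputs the layer comes out at roughly 1.5 m over the whole country, which is the vivid result the problem aims at.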

Usable Solar Energy
This problem deals with the heating of water in private households using solar energy. It can be treated in lessons involving the topic of percentage calculation and the rule of three. It requires the understanding and usage of data representations.
Information: In private households, the warm water required can be partly heated by solar thermal collectors. These convert the solar radiation energy into thermal energy. This helps us to decrease the usage of fossil fuels, which leads to environmental problems.



Private households need heated water preferably in the temperature range 45°–55 °C. At our latitude, the usable thermal energy from the sun is not always sufficient to bring water to this temperature, because of its seasonal behaviour. Thus, an input of supplementary energy is necessary.

Figure 1. A solar thermal energy plant (source: DGS LV Berlin BRB).

The following figure (Figure 2) shows how much of the energy needed for heating water to a temperature of 45 °C in private households can be covered by solar thermal energy in each month (March (1) to February (12)), and how much supplementary energy is needed.

Figure 2. Usable solar energy and additional energy (energy coverage in %).


The following problems refer to the values shown in Figure 2.
a) What percentage of the thermal energy needed for one year can be provided by solar thermal energy?
Information:
– Energy is measured in the unit kWh. An average household in central Europe consumes nearly 4000 kWh of energy for the heating of water per year.
– 1 l of fossil oil provides approximately 10 kWh of thermal energy. The combustion of 1 l of oil produces nearly 68 l of CO2.
– The great amount of CO2 currently produced worldwide by the combustion of fossil fuels damages the environment.
b) How many kWh may be provided in one year by solar thermal energy?
c) How many litres of oil have to be bought to supply the supplementary thermal energy needed for one year by a private household?
d) How many litres of oil would be needed without solar thermal energy?
e) How many litres of CO2 could be saved by an average household in Germany during one year by using solar collectors?
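Tasks b)–e) reduce to simple proportions once the annual coverage has been read off Figure 2. The sketch below uses an assumed 60% annual solar fraction purely as a placeholder for the figure value:

```python
# Illustrative sketch for tasks b)-e); the annual solar coverage must
# be read from Figure 2 -- the 60% used here is an assumed placeholder,
# not the value shown in the figure.
annual_demand_kwh = 4000.0   # from the information box
solar_fraction = 0.60        # assumption; replace with the figure value
kwh_per_litre_oil = 10.0
co2_l_per_litre_oil = 68.0

solar_kwh = annual_demand_kwh * solar_fraction             # task b)
supplementary_kwh = annual_demand_kwh - solar_kwh
oil_with_solar = supplementary_kwh / kwh_per_litre_oil     # task c)
oil_without_solar = annual_demand_kwh / kwh_per_litre_oil  # task d)
co2_saved_l = (oil_without_solar - oil_with_solar) * co2_l_per_litre_oil  # e)

print(oil_with_solar, oil_without_solar, co2_saved_l)
```

With a 60% coverage this gives 160 l of oil instead of 400 l, saving about 16,000 l of CO2 per household and year; the real answers follow once the actual figure values are inserted.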

The Problem of Planning Solar Collector Systems
This problem deals with calculations for planning solar collector systems using linear functions. The understanding and usage of graphical representations is practised.
Information:
– In private households, the heating of warm water can partly be done by solar collector systems. Solar collector systems convert radiation energy into thermal energy. This process is called solar heat.
– The usage of solar heat helps to save fossil fuels like natural gas, fuel oil or coal, whose use damages the environment.
– Energy is measured in the unit kWh. An average household in central Europe consumes nearly 4000 kWh of thermal energy per year for heating warm water.
– Private households need heated water preferably in the temperature range 45°–55 °C. At our latitude, the usable thermal energy from the sun is not always sufficient to bring water to this temperature, because of its seasonal behaviour. Thus, an input of supplementary energy is necessary.
– The warm water requirement per day per person can be reckoned at about 50 l. By energy-conscious living this value can easily be reduced to 30 l per person per day.
– 1 l of fossil oil provides approximately 10 kWh of thermal energy. The combustion of 1 l of oil produces nearly 68 l of CO2.
The diagram in Figure 3 provides data for planning a solar collector system for a private household. It shows the dependence of the collector area needed on, for example, the part of Germany where the house is situated, the number of persons living in the respective household, the desired amount of warm water per day per person, and the desired share of the thermal energy to be supplied by solar thermal energy (in percent).


Figure 3. Dimensioning diagram (storage capacity 100 l–400 l, 2–6 persons, collector area 2–8 m²).

Example: In a household in central Germany with 4 persons and a consumption of 50 l of warm water per day for each of them, a collector area of 4 m² is needed for a reservoir of 300 l and an energy coverage of 50%.
a) What would be the collector area needed for the household you are living in? What assumptions do you need to make first? What would be the minimal possible collector area, and what the maximal one?
b) A collector area of 6 m² that provides 50% of the required thermal energy is installed on a house in southern Germany. How many persons could be supplied with warm water in this household?
c) Describe, using a linear function, the dependence of the storage capacity on the number of persons in a private household. Assume first a consumption of 50 l of warm water per day per person, and then a consumption of 30 l. Compare the two function terms and their graphical representations.
d) Show a graphical representation of the dependence of the collector area on a chosen storage capacity, assuming a thermal energy output of 50% for a house in central Germany.

Sun Collectors
This problem can be integrated into lessons about quadratic parabolas and uses their focal property in an application in the field of sun collectors.


Information:
– Direct solar radiation may be concentrated in a focus by means of parabolic sun collectors (Figure 4). These use the focal property of the quadratic parabola.
– Such sun collectors are figures with rotational symmetry; they arise by rotation of a quadratic parabola. Their inner surface is covered with a reflective mirror surface; that is why they are called parabolic mirrors.
– Sun beams may be assumed to be parallel. Thus, if they fall on such a collector parallel to its axis of rotation, the beams are reflected so that they all pass through the focus of the parabola. The thermal radiation energy may be focused in this way onto one point.
– The temperature of a heating medium which is led through this point becomes very high relative to the environment. This is used for heating purposes, but also for the production of electric energy.

Figure 4. Parabolic sun collectors (source: DLR).

a) A parabolic mirror was constructed by rotation of the parabola y = 0.125x². Determine its focal length (x and y are measured in metres).
b) A parabolic mirror has a focal length of 8 m. Which quadratic parabola was used for its construction?
c) Does the parabolic mirror with y = 0.0625x² have a greater or a smaller focal length than the one in b)? Generalise your result.
d) A parabolic mirror shall be constructed with a width of 2.40 m and a focal length of 1.25 m. How great is its arch, i.e., how much does the vertex lie deeper than the rim?
e) In Figure 5 you see a parabolic mirror, the EuroDish, with a diameter of 8.5 m. Determine from the figure, neglecting errors resulting from the projection view, its approximate focal length and the associated quadratic parabola.


Figure 5. EuroDish system (source: Wikimedia Commons).

Information: Other focussing sun collectors are figures with translational symmetry; they arise by shifting a quadratic parabola along the direction of one axis. They are called parabolic trough solar collectors (Figure 6).

Figure 6. Parabolic trough solar collectors in Almería, Spain and California, USA (source: DLR and FVEE/PSA/DLR).


f) The underlying function of a parabolic trough solar collector is given by y = 0.35x² (1 unit = 1 m). Where does the heating pipe have to be installed?
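All the focal-length questions above rest on the single relation f = 1/(4a) for a parabola y = ax²; a minimal sketch:

```python
# For y = a*x^2 the focus lies at (0, f) with f = 1/(4a).
# This one relation answers tasks a)-c) and f).
def focal_length(a: float) -> float:
    return 1.0 / (4.0 * a)

def coefficient(f: float) -> float:
    return 1.0 / (4.0 * f)

print(focal_length(0.125))   # a) 2.0 m
print(coefficient(8.0))      # b) y = 0.03125 x^2
print(focal_length(0.0625))  # c) 4.0 m: a smaller coefficient gives a larger f
print(focal_length(0.35))    # f) heating pipe about 0.71 m above the vertex

# d) arch depth of a mirror with width 2.40 m and focal length 1.25 m
a = coefficient(1.25)        # y = 0.2 x^2
depth = a * 1.2**2           # evaluate at the rim, x = 1.2 m
print(depth)                 # 0.288 m
```

The same helper also lets students check e): estimating a from the photograph immediately yields the approximate focal length.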

Photovoltaic Plant and Series-Connected Efficiencies
The aim of this problem is to make students familiar with the principle of series-connected efficiencies, as they occur in complex energy conversion devices. As an example, an off-grid photovoltaic plant for the conversion of solar energy to AC current as a self-sufficient energy supply is considered. The problem can be treated in a teaching unit on the topic of fractions.
Figure 7 shows the components of an interconnected energy conversion system used to build up a self-sufficient electrical energy supply. This kind of supply system is of special interest for developing countries, and also for buildings in rural off-grid areas (Figure 8). Figure 7 shows in schematic form the production of electrical energy from solar radiation with the help of a solar generator for off-grid applications. In order to guarantee a gap-free energy supply for times without sufficient solar radiation, a battery is included as an additional storage device.

Figure 7. Off-grid photovoltaic plant: solar generator (ηPV) → charge control (ηCC) → battery (ηB) → inverter (ηI) → consumer.

Figure 8. Illustration of an off-grid photovoltaic plant on the mountain hut “Starkenburger Hütte” (Source: Wikipedia).


Information: The components of an off-grid photovoltaic (PV) plant are 1) a solar generator, 2) a charge control, 3) an accumulator and 4) an inverter (optional, for AC applications). The solar generator converts the energy of the solar radiation into electrical energy as direct current (DC). The electricity is passed to a battery via a charge control. From there it can be transformed, directly or later after storage in the battery, into alternating current (AC), as needed by most electric devices. Unfortunately, it is not possible to use the radiation energy without losses. Every component of the conversion chain produces losses, so only a fraction of the energy input of each component becomes the energy input of the following component. The efficiency η of a component is defined by η = (power going out)/(power coming in). Power is the energy converted in 1 second. It is measured in the unit W or kW (1 kW = 1000 W). For comparison, standard electric bulbs need a power of 40 W, 60 W or 100 W; a hair dryer consumes up to 2 kW.
Assume in tasks a), b) and c) that all the electric current is first stored in the battery before it reaches the consumer.
a) Suppose the momentary radiation on the solar generator is 20 kW. Calculate the outgoing power of every component of the chain, if ηPV = 3/25, ηCC = 19/20, ηB = 4/5 and ηI = 23/25.
b) What is the total system efficiency ηtotal = (power gained for the consumer)/(insolated power)? How can you calculate ηtotal using only the values ηPV, ηCC, ηB and ηI? Give a formula for this calculation.
c) Transform the efficiency values given in a) into decimal numbers and percentages. Check your result from a) with these numbers.
d) How do the battery efficiency and the total system efficiency change if only 1/3 of the electric power delivered by the charge control is stored in the battery and the remaining 2/3 goes directly to the inverter? What is your conclusion from this?

Wind Energy Converter
This problem deals with wind energy converters. It can be treated in lessons on geometry, especially calculations with circles, or in lessons on quadratic parabolas. The conversion of quantities is practised.
Information: The nominal power of a wind energy converter depends on the rotor area A with diameter D, as shown in Figure 9 below.
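The efficiency chain of the photovoltaic problem above can be checked numerically; the sketch below simply multiplies the given fractions stage by stage:

```python
# Sketch for tasks a)-c) of the photovoltaic-plant problem:
# chain the component efficiencies with exact fractions.
from fractions import Fraction

eta = {
    "PV": Fraction(3, 25),   # solar generator
    "CC": Fraction(19, 20),  # charge control
    "B":  Fraction(4, 5),    # battery
    "I":  Fraction(23, 25),  # inverter
}

power = Fraction(20)  # kW of radiation on the solar generator
for name, e in eta.items():
    power *= e
    print(f"after {name}: {float(power):.5f} kW")

# b) the total efficiency is the product of the stage efficiencies
eta_total = Fraction(1)
for e in eta.values():
    eta_total *= e
print(float(eta_total))  # about 0.0839, i.e. roughly 8.4%
```

Using `Fraction` keeps the arithmetic exact, which matches the problem's placement in a teaching unit on fractions.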


Nominal power | Rotor diameter D
2500 kW | 80 m
2000 kW | 72 m
1500 kW | 64 m
1000 kW | 54 m
750 kW | 48 m
600 kW | 44 m
500 kW | 40 m
300 kW | 33 m
225 kW | 27 m

Figure 9. Nominal power related to the rotor area.

a) Interpret the meaning of Figure 9.
b) Show the dependence of the nominal power of the wind energy converter on the rotor diameter D and on the rotor area A, respectively, by graphs in coordinate systems.
c) Find the formula which gives the nominal power of the wind energy converter as a function of the rotor area and of the rotor diameter, respectively.
d) What rotor area would you expect to need for a wind energy converter with a nominal power of 3 MW? Give a reason for your answer. (Note: 1 MW = 1000 kW.) What length should the rotor blades have in this case?
Information:
– The energy that is converted in one hour [h] by the power of one kilowatt [kW] is 1 kWh.
– In central Europe, wind energy converters produce their nominal power on average for 2000 hours a year, when wind conditions are sufficient.
e) Calculate the average amount of energy in kWh which would be produced by a wind energy converter with a nominal power of 1.5 MW during one year in central Europe.
Information:
– An average household in central Europe consumes nearly 4000 kWh of electrical energy per year.
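Task e) and the related follow-up questions can be sketched numerically; the 44 m rotor diameter for a 600 kW machine is read from Figure 9, and the tip speed assumes the blade tips move on a circle of that diameter:

```python
import math

# e) annual energy of a 1.5 MW converter at 2000 full-load hours
energy_kwh = 1500 * 2000
print(energy_kwh)           # 3,000,000 kWh per year

# households at 4000 kWh each (purely theoretical: wind is intermittent)
print(energy_kwh / 4000)    # 750 households

# wind speed of 15 m/s converted to km/h
print(15 * 3.6)             # 54 km/h

# tip speed of a 600 kW converter (rotor diameter 44 m from Figure 9)
# at a rotation speed of 15 revolutions per minute
tip_ms = math.pi * 44 * (15 / 60)   # circumference times revolutions/s
print(tip_ms, tip_ms * 3.6)         # about 34.6 m/s, i.e. roughly 124 km/h
```

The tip speed turns out more than twice the wind speed, which is the comparison the question is driving at.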


f) In theory, how many average private households in central Europe could be supplied with electrical energy by a 1.5 MW wind energy converter? Why do you think that this could only be a theoretical calculation?
g) Assume the nominal power of a 600 kW energy converter is reached at a wind speed of 15 m/s, measured at the hub height. How many km/h is this? How fast do the tips of the blades move if the rotation speed is 15 per minute? Give the solutions in m/s and km/h, respectively. Compare the result with the wind speed.

Wind Energy Development
This problem requires the usage and interpretation of data and statistics, which is done in the context of wind energy development in Germany.
Information: At the end of 1990 the installed wind energy converters in Germany had a total nominal power of 56 MW. By the end of 2000 this amount had increased to a total of 6113 MW. Power is the energy converted in a time unit; it is measured in the unit Watt [W]. 10⁶ W are one megawatt [MW]. The following table shows the development of the newly installed wind power in Germany in the years 1991–2000.

Table 1. Development of new installed wind energy in Germany

Year | Number of new installed wind energy converters | Total of new installed nominal power [MW] | Total of nominal power [MW]
1991 | 300 | 48 |
1992 | 405 | 74 |
1993 | 608 | 155 |
1994 | 834 | 309 |
1995 | 911 | 505 |
1996 | 804 | 426 |
1997 | 849 | 534 |
1998 | 1010 | 733 |
1999 | 1676 | 1568 |
2000 | 1495 | 1665 |

a) Fill in the missing data in the 4th column.
b) Show the development of the annually new installed nominal power and of the total nominal power in graphical representations.


c) Considering only the data of the years 1991–1998, what development with respect to the installation of new wind power in Germany could be expected, in your opinion? Give a well-founded answer! Compare your answer with the real data given for 1999 and 2000 and comment on it. What is your projection for 2005? Why?
d) Using the data in Table 1, calculate for each of the years 1991–2000 the average size of the newly installed wind energy converters in kW. (Note: 1 MW = 1000 kW.) Show the respective development graphically and comment on it. Can you offer a projection for the average size of a wind energy converter that will be installed in 2010?
e) Comment on the graphical representation in Figure 10. Also take into account political and economic statements and possible arguments.
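Tasks a) and d) can be sketched numerically from the Table 1 data; the cumulative column then comes out close to the roughly 6100 MW total quoted in the text for the end of 2000:

```python
# Sketch for tasks a) and d) of the wind-energy-development problem.
# (year, number of new converters, newly installed MW) from Table 1
data = [
    (1991, 300, 48), (1992, 405, 74), (1993, 608, 155),
    (1994, 834, 309), (1995, 911, 505), (1996, 804, 426),
    (1997, 849, 534), (1998, 1010, 733), (1999, 1676, 1568),
    (2000, 1495, 1665),
]

total_mw = 56.0  # installed at the end of 1990 (from the text)
for year, count, new_mw in data:
    total_mw += new_mw                  # a) fills the 4th column
    avg_kw = new_mw * 1000 / count      # d) average converter size in kW
    print(year, total_mw, round(avg_kw))
```

The average size grows from about 160 kW in 1991 to over 1 MW in 2000, which is the trend task d) asks students to extrapolate.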

Figure 10. Development of wind energy use in Europe: installed capacity in gigawatt, 1990–2020. Predictions (European Commission Advanced Scenario 1996, EWEA 1997, European Commission White Paper 1997, European Commission PRIMES 1998, Greenpeace/EWEA 2002, IEA 2002) versus the actual development.

Betz’ Law and Differentiation
This problem deals with the efficiency of a wind energy converter; it can be treated in lessons on differentiation and the determination of local extreme values.


Information: 2% of the radiated solar energy is converted into kinetic energy of air molecules. In combination with the earth’s rotation, this results in wind. The kinetic energy of an air mass Δm is E = ½ Δm v², in which v denotes the velocity of the air mass. Given the density of air ρ ≈ 1.2 g/l and the relation Δm = ρ ΔV with the volume element ΔV, the kinetic energy can be written as E = ½ ρ ΔV v². The power is defined as the ratio of energy to time, P = E/Δt.
a) A wind volume ΔV flows through a rotor area A and needs the time Δt to travel the distance Δs. The speed is therefore v = Δs/Δt. Determine the general formula for the volume element which passes the rotor area A during the time interval Δt, as a function of the wind speed.
b) Give the formula for the amount of wind power Pwind which passes through the rotor area A, as a function of the wind velocity. Show that the power increases with the third power of the wind velocity.
Information: A rotor of a wind energy converter with area A slows down the incoming wind from the speed v1 in front of the rotor to the lower speed v2 behind the rotor (Figure 11). The wind speed in the rotor plane itself can be shown to be the average of v1 and v2, i.e., v = (v1 + v2)/2. The converted power is then given by:

Pc = P1 − P2 = ½ ρ (ΔV/Δt) (v1² − v2²).

Figure 11. Wind flow through the rotor area.

c) Express the formula for the determination of Pc as a function of A, v1 and v2.
d) Describe the converted power Pc in c) as a function of the variable x = v2/v1 and of v1.
Information: The efficiency of a wind energy converter (power coefficient) is defined as the ratio of the converted power to the wind power input, cP = Pc/Pwind.
e) Express the power coefficient as a function of the variable x = v2/v1. Draw the graph of this function. Note that x ∈ [0, 1]; why?
f) Determine the value xmax which corresponds to the maximum value of the power coefficient, the so-called Betz efficiency. This is the value of x which gives the best energy conversion.
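Working through tasks a)–e) with ΔV/Δt = A·v and v = (v1 + v2)/2 gives the power coefficient cP(x) = (1 + x)(1 − x²)/2; a quick numerical check recovers the Betz limit 16/27 at x = 1/3:

```python
# Numerical check of tasks e) and f): the power coefficient as a
# function of x = v2/v1 works out to c_P(x) = (1 + x)(1 - x^2)/2,
# maximal at x = 1/3 with the Betz limit 16/27 (about 0.593).
def c_p(x: float) -> float:
    return 0.5 * (1 + x) * (1 - x * x)

# scan x over [0, 1] instead of differentiating
best_x = max((i / 100000 for i in range(100001)), key=c_p)
print(best_x, c_p(best_x))
```

Differentiating confirms the scan: cP'(x) = (1 − 2x − 3x²)/2 vanishes at x = 1/3, the value task f) asks for.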

Biomass and Reduction of CO2 Emissions This problem deals with fossil fuels and biomass, especially with the production of CO2 emissions and possibilities for their reduction. The conversion of quantities is practised, and knowledge of rule of three and percentage calculation is required. Information: In Germany for example, an average private household consumes nearly 18000 kWh of energy annually. 80% of this amount is for heating purposes and 20% for electrical energy. The energy demand for heating and hot water is mainly covered by the use of fossil fuels like natural gas, fuel oil or coal. Assume that the calorific values of gas, oil and coal can be converted to useable heating energy with a boiler efficiency of 85%. This means that 15% is lost in each case.

a) The following typical specific calorific values are given Natural gas: 9.8 kWh/m³ Fuel oil: 11.7 kWh/kg Coal: 8.25 kWh/kg (That is, the combustion of 1 m3 natural gas supplies 9.8 kWh, 1 kg fuel oil supplies 11.7 kWh and 1 kg coal supplies 8.25 kWh.) What amount of these fuels annually is necessary for a private household in each case? b) The specific CO2-emissions are approximately: Natural gas: 2.38 kg CO2/kg Fuel oil: 3.18 kg CO2/kg Coal: 2.90 kg CO2/kg The density of natural gas is nearly 0.77 kg/m³. How many m³ of CO2 each year for a private household in Germany does it take in each case? Hint: Amounts of material could be measured with the help of the unit ‘mole’. 1 mole of CO2 weights 44 g and has a volume of approximately 22.4 l. 59


Information: Wood is essentially built up from CO2 taken from the atmosphere and water. The bound CO2 is discharged by burning the wood and is taken up again by growing plants. This is the so-called CO2 circuit. Spruce wood has a specific calorific value of nearly 5.61 kWh/kg. Its specific CO2 emissions are approximately 1.73 kg CO2/kg.
c) How many kg of spruce wood would be needed annually by a private household instead of gas, oil or coal? (Assume again a boiler efficiency of 85%.) How many m³ of fossil CO2 emissions could be saved in this case?
Information: Spruce wood as piled-up split firewood has a storage density of 310 kg/m³.
d) How much space has to be set aside in an average household for a fuel storage room which contains a whole year’s supply of wood? Compare this with your own room!
e) Discuss the need for saving heat energy with the help of heat insulation.
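Tasks a)–d) can be sketched as below, under the stated 85% boiler efficiency; only the fuel-oil CO2 volume is worked out in full as an example:

```python
# Sketch for tasks a)-d) under the stated assumptions
heating_kwh = 18000 * 0.80     # 80% of the annual 18,000 kWh is for heating
fuel_kwh = heating_kwh / 0.85  # boiler efficiency 85%

calorific = {"gas (m^3)": 9.8, "oil (kg)": 11.7, "coal (kg)": 8.25,
             "spruce wood (kg)": 5.61}
amounts = {k: fuel_kwh / v for k, v in calorific.items()}  # a) and c)
for k, v in amounts.items():
    print(f"{k}: {v:.0f}")

# b) CO2 volume for fuel oil: mass -> moles -> litres -> m^3
oil_co2_kg = amounts["oil (kg)"] * 3.18
oil_co2_m3 = oil_co2_kg * 1000 / 44 * 22.4 / 1000
print(f"oil CO2: {oil_co2_m3:.0f} m^3")

# d) storage volume for a year's supply of split spruce firewood
print(f"wood store: {amounts['spruce wood (kg)'] / 310:.1f} m^3")
```

Roughly 3 t of spruce wood, i.e. a store of about 10 m³, replaces the fossil fuels, which makes the comparison with the students' own rooms in task d) concrete.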

Automobile Energy Consumption
This problem can be treated in lessons on trigonometry. Its solution requires knowledge of the rule of three. The problem makes clear the dependence of an automobile’s energy consumption on the distance–height profile, the moved mass and the velocity.
Tim and Lisa make a journey through Europe. Just before the border with Luxembourg their fuel tank is empty. Fortunately they have a reserve tank filled with 5 l of fuel. “Let’s hope it will be enough to reach the first filling station in Luxembourg. There, the fuel is cheaper than here,” Tim says. “It would be good if we had an exact description of the route; then we would be able to calculate our range,” answers Lisa.
Information:
– In order to drive, the resisting forces have to be overcome, so a sufficient driving force Fdrive is needed. For an average standard car, the law for this force (in N) is given by the following formula: Fdrive = (0.2 + 9.81 sin α) · m + 0.3 v², for Fdrive ≥ 0, where the moving mass m (in kg) is the mass of the vehicle, passengers and packages; v is the velocity (in m/s); and α is the angle relative to the horizontal line. α is positive in the uphill direction and negative in the downhill case (Figure 12).
– The energy E (in Nm) which is necessary for driving can be calculated, in the case of a constant driving force, by E = Fdrive · s, with s the actual distance driven (in m).
– The primary energy contained in the fuel amounts to about 9 kWh for each litre of fuel. (kWh is the symbol for the energy unit ‘kilowatt-hour’; 1 kWh = 3,600,000 Nm.)

PROBLEMS FOR THE SECONDARY MATHEMATICS

– The efficiency of standard combustion engines in cars for average driving conditions is between 10% and 20% nowadays; this means only 10%–20% of the primary energy in the fuel is available to generate the driving forces.

Figure 12. Definition of the angle α.

The distance which Tim and Lisa have to drive to the first filling station in Luxembourg is given approximately by a graphical representation like the one in Figure 13. (Attention: note the different scaling of the coordinate axes.) The technical data sheet of their vehicle gives the unladen weight of their car as about 950 kg. Tim and Lisa together weigh c. 130 kg, and their packages nearly 170 kg. The efficiency of the engine can be assumed to be c. 16%.

Figure 13. Distance-Height-Diagram to the next filling station (h is the height above mean sea level). [The profile is divided into three sections I, II and III; the h-axis is marked at 140 m, 200 m and 350 m.]

a) Can Tim and Lisa take the risk of not searching for a filling station before the frontier? Assume at first that they drive at a speed of 100 km/h.
b) Would Tim and Lisa have less trouble if they had only 50 kg of packages instead of 170 kg?
c) Would the answer to a) change if Tim and Lisa chose a speed of only 50 km/h?
Help for a):
i) Note that the speed in the formula for F_drive has to be measured in m/s.
ii) The value of sin α can be calculated from the information given in Figure 13.
iii) Determine for each section the force F_drive and the energy needed. The distance s has to be measured in m. Convert the energy from Nm to kWh.

BRINKMANN AND BRINKMANN

iv) Determine the total energy which is needed for the whole distance as the sum over the three different sections.
v) How many kWh of driving energy are provided by the 5 l of reserve fuel? Consider the efficiency of the motor.
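The calculation scheme of steps i)-v) can be sketched in Python. Since the exact section data of Figure 13 are not reproduced here, the distances and height differences below are invented, purely to illustrate the procedure:

```python
NM_PER_KWH = 3_600_000          # 1 kWh = 3 600 000 Nm (from the problem text)

def f_drive(mass_kg: float, v_ms: float, sin_alpha: float) -> float:
    """Driving force in N: (0.2 + 9.81*sin(alpha))*m + 0.3*v^2, never negative."""
    return max(0.0, (0.2 + 9.81 * sin_alpha) * mass_kg + 0.3 * v_ms ** 2)

# Invented sections (driven distance in m, height difference in m);
# the real values would be read off Figure 13.
sections = [(4_000, 150), (6_000, -210), (5_000, 60)]

mass = 950 + 130 + 170          # car + passengers + packages, in kg
v = 100 / 3.6                   # 100 km/h converted to m/s

energy_kwh = 0.0
for s, dh in sections:
    sin_alpha = dh / s          # sin(alpha) from the section's climb and length
    energy_kwh += f_drive(mass, v, sin_alpha) * s / NM_PER_KWH

# Step v): usable driving energy in the 5 l reserve at 16% engine efficiency.
available_kwh = 5 * 9 * 0.16    # = 7.2 kWh
```

With these invented numbers the trip needs about 2 kWh of driving energy, well inside the 7.2 kWh available; with the real data from Figure 13 the comparison is carried out in exactly the same way.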

Automobiles: Forces, Energy and Power

This problem can be treated in higher secondary mathematics education, in the context of differential and integral calculus. It shows the dependence of an automobile's power and energy consumption on the distance-height-profile, the moved mass and the velocity.
Kay has a new electric vehicle, of which he is very proud. He wants to drive his girlfriend Ann from the disco to her home. Ann jokes: “You will never get over the hill to my home with this car!” “I bet that I will,” says Kay. The distance-height-characteristic of the street from the disco (D) to the house (H) in which Ann lives is shown in Figure 14, and it can be described by the following function:

h(x) = (1/1000) · (−4x² + 104x + 300) for x ∈ [0; 20],

where x and h are measured in kilometres (km).

Figure 14. Distance-Height-Diagram between D and H (h in km on the vertical axis, x in km on the horizontal axis).

a) Show that it is possible to calculate the real distance s corresponding to a given height function h over the interval [x1, x2] with the help of the following formula:

s = ∫_{x1}^{x2} √(1 + (h′(x))²) dx.

Help: Consider the right-angled triangle shown in Figure 15, with horizontal leg Δx, vertical leg Δh and hypotenuse Δs.

Figure 15. Geometrical representation.

b) How long is the real distance which Kay has to drive from the disco to the house where Ann is living?
Help: Show that
∫ √(1 + x²) dx = (1/2) · (x · √(1 + x²) + ln(x + √(1 + x²))) + const.
with the help of the derivative of (1/2) · (x · √(1 + x²) + ln(x + √(1 + x²))).
(Hint: This partial result is also helpful for solving problem f).)
c) Let α denote the angle between the tangent to the curve h at the point x0 and the x-axis. Prove that
sin α = h′(x0) / √(1 + (h′(x0))²).
(Note that h′(x0) = tan α.)

d) Assume Kay wants to drive at a constant speed of 110 km/h. Determine the driving power necessary at the top of the hill (maximum of h) and at the points with h(x) = 0.4 and h(x) = 0.8. For this purpose you need the following data: Kay's electric vehicle has an empty weight of 620 kg; Kay and Ann together weigh nearly 130 kg.
Information:
– In order to drive, the resisting forces have to be overcome, so a sufficient driving force F_drive is needed. For an average standard car, this force (in N) is given by the following formula:
F_drive = (0.2 + 9.81 · sin α) · m + 0.3 · v², for F_drive ≥ 0,
where the moving mass m (in kg) is the mass of the vehicle, passengers and packages; v is the velocity (in m/s); and α is the angle relative to the horizontal line. α is positive in the uphill direction and negative in the downhill case (Figure 16).


– The driving power P (in Nm/s), which is needed to hold the constant speed, can be calculated as the product of the driving force (in N) and the velocity (in m/s):
P = F_drive · v.
The power P is measured in the unit [kW], with 1 kW = 1 000 Nm/s.

Figure 16. The angle α depending on the height function h(x).

e) Kay's electric vehicle has a nominal power of 25 kW. Is it possible for him to bring Ann home?
f) Determine the driving energy which has to be consumed for the route from the disco to Ann's home. Assume that Kay drives uphill at a speed of 80 km/h and downhill at a speed of 110 km/h. (Attention: Because h(x) as well as x are expressed in km in the function equation of h, the energy in the following equation is obtained in N·km = 1 000 Nm.)
Information:
– The pure driving energy E (in Nm), which is necessary for driving, can be calculated as:
E = ∫_{0}^{s0} F_drive ds = ∫_{0}^{x0} F_drive · √(1 + (h′(x))²) dx,
where s is the actual distance (in m) driven.
– The energy E is usually measured in kilowatt-hours [kWh]; 1 kWh = 3 600 000 Nm.
g) The actual charged electrical energy in the batteries of Kay's vehicle is 6 kWh. The driving efficiency of his electric vehicle is nearly 70%; this means only 70% of the stored energy can be used for driving. Is the charging status of Kay's batteries sufficient to bring Ann to her home, under the assumptions in f)? And will Kay make it home afterwards?
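Parts b), d) and e) can be checked numerically with a short sketch (ours, not the authors'). It uses the height profile h(x) = (1/1000)·(−4x² + 104x + 300) on [0; 20] as read from the partly garbled problem statement, so treat the exact figures as provisional:

```python
import math

def h_prime(x: float) -> float:
    """Derivative of h(x) = (-4x^2 + 104x + 300)/1000, x and h in km."""
    return (-8 * x + 104) / 1000

def arc_length_km(a: float, b: float, n: int = 10_000) -> float:
    """Midpoint-rule approximation of s = integral of sqrt(1 + h'(x)^2) dx."""
    step = (b - a) / n
    return sum(math.sqrt(1 + h_prime(a + (i + 0.5) * step) ** 2) * step
               for i in range(n))

# Part d) at the hilltop: h'(x) = 0 there, so sin(alpha) = 0.
mass = 620 + 130                 # vehicle + Kay and Ann, in kg
v = 110 / 3.6                    # 110 km/h in m/s
f_top = (0.2 + 9.81 * 0.0) * mass + 0.3 * v ** 2   # driving force in N
p_top_kw = f_top * v / 1000      # driving power in kW
```

The hilltop power comes out at roughly 13 kW, comfortably below the 25 kW nominal power of part e); the real distance of part b) is barely longer than the 20 km horizontal distance because the slopes are gentle.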


CLASSROOM IMPLEMENTATION

The problems on environmental issues developed by the authors should be seen as an offer of teaching material. In each case, the students' abilities have to be considered. In lower-achieving classes it might be advisable not to present every problem at full length. In addition, lower achievers need a lot of help in solving complex problems that require several calculation steps. The help given in some problems, as in example 1 above, addresses such students. For higher achievers, the problems should be presented without much of the included help. It might even be of benefit not to present the given hints from the beginning. Students would thus have to find out which quantities are still needed in order to solve the problem. The problem would become more open and the students would be more involved in modelling processes. As the intention of the authors is also an informative one, namely to give more insight into the field of future energy issues, the mathematical models and formulas are mostly given in the problem texts. Students are generally not expected to work out the often complex contexts by themselves; these are already presented, thus guaranteeing realistic situation descriptions. The emphasis in the modelling activities lies rather in the argumentation and interpretation processes demanded, recognising that mathematical solutions lead to a deeper understanding of the studied contents. In the context of an evaluation of lessons dealing with problems like those presented in this paper, students were asked, among other things, to express what they had mainly learned. The answers given can be divided into three groups: mathematical concepts, contents concerning renewable energy topics, and useful problem-solving strategies. As regards the last point, students stressed especially that they had learned that it is necessary to read the texts very carefully, and also to consider the figures and tables very carefully.
Almost all students expressed that they would like to work on many more problems of this kind in mathematics classes, as the problems are interesting, relevant for life, and more fun than pure mathematics. Classroom experiences show that students react in different ways to the problem topics. While some are horrified to recognise, for example, that the worldwide oil reserves will already be running low during their lifetime, others are unmoved by this fact, as twenty or forty years in the future is not a time they worry about. In school lessons there are, again and again, situations in which students drift off into political and social discussions related to the problem contexts. Although desirable in itself, this would sometimes cost too much of the time needed for mathematical education. Cooperation with teachers of other school subjects would be profitable where possible.

OUTLOOK AND FINAL REMARKS

In order to integrate future energy issues into the curricula of public schools, several initiatives have already been started in Germany, supported by and in co-operation with the ‘Deutsche Gesellschaft für Sonnenenergie e.V. (DGS)’, the German section of the ISES (International Solar Energy Society). There exists a European project, named “SolarSchools Forum”, that aims to integrate future energy issues into the curricula of public schools. In the context of this


project, the German society for solar energy, the DGS, highlights the teaching material the authors created (http://www.dgs.de/747.0.html). Most of this material is available only in German. This article is a contribution towards making these materials accessible in English also. (The English publications up to now (see e.g. Brinkmann & Brinkmann, 2007) present only edited versions of some of the problems.) Although education in the field of future energy issues is of general interest, the project that we presented in this paper seems to be the only major activity focusing especially on mathematics lessons. The number of problems should thus be increased, especially with problems which deal with a combination of different renewable energy converters, such as hybrid systems, to give an insight into the complexity of system technology. Additionally, the sample mathematical problems on renewable energy conversion and usage have to be adjusted continually to new developments because of the dynamic evolution of the technology in this field.

REFERENCES

Brinkmann, A., & Brinkmann, K. (2005). Mathematikaufgaben zum Themenbereich Rationelle Energienutzung und Erneuerbare Energien. Hildesheim, Berlin: Franzbecker.
Brinkmann, A., & Brinkmann, K. (2007). Integration of future energy issues in the secondary mathematics classroom. In C. Haines, P. Galbraith, W. Blum, & S. Khan (Eds.), Mathematical modelling (ICTMA 12): Education, engineering and economics (pp. 304–313). Chichester: Horwood Publishing.
Brinkmann, A., & Brinkmann, K. (2009). Energie-Hybridsysteme – Mit Mathematik Fotovoltaik und Windkraft effizient kombinieren. In A. Brinkmann & R. Oldenburg (Eds.), Schriftenreihe der ISTRON-Gruppe: Materialien für einen realitätsbezogenen Mathematikunterricht, Band 14 (pp. 39–48). Hildesheim, Berlin: Franzbecker.
Hudson, B. (1995). Environmental issues in the secondary mathematics classroom. Zentralblatt für Didaktik der Mathematik, 27(1), 13–18.
National Curriculum Council. (1989). Mathematics non-statutory guidance. York: National Curriculum Council.

Astrid Brinkmann Institute of Mathematics Education University of Münster, Germany Klaus Brinkmann Umwelt-Campus Birkenfeld University of Applied Science Trier, Germany


TIM BROPHY

3. CODING THEORY

INTRODUCTION

Throughout the world teachers of mathematics instruct their pupils in the wonders of numbers. It is always a challenge to find areas where the topics covered at school intersect the world of the student. The search for such intersection points is well rewarded by arousing the interest of the student in the topic. Teachers already have a very heavy workload and may simply not have the time to do such research. This chapter attempts to link the students' world of shopping, curiosity and music to modular arithmetic, trigonometry and complex numbers. It is hoped that the busy teacher will find here ideas to enliven classwork and use the students' natural curiosity as a pedagogical tool in the exploration of numbers. In the world today we rely on digital information for many things. Without digital storage there would be no such thing as a CD or a DVD, satellite television or an mp3 player. NASA would not have been able to receive pictures from Mars. Mobile phones would not work.

Figure 1. Asteroid ice (Courtesy NASA/JPL-Caltech).

Information stored in digital form can, indeed will, become corrupted. This leads to errors in the information stored or transmitted. This article is about the two processes of first detecting and then correcting these errors. Sometimes the errors are unimportant. You may be speaking to someone on a telephone line with a faint crackle in the background. This crackle, while it may be irritating, does not prevent the transmission or reception of information: your voices. Sometimes the errors that could occur are very important. You may be making a purchase with a credit card. If there is an error in transmitting or receiving the amount of money involved it could seriously upset either you or your bank.

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 67–85. © 2011 Sense Publishers. All rights reserved.

BROPHY

The simplest way of storing information for use in digital media is in binary form. All the information is encoded as a sequence of the two digits 0 and 1. A computer is a machine that contains banks of switches that can be either on or off. A switch that is on can be represented by the number 1; a switch that is off by the number 0. To store and transmit information in this format means working with only two integers, 0 and 1. These are called binary digits, from which we get the word bit. When we limit the integers available for arithmetic the process is called modular arithmetic. This is the key to much error detection and correction. We will look first at errors in bar codes.

BAR CODES

Modular Arithmetic

Prime numbers are numbers that are divisible only by themselves and one. Prime numbers are involved in coding information so that it can be used in digital media. It turns out that many of the uses of prime numbers depend on modular arithmetic. What is this? While the numbers go on forever, human beings don't. A way to begin looking at the endless rise of the numbers is to look at them in cycles. We do not count the hours of the day as rising forever. In fact we have two different systems for keeping track of time, a twelve-hour and a twenty-four-hour system, and they are both cyclic. The older of the two takes a fundamental cycle of twelve and begins again after twelve is reached. An addition table would look something like the following (Table 1):

Table 1. Addition on a clock

 +  |  1   2   3   4   5   6   7   8   9  10  11  12
 1  |  2   3   4   5   6   7   8   9  10  11  12   1
 2  |  3   4   5   6   7   8   9  10  11  12   1   2
 3  |  4   5   6   7   8   9  10  11  12   1   2   3
 4  |  5   6   7   8   9  10  11  12   1   2   3   4
 5  |  6   7   8   9  10  11  12   1   2   3   4   5
 6  |  7   8   9  10  11  12   1   2   3   4   5   6
 7  |  8   9  10  11  12   1   2   3   4   5   6   7
 8  |  9  10  11  12   1   2   3   4   5   6   7   8
 9  | 10  11  12   1   2   3   4   5   6   7   8   9
10  | 11  12   1   2   3   4   5   6   7   8   9  10
11  | 12   1   2   3   4   5   6   7   8   9  10  11
12  |  1   2   3   4   5   6   7   8   9  10  11  12

Notice that the number 12 plays the part that is normally taken by zero. No matter what we add to 12 it remains the same:
12 + 2 = 2

CODING THEORY

12 + 7 = 7
12 + 12 = 12
What this means is that two hours after 12 noon is 2 pm, seven hours after 12 noon is 7 pm, and twelve hours after 12 noon is 12 midnight. Adding hours is arithmetic. To calculate the sum of two numbers in this system we add them together and then, if the total is greater than twelve, subtract twelve from the total. We learn to do this at a very early age and do not see any problems with it. The method is illustrated below:
7 + 2 = 9
7 + 6 = 13 and 13 – 12 = 1
5 + 11 = 16 and 16 – 12 = 4
Since the number 12 plays the part of zero we will use zero whenever the number 12 appears. This gives rise to Table 2, which is addition using only the numbers from zero to eleven. We call this addition modulo 12 and it is an example of modular arithmetic.

Table 2. Addition modulo 12

 +  |  1   2   3   4   5   6   7   8   9  10  11   0
 1  |  2   3   4   5   6   7   8   9  10  11   0   1
 2  |  3   4   5   6   7   8   9  10  11   0   1   2
 3  |  4   5   6   7   8   9  10  11   0   1   2   3
 4  |  5   6   7   8   9  10  11   0   1   2   3   4
 5  |  6   7   8   9  10  11   0   1   2   3   4   5
 6  |  7   8   9  10  11   0   1   2   3   4   5   6
 7  |  8   9  10  11   0   1   2   3   4   5   6   7
 8  |  9  10  11   0   1   2   3   4   5   6   7   8
 9  | 10  11   0   1   2   3   4   5   6   7   8   9
10  | 11   0   1   2   3   4   5   6   7   8   9  10
11  |  0   1   2   3   4   5   6   7   8   9  10  11
 0  |  1   2   3   4   5   6   7   8   9  10  11   0

Barcodes contain information that can be read electronically by scanners. This eliminates certain errors and minimises others. The barcode, as shown in Figure 2, consists of various groups of digits. The first two or three digits give the country to which the barcode has been issued. This is not the same as the country in which the product is manufactured. In Figure 2 the first three digits, 539, indicate that the barcode was issued to some group in Ireland. The remaining digits in this example, except for the final one, have no meaning. The final digit is called a check digit. It is in evaluating the check digit that modular arithmetic is used. When a barcode is scanned errors can


Figure 2. Barcode.

be caused by a multitude of events, from dirt blocking either a bar or a space to random electronic errors. While no automatic system can trap all errors, a check digit can catch quite a few.

Check Digit

In the EAN-13 (European Article Number) system (Figure 3) the first two (or sometimes three) digits give the country to which the barcode has been issued. The next five (or four, if the country used three) digits belong to a particular company. The following five digits identify the actual product. There is one more digit to go: the check digit. It is calculated from all the digits already present.

Figure 3. Meaning of the numbers.

The barcode reader will do the same calculation and if a different digit is arrived at then an error has occurred. For some errors this can even be corrected. The process of calculating the check digit is quite simple to carry out. Indeed, a description of the process is much more complicated than actually doing it. Consider the fictitious bar code above: 5391234512342. The final digit, 2, is the check digit. It is calculated using the following steps (Table 3):

Table 3. How to calculate the check digit

1. Write the number in a row.
2. Label the digits alternately Odd (O) or Even (E), starting at the right and moving left.
3. Make out a table with 3 below each O and 1 below each E. These are the weights.
4. Multiply the digits by their weights mod 10.
5. Add up the new row mod 10.
6. If the total is t then the check digit, c, is the solution to t + c = 0 mod 10.
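The steps of Table 3 can be sketched in Python; a small brute-force helper also recovers a single smudged digit (written '#'). This is an illustrative sketch, not production barcode software:

```python
def ean13_check_digit(digits12: str) -> int:
    """Check digit for the first 12 digits of an EAN-13 barcode.

    Weights alternate 1, 3, 1, 3, ... from the left, which is the same as
    giving weight 3 to the odd positions counted from the right (Table 3).
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def recover_smudged(code13: str) -> str:
    """Recover a single unreadable digit, marked '#', in a full 13-digit code."""
    for x in "0123456789":
        candidate = code13.replace("#", x)
        if ean13_check_digit(candidate[:12]) == int(candidate[12]):
            return candidate
    raise ValueError("no digit makes the code consistent")
```

For the chapter's example, `ean13_check_digit("539123451234")` returns 2, and `recover_smudged("4#02030145620")` fills in the digit 9.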


All this may seem very complicated but an example should make it clear. The barcode we invented above, before a check digit was assigned to it, was 539123451234. We will go through the process of calculating the check digit.

Table 4. Calculation of check digit

Barcode     |   5    3    9    1    2    3    4    5    1    2    3    4
Position    |   E    O    E    O    E    O    E    O    E    O    E    O
Weights     |   1    3    1    3    1    3    1    3    1    3    1    3
Calculation | 5×1  3×3  9×1  1×3  2×1  3×3  4×1  5×3  1×1  2×3  3×1  4×3
Result      |   5    9    9    3    2    9    4    5    1    6    3    2

Adding these results together modulo 10 gives us
5 + 9 + 9 + 3 + 2 + 9 + 4 + 5 + 1 + 6 + 3 + 2 = 8.
This is the value of t above. We calculate the check digit by subtracting this number from 10 to get c = 10 – 8 = 2. When the barcode reader analyses this barcode it takes only a fraction of a second to calculate the check digit and compare it to the one on the barcode. However, the check digit can do more than this. Quite often a particular bar can be smudged and be unreadable to the scanner. If there is only one error and the check digit is legible then the barcode reader can calculate the correct digit by reversing the process, and shopping can proceed. For example, if the scanner reads the number 4#02030145620, where # represents an unreadable smudge, the calculation would proceed as follows (writing x for the unknown digit).

Table 5. Correction of an error

Barcode     |   4    x    0    2    0    3    0    1    4    5    6    2
Position    |   E    O    E    O    E    O    E    O    E    O    E    O
Weights     |   1    3    1    3    1    3    1    3    1    3    1    3
Calculation | 4×1  x×3  0×1  2×3  0×1  3×3  0×1  1×3  4×1  5×3  6×1  2×3
Result      |   4   3x    0    6    0    9    0    3    4    5    6    6

Adding these results together modulo 10 gives us
4 + 3x + 0 + 6 + 0 + 9 + 0 + 3 + 4 + 5 + 6 + 6 = 3x + 3.
Since the check digit read by the scanner is 0, we need 3x + 3 = 0 mod 10, that is, 3x = 7 mod 10. We are looking for a digit which, when multiplied by 3, gives a remainder of 7 on division by 10. 7 is not divisible by 3, neither is 17, but 3 × 9 = 27. This tells us that the unreadable digit was 9. The error has been both detected and corrected.
How good is this method at picking up errors? Studies have shown that slightly more than 79% of errors are the replacement of one digit by a different digit. Suppose a digit, d, whose weight is 1 has been replaced by the digit e. The weighted sum will now change by the amount d – e (mod 10). This will only go undetected if d – e = 0 (mod 10), which means that d = e, so there is no error. Similarly, if a digit whose weight is 3 was altered we would have 3(d – e) = 0 (mod 10); since 3 has no common factor with 10, this again forces d = e, so there is no error. The check digit approach thus traps all of the commonest type of error.

DEEP SPACE

Figure 4. Saturn (Courtesy NASA/JPL-Caltech).

We have all looked in wonder at the photographs sent back from deep space by NASA (Figure 4). How are they sent? How are they received? How do we know that the information has not been corrupted? Mathematics is, of course, the answer to all these questions. To begin, the information must be recorded. In other words a digital photograph has to be taken. This process stores all the required information as a sequence of the digits 0 and 1. This information, a sequence of bits, is sent to Earth as a bitstream.

Images as Bits

The grid drawn in Figure 5 has certain squares coloured black. It represents a fairly badly drawn smiley face. The only colours used here are black and white. This is very easy to translate into the digits that are used by all computers. If we represent a white square by the digit 1 and a black square by the digit 0 then the whole picture is represented by the number


1001001110010011111111111100011101101110101111011101101111100111

Figure 5. Smiley face.

The receiver of the above number can reconstruct the face only because it is known that the face is drawn on a grid with 8 squares horizontally and 8 squares vertically. Each of the eight squares in a row can have a white colour or not. Eight squares will have 256 possible combinations of white or black (1 or 0). All black will be 0 and all white will be 255. A collection of eight bits is called a byte. Going from left to right, the decimal values of a 1 in the eight positions are
128  64  32  16  8  4  2  1
while the decimal value of a zero is always zero. For example the number
10010011 = 128 + 0 + 0 + 16 + 0 + 0 + 2 + 1 = 147.
The big sequence of zeros and ones written above can be thought of as the sequence of the following eight numbers:
10010011 = 147
10010011 = 147
11111111 = 255
11000111 = 199
01101110 = 110
10111101 = 189


11011011 = 219
11100111 = 231
Can you see that
0110110001101100000000000011100010010001010000100010010000011000
will give the inverse smiley face of Figure 6?

Figure 6. Inverse smiley.
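A short sketch in Python (ours, not the chapter's) converts a byte string to its decimal value and draws such a bit string as a grid:

```python
def byte_value(bits: str) -> int:
    """Decimal value of an 8-bit string, most significant bit first."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)   # shift left and add the next bit
    return value

def render(bits: str, width: int = 8) -> str:
    """Draw the grid: '1' (white) becomes '.', '0' (black) becomes '#'."""
    rows = [bits[i:i + width] for i in range(0, len(bits), width)]
    return "\n".join(r.replace("1", ".").replace("0", "#") for r in rows)
```

Feeding the 64-digit smiley string to `render` reproduces the face, and complementing every bit swaps '.' and '#', giving the inverse face.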

This shows how simple images can be translated into a sequence of binary digits. Depending on the information that is going to be transmitted, the details of the coding will differ; this does not matter here. You can see how a sequence of binary digits can carry even pictures. The next question is how to get the binary sequence from outer space to Earth.

Phase Modulation

You have probably seen the graph of y = sin x many times before (Figure 7). This graph, and modifications that can be carried out on it, is the secret to transmitting information from deep space back to our own planet.

Figure 7. Sin(x).


Electromagnetic radiation travels through space at about 300 000 km/s. Nothing can travel any faster, and the distances to be covered are so vast that this speed is needed. Electromagnetic radiation, depending on its frequency, is perceived as light, radio, x-rays, γ-rays etc. It is as radio waves that NASA's spacecraft transmit information back to their receivers. Radio waves have the advantage of requiring little energy to produce and of passing easily through the Earth's atmosphere. How can these radio waves carry information? Specifically, how can these waves carry a stream of the digits 0 and 1? To answer this question we need to look more closely at the graph of Sin(x). Figure 8 shows two sine waves with different phases.

Figure 8. Illustration of phase.

The red curve is the graph of Sin(x). The blue curve has exactly the same shape but is in a different place: it is the graph of Sin(x – θ). The parameter θ displaces the original wave by a certain amount and is called the phase of the wave. There are three different methods by which waves can be used to carry information. The maximum height of the wave is called its amplitude. This can be modified to give Amplitude Modulation, or AM, which is common with certain radio signals. The number of wave fronts that pass a given point in a certain time is called the frequency of the wave. This gives rise to Frequency Modulation, or FM, which is also used in the transmission of radio waves. The third method, Phase Modulation, is the method used to get signals across the Solar System. Recall that all we need is to transmit a sequence of the digits 0 and 1. This is done in the following manner. Figure 9 shows three different graphs of waves. When the spacecraft is ready to transmit information back to Earth it begins the process by broadcasting a simple wave. This is received some time later at the Earth's surface and allows the receiver to detect the frequency and phase of the transmitted wave. For our purposes we can regard this as a phase of zero. The point P is a typical point on the wave. Once communication is established between the transmitter on the spacecraft and the receiver on the ground, the transmitter begins to modify the wave. It does this by changing the phase of the wave. The point P′ is on a wave that has its phase shifted


Figure 9. Phase shift.

by 90°, which might represent the digit 0. The point P″ is on a wave that has its phase shifted by –90°, which might represent the digit 1. So by modulating the phase of the wave a sequence of the two digits 0 and 1 can be transmitted back to Earth. This requires very little energy. This is just as well because, as NASA points out, by the time the signal reaches Earth it is so weak that it would take 40,000 years to collect enough energy from it to light a Christmas tree bulb for one millionth of a second.

Coding the Information

If the signal is very weak and the distances to be covered are very large then there is obviously a strong possibility of errors occurring. These can occur in creating the signal, transmitting the signal or receiving the signal. While the mathematicians at the Jet Propulsion Laboratory (JPL) have invented many ways of detecting and correcting errors, they all involve redundant information. In its simplest form this means that every binary digit (bit) is sent three times. Suppose that a very small part of the information being transmitted is 1011; then the transmitting computer will first transform this into 111 000 111 111. Figure 10 shows the sequence of waves that would then be transmitted to Earth. Each digit 1 in our system is sent as Sin(x + π/2), which gives a phase shift of –90°. Each digit 0 is sent as Sin(x – π/2), which gives a phase shift of 90°. Naturally this is just one continuous signal that is received at Earth. By looking at the phases of these waves the scientists at the receiving end are able to retrieve the message 111 000 111 111. Hence, knowing the redundancy that was applied, they can deduce that the original message was 1011.

Figure 10. 111 000 111 111.
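The triple-redundancy scheme just described, together with the majority vote that repairs a single corrupted bit in a triple, can be sketched as follows (an illustrative sketch, not NASA's actual code):

```python
def encode(bits: str) -> str:
    """Send every bit three times: '1011' -> '111 000 111 111'."""
    return " ".join(b * 3 for b in bits)

def decode(received: str) -> str:
    """Majority vote within each triple repairs any single flipped bit."""
    return "".join("1" if t.count("1") >= 2 else "0" for t in received.split())
```

Here `decode("111 000 111 011")` still yields "1011" even though one bit of the third... fourth triple was flipped in transit.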


Suppose that instead of the sequence of waves that are shown in Figure 10 a slightly different set arrived at the receiver. Look at the sequence of waves shown in Figure 11. An error has occurred and one wave has a different phase than those above.

Figure 11. 111 000 111 011.

At some point one of the waves had its phase changed. There are many ways this could have happened. The result is that the message received is not a string of triples of identical digits. Instead it is 111 000 111 011. The fact that not every group of three is a repeated digit alerts the receiver immediately that an error has occurred. The receiver even knows where the error is: 011. The fact that there are two occurrences of the digit 1 and only one occurrence of the digit 0 strongly implies that it is the 0 that is incorrect and that the message should have been 111 000 111 111, which means that the original sequence was 1011. This method, with its built-in redundancy, increases the length of the message by a factor of three. This is very wasteful, and the methods actually used by NASA are much more efficient. This example merely demonstrates the principle that errors can be both detected and corrected. Hamming codes are much more efficient and are the basis of many of the codes used by NASA and, indeed, of most types of digital coding. The Hamming code also uses redundancy, but it is nothing like as wasteful as the tripling method described above. Hamming codes work by a very clever use of geometry.

Geometries

The word “Geometry” conjures up for us ideas of points and lines and probably certain properties of triangles. Euclid, about 300 BC, assembled all that was known of Geometry into thirteen books called the Elements. For many centuries the theorems in these books were regarded as much a part of the physical world as the mathematical one. During the Renaissance artists discovered the secrets of perspective drawing. This is the method of drawing far things smaller than near ones. Here parallel lines will meet. Think of the appearance of railway tracks as they disappear into the distance.
Mathematicians, particularly the French mathematician Gérard Desargues, became very interested in these ideas and realized, after some resistance, that the artists were using a geometry other than that of Euclid. If one other geometry could exist then why not more? The very meaning of the words point, line and distance can change. To see how this can be used in error correction we need to look at some definitions, particularly distance. In the geometry of Euclid the distance between two


points is the length of the line segment connecting them. In coordinate geometry we use a formula involving squares and square roots to calculate it:
√((x1 − x2)² + (y1 − y2)²).

In ordinary life you may use quite different definitions. If a motorist stopped you to inquire how far he was from his destination you might reply: “It is 5 km as the crow flies but the road winds a lot and you will have to drive 7 km”. If you saw a signpost indicating that the distance to a city was 156 km you would know that meant that there were 156 km of road to cover. The actual length of the shortest line segment may be only 120 km. As you see, “distance” means what we want it to. Using Hamming codes for error correction, only the digits 0 and 1 are used. A word is defined as a particular sequence of these digits. We regard each word as a point in a space. The “distance” between two words is defined as the number of digits that differ between the two words. Thus the distance between 1100101 and 1010110 is 4, since there are exactly four positions where the strings of digits differ. The geometric properties of the space where the words live are then used in error detection and correction. The mathematics required is quite advanced and uses tools from Linear Algebra, especially vectors and matrices. Vectors can be thought of as line segments pointing in specific directions. Matrices can be thought of as objects that do things to vectors.
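The “distance” just defined is easy to compute; a minimal sketch:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length words differ."""
    if len(a) != len(b):
        raise ValueError("words must have the same length")
    return sum(x != y for x, y in zip(a, b))
```

For the pair in the text, `hamming_distance("1100101", "1010110")` is 4.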

Figure 12. Vectors.

CODING THEORY

The blue vector (closest to the horizontal axis) in Figure 12 has been transformed by a matrix to the green one. This is a rotation matrix. In a certain sense, there are only specific directions these vectors can point. This makes error detection possible. The distance function then makes error correction possible also.

We will illustrate this with a simple example. We will use just two valid code words: 000 and 111. It is not too difficult to imagine a situation where just two words are needed. Suppose that 000 signifies Off and 111 signifies On; then we have our binary system back again. There are three digits being used here, which gives us eight possible words:

000 001 010 011 100 101 110 111

Only the first and last of these are valid. The other six are errors. We can regard each of these words as points in 3D space where they form a cube. In two dimensions you are familiar with representing points with two coordinates. We refer to these pairs as (x, y) where the number x tells how far to move in a horizontal direction and the number y gives the distance in the vertical direction. Similarly, once we move into three dimensions another direction becomes possible and so we need three numbers to represent each point. The triplet is usually referred to as (x, y, z). As we increase the number of dimensions we increase the number of coordinates. We lose the ability to draw pictures but the principles are the same and we can work out distances and equations of curves as easily in seven dimensions as in two.

To see how error correction with Hamming codes works we will stay in three dimensions to get a feel for the process. In Figure 13 we show all the possible code words as the coordinates of a cube. Each side of the cube is one unit long as measured using the Hamming definition of distance. Here is how we use this geometry to correct words with one error. If the transmitted word is 000 then the possible errors lead to 001, 010, 100.
These are the only words that can be obtained from 000 with just one error. You can think of them as being on a sphere in this space whose radius is 1 with centre 000, as in Figure 14.


Figure 13. Cube of words.

Figure 14. Errors on sphere.

All these words are also on a sphere of radius 2 centred at 111. The sphere of radius 1 centred at 111 is shown in Figure 15 and contains the triplets 110, 101 and 011. The two spheres shown divide the error words into two sets with no elements in common, as shown in Figure 16. To correct a word that has only one error we simply find the nearest code word to the error word. Other methods allow the detection and correction of more errors but are beyond the scope of this introduction.
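Nearest-code-word correction for this tiny two-word code can be sketched in a few lines of Python (the function names are ours):

```python
CODE_WORDS = ["000", "111"]

def distance(u, v):
    # Hamming distance: number of differing positions
    return sum(a != b for a, b in zip(u, v))

def correct(word):
    # Decode to the nearest valid code word
    return min(CODE_WORDS, key=lambda c: distance(word, c))

for w in ["001", "010", "100", "110", "101", "011"]:
    print(w, "->", correct(w))
```

The first three error words decode to 000 and the last three to 111, matching the two spheres described in the text.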


Figure 15. More errors.

Figure 16. Spheres of the Hamming code.

COMPACT DISCS

Digital Music

Compact discs (CDs) are used to store many different types of information today. They were originally used to store sound, particularly music. How can music be encoded on the surface of a disc and how can it be retrieved? You will not be surprised to discover that, once again, trigonometry and binary digits form the key. All sound is transmitted as waves of some form. The sound itself needs a medium to carry it. The medium, usually air, will vibrate. This vibration is carried from one place to another as a wave. If certain regular patterns are present then humans call it music. From this point of view there is no difference between Grand Opera and Heavy Metal: physically both are transmitted as waves with certain patterns.

Figure 17. A sound wave.


Figure 17 shows a pure sound wave. This is composed of a combination of various sine waves of different frequency and amplitude. For any instant between the start and end of the sound the wave will be at some particular point. This is what we mean by saying that the sound is continuous; technically it is an analog signal. This type of signal cannot be represented just by using the two digits 0 and 1. The first thing to be done is to sample the signal at various places. This returns the values of the sound at specific instants, but not everywhere.

Figure 18. Sampling a wave.

Figure 18 shows a series of blue lines superimposed on the sound wave illustrated in Figure 17. These are the values of the sound wave at specific points. To obtain a sample that is close to the original sound there must be a large number of sample points. For a CD the sound is sampled 44,100 times each second. This gives rise to a sample such as that shown in Figure 19.
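The numbers involved are concrete enough to experiment with. The sketch below (Python; the frequency and duration are chosen arbitrarily) samples a pure sine wave 44,100 times per second and quantizes each sample to one of the 65536 levels that a 16-bit sample allows, as discussed below:

```python
import math

RATE = 44100          # CD sampling rate: samples per second
LEVELS = 2 ** 16      # 16-bit samples give 65536 possible amplitudes

def sample_wave(freq_hz, seconds):
    """Sample a pure sine wave and quantize each value to 16 bits."""
    n = int(RATE * seconds)
    values = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq_hz * i / RATE)    # analog value in [-1, 1]
        values.append(round((x + 1) / 2 * (LEVELS - 1)))  # mapped to 0..65535
    return values

samples = sample_wave(440, 0.01)   # 10 ms of a 440 Hz tone
print(len(samples))                # 441 samples
print(min(samples) >= 0, max(samples) <= 65535)  # True True
```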

Figure 19. Sampled sound.

Each blue line represents a particular amplitude. The samples are taken 44,100 times per second (a rate of 44,100 Hz, where Hz means per second), and each sample is stored using 16 binary digits, which allows 65536 different amplitudes to be used to reconstruct the sound. What use are all these amplitudes if digital media can only store the digits 0 and 1? These digits can be used to build up numbers of any size. The digit 0 represents a switch in the Off position and 1 represents the switch in the On position. Each digit, therefore, distinguishes between two states. Hence a sequence of 16 digits can be combined to give 2¹⁶ = 65536 values. Since one of these values has all the switches turned off, sets of sixteen bits (two bytes) can be used for all the numbers between 0 and 65535. The process is the same as we saw in constructing the smiley face from sets of eight bits.

Physical Structure

The CD itself is a plastic disc (Figure 20). During its manufacture the plastic is shaped with very small bumps arranged in a continuous spiral. This is next covered with a



Figure 20. Structure of a CD.

thin layer of highly reflective aluminium. An acrylic layer protects the aluminium. The label is printed onto the acrylic. A laser beam is reflected from the aluminium layer, where the change in reflectivity caused by the bumps is interpreted as a sequence of the digits 0 and 1. Thus we have come via trigonometry to a bitstream yet again. A sophisticated piece of electronics then converts the digital signal back into analog form and this is used to convert the data back to sound. The very high sample rate means that the loss in quality is undetectable to most human ears, although some musicians claim that they can distinguish between the sound on a CD and the analog sound of vinyl records. In Figure 21, because the sample rate is low, either the green or the red curve, or indeed many others, could be reconstructed from the samples.
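The ambiguity of Figure 21 can be demonstrated numerically. In the sketch below (Python; the low sampling rate and frequencies are invented for illustration), a sine wave of frequency f and one of frequency f + fs agree, up to rounding error, at every sampling instant, so the samples alone cannot tell the two waves apart:

```python
import math

FS = 8   # a deliberately low sampling rate, samples per second
F = 3    # frequency of the slower wave, Hz

def samples(freq, n=16):
    """n samples of sin(2*pi*freq*t) taken at rate FS."""
    return [math.sin(2 * math.pi * freq * i / FS) for i in range(n)]

low = samples(F)         # slow wave
high = samples(F + FS)   # much faster wave, yet (essentially) the same samples

print(all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(low, high)))  # True
```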

Figure 21. Low sample rate.

In Figure 22, however, it would be very difficult to reconstruct the wrong wave as the high sample rate leaves hardly any freedom.

Figure 22. High sample rate.

This is the reason why such a high sampling rate is needed in the transformation of an analog sound wave into a set of digital data.

Error Detection and Correction

Errors, of course, can occur either in the manufacturing process or from physical damage to the CD. In the section on barcodes we saw how check digits can detect


and correct simple errors. A very sophisticated extension of this method is used in detecting and correcting errors on a CD. The methods used involve complex numbers. These are numbers of the form a + bi where i is defined by the equation

i² = −1

Using complex numbers it is possible to get various roots of the number 1.
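These roots can be computed directly with Python's cmath module (a sketch; the loop is ours). The k-th eighth root of 1 is e^(2πik/8), and each raised to the power of eight returns 1 up to rounding error:

```python
import cmath

# The eight eighth roots of 1, equally spaced around the unit circle
roots = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]

print(all(cmath.isclose(r ** 8, 1, abs_tol=1e-12) for r in roots))  # True
```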

Figure 23. Complex roots of 1.

The red points in Figure 23 are the eight eighth roots of 1. These are

1/√2 + (1/√2)i,  i,  −1/√2 + (1/√2)i,  −1,  −1/√2 − (1/√2)i,  −i,  1/√2 − (1/√2)i,  1

This means that any of those numbers raised to the power of eight will give the number 1. The data points can be regarded as the coefficients of a certain polynomial, p. The complex roots of 1 are used to create a second polynomial, g. The product of these two polynomials is analysed by the CD player, which divides this result by g. If this division process leaves a remainder then there is an error in the received data. By evaluating this remainder at the complex roots of 1 the error can be corrected. All this happens far too quickly to be detected by the ear, so the human listener hears the continuous sound of a melody thanks to some very complicated electronics and mathematics. This method is called a Reed–Solomon code and was first described in 1960. The particular roots of 1 to be used depend on the length of each word and the number of errors to be checked. It is almost incredible that nearly 50% of errors can be corrected by this method. This explains why a CD, unlike a vinyl record, does not exhibit gradual signs of decay. As long as the scratches on the CD remain below a critical level the error correcting methods will be able to reconstruct the sound perfectly. If the faults in the CD rise above that level then correction is impossible and it seems to us that the CD has suddenly been corrupted.

TO DO IN CLASS

Art Work

In this activity the teacher will get different groups in the class to draw pictures on graph paper and transmit the information in code. Time constraints probably mean


that the grid on the paper should be fairly large. A (simple) piece of art should be drawn with each square clearly either blank or covered, as in the smiley face drawn earlier. Divide the students into different groups. Each group will have two sheets of graph paper. After a discussion each group will draw a simple figure on one sheet of graph paper. Using binary arithmetic this figure should then be written as a binary number and then translated into decimal. The decimal number is written on the blank sheet of graph paper. The blank sheets are now passed to different groups and the figures reconstructed. Remember, if a figure is drawn incorrectly the fault may lie with the translation into a decimal number or the translation from the decimal number back to the corresponding binary number. How do these correspond to the errors discussed above?

Tim Brophy National Centre for Excellence in Mathematics and Science Teaching and Learning (NCE-MSTL) University of Limerick


JEAN CHARPIN

4. TRAVELLING TO MARS: A VERY LONG JOURNEY Mathematical Modelling in Space Travelling

INTRODUCTION

Just over forty years ago, Neil Armstrong, Edwin ‘Buzz’ Aldrin and Michael Collins were the first people to travel to the Moon. This was, in the words of Neil Armstrong, ‘one small step for man, one giant leap for mankind’. The next big milestone in space travel is to reach our next closest neighbour in the solar system: Mars. This planet is much further away than the Moon and there are many challenges related to this trip. Mathematics will be key to solving them. This chapter introduces a few activities related to this trip, focussing on two aspects of the school curriculum:
1. Geometry: circles and ellipses. The activities proposed in the first section of this chapter involve some simple geometry: drawing circles and ellipses, studying the distance between two points belonging to the circles, and using the properties of aligned points and diameters to determine the minimum and maximum distance between the Earth and Mars.
2. Large numbers: space travel involves large numbers. Understanding what they represent is rather difficult for everyone. The simplest way to make sense of these values is a careful choice of units: large numbers are then transformed into much smaller ones which are much easier to interpret. The activity presented in the second section will show a simple way to achieve this.
These activities offer mathematics teachers interesting and rewarding ways to engage secondary school students, and they are accessible to students and teachers alike.

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 87–98. © 2011 Sense Publishers. All rights reserved.

In this first part, the orbits of Earth and Mars will be studied. In the Solar System, the planets move around the Sun, each describing a curve known as an ellipse. The properties of orbits and ellipses will be briefly reviewed first, and two possible activities will then be presented.

Some Background

Some properties of ellipses and circles. Figure 1 shows some of the properties of ellipses. They are flattened versions of a circle and have a lot of common properties


with this well known curve. A point belongs to a circle if its distance to the centre is equal to the radius. An ellipse also has a centre, denoted O in the figure, but this is only a point of symmetry. A point P belongs to the ellipse if the sum of its distances to the two points F1 and F2, known as the foci, equals a constant:

P ∈ Ellipse ⇔ PF1 + PF2 = d1 + d2 = C
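This defining property can be checked numerically. The sketch below (Python; the particular semi-axes are made up for illustration) places the foci at (±f, 0) with f = √(a² − b²) and verifies that points on the ellipse have focal distances summing to the same constant C = 2a:

```python
import math

a, b = 5.0, 3.0               # semi-major and semi-minor axes (illustrative)
f = math.sqrt(a * a - b * b)  # distance from the centre O to each focus

def focal_sum(x, y):
    """d1 + d2: distances from (x, y) to the foci F1 = (-f, 0), F2 = (f, 0)."""
    return math.hypot(x + f, y) + math.hypot(x - f, y)

# (0, b) and (a, 0) both lie on this ellipse; both sums equal 2a = 10
print(focal_sum(0, b), focal_sum(a, 0))  # 10.0 10.0
```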

Figure 1. Geometry of an ellipse.

If the value of the constant distance d1 + d2 is just above the distance between the two foci, F1F2, the ellipse will be very flat. Conversely, if d1 + d2 is much larger than F1F2, the ellipse will be close to a circle.

TEACHING ASPECTS OF SCHOOL GEOMETRY

Similarly let BK″ meet the circle again at D′. Then

∠AKB = ∠AD′B (same segment)
= ∠AK″D′ + ∠K″AD′ (external angle equal to the sum of the opposite internal angles)
= ∠AK″B + ∠K″AD′
> ∠AK″B

Thus K is the required position. Now let AB = a, BC = b and CK = x. Then

CK² = CB·CA
x² = b(a + b)
x = √(b(a + b))

which gives a formula for the distance x. AB is actually 5.6 m. So for example if b = 20 m then

x = √(20(5.6 + 20)) = 22.63 m (2 d.p.)

and if b = 30 you get 32.68 m (2 d.p.). So in practice the kicker should come out a bit more than the distance BC.

A Different Approach (Using Calculus)

Figure 6. Using calculus interpretation.

Let K be an arbitrary point on the perpendicular at C to AB. Let the angle AKB = θ and the angle BKC = φ (Figure 6). Let AB = a, BC = b and CK = x. Then

tan(θ + φ) = (a + b)/x and tan φ = b/x

⇒ (tan θ + tan φ)/(1 − tan θ tan φ) = (a + b)/x

Let tan θ = t. So

(t + b/x)/(1 − bt/x) = (a + b)/x
⇒ x(t + b/x) = (a + b)(1 − bt/x)
⇒ xt + b = (a + b) − (a + b)(bt/x)
⇒ t[x + (b/x)(a + b)] = (a + b) − b = a
⇒ t = a/(x + b(a + b)/x) = ax/(x² + b(a + b))

Differentiating t with respect to x yields

dt/dx = ([x² + b(a + b)]a − ax·2x)/[x² + b(a + b)]² = (ab(a + b) − ax²)/[x² + b(a + b)]²

Since the denominator on the right hand side is positive


dt/dx = 0 if and only if ab(a + b) − ax² = 0
⇒ ab(a + b) = ax²
⇒ b(a + b) = x² since a ≠ 0

Since x > 0, x = √(b(a + b)).

To show this turning point is a maximum note:

x < √(b(a + b)) ⇒ ax² < ab(a + b) ⇒ ab(a + b) − ax² > 0 ⇒ dt/dx > 0

Similarly x > √(b(a + b)) ⇒ dt/dx < 0, i.e., t is a maximum at x = √(b(a + b)), or tan θ is a maximum at x = √(b(a + b)).

⇒ θ is a maximum at x = √(b(a + b)) since θ increases when tan θ increases.

ANGLES IN SNOOKER

The game of Pool, popular among young people, and similar games like Snooker and Billiards lend themselves to interesting uses of elementary geometry. The main objective in these games is to strike a billiard ball with another ball in order to direct it into one of the pockets at the side of the table. Problems arise when the ball to be struck is hidden behind another ball, so that the cue ball must be deflected off the side of the table into the path of the ball to be struck. The cue ball may need to be deflected off more than one side depending on the configuration of the balls on the table. Any text in transformation geometry would prove useful during this phase, e.g. Jeger (1968). The simplest case is illustrated in Figure 7. The sides of the table are labelled a, b, c, d. C is the cue ball and O is the ball to be struck by the cue ball. The other balls are not shown in the diagram. The cue ball C must strike the side a at some point P so that when deflected it strikes the object ball O. Now in physics there is a law of reflection which states that under ideal conditions, when an object is reflected, the angle of incidence is equal to the angle of reflection.


Figure 7. Snooker shot off one side cushion.

Figure 8. Law of reflection.

In Figure 8 the angle of incidence is ∠APC and the angle of reflection is ∠BPC, where PC ⊥ DE, so that ∠APC = ∠BPC. This implies that ∠APD = ∠BPE, which is more useful for our purpose.

Figure 9. Construction to locate position of P.


The problem now is to find P given that ∠CPA = ∠BPO (Figure 9). Extend CP to meet, at Oa, the perpendicular from O to the side a, this perpendicular meeting AB at E. Then

∠Oa PB = ∠CPA = ∠BPO

⇒ ΔOaPE ≡ ΔOPE ⇒ OaE = OE, i.e., Oa can be thought of as the reflection of O in side a. This then gives another method to find P. Simply find Oa first and then join C to Oa, meeting AB at the required point P. In practice you can imagine a virtual ball at Oa which must be struck by the cue ball C. This will result in striking ball O. An alternative analysis is as follows. Draw CF ⊥ AB. Then ΔCFP is similar to ΔOEP so that

FP/PE = CF/OE

i.e., FE is divided in the ratio CF : OE. This method is not practical in actual play but for teaching purposes it does introduce the relationships between similar triangles.

Exercise: In Figure 9, if CF = 96 cm, OE = 75 cm and FE = 228 cm, calculate the distance FP.

Teaching Note

Professional snooker players often make shots using multiple cushions. ‘What if’ considerations may now be employed by students to extend the analysis in interesting ways, e.g. consider similar shots involving 2, 3 or more cushions. Other questions may occur to students that can be investigated. For example, students might come to conjecture that the cue ball travels the least possible distance in all such cases. Is this true? Can you prove it? These situations are dealt with in the following sections for the benefit of the mathematics teacher.

Two Cushion Shots

Now consider the case where the cue ball must be reflected off two sides a and b (Figure 10).


Figure 10. Snooker shot off two side cushions.

To analyse this case we reduce it to two cases of reflection off one side. First imagine the cue ball is at P1 (yet to be determined) needing to strike ball O by reflecting off side b. As in section 1 above, find the reflection Ob of O in side b. Then joining P1 to the virtual ball at Ob gives the point P2. But of course you do not know the point P1 on side a. However, the problem is now reduced to reflecting the cue ball C off side a to strike the virtual ball at Ob, and again this has been solved in section 1. So find the image Oa by reflecting the virtual ball Ob in side a. Then joining C to Oa gives the point P1, which in turn will reflect to the point P2 and thence to O. To construct the path of the cue ball C, therefore, first find Ob, the reflection of O in side b, then find Oa, the reflection of Ob in side a. Join C to Oa meeting side a at P1, then join P1 to Ob meeting side b at P2, and finally join P2 to C to complete the path.

Exercise: Prove CP1 // OP2.

n Cushion Shots

The method of the preceding section can now be extended to deal with reflections off any number of sides. The path for reflections off three sides a, b and c in turn is shown in Figure 11. First find Oc, Ob and Oa; then join C to Oa giving P1, P1 to Ob giving P2, and P2 to Oc giving P3.

Exercise: Sketch the path for reflections in turn off sides a, b, c, and d.
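The double-reflection construction is easy to verify with coordinates. In the sketch below (Python; the table layout and ball positions are invented), side a is the line y = 0 and side b is the line x = 0. Reflecting O in side b and then in side a gives the aiming point; the computed path also confirms numerically that CP1 // OP2:

```python
def reflect_in_a(p):   # reflect in side a, the line y = 0
    return (p[0], -p[1])

def reflect_in_b(p):   # reflect in side b, the line x = 0
    return (-p[0], p[1])

def cross(u, v):       # 2D cross product; zero means the vectors are parallel
    return u[0] * v[1] - u[1] * v[0]

C = (9, 2)             # cue ball (illustrative coordinates)
O = (2, 9)             # object ball

Ob = reflect_in_b(O)   # (-2, 9)
Oa = reflect_in_a(Ob)  # (-2, -9): reflection of Ob in side a

# P1: where the segment from C to Oa crosses side a (y = 0)
t = C[1] / (C[1] - Oa[1])
P1 = (C[0] + t * (Oa[0] - C[0]), 0.0)

# P2: where the segment from P1 to Ob crosses side b (x = 0)
s = P1[0] / (P1[0] - Ob[0])
P2 = (0.0, P1[1] + s * (Ob[1] - P1[1]))

print(round(P1[0], 6), round(P2[1], 6))   # 7.0 7.0
CP1 = (P1[0] - C[0], P1[1] - C[1])
OP2 = (P2[0] - O[0], P2[1] - O[1])
print(abs(cross(CP1, OP2)) < 1e-9)        # True: CP1 is parallel to OP2
```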


Figure 11. Snooker shot off three side cushions.

Shortest Path (Cue Ball Travels Least Possible Distance)

An interesting aspect of the problem is that in any given situation the cue ball C travels the least possible distance to reach the object ball O. In Figure 12 let P′ be any point on side a other than P. Join CP′, OP′ and OaP′.

Figure 12. Shortest path calculations.

From section 1,

ΔOaPE ≡ ΔOPE ⇒ OaP = OP

Similarly, OaP′ = OP′. So

CP + PO = CP + OaP
< CP′ + OaP′ (triangle inequality)
= CP′ + P′O

i.e., CP + PO < CP′ + P′O.

Further Investigations

Students will find that GeoGebra is a very suitable vehicle for pursuing these and other geometric investigations.
1. Sketch the path for reflecting off sides a and c in turn.
2. Sketch the path for reflecting off sides a, c, a, and c, in turn.
3. Try sketching the path that will return the cue ball to its starting position (ignoring O) under different sequences of reflections.
4. Investigate whether it is always possible to construct a path from C to O under specific given initial conditions.

FINAL REMARKS

These problems and investigations offer real opportunities for student engagement in the mathematics classroom. I have no doubt that mathematics teachers and students who use them will find interesting extensions and a lot of enjoyment working with them. This approach exploits students’ cultural propensity to play and, after a fashion, engage in mathematical activity, together with the motivational power of context for teaching and learning mathematics. By working through the investigations and problems students are invited to marshal their resources, investigate, experiment, specialize and generalize, conjecture and prove, and in other ways experience mathematics as mathematicians do. And there were surprises – an optimum angle for the kicker, and a path of least possible length for the cue ball!

REFERENCES

Bishop, A. J. (1991). Mathematical enculturation: A cultural perspective on mathematical education. Dordrecht: Kluwer Academic Publishers.
Jeger, M. (1968). Transformation geometry. London: Allen and Unwin.

Jim Leahy Department of Mathematics and Statistics University of Limerick


JUERGEN MAASZ

13. INCREASING TURNOVER? STREAMLINING WORKING CONDITIONS? A POSSIBLE WAY TO OPTIMIZE PRODUCTION PROCESSES AS A TOPIC IN MATHEMATICS LESSONS

INTRODUCTION

Mathematics is applied in various ways in both everyday life and the working world. In this chapter, I aim to bring parts of the working world into the classroom by means of Multi-Moment Recording. More precisely, the students should become acquainted with Multi-Moment Recording as a tool to record and understand production processes more effectively. The main focus of this chapter is not on deducing the formula used, but simply on using it. For teachers who are uncomfortable with this degree of pragmatism in a mathematics lesson there are two supporting excursions, one on checking the formula’s plausibility and practicality, and one on its mathematical origin. Furthermore, I plan to simulate and optimize a simple production process (the production of paper planes) in order to learn about mathematics as a means of presenting and communicating facts – aspects that are rarely addressed at school.

OUTLINE: SUGGESTED LESSON

I will start this contribution with a short outline of the proposed teaching sequence and some preliminary remarks.

Preliminary Remarks

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 221–237. © 2011 Sense Publishers. All rights reserved.

First preliminary remark: Since it has often been shown that learning is facilitated by active participation in class, I would like to suggest a lesson that might seem somewhat unusual at first glance. The following description is meant to provide stimulus and support for the actual implementation of the suggested lesson. My proposal includes hints on teaching methods which I think are appropriate. This does not mean that I know better than you how to teach: you, as the teacher, are the best expert on your own teaching. I just want to show how this unusual proposal could actually be carried out.

The second preliminary remark refers to the question concerning the appropriateness of the topic for the classroom. Should the optimization of production processes be addressed at school? Wouldn’t such a topic mean meddling with social or trade-union related affairs? The answer to the first question is a clear YES,


because practical relevance is a basic requirement in all current school curricula. The second question relates to a crucial issue for lessons which are close to reality: one of the major curricular objectives concerns the development of students’ ability to judge, that is, the ability to form a personal opinion based on critical reflection and the ability to justify these opinions successfully. Effective instruction meets these objectives. In other words, “effective” does not mean that the teacher summarizes his or her own opinion in a few words in order to keep this non-mathematical part of the mathematics lesson very short; rather, the teacher helps the students learn to make their own decisions, using mathematics to make those decisions more rational.

The third preliminary remark concerns an objection frequently voiced by teachers: instruction close to real life is useful but time-consuming and hardly practical; it does not fit into day-to-day school life, which is dominated by time pressure and other curricular demands. However, traditional mathematics lessons, which dedicate much time to operative work including practising calculation methods, have proven ineffective. That is, fostering active project work during lessons is time well spent indeed.

OUTLINE OF THE LESSON

Presenting the development of industrial production, i.e., the transition from handicraft to manufacture and on to current car production by means of industrial robots and production islands, is a good lesson opener and motivation. A film such as Charlie Chaplin’s ‘Modern Times’, which presents assembly line work from a very specific perspective, could complement such a lesson opener. If an interdisciplinary approach in combination with (economic) history is adopted, the students will get an insight into the development of productivity. At the same time, students will realize that today assembling a car is impossible for an individual: division of labour and specialization are fundamental prerequisites, and without them mass production of automobiles is impossible. The second step is a paper plane construction contest. Who can produce the best paper plane? Who is fastest? How long does it take? Of course, other simple objects can be produced as well. Paper planes, however, are particularly suitable because they require very little material (a piece of waste paper or spare photocopies, which are easily available in schools), they are easy to fold (no scissors, glue or other material to fix component parts are required) and students can discuss the best folding instructions. Google offers 12600 sites related to paper plane construction, one of which, for example, shows a youtube video: http://www.tippsundtricks24.de/heimwerken/do-it-yourself/anleitung-papierflieger-saebelzahntiger-basteln.html. Furthermore, there is a simple quality check for every paper plane: a test flight! At this point it seems useful to note that students should decide on a code of conduct, for example, how to cooperate with others, how to use the finished products, and where in the classroom to establish the designated zone to test the paper planes.
After evaluating the results, the teacher sets an additional assignment: How can we all together produce the largest number of paper planes in the shortest possible time? Imagine we want to produce and sell them. The knowledge gained in the initial phase of the lesson suggests that the key to success is division of labour. However, another important precondition needs to be


addressed first. What might an optimal design look like? What is optimal under the given circumstances? First, we need to set appropriate criteria. Such criteria might include:
1. The paper plane needs to be capable of flying.
2. It should look good.
3. Folding it should be easy.
4. The production requires only folding but not gluing.

From this, the first conclusions can be drawn, which can in turn serve as arguments for such criteria. For example, crumpled paper is not effective because it neither looks nice nor gives the plane the desired flight qualities. How do the students in the class find the optimal design? Let’s organize a competition. Groups of three develop a proposal. All proposals are gathered, presented in class and discussed. Finally, students try to reach a consensus and jointly agree on one proposal (however, nobody is allowed to vote for their own proposal). In order to illustrate the suggested lesson, I have decided to describe the following part of the lesson by means of a specific model of a paper plane, using the internet, which offers a large number of step-by-step instructions as well as videos on youtube, for example this one: http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded My understanding is that the paper plane model ‘arrow’ meets the four criteria mentioned above. Ultimately, however, I have chosen this model because it requires a relatively small number of work steps. The video, which runs 1 minute and 26 seconds, is considerably shorter than other videos, which usually run for three minutes. This new criterion (length of the video) is mentioned here to indicate that considerations taken in the initial phases of the lesson might need revision in the course of the project. Let’s consider the chosen paper plane in more detail (Figure 1):

Figure 1. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded


The third step is a test run: How many paper planes of this type can we produce together in five or ten minutes? The test run probably brings about a problem of quality control: Do the folded paper planes actually resemble the model? At this point we have reached the question of improving the speed and quality of the production process. That is, we are back to organization and division of labour. Let’s do a simple test in class. After having agreed on a model and construction plan, we fold as many planes as possible – for about ten to fifteen minutes. Then, we will plan the division of labour: every student does one bend and hands over the paper to the next station, where the next bend will be produced – until the plane is finished at the last workstation. Then another test run and we’re done! In order to plan the division of labour, a precise analysis of the instructions is needed. I will demonstrate this using the ‘arrow’ model. We’ll watch the video carefully, take a piece of paper and fold along with the video.
1) Take a piece of paper.
2) Fold down the centre.
3) Make sure the crease is straight and sharp.
4) Open it out again.
5) Fold in the top left hand corner to the centre line.

Figure 2. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded

6) Fold in the second top corner to the centre line (Figure 2).
7) Fold the first corner again.


Figure 3. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded

8) Fold the second corner again.
9) Fold the two sides again along the centre line (Figure 3).
10) Indicate web width.
11) Fold the left wing (Figure 4).
12) Fold the right wing.

Figure 4. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded


13) Fold the edge of the left wing.
14) Fold the edge of the right wing.
15) Flight test.

Why does this plane fly but others don’t? Or at least not as well as this design? At this stage, I would like to point out that interdisciplinary and cross-curricular lessons in cooperation with physics might be useful. We could, for example, investigate why we should fold the front section of the paper more than once. The lesson is now beginning to get exciting. Once the division of labour has been discussed and organized by the students (who is sitting where doing what?), another five-minute production phase follows. Can we produce a larger number of airworthy planes?

In step four, the optimization of the production based on division of labour begins. How can we become even faster and more efficient? This question is raised in every real assembly hall every day. We are looking for answers and find many of them on the internet. As we’re talking about mathematics lessons here, I have chosen those approaches which are strongly connected to mathematics. (Motivation or pressure would be examples of alternative solutions.) I am going to focus on Multi-Moment Recording here, which is a process to investigate how long every work step takes in relation to the total labour time. Multi-Moment Recording is a sampling procedure to determine the frequency of occurrence of predefined phenomena. A number of momentary observations of work systems are collected without involving the observed person in an active way, for example, by asking for information or interruptions of other kinds. The organization manual of the German Federal Ministry of the Interior describes the Multi-Moment Method (http://www.orghandbuch.de/nn_414926/OrganisationsHandbuch/DE/6__MethodenTechniken/61__Erhebungstechniken/616__Multimomentaufnahme/multimomentaufnahme-node.html?__nnn=true).
I translate and paraphrase it as follows: The Multi Moment Method is a statistical method to find out how often the different parts of a working process occur. After defining and separating the single work steps, the work in progress is observed at a number of moments. If it is observed often enough, this gives an accurate image of the working process, which is what you need in order to optimize it.

This organisation manual provides a process description which can be used as a research design in the classroom: the Multi Moment Method is applied to investigate the production of paper planes. The following plan could be implemented. The class is divided into several groups, most of which produce paper planes. Two groups conduct Multi Moment Recordings of the ongoing production by describing the various work steps in detail. Then the students create an observation plan, define the number of observations and plan the observation tour, i.e., a list of times at which the various work stations will be observed.

What does that mean in our example, the production of 'arrow' paper planes? I have identified and listed a total of 15 work processes. It seems obvious to take

A POSSIBLE WAY TO OPTIMIZE PRODUCTION PROCESSES

these 15 processes as a starting point. One question that arises fairly quickly is how to record the handing over of the unfinished planes. Let's assume that the workstations are arranged in a line (the tables should be arranged in a row, if possible). In this way, the planes can readily be handed over to the adjacent workstation. The usefulness of this arrangement becomes obvious when the distances between two tables are increased so that transporting the planes to the nearest workstation takes a minimum of 10 seconds: 14 transports would then amount to 140 seconds, or 2 minutes and 20 seconds of transit time. This is significantly longer than the time needed to fold a plane. We would like to keep the experiment rather simple and assume a transit time of 0 seconds. All students working at the assembly line hand over the plane to the next student.

However, another problem will soon become apparent: not all processes take equally long. The station that takes longest will cause a blockage. Which one is it? Are the differences so great that we need to provide intermediate storage facilities or a different plan of work steps? We are going to solve this problem by recording the video time:

Process number                                            Time in seconds
1) Take a piece of paper                                  6
2) Fold down the centre                                   7
3) Make sure the crease is straight and sharp             3
4) Open it out again                                      2
5) Fold in the top left hand corner to the centre line    6
6) Fold in the second top corner to the centre line       7
7) Fold the first corner again                            6
8) Fold the second corner again                           9
9) Fold the two sides again along the centre line         6
10) Indicate web width                                    1
11) Fold the left wing                                    6
12) Fold the right wing                                   8
13) Fold the edge of the left wing                        5
14) Fold the edge of the right wing                       5
15) Flight test                                           10 (?)
Total:                                                    77 seconds

plus 10 seconds testing time for the trial run. The remaining time adding up to the total length of the video is used to present the current state of production.

Now it is about time for a moment of reflection. The various work steps obviously take different amounts of time. Steps 4 and 10, for example, are relatively short. In addition, even similar work steps differ in the required amount of time, which can also be due to measurement error (imprecise observation). I will, therefore, conflate some work steps in order to improve the conditions for the first attempt at Multi Moment Recording. The merging of individual work steps aims to approximate equal production times for every work station.


Station (new)   Processes (old)   Duration
I               1 to 5            24 seconds
II              6 to 8            22 seconds
III             9 to 12           21 seconds
IV              13 to 15          20 seconds
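The grouping can be checked with a few lines of Python (a quick sketch; the step times are the ones read off the video, the grouping bounds are those chosen above):

```python
# Times (in seconds) of the 15 work steps, read off the video.
step_times = [6, 7, 3, 2, 6, 7, 6, 9, 6, 1, 6, 8, 5, 5, 10]

# Group consecutive steps into the four stations chosen above.
stations = {
    "I":   step_times[0:5],    # steps 1-5
    "II":  step_times[5:8],    # steps 6-8
    "III": step_times[8:12],   # steps 9-12
    "IV":  step_times[12:15],  # steps 13-15
}

durations = {name: sum(times) for name, times in stations.items()}
print(durations)  # {'I': 24, 'II': 22, 'III': 21, 'IV': 20}
```

Students could use such a snippet (or a spreadsheet) to experiment with other groupings and see how balanced the stations become.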

In this first attempt I have merged subsequent processes in order to obtain work phases of roughly equal length. I hope that the differences remain small and suggest that the students have a first trial run. This corresponds to the standard procedure of modelling. After all, the model assumptions can easily be modified or refined in a second attempt (Siller 2008).

For step five we need more information about the method and the procedure of the Multi Moment Recording. We have four stations and plan to measure the proportion of working time at each station in relation to the total working time. This relative proportion should be determined with precision e with a probability of 95% (that is, a confidence interval of 95% with a range of 2e). How many measurements (observations) do we need? The following formula can be found (see, e.g., http://bios-bremerhaven.de/cms/upload/Dokumente/Erfa-PBE/mma_methode.pdf):

n = z²·p·q / e²   (where q = 1 − p and z is the z-score belonging to the chosen confidence level)

At this point we have arrived at a very important decision:
– If we teach as usual, we now start to analyze this formula and turn this into a statistics lesson.
– If we behave more like people in reality do, we make some plausibility tests with the formula and then use it.
– If we do what most people in reality do, we simply believe that this formula is correct and use it.
I ask you to choose the third way: teaching real world mathematics should include this way, too. This is not an argument for always doing so. My didactical argument is that students should learn at school how to work with mathematics in reality, too. This will give them a better view of mathematics itself and of the relation between mathematics and the world around us.


Excursion 1: Some Hints About the Mathematical Background

For all teachers who prefer to go into a statistics lesson I will now give some hints about the background of this formula. Where has this formula been taken from and what does it consist of? It is a formula used in probability calculations, or more precisely in connection with the normal distribution. Let's take a specific work process or a work station (I will use the term work process from now on). Suppose we want to determine the current percentage of time that a specific work process takes. How many times should we observe the process in order to be 95% confident that the estimated (sample) proportion is within e percentage points of the true proportion? To do so, we make random observations of the construction process of n different planes and check whether what we observe is the process we have defined as our focus or a different one. Let's translate the variables:
– p ... is the true proportion of time a specific work process takes in relation to the whole production (= the probability of success in a single trial), with q = 1 − p
– n ... is the total number of observations (number of repeated trials)
– X ... is the random variable measuring the number of times out of n observations we find the paper plane in this particular process
– The random variable X is a binomial random variable with parameters n and p: X ~ B(n, p)
– X is approximately normally distributed; this means an approximation to B(n, p) is given by the normal distribution with mean value E(X) = n·p = μ and standard deviation σ(X) = √(n·p·q)

Confidence intervals can be calculated for the true proportion. The underlying distribution is binomial. We estimate p with 95% confidence (a 95% confidence interval). For ρ = 0.95 it follows that α = 1 − ρ = 0.05, so we get the area

1 − α/2 = 0.975

For a value of 0.975 the z-score is equal to 1.96; 1.96 is the approximate value of the 97.5th percentile point of the normal distribution. 95% of the area under a normal curve lies within roughly 1.96 standard deviations of the mean. This number is therefore used in the construction of approximately 95% confidence intervals.
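The value 1.96 can be checked numerically: the standard normal cumulative distribution function Φ can be expressed through the error function, which is available in Python's standard library (a quick sketch):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(phi(1.96))          # approximately 0.975
print(2 * phi(1.96) - 1)  # approximately 0.95
```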


To form an estimated proportion, take X, the random variable, and divide it by n, the number of trials (observations). The estimated proportion (1/n)·X is our new random variable; dividing X by n also divides the mean and the standard deviation by n, so we obtain a normal distribution for proportions.

For the proportion, the mean is

E((1/n)·X) = (1/n)·E(X) = (1/n)·n·p = p, denoted E((1/n)·X) = p = μ̂

For the proportion, the standard deviation is

σ((1/n)·X) = (1/n)·σ(X) = (1/n)·√(n·p·q) = √(p·q/n)

where σ(c·X) = c·σ(X) for c ≥ 0; we denote σ((1/n)·X) = √(p·q/n) = σ̂

Thus (1/n)·X approximately follows a normal distribution for proportions:

(1/n)·X ~ N(p, p·q/n)

For a confidence level of ρ = 95% the confidence interval has the form [μ̂ − σ̂·z, μ̂ + σ̂·z]:

P(μ̂ − σ̂·z ≤ (1/n)·X ≤ μ̂ + σ̂·z) = Φ(z) − Φ(−z) = 2·Φ(z) − 1 = 1 − α = ρ

The range of the interval is 2·σ̂·z. Demanding precision e means that the half-range of the interval equals e:

σ̂·z = e


Solving for n gives us an equation for the sample size:

σ̂·z = e
√(p·q/n)·z = e
√(p·q/n) = e/z
p·q/n = e²/z²
n = z²·p·q / e²

For a confidence level of 95% we have z = 1.96, hence:

n = 1.96²·p·q / e²
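The plausibility testing recommended above can be made concrete with a small simulation: draw binomial samples of the size the formula prescribes and count how often the estimated proportion really falls within e of the true p. A sketch (the values of p and e are chosen for illustration):

```python
import math
import random

random.seed(1)

p, e, z = 0.069, 0.05, 1.96
n = math.ceil(z**2 * p * (1 - p) / e**2)  # sample size from the formula

trials = 5000
hits = 0
for _ in range(trials):
    # one simulated Multi Moment study: n observations, x successes
    x = sum(random.random() < p for _ in range(n))
    if abs(x / n - p) <= e:
        hits += 1

print(n, hits / trials)  # n = 99; the coverage should come out near 0.95
```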

The sample size should therefore be n observations in order to be 95% confident that the estimated (sample) proportion is within e percentage points of the true proportion of time that a specific work process takes.

Excursion 2: Some Ideas About the Plausibility of the Formula

Did we find a suitable formula? How can we examine the formula in more detail? We will study the formula with a software program. As we are working with percentages, we take p, the true proportion of time a specific work process takes in relation to the whole production, times 100:

100·p = x


We plot the graph to see how n changes as p increases from 0 to 1, or x increases from 0 to 100 (we know, of course, that less than 0% or more than 100% would not make sense). (Figure: the horizontal axis shows the percentage x, the vertical axis the number of observations n.)

We obtain a parabola with its maximum at p = 50%. If a work process takes almost all or almost none of the time, the number of needed observations shrinks to zero (p = 0% or 100%). This is obviously correct: if a work process takes none or all of the time, we do not need any observations at all.

What effect does our demand for more or less accuracy of measurement have? For example, in our video we measured a time of 6 seconds for work process number 5. 6 out of 87 seconds is 6.9%, so p = 0.069. For the accuracy of measurement we demand a 5% margin of error (or 10% or 1%), that is, e = 0.05 (or 0.1 or 0.01). Now we use these numbers in our formula:

n = 1.96² · 0.069 · (1 − 0.069) / 0.05² = 98.7 (for an accuracy of measurement of 5%)
n = 1.96² · 0.069 · (1 − 0.069) / 0.1² = 24.68 (that is, 25 for an accuracy of measurement of 10%)
n = 1.96² · 0.069 · (1 − 0.069) / 0.01² = 2468 (for an accuracy of measurement of 1%)

We plot the graph again, this time choosing p = 0.069 and letting e vary. (Figure: the horizontal axis shows the accuracy of measurement e, the vertical axis the number of observations n.)
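These computations can be reproduced with a few lines of Python (a sketch; the helper function name is mine):

```python
def observations_needed(p, e, z=1.96):
    """Sample size n = z^2 * p * (1 - p) / e^2."""
    return z**2 * p * (1 - p) / e**2

p = 0.069  # work process number 5: 6 out of 87 seconds
for e in (0.05, 0.1, 0.01):
    # prints roughly 98.7, 24.7 and 2468, matching the values above
    print(e, round(observations_needed(p, e), 2))
```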

Again, our graph seems plausible: if we ask for greater accuracy, the number of needed observations grows – ad infinitum. We end our excursion with the conclusion that our formula seems plausible.

Ad 3) Back to Step Five: What now? Almost 2500 observations in one lesson – that is impossible. Even 25 observations, which allow a precision of only 10%, are quite a lot. It seems we have to plan more precisely! How long does an observation take? We have to consider what is to be observed and in which manner. One group of students should plan the observations of the work stations. If, for example, there are 26 students in a class and four students work at the four stations of each production line, 2 students remain to make observations from a suitable position. One of the two observes and dictates (for example, "group 6, station III") and the other


one takes notes. In a class of 28 students, two teams can make observations independently of each other. If more teams observe simultaneously, higher precision can be achieved. Furthermore, the interval between two observations should be long enough to allow the completion of one paper plane. However, as the brevity of one lesson does not allow this, we will choose a shorter interval.

At the end of step five we make a first attempt: paper plane production and observations. This means 10 minutes of production with an observation every 10 seconds (i.e., 6 times per minute), 6 times 10 in total, which makes 60 observations, followed by a discussion of the results. That is the plan. In reality, however, there will always be complications and errors, which still have to be taken into account. For the next step we obtain the following measurement report:

Number of measurement   Group   Station   Work process
1                       1       I         1
2                       2       I         2
3                       3       II        6
4                       4       II        7
...
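Such a measurement round can also be rehearsed with a small simulation before trying it in class: pick random moments within one 87-second folding cycle, tally which station is busy at each moment, and compare the estimated shares of working time with the true ones (a sketch; the station durations are the merged ones from above):

```python
import random

random.seed(42)

durations = {"I": 24, "II": 22, "III": 21, "IV": 20}  # seconds per cycle
cycle = sum(durations.values())  # 87 seconds in total

def station_at(t):
    """Which station is busy at second t of the folding cycle?"""
    for name, d in durations.items():
        if t < d:
            return name
        t -= d
    return name  # numerical edge case: t equals the cycle length

n = 2000  # number of random observations
counts = {name: 0 for name in durations}
for _ in range(n):
    counts[station_at(random.uniform(0, cycle))] += 1

for name in durations:
    true_share = durations[name] / cycle
    print(name, round(counts[name] / n, 3), "true:", round(true_share, 3))
```

The estimated shares should come out close to 24/87, 22/87, 21/87 and 20/87, and the spread of repeated runs illustrates the precision e discussed above.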

In step six, the results of the first measurement will be analyzed. For the purpose of this chapter I have compiled the following table and chosen the values in such a way that they can be used effectively in the next step.

Work process number                    Duration in       Percent   Percent
                                       seconds (video)   (video)   (measured)
1 Take a piece of paper                6                 8         7
2 Fold down the centre                 7                 8         8
3 Sharpening the crease                3                 3         2
4 Re-opening                           2                 2         2
5 Fold the first corner                6                 7         6
6 Fold the second corner               7                 8         8
7 Fold the first corner again          6                 7         9
8 Fold the second corner again         9                 10        10
9 Fold up the two sections             6                 7         8
10 Determine web width                 1                 1         3
11 Fold the left wing                  6                 7         8
12 Fold the right wing                 8                 9         8
13 Fold the edge of the left wing      5                 6         6
14 Fold the edge of the right wing     5                 6         6
15 Flight test (not shown in video)    10                11        8
TOTAL                                  87                100       100


What does the table indicate? The results correspond approximately to the values obtained from the video as well as to the expected values. The significance lies in the details. Step 10, for example, takes three seconds instead of one. Such seemingly small differences can have severe effects on production. The measurement should be as precise as possible because the results will have implications for work organization and improvement of the output. If such measurement and reorganisation in real-life production can lead to an increase in profit by one or more percent, it is well worth the measuring effort.

There are a number of options for the remaining lesson. One option is to end the lesson with the conclusion that the division of labour obviously results in an increase in profit; production becomes faster and more efficient. To increase the efficiency it is important to know exactly how long each work process takes. One suitable method to find out the exact length of each working step in the process makes use of statistics and is called Multi Moment Recording.

If the students decide on yet another step (step seven), the question of how this instrument is used in practice to determine labour time could serve as the motivation for this step. The starting point for such considerations, using internet research (discussion forums), could be the question of how to deal with breaks during work. In our example, breaks have been ignored so far; everyone works as fast as possible, right? However, everyone knows that in real life breaks are essential – and there is often disagreement about when and how long breaks should be. I would like to include these considerations in our suggested lesson. One possible option is to take a 30 (or 10) second break after completion of every single paper plane. Thus, work process number 16 is a 30-second break. This will probably be reflected in another round of measurement. Let's check the paper plane production in class. What's going to happen?
As before, the work processes are measured again – this time including breaks. One thing is crucial: the workers whose work processes are measured can influence the measurement results by intentionally taking (or not taking) a break exactly at the moment of measurement. However, this kind of intentional influence is possible only if the time of measurement is known or predictable. What can be done if this form of influence is to be avoided? After all, the purpose of this measurement is to increase the efficiency of production.

Do you think students will consider a random measurement? This idea does not seem too far-fetched – it is one of many opportunities for students to discover and try out for themselves. How can this idea be realized? Obviously, the moments of measurement play an important role. If measurement takes place at irregular intervals, chance comes into play: the moments of measurement are determined by a random generator (such as a die). If initially observations were made every 10 seconds, you can try the following this time: throw a die and alternately add the score to and subtract it from 10. Furthermore, we agree to take the value 10 every time the die shows a 6. In this way we obtain a sequence of numbers such as, for example, 12, 8, 15, 11, ... Now the class can discuss, try out and decide. Finding a series of random measurement moments can be exciting. Will the measurement of work processes be effective


this time? Will the measurements be accurate? And to what extent will attempts to influence the process actually be constrained? The practice of real-life measurement in manufacturing shows that random variables are indeed used for this purpose. This provides a further argument for reflection and discussion in class.

In the final, eighth step, the entire teaching sequence ought to be reflected upon. What have we learned? Which questions remain unanswered? What follows from it? In any case, students have gained a brief insight into the real working world. The paper plane production has given the students some understanding of the historical development of the production of goods, ranging from manual production to manufacturing and assembly line production supported and optimized by mathematics.

Perhaps students or parents have had critical questions concerning this interdisciplinary and cross-curricular mathematics lesson: Is this really mathematics? There are hardly any calculations! My answer is YES! Mathematics is much more comprehensive than just solving a given arithmetic problem; it is a means of describing and shaping the world. Mathematics can help to organize and communicate interrelations (see R. Fischer). I consider it essential that mathematics lessons at school address these socially relevant aspects of mathematics. The application of mathematics in social contexts clearly shows that it is not at all neutral and value-free, particularly with respect to the selection of aspects to be modelled and goal-setting as well as the application of mathematics (Maaß 1990 and 2008).

Maybe the reflection in class takes a different course and there is disagreement on whether a production process should be optimized in the first place. From my point of view, increases in productivity and in the number and quality of manufactured goods per time unit are the basis for growing social prosperity and wealth.
How this prosperity is distributed and who benefits from it is a different question. The evaluation of this increase in productivity will depend on whether or not somebody benefits from it. Those who believe that this kind of progress can only be achieved at the expense of personal health due to the high intensity of labour, the kind of work itself or staff reduction resulting in unemployment will certainly judge this increase in productivity negatively.

REFERENCES

http://en.wikipedia.org/wiki/Methods-time_measurement
http://www.rsscse.org.uk/ts/
Fischer, R. (2007). Technology, mathematics and consciousness of society. In U. Gellert & E. Jablonka (Eds.), Mathematisation and demathematisation: Social, philosophical and educational ramifications (pp. 67–80). Rotterdam: Sense Publishers.
Fischer, R. (2006). Materialisierung und Organisation. Zur kulturellen Bedeutung der Mathematik. Wien/München: Profil.
Maasz, J. (1990). Mathematische Technologie = sozialverträgliche Technologie? Zur mathematischen Modellierung der gesellschaftlichen "Wirklichkeit" und ihren Folgen. In R. Tschiedel (Ed.), Die technische Konstruktion der gesellschaftlichen Wirklichkeit. München: Profil-Verlag.
Maasz, J. (2008). Manipulated by mathematics? Some answers that might be useful for teachers. In V. Seabright et al. (Eds.), Crossing borders – research, reflection and practice in adults learning mathematics. Belfast/Limerick.
Siller, H.-S. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. Aachen: Shaker Verlag.

A. Univ. Prof. Univ. Doz. Dr. Juergen Maasz Universitaet Linz Institut fuer Didaktik der Mathematik Altenberger Str. 69 A - 4040 Linz


JUERGEN MAASZ AND HANS-STEFAN SILLER

14. MATHEMATICS AND EGGS

Does this Topic Make Sense in Education?

INTRODUCTION

Eating "Easter eggs" is something many young people like very much. When we enjoyed eating eggs at Easter in the year 2010 we started to think about eggs from a mathematical point of view: eggs are very simple objects, but it is not simple to calculate quantities of interest. What is the volume and the surface area of a given egg? This is not typical mathematics lesson content in Austrian schools.

Our first approach to calculating eggs was a "scientific" one. We went to the library and found a very nice book by Hortsch (1990) with a lot of formulas. We tried to understand them, and then we used them. This is the content of the first part of this chapter. The second part of the chapter gives a much easier approach to a more practical solution. We started with the old and simple idea of Cavalieri: we divided the egg into several slices and made a model of the volume and the surface area. Each slice is approximated by a cylinder, and the tower of cylinders approximates the egg. With the help of a spreadsheet we think that younger students should be able to find a good approximation for the volume and the surface area.

Our first question is: Why should students be motivated (and not forced) to do this? Thinking about motivation led us to some research on the internet about the biggest eggs and other events involving eggs. This is the third part of our chapter, answering didactical questions around our proposal to learn modelling by looking at objects of daily life, like eggs.

LEARNING MATHEMATICS AND MODELLING BY CALCULATING EGGS - SOME ARGUMENTS FROM MATHEMATICS EDUCATION

Modelling objects of daily life is a topic which could be a motivating and fascinating access point to mathematics education. For this reason this chapter presents one such possibility: how an object which is familiar to all students in school, the egg, can be an item for an exciting discussion in schools. Based on two mathematical definitions a figure is constructed, and from those the descriptions in polar coordinates and as Cartesian implicit equations are developed. This stepwise modelling cycle shows how modelling of daily objects could be done in class while satisfying the demands of the curricula.

J. Maasz and J. O'Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 239–256.
© 2011 Sense Publishers. All rights reserved.

MAASZ AND SILLER

The concept of modelling for education has been discussed for a long time. It is a basic concept in all parts of the sciences and in particular in mathematics. This concept is a well accepted fundamental idea (cf. Siller, 2008); in this case the preliminary definition of Schweiger (1992) is used: "A fundamental idea is a bundle of activities, strategies or techniques, which
1. can be shown in the historical development of mathematics,
2. are sustainable to order curricular concepts vertically,
3. are ideas for the question of what mathematics is, for communicating about mathematics,
4. allow mathematical education to be more flexible and clear,
5. have a corresponding linguistic or activity-based archetype in language and thinking."
Therefore it is not remarkable that the concept of modelling can be found in a lot of different curricula all over the world. For example, in the mathematics curriculum for Austrian grammar schools one can find a lot of quotations supporting it (cf. BMUKK, 2004). Interpreting those quotations, modelling can be seen as a process-related competence. That means
– translating the area or the situation to be modelled into mathematical ideas, structures and relations,
– working with a given or constructed mathematical model,
– interpreting and testing results.
Such process-related competencies have been described by many authors, e.g. Pollak (1977), Müller and Wittmann (1984), Schupp (1987), Blum (1985). With regard to all the developments in modelling, Blum and Leiß (2007) have designed a modelling cycle which takes a more cognitive point of view.

PROBLEMS IN THE REALM OF STUDENTS' EXPERIENCES IN MATHEMATICS EDUCATION

Real-life problems, like problems related to the environment, sports or traffic, are often a starting point for calculations and applications of mathematics. But before using mathematics in such fields it is necessary that the problem is well understood. This demands a lot of time and dedication, because it is necessary to translate the problem from reality to mathematics and back to reality. Therefore models are used as an adequate description of the given situation. Modelling problems based on students' experiences means creating a picture of reality which allows us to describe complex procedures in a common way. Creating such an image has to observe two directions, as Krauthausen (2003) suggests:
– using knowledge for developing mathematical ideas,
– developing knowledge about reality by its reliance on mathematics.
If such problems are discussed in mathematics education it is possible that students will be more motivated towards mathematics. But there are a lot of other arguments why such problems should be discussed. They
– help students to understand and to cope with situations in their everyday life and environment,
– help students to achieve necessary qualifications, like translating from reality to mathematics,

MATHEMATICS AND EGGS

– help students to get a clear and straightforward picture of mathematics, so that they are able to recognize that this subject is necessary for living,
– motivate students to think about mathematics in an in-depth manner, so that they can recall important concepts even if they were taught a long time ago.
If teachers are concerned with the listed points, then they will be able to find a lot of interesting topics to discuss with their students. As a useful example we want to present a problem in the realm of students' experiences by observing an egg.

THE EGG - MOTIVATION AND STARTING EXAMPLES

The teaching and learning of mathematics works much better if students are intrinsically motivated to learn. Real world problems should bring this type of motivation into the classroom. So we start with a little collection of real world questions concerning eggs. You will find the answers at the end of this chapter, shown as examples for school.

An easy approach can be made by looking at eggs made of chocolate. If we get an egg that seems to be as big as a hen's egg, how much chocolate is it made of? Is it cheaper to buy a normal chocolate bar if we have to pay for it? What is the price of the same type of chocolate in different forms? If teachers and students open their ears and eyes for "egg" questions they will find many of them. Here is one we heard in the news: a German company produces "Children Surprise Eggs" (Figure 1). Inside each of these eggs is a hidden toy, the surprise. The shell consists of chocolate.

Figure 1. “Children Surprise Egg”.

How much chocolate is the shell made from? If we compare the costs of chocolate and toys: is it a good idea to buy such surprise eggs? A lot of real world problems concerning eggs that are good for motivation in class can be found in daily media and on the internet. For example: What is the biggest egg in the world? (Figure 2) Where can you find it? Is there any mathematics involved in its construction?
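A first rough answer to the chocolate question can be modelled by treating the egg as an ellipsoid shell of constant thickness. The sketch below uses Python rather than a spreadsheet; all dimensions are assumptions for illustration, not measurements of the real product:

```python
import math

def ellipsoid_volume(a, b, c):
    """Volume of an ellipsoid with semi-axes a, b, c (in cm)."""
    return 4 / 3 * math.pi * a * b * c

# Assumed outer semi-axes of the egg (cm) and shell thickness (cm).
a, b, c = 2.0, 2.0, 3.0
t = 0.3

outer = ellipsoid_volume(a, b, c)
inner = ellipsoid_volume(a - t, b - t, c - t)
shell = outer - inner  # volume of chocolate in cm^3

print(round(shell, 1), "cm^3 of chocolate")  # about 17.6 cm^3 with these numbers
```

Multiplying the shell volume by the density of chocolate (roughly 1.3 g/cm³) turns this into a mass, which students can compare with the price of a plain chocolate bar.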


Figure 2. Biggest Easter-egg 2008 (http://www.vol.at/news/tp:vol:special_ostern_aktuell/artikel/ das-groesste-osterei-der-welt/cn/news-20080312-03572171)

Another question which could be interesting might be the following: What is the biggest egg made of glass? (Figure 3) What is its weight? Is its form solid?

Figure 3. Biggest egg made of glass (http://www.joska.com/news/news_ostern_2008)

THE EGG - STARTING POINTS FOR CALCULATIONS

If we have a closer look at an egg, we will see that its shape is very harmonic and impressive. Considering a hen's egg it is obvious that the shape of all (hen's) eggs is essentially the same. Because of the fascinating shape of eggs we tried to think about a method to describe the shape of such an egg with mathematical methods. Searching the literature we found some material by Münger (1894), Schmidt (1907), Malina (1907), Wieleitner (1908), Loria (1911), Timmerding (1928) and Hortsch (1990). The book by Hortsch is a very interesting summary of the most important results on 'egg-curves'. He also finds a new way of describing egg-curves by experimenting with known parts of 'egg-curves'. The approaches used by these authors to derive the 'egg-curves' are very fascinating. But none of the above listed authors has thought


about a way to create such a curve by using elementary mathematical methods. The way the authors describe such curves is not suitable for mathematics education in schools. So we thought about a way to find such curves with the help of well known concepts from education. Our first starting point is a quotation found in Hortsch (1990): "The located ovals were the (astonishing) results of analytical-geometrical problems inside of circles." Another place to start is a definition of 'egg-curves' found by Schmidt (1907) and presented in Hortsch (1990).

Definition 1. Schmidt (1907) writes: "An 'egg-curve' can be found as the geometrical position of the base point of all perpendiculars to secants, cut from the intersection points of the abscissa with the bisectrix, which divide the (obtuse) angles between the secants and parallel lines in the intersection points of the secants with the circle circumference in halves. The calculated formula is r = 2·a·cos²ϕ or (x² + y²)³ = 4·a²·x⁴."

In education the role of technology is more and more important. Different systems, like computer algebra systems (CAS), dynamical geometry software (DGS) or spreadsheets, are now commonly used in education. With the help of technology it is possible to draw a picture of the given definition immediately. In the first part we use a DGS because with its help it is possible to draw a dynamical picture of the given definition (Figure 4). First of all we construct one point P of such an egg as it is given in the definition.

Figure 4. Translating the definition of Schmidt to the DGS.

According to the given instruction we first construct a circle (centre and radius arbitrary) and then the secant from C to A (points arbitrary). After that a parallel line to the x-axis through the point A (= intersection point of secant and circle) is drawn, and the line that bisects the angle CAD can be determined, which crosses the x-axis.


So we get the point S. Now we are able to draw the perpendicular to the secant through S. The intersection point of the secant and the perpendicular is called P and is a point of the 'egg-curve'. We activate the "Trace on" function and use the dynamical aspect of the construction. By moving A along the circle the 'egg-curve' is drawn as Schmidt described it. This can be seen in the following figure (Figure 5):

Figure 5. Egg-curve constructed with the DGS.
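The construction can also be carried out numerically. The following Python sketch is our own addition, not part of the original text; it performs Schmidt's construction for single secant angles ϕ (with 0 < ϕ < 90°, circle of radius a centred at (a, 0), C in the origin) and prints the constructed distance CP next to the value of Schmidt's formula 2⋅a⋅cos²ϕ:

```python
import math

def schmidt_point(a, phi):
    """Schmidt's construction for one secant angle phi (0 < phi < pi/2);
    circle of radius a centred at (a, 0), C in the origin.
    Returns the constructed point P and the distance CP."""
    ca = 2 * a * math.cos(phi)                    # |CA| (Thales: right angle at A)
    A = (ca * math.cos(phi), ca * math.sin(phi))  # intersection of secant and circle
    ac = (-math.cos(phi), -math.sin(phi))         # unit vector from A towards C
    ad = (1.0, 0.0)                               # unit vector along the parallel AD
    w = (ac[0] + ad[0], ac[1] + ad[1])            # direction of the angle bisector
    t = -A[1] / w[1]                              # parameter where the bisector ...
    S = (A[0] + t * w[0], 0.0)                    # ... meets the x-axis
    u = (math.cos(phi), math.sin(phi))            # unit vector along the secant
    r = S[0] * u[0] + S[1] * u[1]                 # projecting S onto the secant gives P
    return (r * u[0], r * u[1]), r

a = 1.0
for deg in (10, 30, 50, 70):
    phi = math.radians(deg)
    P, r = schmidt_point(a, phi)
    # constructed distance CP next to Schmidt's formula 2*a*cos^2(phi)
    print(deg, round(r, 6), round(2 * a * math.cos(phi) ** 2, 6))
```

For every sampled angle the two printed numbers coincide, which is exactly Schmidt's claim.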

Now we have to find a way to calculate the formulas r = 2⋅a⋅cos²ϕ or (x² + y²)³ = 4⋅a²⋅x⁴ as mentioned above. Let us start with the following figure (Figure 6), which shows the situation constructed from the given definition:

Figure 6. Initial situation for calculating the equations of the egg-curve.

MATHEMATICS AND EGGS

We know (from the construction) that the triangle CPS is right-angled. Furthermore we recognize that the distances CA and CS are the same (triangle CAS is isosceles because of the bisected angle) and that the triangle CAB is also right-angled, because it is situated in a semicircle. This is shown in Figure 7 (where the real ‘egg-curve’ is shown as a dashed line).

Figure 7. Similarity of the triangles.

From the known points C (0, 0), A (x, y), and B (2⋅a, 0) and through the instructions provided, the coordinates of the points S and P can be calculated. Only a little bit of vector geometry is necessary for this. The calculation itself can be done with a CAS. First of all we have to define the points and the direction vector of the bisecting line w:



Now we calculate the normal form of the equation of the bisecting line, intersect it with the x-axis and define the intersection point S.

Then we calculate the intersection point P of the secant and the perpendicular through S.

Now all the important elements for finding the ‘egg-curve’ are calculated. Let’s have a closer look at Figure 7 again. It is easy to recognize that there are two similar triangles – triangle CPS and triangle CAB. The distance CP is r and the radius of the circle is a, so the distance CB = 2⋅a. The other two distances which are needed are CA and CS; the calculated length is CA = CS = √(x² + y²). In the next step the similarity theorem (of triangles) can be applied:

CP : CA = CS : CB,   i.e.   r : √(x² + y²) = √(x² + y²) : 2⋅a



Transforming this equation produces r = (x² + y²)/(2⋅a). By using the characteristic of the right-angled triangle CAB and calling ϕ the angle ACB we know that cosϕ = CA/CB = √(x² + y²)/(2⋅a). Transforming this equation yields √(x² + y²) = 2⋅a⋅cosϕ. Inserting this connection in the equation above yields another equation, r = (2⋅a⋅cosϕ)²/(2⋅a). Simplifying this equation by cancelling common terms gives the result r = 2⋅a⋅cos²ϕ. By substituting r = √(x² + y²) and cosϕ = x/√(x² + y²) it is possible to get the implicit Cartesian form mentioned in the definition:

(x² + y²)^(3/2) = 2⋅a⋅x²

respectively

(x² + y²)³ = 4⋅a²⋅x⁴

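That the polar form r = 2⋅a⋅cos²ϕ and the implicit Cartesian form (x² + y²)³ = 4⋅a²⋅x⁴ really describe the same curve can also be checked numerically with a few lines of Python (our own sketch; the value of a is chosen arbitrarily):

```python
import math

a = 2.5  # radius of the starting circle, chosen arbitrarily
for deg in range(-80, 81, 5):
    phi = math.radians(deg)
    r = 2 * a * math.cos(phi) ** 2          # polar form of the egg-curve
    x, y = r * math.cos(phi), r * math.sin(phi)
    # the point (x, y) also satisfies the implicit Cartesian form
    assert math.isclose((x * x + y * y) ** 3, 4 * a * a * x ** 4,
                        rel_tol=1e-9, abs_tol=1e-12)
print("polar and Cartesian form agree for all sampled angles")
```

The same check, with 4⋅a² replaced by a², works for Münger's curve discussed below.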
As we have seen, the ‘egg-curve’ is modelled through elementary mathematical methods. By using technology, teachers and students get the chance to explore such calculations by using the pivotal concept of modelling.

Definition 2

Another approach for constructing an egg-curve is formulated by Münger (1894). He writes: “Given is a circle with radius a and a point C on the circumference. CP1 is an arbitrary position vector, P1Q1 the perpendicular to the x-axis, Q1P the perpendicular to the vector. While rotating the position vector around C, point P describes an egg-curve. The equation of this curve is r = a⋅cos²ϕ in polar coordinates or, in Cartesian form, (x² + y²)³ = a²⋅x⁴.”

As stated in the construction instructions, a circle (radius arbitrary) and a point C on the circumference of the circle are constructed. Then an arbitrary point P1 on the circumference of the circle is constructed and the perpendicular to the x-axis is drawn through P1. So the intersection point with the x-axis, Q1, can be found.


After that the perpendicular to the secant CP1 through Q1 is constructed. All these facts are shown in Figure 8:

Figure 8. Translating the definition of Münger to the DGS.

If the point P1 is moved around the circle, P will move along the ‘egg-curve’. Again this is easier to see if the “Trace on” option is activated (Figure 9).

Figure 9. Egg-curve constructed with the DGS.

The formula given by Münger can be derived in a similar way as the other formula was found. The most important fact which has to be seen here is that in this picture two right-angled triangles CPQ1 and CP1A exist. These triangles are similar (Figure 10).


Figure 10. Similarity of the triangles.

The coordinates of the points can be found mentally – without any calculation: C (0, 0), P1 (x, y), A (2⋅a, 0), Q1 (x, 0). If the distance CP is called r, the coordinates of P are not needed; otherwise they can be calculated analytically. For the sake of completeness we write down the coordinates of P, the foot of the perpendicular from Q1 onto CP1: P (x³/(x² + y²), x²⋅y/(x² + y²)). Using the fact that both triangles are similar, the following equation is obvious:

CP : CP1 = CQ1 : CA,   i.e.   r : √(x² + y²) = x : 2⋅a

Through elementary transformation, using for the right-angled triangle CP1A the mathematical fact that CP1 = √(x² + y²) and substituting this term by 2⋅a⋅cosϕ, the following equation is found:

2⋅a⋅x⋅cosϕ = 2⋅a⋅r

This result yields r = x⋅cosϕ. Because of the assumption (in the calculation) that x is part of our circle – it is the x-coordinate of P1 – it can be substituted by x = a⋅cosϕ, with a as the radius of the starting circle. So the formula of Münger (r = a⋅cos²ϕ) is found in polar coordinates. If the implicit Cartesian form is desired, another substitution has to be done. The result in this case is:

(x² + y²)³ = a²⋅x⁴


CALCULATING EGGS WITH COMPUTERS

Our first approach to calculating the volume of an egg is guided by our academic education. If we try to find a solution for a mathematical problem that we cannot remember or have not seen before, we walk into a library and start looking for literature such as that of Narushin (2005) or Zhou et al. (2009). In the first part of this chapter we explained how useful this way can be – as in many other situations. We found a lot of formulas for calculating eggs that have been proved over the course of history. We tried to understand these formulas and to use them. Our mathematical knowledge was trained enough to search for these formulas (we knew what we were looking for and could decide which book or article would help us) and to understand the literature. On reflection, the approach outlined above follows the typical and traditional academic way of solving problems. This is great and useful – but it needs a lot of mathematical training to start it and to bring it to a successful conclusion. Continuing this reflection we started to look for an easier way. We want to describe this way now by starting with a basic idea going back to Bonaventura Cavalieri (1598–1647). He said that we can find the volume of an object if we divide it into slices and add up the volumes of the slices. If the slices become thinner and thinner we enter a typical process that we know from analysis – the limit of this process should be the exact volume. Indeed this is true for the volume under certain conditions. Thinking about this we made a decision: we want to calculate the volume of an egg more or less exactly, but we do not want to go into an infinite process. In simple words, we want to solve the question without analysis this time. How exactly do we need the volume of a hen’s egg or an egg made of chocolate? Maybe an exactness of 1 cm³ is good, maybe we want 1 mm³? We will reach this degree of exactness after a while even if we use a finite process.
Hence we divide the egg into some (or many) slices, and we know that we can count the number of slices. We can use a computer to add the volumes of the slices. When this basic idea is clear (or found by the students themselves) the rest of the work is fun, with a little bit of a trial-and-error strategy. We will show this now.

Figure 11. Taking a photograph.


We would like to have a general idea about the volume at the beginning. Our first step is a simple method to estimate it. We take the chocolate egg and put it under water (Figure 11). We look at the scale on the wall of the container and see a volume of about 70 cm³. How is it possible to estimate the volume of a slice of an egg? It looks like a cylinder, but we know that we make a little mistake by saying that. Let us have a look at Figure 12:

Figure 12. A picture of an “egg-slicer”.

Do you know an egg slicer like the one you see in the picture? All slices together have the same volume as the egg had before we used the slicer. Now we have to find the volume of one slice and, in the next step, the volume of the sum of all slices. A typical slice somewhere in the middle is similar to a cylinder. The volume of a cylinder is V = r²·π·h (r is the radius of the base circle; h the height of the cylinder). Now we have to find r and h. If we cut the egg with a knife into halves and put one half on a sheet of paper we can take a pencil and draw a line around that half. By adding some coordinate axes a picture like Figure 13 can be drawn:

Figure 13. Profile of an egg.

We decide to start with six slices along the x-axis, which is scaled in centimetres. Each slice should have a height of 1 cm. What is the radius r? We take a close look at


one slice. We select the second one, from 1 cm to 2 cm. Looking at its profile we see something like Figure 14:

Figure 14. Finding one slice.

Looking at this draft we see at least two lines that could be the radius, one on the left side and one on the right side. In the left portion of the egg the left radius is always the shorter one, whereas in the right portion the right radius is shorter. Now we are facing an important point, a good opportunity for good ideas or inventions by the students. Students should learn to invent or discover new mathematics, and we propose to give them the chance to learn or practise this here. What are possible ideas? We show two main ways – both well known: one is known from the introduction of integrals (upper sum and lower sum), the other one is known as interpolation. In the first approach we take the bigger and the smaller radius and estimate their difference. As the slices become thinner this difference shrinks until we have reached the degree of exactness we want. For the second idea we take the radius at the point in the middle between the left side and the right side; in this case x is 1.5 cm. If the slices become thinner the radius in the middle will come closer and closer to the border radii. Again it is a question of the degree of exactness we want to have. Now we will explain both ideas with concrete numbers and results. We name the left radius r and the right radius R. Figure 15 shows the situation.

Figure 15. Looking for r and R.


Looking for concrete numbers for the length of the radius for x = 0 cm to 6 cm we come back to Figure 13. A spreadsheet is used to calculate the volumes of the cylinders and to sum them up (Table 1):

Table 1. Calculating first results

x     y (measured)   smaller radius   volume of cylinders   bigger radius   volume of cylinders
0.0   0.0            0.0              0.00                  1.7             9.08
1.0   1.7            1.7              9.08                  2.1             13.85
2.0   2.1            2.1              13.85                 2.3             16.62
3.0   2.3            2.3              16.62                 2.3             16.62
4.0   2.3            2.0              12.57                 2.3             16.62
5.0   2.0            0.0              0.00                  2.0             12.57
6.0   0.0            0.0              0.00                  0.0             0.00
Sum                                   52.12                                 85.36
Arithmetic middle                            68.74

The result is really surprising! We made a first trial with little effort and little exactness and we got a result that is near the empirical result we obtained with the measurement of water displacement at the beginning (68.74 is near 70.00). Are we ready now? NO – not at all! The way we got the lengths of the radii is not very exact. We made a draft on the paper and used a simple ruler to measure the distance from the x-axis to the outline of the egg. If we did this very well the lengths are perhaps correct within a small measurement error – let us estimate 1 mm. What would happen if the real length were 1 mm more in each case? (see Tables 2 and 3)

Table 2. Influence of inexact measurement 1: plus 1 mm

x     y     smaller radius   volume of cylinders   bigger radius   volume of cylinders
0.0   0.0   0.0              0.00                  1.8             10.18
1.0   1.8   1.8              10.18                 2.2             15.21
2.0   2.2   2.2              15.21                 2.4             18.10
3.0   2.4   2.4              18.10                 2.4             18.10
4.0   2.4   2.1              13.85                 2.4             18.10
5.0   2.1   0.0              0.00                  2.1             13.85
6.0   0.0   0.0              0.00                  0.0             0.00
Sum                          57.33                                 93.53
Arithmetic middle                   75.43


Table 3. Influence of inexact measurement 2: minus 1 mm

x     y     smaller radius   volume of cylinders   bigger radius   volume of cylinders
0.0   0.0   0.0              0.00                  1.6             8.04
1.0   1.6   1.6              8.04                  2.0             12.57
2.0   2.0   2.0              12.57                 2.2             15.21
3.0   2.2   2.2              15.21                 2.2             15.21
4.0   2.2   1.9              11.34                 2.2             15.21
5.0   1.9   0.0              0.00                  1.9             11.34
6.0   0.0   0.0              0.00                  0.0             0.00
Sum                          47.16                                 77.57
Arithmetic middle                   62.36

What is the result of our experiment with small errors? The length which was measured is about 20 mm. We estimated an error of 1 mm, that means about 5 percent. The result moves from 68.74 to 75.43 (plus 9.7 percent) or to 62.36 (minus 9 percent). The border line itself is about 1 mm thick. So we are really in danger of getting wrong results because we cannot measure the lengths exactly. One result of this reflection is that it is not useful to make the slices thinner before the method of measuring lengths is improved. This is similar to many situations in physics: without exact measurement it is often useless to think about more calculation. What about the second option we proposed – following the idea of interpolation? We take the point in the middle between the positions of r and R and measure the length at these points on the x-axis. In Table 4 the results are shown.

Table 4. Calculation through interpolation

x     y     cylinder volume
0.5   1.2   4.52
1.5   1.9   11.34
2.5   2.2   15.21
3.5   2.4   18.10
4.5   2.2   15.21
5.5   1.6   8.04
Sum         72.41

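The whole spreadsheet calculation can be reproduced in a few lines of Python (our own sketch; the measured radii are the y-values of Tables 1 and 4):

```python
import math

# radii (cm) measured at x = 0, 1, ..., 6 cm (column y of Table 1)
y = [0.0, 1.7, 2.1, 2.3, 2.3, 2.0, 0.0]
# radii (cm) measured at the midpoints x = 0.5, 1.5, ..., 5.5 (Table 4)
y_mid = [1.2, 1.9, 2.2, 2.4, 2.2, 1.6]
h = 1.0  # height of every slice in cm

def cylinder_volume(r, h):
    return r * r * math.pi * h

# lower and upper sum: use the smaller / bigger radius of every slice
lower = sum(cylinder_volume(min(y[i], y[i + 1]), h) for i in range(len(y) - 1))
upper = sum(cylinder_volume(max(y[i], y[i + 1]), h) for i in range(len(y) - 1))
middle = (lower + upper) / 2
interpolated = sum(cylinder_volume(r, h) for r in y_mid)

print(round(lower, 2), round(upper, 2))    # 52.12 and 85.36 as in Table 1
print(round(middle, 2))                    # 68.74, the arithmetic middle
print(round(interpolated, 2))              # 72.41 as in Table 4
```

With thinner slices (and better measurements) only the two lists of radii and the slice height have to be changed.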
We are satisfied to see that the calculated result is nearly the same as the empirical one. With this we think we have shown how Cavalieri’s idea and a spreadsheet can be used to reproduce the empirical result. Many further steps are possible to become more precise and to go further, like calculating the surface area. But we think it is not necessary to explain this here because teachers know what to do and how to do it.


EPILOGUE

Looking at objects from our daily life through the lens of mathematics can show interesting and motivating (mathematical) results. The degree of complexity is not an essential attribute for discussing objects from the realm of students’ experiences. It is necessary to show students that mathematics provides methods and instruments to analyse objects in daily life. For this reason Austrian and German mathematics educators founded the ISTRON group in 1990. The aim of this group is to look at problems in daily life and to show teachers how they can improve their teaching by implementing such examples. A more detailed look at mathematical aspects of an egg can be found in the article of Siller, Maaß & Fuchs (2009). All in all it is necessary that adequate problems are presented for education. Problems which are already known in research should be adapted and prepared for education. The fields for educational research in this area should be expanded and strengthened.

SOME LAST REMARKS AND HINTS

For the sake of completeness we want to give hints and solutions for the possible examples listed in this article that shall motivate students:
1. “Children Surprise Egg” (Figure 1): The old version of this egg is similar to the egg of a chicken. The mass of the chocolate is about 20 g, with a volume of about 15 cm³. The EU has decided that the toys inside these eggs are so small that little children are in danger of swallowing them. The next generation of the eggs is bigger. The mass of chocolate will be about 100 g and the estimated size of the egg is 12.3 cm for the height and 8.3 cm for the diameter. The volume of this egg can be calculated as V = 72.26 cm³ (cf. Siller, Maaß & Fuchs, 2009, p. 106).
2. Biggest Easter egg 2008 (Figure 2): It has a surface of about 130 m².
3. World’s biggest glass egg (Figure 3): The volume of this egg is about 0.15 m³. The information on the internet says that this egg weighs about 20 kg. If it were made of glass without air inside it would weigh about 375 kg (density of glass about 2.5 g/cm³). The shell would have a thickness of about 0.58 cm.

REFERENCES

Blum, W. (1985). Anwendungsorientierter Mathematikunterricht in der didaktischen Diskussion. Mathematische Semesterberichte, 32(2), 195–232.
Blum, W., & Leiss, D. (2007). How do students and teachers deal with mathematical modelling problems? The example “Filling up”. In Haines, et al. (Eds.), Mathematical modelling (ICTMA 12): Education, engineering and economics. Chichester: Horwood Publishing.
BMUKK: AHS-Lehrplan Mathematik (Oberstufe). Wien: Bundesministerium für Unterricht und Kultur. Retrieved from http://www.bmukk.gv.at/schulen/unterricht/lp/lp_ahs_oberstufe.xml
Hortsch, W. (1990). Alte und neue Eiformeln in der Geschichte der Mathematik. München: Selbstverlag Hortsch.

Krauthausen, G., & Scherer, P. (2003). Einführung in die Mathematikdidaktik. Heidelberg: Spektrum.
Loria, G. (1911). Ebene Kurven I und II. Berlin.
Malina, J. (1907). Über Sternenbahnen und Kurven mit mehreren Brennpunkten. Wien.
Mathematik Lehrplan AHS-Oberstufe, Wien, 2004. Retrieved from http://www.bmukk.gv.at/schulen/unterricht/lp/lp_ahs_oberstufe.xml (27.08.2008)
Müller, G., & Wittmann, E. Ch. (1984). Der Mathematikunterricht in der Primarstufe. Braunschweig/Wiesbaden: Vieweg.
Münger, F. (1894). Die eiförmigen Kurven. Dissertation, Universität Bern, Bern.
Narushin, V. G. (2005). Egg geometry calculation using the measurements of length and breadth. Poultry Science, 84, 482–484.
Pollak, H. O. (1977). The interaction between mathematics and other school subjects (including integrated courses). In Proceedings of the third international congress on mathematical education (pp. 255–264). Karlsruhe.
Schmidt, C. H. L. (1907). Über einige Kurven höherer Ordnung. Zeitschrift für mathematischen und naturwissenschaftlichen Unterricht, 38, 485.
Schupp, H. (1987). Applied mathematics instruction in the lower secondary level: Between traditional and new approaches. In B. Werner, et al. (Eds.), Applications and modelling in learning and teaching mathematics (Vol. 37). Chichester: Horwood.
Schweiger, F. (1992). Fundamentale Ideen – Eine geistesgeschichtliche Studie zur Mathematikdidaktik. Journal für Mathematik-Didaktik, 13(2/3), 199–214.
Siller, H.-St. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. In K. Fuchs (Hrsg.), Schriften zur Didaktik der Mathematik und Informatik. Aachen: Shaker Verlag.
Siller, H.-St., Maaß, J., & Fuchs, K. J. Wie aus einem alltäglichen Gegenstand viele mathematische Modellierungen entstehen – Das Ei als Thema des Mathematikunterrichts. In H.-St. Siller & J. Maaß (Hrsg.), ISTRON-Materialien für einen realitätsbezogenen Mathematikunterricht. Hildesheim: Franzbecker (to appear).
Timmerding, H. E. (1928). Zeichnerische Geometrie. Leipzig: Akademische Verlagsgesellschaft.
Wieleitner, H. (1908). Spezielle ebene Kurven. Leipzig: Sammlung Schubert.
Zhou, P., Zheng, W., Zhao, C., Shen, C., & Sun, G. (2009). Egg volume and surface area calculations based on machine vision. Computer and Computing Technologies in Agriculture II, 3, 1647–1653.

Juergen Maasz
IDM, University of Linz

Hans-Stefan Siller
IFFB – Dept. for Mathematics and Informatics Education, University of Salzburg


THOMAS SCHILLER

15. DIGITAL IMAGES
Filters and Edge Detection

INTRODUCTION

Interdisciplinary work based on examples with reference to reality is essential to improve the teaching of mathematics. Through such examples pupils learn the proper use of mathematical methods and their motivation to learn increases. Therefore I have chosen to explain important fields of the topic “digital image processing” in this article, which can be dealt with in an interdisciplinary and project-based way in mathematics and informatics. Pupils are increasingly confronted with digital images, e.g. by using their modern smartphones with integrated cameras. The quality of images, especially of digital photos, may need to be improved during or after taking the photo so that the objects in the image can be recognized better, e.g. by adjusting the contrast. Principles of digital image processing have been prepared for the classroom in this chapter. You will see that you don’t need many special prerequisites in class; often simple knowledge of spreadsheets is all that is required. Among the prepared examples there is the use of linear filtering, which can, for example, be used for smoothing images, and a short explanation of how positions of possible edges in the image can be found. Edges – rapid transitions between light and dark (or differently coloured) areas – play an important role in the automatic recognition of objects in an image, as they define objects. By working with these examples pupils can see that the basic ideas behind the processing of image data – operations on the values representing the colour or brightness of individual pixels – are relatively simple, but that in order to actually handle the complex reality and get useful results much more work is required.

IMAGES IN INFORMATICS

In practice there are many types of digital raster images, such as photos, colour and greyscale images, screenshots, fax documents, radar images, ultrasound images etc. Raster images are usually rectangular and made of regularly arranged elements, the pixels (short for picture elements); the different types differ mainly by the values stored in the pixels. In addition to raster images, there are vector graphics. [Burger et al., 2006, p. 5] This article only deals with raster images, so I will not talk in detail about vector graphics. One can interpret a recorded image as a two-dimensional matrix of numbers. A digital image I is, considered more formally, a two-dimensional function of

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 257–271. © 2011 Sense Publishers. All rights reserved.


non-negative integer coordinates to a set of image values, therefore I: N × N → V (N … natural numbers; V … set of pixel values). In this way, images can be represented, stored, processed, compressed or transmitted using computers. It doesn’t matter in which way an image actually emerged; we simply understand it as numeric data. [Burger et al., 2006, p. 10]

Figures 1 and 2. Part of an image (magnified) and related greyscale function (comp. figure 2.5 in [Burger et al., 2006, p. 11]).

The values of individual pixels are binary words of length k. Therefore a pixel can assume 2^k different values; k is frequently called the “depth” of the image. The exact coding of the pixel values depends, for example, on the type of the image (RGB colour image, binary image, greyscale image etc.). [Burger et al., 2006, p. 12] Greyscale images consist of only one channel representing the image brightness. As the stored values represent the intensity of light energy, which cannot be negative, normally only non-negative values are used. Therefore image data in greyscale images are usually made of integer values of the interval [0, 2^k − 1], e.g. values representing the intensity in the interval [0, 255] at a depth of 8 bits. In this case, 256 different grey levels are possible, in which usually 0 represents a black pixel (= pixel with minimum brightness) and 255 a white one (= pixel with maximum brightness). [Burger et al., 2006, p. 12]

FILTERS IN DIGITAL IMAGE PROCESSING

Many effects such as the sharpening or smoothing of an image can be realized with the help of filters. [Burger et al., 2006, p. 89]

Smoothing in a Spreadsheet

Regions in images with locally strong intensity changes, i.e. large differences between neighbouring pixels, are perceived as sharp. In contrast, the image looks blurry or fuzzy if the brightness changes only a little. If one wants to smooth an image, one could replace each pixel value by the average of the neighbouring pixels. One creates a new image I′ starting from the source image I by taking the pixel values I′(u, v) as the arithmetic means of the pixel values of the pixel I(u, v) and the 8 neighbouring pixels. [Burger et al., 2006, p. 89f]


Let’s try to implement the smoothing of a greyscale image in a spreadsheet. There are several methods to import pixel values into a spreadsheet. After manipulating the values, they can be exported and viewed as an image again. In Figure 3 the values of the individual pixels of the source image (consisting of 256 grey levels) are represented. Figure 4 represents the pixel values formed by the calculation of the arithmetic mean values.

Figure 3. Screenshot with individual pixels of the source image.

Figure 4. Screenshot with pixel values formed by the calculation of the arithmetic mean values.

The filter process using arithmetic mean values can be implemented relatively quickly in a spreadsheet using the mean function (such as AVERAGE) that is included in the program. For example, in cell B2 this requires writing the name of the function for averaging and the range of cells from A1 to C3 used for the calculation of the mean value. Of course, you can also enter the sum of the values (and the subsequent division) without using a predefined function for calculating the mean, e.g. with the help of a sum function (e.g. SUM) or through multiple additions. Regardless of the chosen way, the references to other cells have to be relative, because afterwards the formula has to be copied into (almost) all other cells that represent pixel values, and each copy should access the neighbouring pixels from its own cell’s point of view while calculating its mean value. Since only integer pixel values are allowed, the value afterwards has to be turned into an integer number, e.g. by using the function INT or ROUND.


Figure 5 shows how the table with the results from Figure 4 looks in formula view:

Figure 5. Figure 4 in formula view (Ausgangsbild = source image, Graustufen = greyscale, ganze Zahl = integer number, Mittelwert = mean value).

As you can see in Figure 5, the mean values on the fringes of the image are not calculated and the original image values are used instead. But why? The students will find out by implementing the filtering on their own, because either they will try to create the formula for calculating the mean value in cell A1 and fail because of the missing value to the left of cell A1, or they will create the formula in another cell not lying on a fringe of the image. In this case they will face problems when copying the formula to the fringes of the image due to an error message (Figures 6, 7 and 8):

Figure 6. Error messages on the fringes (Bezug = reference).

Figure 7. Error messages on the fringes.

Figure 8. Error messages on the fringes.

As can be seen e.g. in Figures 4 and 5, the formulae are not written in the same worksheet in which the values of the pixels of the source image are. So the old values won’t be changed, but a new image with new values based on the pixel values of the old image is created, as described in the theory above. The fact that in each case a new image, i.e. a new number matrix, is created and the manipulations are not performed directly in the source matrix is essential for such filter processes.
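The procedure described so far – averaging the 3 × 3 neighbourhood, writing the results into a new number matrix and leaving the fringe values unchanged – can also be sketched in a few lines of Python (our own illustration, not part of the original text; an image is represented as a nested list of grey values):

```python
def smooth(image):
    """3 x 3 mean filter: the results are written into a NEW matrix;
    the fringe pixels are copied unchanged from the source image."""
    height, width = len(image), len(image[0])
    result = [row[:] for row in image]       # the copy keeps the fringe values
    for u in range(1, height - 1):
        for v in range(1, width - 1):
            total = sum(image[u + du][v + dv]
                        for du in (-1, 0, 1) for dv in (-1, 0, 1))
            result[u][v] = int(total / 9)    # only integer grey values are allowed
    return result

img = [[0, 0, 0, 0],
       [0, 90, 90, 0],
       [0, 90, 90, 0],
       [0, 0, 0, 0]]
print(smooth(img))  # [[0, 0, 0, 0], [0, 40, 40, 0], [0, 40, 40, 0], [0, 0, 0, 0]]
```

Just like in the spreadsheet, the source matrix `image` is only read, never written.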


Pupils will realize that this is the only right way when they try the wrong one. They will fail, because the spreadsheet shows a circular reference warning if iterative calculation of formulas is turned off. In our case this means that while calculating the value of a cell, this very cell is being accessed. This can’t work properly, because at the time the value is calculated, the value of this cell changes. Due to the direct influence of the value of the cell on the calculation, the cell will be updated taking the new value into account. This could (at least in theory) continue endlessly, but doesn’t make any sense in our case. The error message (a circular reference warning) can be avoided by activating iterative calculation (and, in addition, for example setting the desired number of iterations), as described in the appropriate help in the spreadsheet. It should be noted that this is an appropriate example for introducing the topic of “recursion and iteration” (both in mathematics and in computer science education), which I will not describe in detail here.1 The problem with the circular reference when working with filters and “accidentally” using the source image should be discussed in class, because if such a filter is implemented not in a spreadsheet but in (any) programming language, there will be no circular reference warning that draws one’s attention to the error. The value of each pixel is only calculated and changed once and therefore no problems will occur during the calculation while executing the program, although wrong pixel values are used. It isn’t easy to find the problem if one isn’t warned that an error occurred. In our case you don’t have to search for it in the obvious core of the programmed method in which the filtering takes place, but before that, at the point where a new (empty) image should have been created but the original image was set up for editing instead.
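The effect of this programming error can be demonstrated with a small one-dimensional sketch of ours: if the averaged values are written back into the source list, later values are computed from already changed neighbours and the result silently differs from the correct one.

```python
row = [0, 9, 9, 9, 0]

# correct: write the means of three neighbours into a NEW list
new = row[:]
for i in range(1, len(row) - 1):
    new[i] = (row[i - 1] + row[i] + row[i + 1]) // 3

# wrong: overwrite the source list while filtering
bad = row[:]
for i in range(1, len(bad) - 1):
    bad[i] = (bad[i - 1] + bad[i] + bad[i + 1]) // 3

print(new)  # [0, 6, 9, 6, 0]
print(bad)  # [0, 6, 8, 5, 0]
```

Unlike the spreadsheet, the program raises no warning; the wrong values simply appear in the output.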

Figures 9 and 10. Before and after smoothing in a spreadsheet.

Figure 9 is a section of the source image before smoothing in a spreadsheet. Figure 10 represents the corresponding image after smoothing. You can (especially on the concrete slabs) clearly see that by smoothing, the transitions between the different shades of grey have been blurred and the image seems to be out of focus. Another important aspect: In this example, the pixel values on the fringes of the image have been taken from the source image. Depending on the function of the filter, e.g. differentiation2, that usually doesn’t make sense. The calculation of the arithmetic means in the presented smoothing could rely just on the neighbouring pixels that are available: in the corners you can use only four (instead of nine) pixels, on the fringes (without the corners) you can use the remaining six pixels. Even when using other filters, the treatment of the pixels on the fringes of the image


has to be considered separately. In doing so, it is up to the pupils’ creativity to handle these special cases in a way that makes sense. In this article I am going to demonstrate the effect of (linear) filters on different examples. It is particularly interesting how the image itself changes, i.e., the large interior of the image and not the (thin) fringes.3 Pupils can later develop new creative filters themselves and visualize them on the computer to observe the effects they have achieved. When a new creative filter is discovered, pupils can still handle the necessary treatment of pixels on the fringes.

Filters in General

The discussed smoothing filter with the local averaging includes all the elements that are typical of a filter. This smoothing filter belongs to the group of linear filters. In general, filters use several pixels of the source image for calculating a new pixel value. The size of the filter, respectively the region of the filter, determines how many pixels are used to calculate a new pixel value. The most recently introduced filter for smoothing had a size of 3 x 3 pixels. Filters of sizes 5 x 5, 7 x 7 or 21 x 21 would also be possible and would produce a stronger smoothing. The filter needn’t necessarily have a square shape. For round filters, the filter effect would occur uniformly in all image directions. Additionally a different weighting of the involved pixel values is possible, so that more distant pixels are not weighted as strongly as those lying next to the pixel whose value is being calculated. The filter size theoretically doesn’t have to be finite and certainly doesn’t have to include the original pixel. Because of the high number of possibilities, filters are classified systematically. They are for example divided into linear and nonlinear filters, which can be seen from the mathematical expression for calculating a new pixel value. [Burger et al., 2006, p. 90f] For linear filters, the values of the pixels are linked in a linear form, e.g. by a weighted sum. In the example of smoothing, in each case all 9 pixel values were allocated a weight of one ninth and added up. [Burger et al., 2006, p. 91] By using the AVERAGE function, the pupils may not be directly aware of that. Therefore in the example above one can enter the 9 summands and the following division in the spreadsheet individually instead of using the AVERAGE function. If the division by 9 is considered as multiplying by one ninth and the brackets are multiplied out, pupils can see immediately that each summand, i.e. each pixel value, is weighted by one ninth before the values are eventually added to a new one. Depending on the choice of the individual weights many different filters with a completely different behaviour are possible. Linear filters can be represented by matrices in which the weights are given. The matrix for the smoothing filter of size (3 x 3) is

. [Burger et al., 2006, p. 91] 262
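The weighted sum for a single pixel can be sketched in a few lines of Python; the tiny greyscale image here is purely a hypothetical illustration, not one of the images from this article:

```python
# 3 x 3 smoothing: the new pixel value is the weighted sum of the
# 3 x 3 neighbourhood, every pixel carrying the weight 1/9
# (i.e., the arithmetic mean, as computed by the AVERAGE function).
def smooth_pixel(img, u, v):
    total = 0
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            total += img[u + i][v + j] * (1 / 9)
    return round(total)

# toy example: a single bright pixel surrounded by black
img = [[0, 0, 0],
       [0, 90, 0],
       [0, 0, 0]]
print(smooth_pixel(img, 1, 1))  # 90/9 = 10
```

The bright value 90 is spread over the neighbourhood, which is exactly the blurring effect seen in the figures.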

DIGITAL IMAGES

Filters are applied to images by positioning the centre of the filter matrix over the pixel of the source image currently under consideration, multiplying the pixel values by their respective weights from the filter lying above the image, and adding up these products. Afterwards the result of this calculation is inserted at the corresponding position in the resulting image. [Burger et al., 2006, p. 92f]

Sharpening an Image

Sharpening an image is possible in the same way as smoothing; the only difference is the matrix used for the filter operation. For sharpening an image in [Pehla, 2003] the matrix

is used.
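The application procedure just described can be sketched in Python. Since the matrix from [Pehla, 2003] is not reproduced above, the sketch uses a commonly encountered 3 x 3 sharpening kernel purely as an illustrative stand-in (it is not necessarily the one used for Figure 12):

```python
def apply_filter(img, kernel):
    """Apply a 3 x 3 linear filter: centre the kernel over each pixel,
    multiply the pixel values by the respective weights and add up the
    products. Fringe pixels are left unchanged; results are clipped to
    the valid grey-value range 0..255."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for u in range(1, h - 1):
        for v in range(1, w - 1):
            s = 0
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    s += img[u + i][v + j] * kernel[i + 1][j + 1]
            out[u][v] = min(255, max(0, round(s)))
    return out

# illustrative sharpening kernel (an assumption, NOT the matrix from [Pehla, 2003])
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]
```

Because the weights of this kernel sum to 1, uniform areas stay unchanged while differences to the neighbours are amplified, which is what produces the sharpening effect.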

Figures 11 and 12. Before and after sharpening.

Figure 12 is obtained by applying the above filter matrix to Figure 11.

Pupils' Experiments (Further (Linear) Filters)

Linear filters offer the chance to experiment. Starting from a commonly known filter, pupils can make targeted changes to individual entries of the filter matrix. They can even design their own new matrix and consider the impact of its entries. In experiments one should first use simpler (greyscale) images, such as the little smiley in Figures 1 and 2. The image should, especially if it is small, be spatially extended by adding additional (empty) fringes. This prevents the situation in which all the pixels of a (too small) image lie on the fringes, so that no filtering happens at all and no effects can be seen. The pupils can carry out experiments guided by specific questions. For example, they should consider what will happen if they use a filter matrix consisting of only zeros. The pupils should not try to find the answer4 just by experimenting, but by considering beforehand what will happen, and then confirming or disproving their suspicion by trying it out. Through these necessary considerations, the filter process is internalized more easily and experimentation doesn't degenerate into wild, unstructured and thoughtless tinkering.

EDGE DETECTION

What is it, What is it Needed for and How Can it be Done?

In human vision, edges play an important role: figures can be reconstructed as a whole from a few striking lines. Edges can be roughly characterized as places in the image where the intensity changes greatly within a small neighbourhood and along a distinct direction. The stronger the change of intensity, the stronger the indication of an edge at the observed position. The strength of the intensity change corresponds to the first derivative, which is thus one approach to determining the strength of an edge. [Burger et al., 2006, p. 117] So the steepness of the greyscale function represents the strength of an edge. The steepness of the greyscale function is nothing but the magnitude of the gradient, and the direction of the edge is perpendicular to the direction of the gradient. [Tönnies, 2005, p. 174f] Since edges also separate objects from the background, knowledge of edges can be useful, for example, in the detection of objects (e.g. characters (OCR), faces). As mentioned above, a greyscale image can be interpreted as a function I(u, v). In order not to overwhelm pupils with two-dimensional differentiation, it is useful first to look at the whole process in a one-dimensional way, restricting oneself to one image line.5 One might draw a function which holds a high value (e.g. 250) relatively constantly, then at some point falls relatively suddenly (e.g. to 10) within a short interval, and afterwards again remains relatively constant at the value to which it has just fallen. Additionally, the derivative function is needed. One can then see clearly that the derivative function remains approximately at the value 0 wherever the function itself is relatively constant, and only at the relatively rapid drop from 250 to 10 does it show a large negative deflection. So at the position where the observed image line changes from a nearly perfect white to an almost perfect black, i.e., at the position of an edge, the derivative function has a strongly negative value. At a change from a darker to a brighter area, however, the deflection would be positive, because the greyscale function increases. Such deflections in the derivative function can thus be interpreted as positions of edges. What the pupils will notice is that the images to be analyzed are not based on continuous functions, and therefore these functions cannot be differentiated. But the derivatives can be approximated by differences. The derivative in the x-direction can be approximated by the difference between the grey value of the pixel after and the grey value of the pixel before the pixel currently under consideration. This can be achieved by convolution with the matrix

Dx = ( -1  0  1 ) .

[Tönnies, 2005, p. 175]


The differentiation in the y-direction can be approximated with the help of the matrix

Dy =
( -1 )
(  0 )
(  1 ) .

[Tönnies, 2005, p. 175]
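The one-dimensional experiment described above can be reproduced quickly in Python. A hypothetical image line drops from 250 to 10, and the differences (the filter (-1 0 1)) spike exactly at that edge:

```python
# a hypothetical image line: almost white, then a sudden drop to almost black
line = [250, 250, 250, 250, 10, 10, 10, 10]

# approximate the derivative by the difference of the pixel after
# and the pixel before the pixel currently under consideration
deriv = [line[u + 1] - line[u - 1] for u in range(1, len(line) - 1)]
print(deriv)  # [0, 0, -240, -240, 0, 0]
```

Wherever the line is constant the difference is 0; at the drop it shows the large negative deflection that marks the edge.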

It is important to note that the mathematical background of differentiation is not strictly required. In class, it is enough if the teacher explains that big changes of the values between closely spaced pixels are indications of edges. As mentioned above, according to [Burger et al., 2006, p. 117] an edge is a place of great change in intensity. Thus, the basic idea behind edge detection can be implemented in the classroom much earlier, because the necessary differences are also suitable for the lower grades.

Example: Differentiation in the x-direction and Catching Inadmissible Values

Let's look at filtering with the matrix Dx in a practical way, using the following image consisting of three different grey values as a source (Figure 13):

Figure 13. Test image with three different grey values.

Differentiation in x-direction provides us with the following result (Figure 14):

Figure 14. Result of differentiation in x-direction (clipping to the range 0 to 255).

At first glance, the result corresponds to our expectations: at a change from a dark shade of grey to a light one, the difference is significant, so the edge pixels are displayed. The strengths of the edges can also be seen in the resulting image: when grey changes to white, the difference is not as great as from black to white, so these edge pixels are plotted darker. At second glance, we find that edge transitions from lighter to darker shades of grey do not occur in the resulting image. This is not surprising: in the transformation of the matrix of pixel values into an image, values lying outside the interval 0 to 255 were clipped to this range, since they are not valid grey values. Subtracting the value of a bright pixel from a dark one yields a negative value; due to the formation of differences, we leave the value range of 0 to 255 that is admissible for greyscale images. Pupils should consider at this point what range of values the differentiation generates. This is the basic requirement for finding a reasonable solution that also plots the edge transitions from light to dark areas in the resulting image. The answer is easily found: the most negative number arises when we subtract the largest number (255) from the smallest (0), i.e., -255. The largest positive number is obtained by subtracting the smallest number (0) from the largest (255), i.e., 255. The new range after a single differentiation of the kind discussed here is thus the interval from -255 to +255. Clipping negative values by setting them to 0 doesn't make sense in this case, because the edges they represent would then not be displayed. Negative values must be caught in a different way, so that all possible edge pixels are found without leaving the permitted range of values. Negative values can, for example, be caught by taking the absolute value (see [TUChemnitz, 2008, p. 8]). Let's examine that (Figure 15):

Figure 15. As Figure 14, but catching by the absolute value.
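The two ways of catching out-of-range values can be compared in a short sketch; the difference between a dark and a bright pixel stands for an edge from light to dark:

```python
def catch_clip(x):
    # clipping to the valid range 0..255: negative edges disappear
    return min(255, max(0, x))

def catch_abs(x):
    # absolute value: edges from light to dark are kept as well
    return abs(x)

diff = 10 - 250          # dark pixel minus bright pixel: an edge, value -240
print(catch_clip(diff))  # 0   -> edge lost
print(catch_abs(diff))   # 240 -> edge found
```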

We actually find all the edges we were searching for.

PUPILS' EXPERIMENTS WITH A GENERAL SPREADSHEET

Implementation and its Problems

As mentioned above, it makes sense to let the pupils experiment with filters independently. They can determine the entries of a filter matrix themselves and quickly test the effects of the filter procedure on an image. For this they can create a spreadsheet with three tables: one with the (greyscale) pixel values of the source image, one for entering the filter matrix, and one for the pixel values of the resulting image.


To make the document suitable for experiments, it makes sense to keep it as general as possible. It is therefore recommended to use a filter matrix that is as large as possible, e.g. of size 21 x 21, as can be seen in Figure 16. If a smaller filter matrix is required, the outer cells are simply filled with zeros, and the entire spreadsheet can be used essentially unchanged. In Figure 16 you can see two matrices. The reason for this is that it is often useful to divide all matrix entries by a certain value. If we calculate the mean, for example, each pixel has to be weighted with 1 / (number of entries). It is also useful, for the sake of clarity, to single out such common factors. Therefore, in my file the divisor, by which all the values have to be divided, is entered between the two matrices. The right matrix is filled with the (not yet divided) entries; the left matrix is provided with formulas that take the values of the right matrix and divide them by the divisor. In a classroom implementation, make sure that the column widths are selected appropriately, so that (for clarity) the matrix fits on the screen and so that entries with fewer digits are not surrounded by so much space that the matrix is pulled apart visually. But the real challenge for the students when creating such a generally usable spreadsheet is not the handling of the matrix, but the actual filter procedure. The values of the pixels on the fringes of the source image should remain unaffected; later on, of course, other boundary treatments can be introduced. For simplicity, one works with the fringe width of the largest intended filter matrix, in my case one of size 21 x 21, even if later only a matrix of size 7 x 7 is used and the remaining matrix entries are set to 0. That is, in my case a fringe with a width of 10 pixels is excluded from the actual image filtering process, even if, when filtering with a matrix of size 3 x 3, a width of one pixel would be enough. As the fringes of the image are negligible, this doesn't matter. If one wants to, however, one can work towards a more accurate dynamic determination of the fringes, e.g. with the help of appropriate IF conditions. You can start the actual image filtering process at the top left pixel that no longer belongs to the fringes. Here a (long) formula has to be entered in which the new pixel value is calculated. This formula can then be copied to the other cells representing pixels that have to be filtered, and we get the desired result. But for several reasons this is not as easy as it seems. The formula for the filter process with a matrix of size 21 x 21 is very long. After all, a sum of 21² = 441 multiplications, i.e., 441 matrix entries with one pixel

Figure 16. Screenshot of worksheet with the filter matrix.



value in each case, has to be built. For matrices of size 3 x 3 that would be accomplished relatively quickly, but in a formula which refers to 882 different cells (matrix entries and pixel values), the probability of input, typing or clicking errors is very high. But since, in preparing the formula (iterating through the matrix), one in principle (almost) always accesses neighbouring cells, a small utility program for creating the formula can be implemented. The formula created by this program can then be pasted into the cell in the spreadsheet. To write such a utility program quickly, it is important to get an idea of the formula and begin to write it down:

=RUNDEN(‘source image (greyscale)’!A1*’filter matrix’!$A$1+’source image (greyscale)’!A2*’filter matrix’!$A$2+’source image (greyscale)’!A3*’filter matrix’!$A$3+’source image (greyscale)’!A4*’filter matrix’!$A$4+’source image […] (greyscale)’!A20*’filter matrix’!$A$20+’source image (greyscale)’!A21*’filter matrix’!$A$21+’source image (greyscale)’!B1*’filter matrix’!$B$1+’source image (greyscale)’!B2*’filter matrix’!$B$2+’source image […];0)

The distinction between absolute and relative references is very important. While the references to the surrounding pixels have to change when the formula is copied to the cells of neighbouring pixels, the entries of the filter matrix always remain in the same place and must therefore be referenced absolutely; references to the surrounding pixels, however, must be relative. If you want to copy the formula produced by the utility program into the spreadsheet, you may face another problem: for example, an error message saying that the maximal number of characters for a formula (8192) was exceeded. Then you must consider how to shorten the formula. A possible solution is to shorten the names of the worksheets, because “source image (greyscale)” or “filter matrix” already take up a lot of space.
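The small utility program mentioned above can be sketched in Python. The sketch is only illustrative: it assumes the short worksheet names src and m and omits the quotation marks that names containing spaces would require:

```python
def column_name(n):
    """Convert a 1-based column index into a spreadsheet column name (1 -> A, 27 -> AA)."""
    name = ""
    while n > 0:
        n, r = divmod(n - 1, 26)
        name = chr(ord("A") + r) + name
    return name

def build_formula(row, col, size=21, src="src", mat="m"):
    """Build the filter formula for the pixel at (row, col), 1-based:
    relative references into the source image, absolute references
    (with dollar signs) into the filter matrix, as described in the text."""
    k = size // 2
    terms = []
    for i in range(size):
        for j in range(size):
            pixel = f"{src}!{column_name(col - k + j)}{row - k + i}"
            weight = f"{mat}!${column_name(j + 1)}${i + 1}"
            terms.append(f"{pixel}*{weight}")
    return "=RUNDEN(" + "+".join(terms) + ";0)"
```

For example, build_formula(11, 11, size=3) produces a formula of 9 products starting with =RUNDEN(src!J10*m!$A$1+…; with size=21 it produces the full 441-term formula for the top left non-fringe pixel.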
But how long can the new short names of the worksheets be? Can the names be made short enough for the formula to fit within the maximum allowable number of characters? Here a little arithmetic provides the answer. As already mentioned, there are 882 references to other cells. Of these, 441 refer to cells of pixel values and are therefore relative references. For the chosen image size, column names of up to 2 characters (letters) are needed to address the entire width of the image, and up to 3 characters (digits) for the rows; such a reference therefore needs (without the worksheet name) up to 5 characters. With the 441 references to the cells of the filter matrix it looks a bit different: here the row numbers need only 2 characters (digits), and 1 character (letter) suffices for the column name to address a matrix of size 21 x 21. Since the references to the cells of the filter matrix must be absolute, 2 more characters (dollar signs) per reference are added. In sum, one again obtains up to 5 characters per such reference (without the worksheet names). Under these assumptions, up to 882 · 5 = 4410 characters are needed for the references themselves, thus about 8192 − 4410 = 3782 characters are left for the names of the worksheets. When we consider that each of the 882 references has to carry the name of either the source image or the filter matrix, about 3782 / 882 ≈ 4 characters remain per worksheet-name occurrence. Therefore


I have chosen the names “src” (source) for the worksheet with the pixel values of the source image and “m” for the worksheet with the filter matrix. For consistency, I changed the name of the worksheet of the resulting image to “dest”. If the utility program that prints the formula is adapted accordingly, it is now easy to insert the formula into the spreadsheet. Another possible solution would be to reduce the number of references by reducing the size of the filter matrix. But this allows fewer experiments, since comparisons between smaller and larger matrices can be interesting. As mentioned earlier, you can paste the formula created by the utility program into the cell located at the very top left that is no longer on the fringes of the image, and then copy it to the other cells. But that sounds easier than it really is. After all, the formula consists of several thousand characters that have to be copied into over 50000 cells! Depending on your computer, resource problems are unavoidable, and it will take a lot of patience. I can only recommend not copying the formula into all the other cells at once, but proceeding in stages; you should also save the file after each such stage of copying. I recommend keeping the image size rather small. I created my file for images of size 335 x 224. Originally I had planned an image size about four times this, but I gave up for lack of time, as the copying of formulas and the saving of the document took a very long time on my computer. My file ended up at over 100 MB, and opening it on my computer took about 10 minutes. From this one can estimate the resources needed for an image four times as large! But these are also important experiences for pupils, after all: the construction or use of such a spreadsheet demonstrates the complexity of an implementation of simple basic principles. As with possible timing studies when filtering with a self-implemented program, pupils can see that the basic principles work, even though substantial improvements may be necessary before they can be used efficiently in real-time applications.

Example to Test the General Spreadsheet

Here is a little test of the general filter procedure implemented in a spreadsheet. As the source image I used the following greyscale image (Figure 17):

Figure 17. Test image (greyscale).


I filled the (right) filter matrix entirely with 1s and set the divisor to the number of filter entries, i.e., 21² = 441. Thus the mean of the values lying under the filter (that is, of the image region it covers) is calculated. The smoothing effect mentioned above can be seen clearly in the resulting image (Figure 18):

Figure 18. Resulting image by smoothing using a filter matrix of size 21 x 21.

Due to the size of the filter matrix, the smoothing effect is so strong that the source image is no longer really recognizable. Because of the differences in brightness you can still see very rough structures, such as the bright area in the lower left zone of the image containing some flowers, or the bright area in the upper zone, in which concrete slabs lie that are lighter than the plants beneath them. The fringes of the image are also striking. Here they are 10 pixels wide, and this untreated area is clearly visible. In all previous examples, in which the untreated fringes were only one pixel wide, this went unnoticed because of their thinness. But precisely this thickness of the fringes could now provide an extra incentive to deal with boundary treatment in detail.

Comments

With such a spreadsheet pupils can experiment with different filters: not only with different filter entries, but also with different sizes of the matrix. Additionally, experience is gained with the available resources. This is important, because many pupils believe that resource problems are solved automatically by the next computer generation, which is normally not true; it is usually the design of the algorithms that can save a multitude of resources. By working in a spreadsheet, as presented in this article, the filter procedure can be implemented in general, not only for linear filters, in the mathematics classroom (without the computer science teacher). The mathematics teacher needs only one ready-made program (e.g. implemented by a computer science teacher) to import the pixel values into a spreadsheet and convert the new pixel


values back into an image. Thus, they can solve the examples presented in this article without the help of a computer science teacher or deeper knowledge of computer science.

NOTES

1 Suggestions for further literature: [Weigand, 1989], [Weigand, 1993].
2 See section “EDGE DETECTION”.
3 Different suggestions for the treatment of the image fringes can be found in [Burger et al., 2006, p. 113] (e.g. keeping the original pixel values, or assuming that the image continues beyond the fringes). In [Burger et al., 2006, p. 91] there is a hint that the treatment of the image fringes may require more effort than is needed for the large inner part of the image.
4 Since each pixel is weighted by zero, i.e., multiplied by zero, and a sum of zeros is again zero, the result is (apart from the untreated fringes) a completely black image. The image is thus deleted.
5 The route via the one-dimensional case is also discussed e.g. in [Burger et al., 2006, p. 118ff].

REFERENCES

Burger, W., & Burge, M. J. (2006). Digitale Bildverarbeitung: Eine Einführung mit Java und ImageJ (2nd, revised ed.). Berlin/Heidelberg: Springer. ISBN 978-3-540-30940-6 (print), ISBN 978-3-540-30941-3 (online). (Also available in English: Burger, W., & Burge, M. J. (2008). Digital image processing: An algorithmic introduction using Java. Springer. ISBN 978-1-84628-379-6.)
Pehla, M. (2003, March 25). Beispiel aus dem Tutorial “Bildverarbeitung mit Java”. Retrieved October 19, 2008, from http://www.dj-tecgen.com/downloads/Beispiele/Beispiel5.java
Tönnies, K. D. (2005). Grundlagen der Bildverarbeitung. Pearson Studium. ISBN 3-8273-7155-4.
TU-Chemnitz. Kapitel 1: Farbräume – Warum nachts alle Katzen grau sind. Retrieved December 13, 2008, from http://www-user.tu-chemnitz.de/~niko/biber/tutorial/tutorial.pdf
Weigand, H. G. (1989). Zum Verständnis von Iterationen im Mathematikunterricht (Dissertation). Bad Salzdetfurth: Verlag Franzbecker. ISBN 3-88120-183-1.
Weigand, H. G. (1993). Zur Didaktik des Folgenbegriffs (überarbeitete Habilitationsschrift). Mannheim: BI. ISBN 3-411-16221-X.

Thomas Schiller Linz (Austria)


HANS-STEFAN SILLER

16. MODELLING AND TECHNOLOGY Modelling in Mathematics Education Meets New Challenges

INTRODUCTION

Mathematics is a part of our daily life. Many (problem-)solving techniques were discovered through mathematical discussion or through mathematical activities involving real objects. Since the end of the 1970s, educational researchers have been looking at real-life situations that can be solved by mathematical methods and applied in mathematics education. For this reason it is necessary to show what real-life problems can offer mathematics education. Hence it is necessary to think about didactically reflected and approved principles for teaching, so that the implementation of real-life situations in mathematics education can succeed. In thinking about this subject, we have to ask ourselves certain questions: “Where can I find mathematics in our daily life? Which (particular) educational principle allows it to be taught? What mathematics is needed for the future?” In various literature sources, certain accepted proposals can be found (see Siller, 2008). Very important skill requirements for students are knowing and understanding basic competencies in mathematics, like
– representing, representing mathematically,
– calculating, operating,
– arguing, reasoning,
– interpreting,
– creative working.
These aims are not new! They can be found, for example, in Wittmann (1974). The author describes general educational objectives and basic techniques which help to create knowledge and understanding of basic mathematical principles and basic mental activities, as Winter (1972, pp. 11–12) has stated. From these formulated targets, consequences for mathematics education arise. Maaß & Schlöglmann (1992) have formulated central postulates that allow the implementation of real-life problems to be combined with the learning of basic mathematical principles:
– Mathematics education should convey an extensive and balanced picture of mathematics itself.
All aspects of this subject – historical, systematic as well as algorithmic – are important for students, so that they achieve a well-rounded picture of mathematics: its relation to reality, its character as a science in its own right, its applicability to certain problems and its effect on society.
J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 273–280. © 2011 Sense Publishers. All rights reserved.


– Learning for life! Mathematics education should help students to be able to solve (easy) problems of daily life without having studied the subject itself. Mathematics should be seen as a construction of reality.
– Learning for a profession! Mathematics education should prepare students, to a certain extent, for higher education.
These requirements raise a lot of issues for dedicated teachers. They need guidelines that they can follow and that are prepared with practice in mind. The idea of mathematical modelling is an adequate one for such purposes. Other reasons for taking up the idea of mathematical modelling are arguments of general education, like
– development of personality,
– opening up of one's environment,
– equal participation in (all aspects of) society,
– establishing rules and measures.
Following such arguments, (mathematical) modelling can be seen as a powerful method for mathematics education. It can also be found in many passages of curricula. For example, the Austrian curriculum for mathematics (2004) states: “Mathematics education should contribute to students being able to take on their responsibility for lifelong learning. This can happen through an education towards analytical, coherent thinking and through engagement with mathematical content which has a fundamental impact on many areas of life. Acquiring these competencies should help students to get to know the multifaceted aspects of mathematics and its contribution to many different topics. Describing structures and processes in real life with the help of mathematics allows coherences to be understood, and the solving of problems through the resulting deepened access should be a central aim of mathematics education. […] An application-oriented context points out the usability of mathematics in different areas of life and motivates students to gain new knowledge and skills. Integration of the several inner-mathematical topics should be striven for in mathematics and through adequate interdisciplinary teaching sequences. The minimal realization is addressing application-oriented contexts in selected mathematical topics; the maximal realization is the constant addressing of application-oriented problems, together with discussion and reflection of the modelling cycle regarding its advantages and constraints.” The aims of modelling can be realized if a certain scheme – designed by Blum (2005) – is followed (Figure 1).

Figure 1. Cycle for mathematical modelling by Blum (2005).


MODELLING THROUGH THE HELP OF TECHNOLOGY

Using computers in education allows problems to be discussed which are taken from students' life-world. Motivation in mathematics education benefits, because students recognize that the subject is very important in their everyday life. If students can be motivated in such a way, it becomes easy to discuss and to teach the necessary basic or advanced mathematical contents. By using technology, difficult (mathematical) procedures in the modelling cycle can be handed over to the computer. Sometimes the use of technology is even indispensable, especially in
– computationally intensive or deterministic activities,
– working with, structuring or evaluating large data sets,
– visualizing processes and results,
– experimental work.
By using technology in education it is possible to teach traditional contents in a manner different from conventional methods, and it is very easy to pick up new contents for education. The centre of education should be the discussion of open, process-oriented examples. They are characterized by the following points. Open process-oriented problems are examples which
– are real applications, e.g. betting in sports (Siller & Maaß, 2009), not contrived examples for mathematical calculations,
– develop out of situations that are thoroughly analyzed and discussed,
– can contain irrelevant information that must be eliminated, or lack information that must be found, so that students are able to discuss them,
– cannot be solved at first sight; the solution method differs from problem to problem,
– require competencies not only in mathematics; other competencies are also necessary for a successful treatment,
– motivate students to participate,
– provoke and open up new questions as well as further and alternative solutions.
The teacher takes on a new role in his profession, becoming a kind of tutor who advises and guides students. The students are able to discover the essential things on their own. Major goals of education will be met if technology is used in such a way (cf. Möhringer, 2006):
– Pedagogical aims: With the help of modelling cycles it is possible to connect skills in problem-solving and argumentation. Students are able to learn application competencies in elementary or complex situations.
– Psychological aims: With the help of modelling, the comprehension and retention of mathematical contents are supported.
– Cultural aims: Modelling supports a balanced picture of mathematics as a science and its impact on culture and society (cf. Maaß, 2005a & Maaß, 2005b).



– Pragmatic aims: Modelling of problems helps us to understand, cope with and evaluate known situations.
The integration of technology into the modelling cycle can be helpful and lead to an intensive application of technology in education. How it could be implemented can be seen in Figure 2.

Figure 2. Extended modelling cycle – regarding technology.

The “technology world” describes the “world” in which problems are solved with the help of technology. This could be a concept for modelling in mathematics as well as in an interdisciplinary context with computer science education. This extension meets the idea of Dörfler & Blum (1989, p. 184), who state: “With the help of computers used as mathematical tools it is possible to escape routine calculations and mechanical drawing, which can be a big advantage, in particular for the increasing orientation towards applications. Because it is possible to carry out all the common methods taught at school with a computer, mathematics education meets a new challenge and (scientific) mathematics educators have to answer new questions.”

Example

Technology enables students to reflect on certain modelling activities which take place in traditional tasks. With the help of dynamic graphical software, traditional solutions can be checked and visualized in a modern way. I want to show one possibility using a traditional example from the area of optimization:

The owner of a power-drink company is looking at the costs of producing the tins in which the soft drink is sold. He recognizes that the costs are rising to a level which he is not able to fund. So he is looking for alternatives and asks you, a student of mathematics, for a competitive solution. Think about an answer which you could give to the owner of the company!

First step – Constructing a suitable model for the given situation. The given situation is a “real situation” taken from reality. The owner of a certain company wants to optimize the production costs of tins. First of all the students will look at the volume of a certain tin. To do so, they have to walk to a supermarket and look at tins of a certain soft drink. For this example I take the volume of such a tin to be V = 250 ml. After that, students have to think about the situation and realize that the surface of such tins should be minimized.

Second step – Reducing the parameters. Students should recognize that the real shape of the tin is too complicated for an elementary calculation: the seam at the top and the recess at the bottom are too difficult to describe easily. So they should decide to idealize the tin as a cylinder; there are then no seams, no recesses and no opening tab at the top. With the help of this idealized tin, the surface can be optimized within a mathematical model.

Third step – Constructing a mathematical model. With their existing knowledge, students describe the surface of the tin by a function of two variables, S(r, h) = 2r²π + 2rπh. But such a function cannot be minimized by elementary one-variable calculus alone, so the constraint 250 = r²πh is needed. Substituting h in terms of r into S(r, h) yields a function of one variable, S(r), which can be handled easily.

Fourth step – Solving the function. The students calculate the solution with the help of differential calculus and with the help of technology. The solution found can be visualized in a graph (Figure 3).

Figure 3. Graphical solution.

The minimum can be found graphically at the cross position marked by the ellipse. With the help of dynamic graphical software a tool was developed which calculates the solution and displays it in one step. Students who know the software well are able to implement this problem themselves. They thus get the possibility to reflect on their mathematical solutions and to visualize them in a way which shows the results of such an optimization very impressively (Figure 4):

Figure 4. Visualizing the solution.

We know that the gradient of the tangent is 0 where the minimal surface area is reached (Figure 5). Therefore the tangent at the point P is sketched and the minimal surface area can be determined; it is found at the point P (3.41, 4.39). The same value can be found by calculation.

Figure 5. The result of the optimization.
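The calculation behind Figures 3–5 can be reproduced in a few lines. The following sketch in Python (the chapter itself uses dynamic graphical software, not Python; all names are ours) minimizes the substituted surface function S(r) for V = 250 cm³:

```python
import math

# Re-calculation of the third and fourth steps: minimise the surface
# function S(r) = 2*pi*r^2 + 2*V/r, obtained after substituting
# h = V/(pi*r^2), for a cylinder of volume V = 250 cm^3.
V = 250.0  # volume of the tin in cm^3 (250 ml)

# S'(r) = 4*pi*r - 2*V/r^2 = 0 gives the closed-form optimum r^3 = V/(2*pi).
r_opt = (V / (2 * math.pi)) ** (1 / 3)            # optimal radius in cm
h_opt = V / (math.pi * r_opt ** 2)                # height fixed by the volume constraint
S_min = 2 * math.pi * r_opt ** 2 + 2 * V / r_opt  # minimal surface area in cm^2

print(round(r_opt, 2), round(h_opt, 2), round(S_min, 1))
```

This reproduces r ≈ 3.41 cm (the r-coordinate of the point P), h ≈ 6.83 cm and a minimal surface area of about 219.7 cm²; note that h is exactly twice r, which is taken up in the fifth step.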

Fifth step – Interpreting and arguing about the solution. A detailed look at the results for the radius and the height of the tin makes it easy to see that the minimal surface area is reached when the height equals two times the radius (h = 2r), which can be shown through a simple analogous calculation – let us call it a pseudo-proof.
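Written out, this pseudo-proof takes only two lines: substituting the constraint h = V/(πr²) into S and setting the derivative to zero gives h = 2r for every volume V.

```latex
S(r) = 2\pi r^{2} + \frac{2V}{r}, \qquad
S'(r) = 4\pi r - \frac{2V}{r^{2}} = 0
\;\Longrightarrow\; r^{3} = \frac{V}{2\pi},
\qquad
h = \frac{V}{\pi r^{2}} = \frac{2\pi r^{3}}{\pi r^{2}} = 2r .
```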


Sixth step – Validating the solution. Students could search for such tins, for instance in a supermarket, and they will find them quite easily. My hypothesis about where surface-optimized tins occur in reality is the following: “If the design is not important to the producer, the surface is optimized. If the design, or perhaps a company logo, is important, then the tins are not optimized.” I have not proved this hypothesis, but a look at tins in the supermarket makes it plausible: tins in which drinks are kept are not surface-optimized, while tins in which food is kept, such as tins for vegetables, especially (sweet) corn, are surface-optimized, because the design is not important to the buyer. A look at the pictured corn tin (see Figure 6) makes my conjecture seem reasonable.

Figure 6. Corn-tin.

Advancements in the model. As we have seen, the situation of modelling a tin and calculating the optimal surface area for a given volume has been extremely idealized. As a first modelling cycle this is a proper approach. But students will surely ask what they have to do if other parameters, such as the seam, are to be considered. For this reason a computer algebra system should be used, because it is difficult to implement such conditions with dynamic graphical software. For a better understanding of real tins it is necessary to discuss them with seams, which was done by Deuber (2005).

CONCLUSION

By integrating technology into education, routine tasks such as differentiating and integrating functions or manipulating mathematical terms lose their weight, and the focus on conducting routine calculations is diminished. The importance of calculation skills, a typical attribute of conventional education, loses ground to the importance of interpreting and presenting results. By including real-life problems through the aspect of modelling, these aspects can be reinforced with the help of technology. In this way the role of modelling in education is strengthened, and students can become aware of the enormous prominence of modelling in mathematics education.

REFERENCES

Austrian curriculum for Mathematics. (2004). AHS-Lehrplan Mathematik. Wien: BMUKK. Retrieved from http://www.bmukk.gv.at
Blum, W. (2005). Modellieren im Unterricht mit der “Tanken”-Aufgabe. Mathematik lehren, H. 128 (pp. 18–21). Seelze: Friedrich Verlag.
Deuber, R. (2005). Lebenszyklus einer Weissblechdose – Vom Feinblech zur versandfertigen Dose. Baden. Retrieved from http://www.swisseduc.ch/chemie/wbd/docs/modul2.pdf
Dörfler, W., & Blum, W. (1989). Bericht über die Arbeitsgruppe “Auswirkungen auf die Schule”. In J. Maaß & W. Schlöglmann (Eds.), Mathematik als Technologie? – Wechselwirkungen zwischen Mathematik, Neuen Technologien, Aus- und Weiterbildung (pp. 174–189). Weinheim: Deutscher Studienverlag.
Maaß, J., & Schlöglmann, W. (1992). Mathematik als Technologie – Konsequenzen für den Mathematikunterricht. mathematica didactica, 15(2), 38–58.
Maaß, K. (2005a). Modellieren im Mathematikunterricht der S I. JMD, 26(2), 114–142. Wiesbaden: Teubner.
Maaß, K. (2005b). Stau – eine Aufgabe für alle Jahrgänge! PM, 47(3), 8–13. Köln: Aulis Verlag Deubner.
Möhringer, J. (2006). Bildungstheoretische und entwicklungsadäquate Grundlagen als Kriterien für die Gestaltung von Mathematikunterricht am Gymnasium. Dissertation, LMU München.
Siller, H.-St. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. Schriften zur Didaktik der Mathematik und Informatik an der Universität Salzburg. Aachen: Shaker Verlag.
Siller, H.-St., & Maaß, J. (2009). Fußball EM mit Sportwetten. In A. Brinkmann & R. Oldenburg (Eds.), ISTRON – Anwendung und Modellbildung im Mathematikunterricht (pp. 95–113). Hildesheim: Franzbecker.
Winter, H. (1972). Über den Nutzen der Mengenlehre für den Arithmetikunterricht in der Grundschule. Die Schulwarte, 25(9/10), 10–40.
Wittmann, E. Ch. (1974). Grundfragen des Mathematikunterrichts. Wiesbaden: Vieweg.

Hans-Stefan Siller
IFFB – Dept. for Mathematics and Informatics Education
University of Salzburg


LIST OF CONTRIBUTORS

Manfred Borovcnik – [email protected]
Ramesh Kapadia – [email protected]
Astrid Brinkmann – [email protected]
Klaus Brinkmann – [email protected]
Tim Brophy – [email protected]
Jean Charpin – [email protected]
Simone Göttlich – [email protected]
Thorsten Sickenberger – [email protected]
Günter Graumann – [email protected]
Ailish Hannigan – [email protected]
Herbert Henning – [email protected]
Benjamin John – [email protected]
Patrick Johnson – [email protected]
John O’Donoghue – [email protected]
Astrid Kubicek – [email protected]
Jim Leahy – [email protected]
Juergen Maasz – [email protected]
Hans-Stefan Siller – [email protected]
Thomas Schiller – [email protected]


PREFACE

We should start by pointing out that this is not a mathematics text book – it is an ideas book, full of ideas for teaching real world problems to older students (15 years and older, upper secondary level). These contributions by no means exhaust all the possibilities for working with real world problems in mathematics classrooms, but taken as a whole they provide a rich resource for mathematics teachers that is readily available in a single volume. While many papers offer specific, well worked out lesson ideas, others concentrate on the teacher knowledge needed to introduce real world applications of mathematics into the classroom. We are confident that mathematics teachers who read the book will find a myriad of ways to introduce the material into their classrooms, whether in ways suggested by the contributing authors or in their own ways, perhaps through mini-projects, extended projects, practical sessions or enquiry based learning. We are happy if they do! Why did we collect and edit them for you, the mathematics teachers? In fact we did not collect them for you but rather for your students! They will enjoy working with them at school. Having fun learning mathematics is a novel idea for many students. Since many students do not enjoy mathematics at school, they often ask: “Why should we learn mathematics?” Solving real world problems is one good answer to this question (and not the only one!). If your students enjoy learning mathematics by solving real world problems, you will enjoy your job as a mathematics teacher more. So in a real sense the collection of examples in this book is for you too. Using real world problems in mathematics classrooms places extra demands on teachers and students that need to be addressed. We need to consider at least two dimensions of classroom teaching when we teach real world problems.
One is the complexity (intensity or grade) of the reality teachers think it appropriate to import into the classroom; the other concerns the methods used to learn and work with real problems. Papers in this collection offer a practical perspective on each dimension, and more. Solving real world problems often leads to a typical decision situation where you (we hope together with your students) will ask: Should we stop working on our problem now? Do we have enough information to solve the real world problem? These are not typical questions asked in mathematics lessons. What students should learn when they solve real world problems is that an exact calculation is not enough for a good solution. They should learn the whole process of modelling, from the first step of abstracting the important information from the complex real world situation through the subsequent steps of the mathematical modelling process. For example, they should learn to write down equations to describe the situation; do calculations; interpret the results of the calculations; improve the quality of the model; calculate again (several times if needed); and discuss the results with others. Last but not least, they should reflect on the solution process in order to learn for the future.



How real should real world problems be? More realistic problems are generally more complex, and more complex problems demand more time to work out. On the other hand, a heavily simplified reality will not motivate students intrinsically to work for a solution (and intrinsic motivation is much better for sustained learning). Experience suggests starting with simple problems and simple open questions and moving on to more complex problems. We think it is an impossible task for students without any experience of solving complex real problems to start by solving difficult ones; it is better to start with a simpler question and add complexity step by step. The second dimension of classroom teaching is concerned with methods of teaching real world problems. We are convinced that learning and teaching are more successful with open methods such as group work, project planning, enquiry learning, practical work, and reflection. A lot of real world problems have more than one correct solution, and may in fact have several that are good from different points of view. The different solutions need to be discussed and considered carefully, and this is good for achieving general education aims like “students should become critical citizens”. Students are better prepared for life if they learn how to decide which solution is better in relation to the question and the people who are concerned. Finally, we would like to counter a typical “No, thank you” argument against teaching real world problems. Yes, you will need more time for this kind of teaching than for a typical lesson training students in mathematical skills and operations. Yes, you will need to prepare more intensively for these lessons and be prepared for a lot of activity in your classroom. You will need to change your role from that of a typical teacher at the centre of the classroom, knowing and telling everything, to that of a manager of the learning process who knows how to solve the problem. But you need help to get started!
We hope you will use this book as your starter pack. We don’t expect you to teach like this every day, but only on occasions during the year. It should be one of your teaching approaches, but not the only one. Try it and you will be happy, because the results will be great for the students and for you!

ACKNOWLEDGEMENTS

We would like to thank all those who made this book possible, especially the many authors who so generously contributed papers. This collaboration, sharing of insights, expertise and resources, benefits all who engage in an enterprise such as this and offers potential benefits to many others who may have access to this volume. We are especially pleased to bring a wealth of material and expertise to an English speaking audience which might otherwise have remained unseen and untapped. The editors would also like to record their thanks to their respective organizations which have supported this endeavour, viz. the Institut für Didaktik der Mathematik, Johannes Kepler University, Linz, and the National Centre for Excellence in Mathematics and Science Teaching and Learning (NCE-MSTL), University of Limerick.

Juergen Maasz, University of Linz, Austria
John O’Donoghue, NCE-MSTL, University of Limerick

Limerick and Linz, Autumn 2010


MANFRED BOROVCNIK AND RAMESH KAPADIA

1. MODELLING IN PROBABILITY AND STATISTICS Key Ideas and Innovative Examples

This chapter explains why modelling in probability is a worthwhile goal to follow in teaching statistics. The approach will depend on the stage one aims at: secondary schools, or introductory courses at university level in various applied disciplines which cover substantial content in probability and statistics, as this field of mathematics is the key to understanding empirical research. It also depends on the depth to which one wants to explore the mathematical details. Such details may be handled more informally, supported by simulation of properties and animated visualizations to convey the concepts involved. In this way, teaching can focus on the underlying ideas and on applications rather than on technicalities. There are various uses of probability. One is to model random phenomena. Such models have become more and more important as, for example, modern physicists build their theory completely on randomness; risk also occurs everywhere, and not only since the financial crisis of 2008. It is thus important to understand what probabilities really mean and the assumptions behind the various distributions – the following sections deal with genuine probabilistic modelling. Another use of probability is to prepare for statistical inference, which has become the standard method of generalising conclusions from limited data; the whole area of empirical research builds on a sound understanding of statistical conclusions going beyond the simple representation of data – sections 6 and 7 will cover ideas behind statistical inference and the role probability plays therein. We start with innovative examples of probabilistic modelling to whet the appetite of the reader. Several examples are analysed to illustrate the value of probabilistic models; the models are used to choose between several actions to improve the situation according to a goal criterion (e.g., reducing costs).
Part of this modelling approach is to search for crucial parameters which strongly influence the result. We then explain the usual approach towards probability – and the sparse role that modelling plays therein – by typical examples, ending with a famous and rather controversial example which led to some heated exchanges between professionals. Indeed we look at this example (Nowitzki) in some depth as a leitmotiv for the whole chapter: readers may wish to focus on some aspects or omit sections which deal with technical details such as the complete solution. In the third and fourth sections, basic properties of Bernoulli experiments are discussed in order to model and solve the Nowitzki task from the context of sports.

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 1–43. © 2011 Sense Publishers. All rights reserved.


The approach uses fundamental properties of the models which are not always highlighted as they should be in teaching probability. In the fifth section, the fundamental ideas underlying a number of probability distributions are developed; this stresses the crucial assumptions for any situation in which a distribution might be applied. A key property is discussed for some important distributions: waiting times, for example, may or may not depend on the time already spent waiting; if they are independent of it, this sheds a special light on the phenomenon which is to be modelled. In a modelling approach, more concepts than usual have to be developed (with the focus on informal mathematical treatment), but the effort is worthwhile, as these concepts give students more direct insight into the assumptions which the modelled situation is required to satisfy. In the sixth section, the statistical question – is Nowitzki weaker in away than in home matches? – is dealt with thoroughly. This gives rise to various ways to tackle the question within an inferential framework. We deal informally with methods that comprise much of what students should know about the statistical comparison of two groups, which forms the core of any introductory course at university in all fields in which data is used to enrich research. While the assumptions should be checked in any case of application, such a crucial test of the assumptions might be difficult. It will be argued that the perspectives of probabilistic and statistical applications are different, and that linking heuristic arguments might be more attractive in the case of statistical inference. Probability distributions are used to make a probability statement about an event, or to calculate expected values in order to decide between different options. Or they may be used to describe the ‘internal structure’ of a situation through the model’s inherent structure and assumptions.
Statistical applications focus on generalizing facts beyond the available data. For that purpose they interpret the given data by probabilistic models. A typical question is whether the data is compatible with a specific hypothesized model; this model, as well as the answer, is interpreted within the context. For example, can we assume that a medical treatment is – according to some rules – better than a placebo treatment? The final two sections resume the discussion of teaching probability and statistics, some inherent difficulties, and the significance of modelling. Conclusions are drawn and pedagogical suggestions are made.

INNOVATIVE EXAMPLES OF PROBABILISTIC MODELLING

As we shall see in a later section, the key assumption of independence is not justified in many exercises set in probability. Indeed, the key question in any modelling is the extent to which the underlying assumptions are or are not justified. Rather than the usual mechanistic applications of probability, a more realistic picture of potential applications will be developed in this section by some selected innovative examples. They describe a situation from the ‘real world’ and state a target to improve or optimize. A spectrum of actions or interventions is available to improve a criterion such as the reduction of costs. A probability distribution is chosen to model the situation, even though the inherent assumptions might not be perfectly fulfilled.


A solution is derived and analysed: how does it change due to changes in the parameters involved in the model, and how does it change due to violations of the assumptions? Sensitive parameters are identified; this approach offers ways of making the best use of the derived solutions, of corroborating the best actions, and of initiating further investigations to improve the available information. The examples deal with novel applications – blood samples, twin light bulbs, telephone call times, and spam mail. Simulation, spreadsheets and other methods are used, illustrating the wide and diverse range of situations where probability helps to model and understand.

Blood Samples Modelled with Binomial Probabilities

The following example uses the binomial model to answer a typical question which might be studied. The ‘outcome’ might be improved even if the model has some drawbacks: the cost situation is improved and hints for action are drawn from the model used, though the actual cost improvement cannot be directly read off the solution. Crucial parameters that strongly influence the solution are identified, about which one may strive to get more information in the future.

Example 1. Donations of blood have to be examined as to whether they are suitable for further processing or not. This is done in a special laboratory after the simple determination of the blood groups. Each donation is judged – independently of the others – ‘contaminated’ with a probability p = 0.1 and suitable with the complementary probability q = 0.9.
a. Determine the distribution of the number of non-suitable donations if 3 are drawn at random.
b. 3 units of different donations are taken and mixed. Only then are they examined jointly as to whether they are suitable or not. If one of those mixed was already contaminated then the others will be contaminated and become useless. One unit has a value of € 50. Determine the loss for the various numbers of non-suitable units among those which are drawn and mixed. Remember: if exactly one is non-suitable then the two others are ‘destroyed’.
c. Determine the distribution of the loss as a random variable.
d. Calculate the expected loss if 3 units of blood are mixed in a ‘pool’.
e. Testing blood for suitability costs € 25 per unit tested; the price is independent of the quantity tested. By mixing 3 units in a pool, a sum of € 50 is saved. Given the expected loss from d., does it pay to mix 3 different units, or should one apply the test for suitability separately to each of the blood units?

A solution to this example is presented in a spreadsheet (Figure 1); the criterion for the decision is based on the comparison of potential loss and benefit by pooling; pooling is favourable if and only if:

Expected loss by pooling < Reduction of cost of lab testing    (1)

All blood donations are assumed to be contaminated with a probability of 0.1, independently of each other. That means we model the selection of the blood donations which are to be combined in a pool and analysed jointly in the laboratory as if it were ‘coin tossing with a success probability of p = 0.1’. While the benefit is a fixed number, the loss has to be modelled by expected values. Comparison (1) yields 25.65 < 50; hence mixing 3 blood units and testing them jointly saves costs.

Modelling of pooling and calculation of expected cost (coding: single unit suitable = 0, not suitable = 1; n = 3 units per pool; p = 0.1, q = 0.9; X ~ Bin(n, p) is the number of non-suitable units in the pool; the loss Y is the cost of units destroyed by pooling):

  X = i   P(X = i)   Units destroyed   Loss Y   Contribution to E(Y)
  0       0.729      0                   0        0
  1       0.243      2                 100       24.30
  2       0.027      1                  50        1.35
  3       0.001      0                   0        0

  E(Y) = 25.65. Cost of testing: separate 75, pooled 25, reduction 50. These figures have to be compared.

Figure 1. Spreadsheet with a solution to Example 1.
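The spreadsheet values of Figure 1 follow directly from the binomial model. A short Python sketch (the variable names are ours, not taken from the spreadsheet):

```python
from math import comb

# Expected loss from pooling n blood units, as in Example 1.
p = 0.1          # probability that a single unit is contaminated
n = 3            # number of units combined in one pool
unit_value = 50  # value of one blood unit in EUR
test_cost = 25   # laboratory cost per test in EUR

expected_loss = 0.0
for k in range(n + 1):                                # k = contaminated units in the pool
    prob = comb(n, k) * p ** k * (1 - p) ** (n - k)   # binomial probability P(X = k)
    destroyed = (n - k) if k > 0 else 0               # suitable units ruined by mixing
    expected_loss += prob * destroyed * unit_value

reduction = n * test_cost - test_cost                 # one joint test instead of n tests
print(round(expected_loss, 2), reduction)
```

The expected loss of € 25.65 stays below the saved testing cost of € 50, so pooling three units is favourable, exactly as comparison (1) states.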

Example 2. Explore the parameters in Example 1.
a. For a probability of p = 0.1 of being contaminated, what is a sensible recommendation for the number of blood units to be mixed?
b. How does the recommendation in a. depend on the probability of suitable blood donations?

While a spreadsheet is effective in assisting to solve Example 1, it becomes vital for tackling the open questions in Example 2. Such an analysis of the input parameters gives much more rationale to the questions posed. Of course, there are doubts as to whether the suitability of blood donations can really be modelled by a Bernoulli process. Even if so, how is one to get trustworthy information about the ‘success’ probability for suitable blood units? As this is a crucial input, it should be varied and the consequences studied. Also, just mixing a fixed number of blood units gives no clue why such a pool size should be chosen; nor does it give a feeling for how the size matters relative to the expected monetary gain or loss connected to the size of the pool which is examined jointly. In a spreadsheet, a slide control for the input parameters p and size n (or for the laboratory cost and the value of a unit) is easily constructed and allows an effective interactive investigation of the consequences. From a spreadsheet as in Figure 2, one may read off (with q = 1 – p):

Expected net saving (q = 0.9, n = 4) = 26.22    (2)

This yields an even better expected net saving than in (1). Interactively changing the size n of the pool shows that n = 4 yields the highest expected value in (2), so that combining 4 units is ‘optimal’ if the proportion of suitable blood units is as high as q = 0.9. With a smaller probability of good blood units, the best size of the pool declines drastically: with q = 0.8, size 2 is the most favourable in cost reduction, and the cost reduction is as low as € 9 per pool of 2, as compared to € 26.22 per 4 units combined to a pool. The reduction of cost per unit is € 4.50 with q = 0.8 and € 6.60 with q = 0.9. There is much more to explore, which will be left to the reader.

Modelling of pooling – exploring the effect of parameter changes (lab cost of testing 25, value of a unit 50, proportion q of suitable units 0.90, size of pool n = 4):

  k suitable   P(k)     Reduction of test cost   Cost of destroyed units   Net saving   Contribution to E
  0            0.0001   75                          0                       75            0.0075
  1            0.0036   75                        –50                       25            0.0900
  2            0.0486   75                       –100                      –25           –1.2150
  3            0.2916   75                       –150                      –75          –21.8700
  4            0.6561   75                          0                       75           49.2075

  Expected net saving: 26.22

Figure 2. Spreadsheet to Example 2 – with slide controls for an interactive search.
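The interactive search with slide controls can equally be scripted. The following brute-force sweep (a sketch of our own; the function name is hypothetical) reproduces the figures quoted above – an optimal pool size of n = 4 with an expected net saving of € 26.22 at q = 0.9, and € 9 for pools of 2 at q = 0.8:

```python
from math import comb

# Scripted version of the interactive exploration in Figure 2.
TEST_COST, UNIT_VALUE = 25, 50  # EUR, as in the example

def expected_net_saving(q, n):
    """Expected net saving in EUR when n units are tested as one pool
    and a single unit is suitable with probability q."""
    reduction = (n - 1) * TEST_COST          # n separate tests replaced by one
    expected_loss = 0.0
    for k in range(n + 1):                   # k = suitable units in the pool
        prob = comb(n, k) * q ** k * (1 - q) ** (n - k)
        destroyed = k if k < n else 0        # suitable units ruined unless the pool is clean
        expected_loss += prob * destroyed * UNIT_VALUE
    return reduction - expected_loss

best_n = max(range(2, 11), key=lambda n: expected_net_saving(0.9, n))
print(best_n, round(expected_net_saving(0.9, 4), 2), round(expected_net_saving(0.8, 2), 2))
```

Replacing the slide control by an explicit sweep makes the sensitivity to q immediately visible: a small drop in the proportion of suitable units drastically shrinks the optimal pool.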

It is merely an assumption that the units are independent and have the same probability of being contaminated. Nevertheless, the crucial parameter is still the size of such a probability as other measurements (of success) are highly sensitive to it. If it were a bit higher than given in the details of the example, pooling would not lead to decreasing cost of testing. If it were considerably smaller, then pools of even bigger size than suggested would lead to considerable saving of money. The monotonic influence becomes clear even if the exact size of decreasing cost cannot be given. Closer monitoring of the quality of the incoming blood donations is wise to clarify circumstances under which the required assumptions are less justified than usual. Lifetime of Bulbs Modelled with Normal Distribution Example 3. Bulbs are used in a tunnel to brighten it and increase the security of traffic; the bulbs have to be replaced from time to time. The first question is when a single bulb has to be replaced. The second is, whether it is possible to find a time when all bulbs may be replaced jointly. The reason for such an action is that one has to send a special repair team into the tunnel and block traffic for the time of replacement. While cost maybe reduced by a complete replacement, the lifetime of the still functioning lamps is lost. The time for replacement is, however, mainly determined by security arguments: with what percentage of bulbs still working is the tunnel still sufficiently lit? Here we assume that the tunnel is no longer secure if over two thirds of the bulbs have failed. 5

BOROVCNIK AND KAPADIA

Two systems are compared for their relative costs: single bulbs and twin bulbs, which consist of two single bulbs – the second is switched on when the first fails. Which system is less costly to install? The lifetime of bulbs in hours is modelled by a normal distribution with mean lifetime μ = 1900 and standard deviation σ = 200.
a. What is the probability that a single bulb fails within 2300 hours in service?
b. Determine the time when two thirds of the single bulbs have failed.
c. Determine an adequate model for the lifetime of twin bulbs and – based upon it – the time when two thirds of the twin bulbs have failed.
Remark. If independence of failures is assumed, then the lifetime of twin bulbs is also normally distributed with parameters: mean = sum of single means, variance = sum of single variances.
d. Assume that at least one third of the lamps in the system have to function for security reasons. The cost of one twin bulb is 2.5 €, a single lamp costs 1 €. The cost of replacing all lamps in the tunnel is 1000 €. For the whole tunnel, 2000 units have to be used to light it sufficiently at the beginning. Under such conditions, is it cheaper to install single or twin bulbs?
The solution can be read off from a spreadsheet like Figure 3. The probability in a. to fail before 2300 hours may be found by standardizing this value to (2300 – 1900)/200 = 2 and calculating the value of the standard normal Φ(2). Parts b. and c. require us to calculate the ⅔ quantile of the normal distribution, which traditionally is done by using probability tables, or which may be directly solved by a standard function. As a feature of spreadsheets, such a quantile may also be determined by a sliding control, which allows us to vary x until the probability P(X ≤ x) = ⅔ is reached in the box. For twin bulbs, first the parameters have to be calculated: μT = 1900 + 1900 and σT = √(200² + 200²).
Such a relation between the parameters of single and twin bulbs may be corroborated by simulation – a proof is quite complex. For the final comparison of costs in d., the calculation yields that single bulbs will be exchanged after 1986.1 hours, twin bulbs after 3921.8 hours. The overall costs of the two systems have to be related to unit time. Of course, the lifetime of the bulbs is not really normally distributed. The estimation of the parameters (mean and standard deviation of lifetime) is subject to imprecision. The normal distribution might not serve as a perfect model. However, perceived as a scenario to investigate ‘what happens if …?’, it helps to analyse the situation and detect crucial parameters. Thus, it might yield clues for the decision on which system to install. On the basis of such a scenario, we have a clear goal, namely to compare the expected costs per unit time E[C]/h of the two systems (see Figure 3):

E[C]/h (single bulbs) – E[C]/h (twin bulbs) = 1.51 – 1.53 < 0

(3)

The comparison of expected costs gives 1.51 € per unit time for single as opposed to 1.53 € for twin bulbs. A crucial component is the price of twin bulbs: a reduction from 2.5 to 2.4 € per twin bulb (which amounts to a reduction of 4%)

MODELLING IN PROBABILITY AND STATISTICS

changes the decision in favour of twin bulbs. Hence this approach allows students to investigate the assumptions, the actual situation and the relative costs. It encourages them to use their own prior knowledge of the situation by varying parameters and costs. This gives practice in applying the normal distribution and then investigating consequences of the situation, some of which may be hard to quantify.

[Spreadsheet: single bulbs with μ = 1900, σ = 200; part a.: Φ(2) = 0.9772; part b.: ⅔ quantile xp = μ + zp·σ = 1986.1 with zp = 0.4307; twin bulbs with μT = 3800, σT = 282.84; part c.: ⅔ quantile 3921.8; part d.: comparison of cost per unit time – single bulbs (1 € each, total 3000 €, exchange after 1986.1 h): 1.51 €; twin bulbs (2.5 € each, total 6000 €, exchange after 3921.8 h): 1.53 €; variant (2.4 € per twin bulb, total 5800 €): 1.48 €. Plots show the lifetime densities of single and twin bulbs and the cost per device over time.]

Figure 3. Spreadsheet with a solution to Example 3.
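The numbers in Figure 3 can be reproduced without a spreadsheet; a minimal sketch using the normal distribution from Python's standard library (the parameters and prices are those given in the example):

```python
from statistics import NormalDist

# Reproducing the key numbers of Figure 3 (no slide control needed).
single = NormalDist(mu=1900, sigma=200)

# a. probability that a single bulb fails within 2300 hours
p_fail_2300 = single.cdf(2300)            # = Phi(2), about 0.9772

# b. time when two thirds of the single bulbs have failed
t_single = single.inv_cdf(2 / 3)          # about 1986.1 hours

# c. twin bulbs: mean = sum of means, variance = sum of variances
twin = NormalDist(mu=1900 + 1900, sigma=(200**2 + 200**2) ** 0.5)
t_twin = twin.inv_cdf(2 / 3)              # about 3921.8 hours

# d. cost per unit time: 2000 units plus 1000 for the exchange team
for name, unit_price, t in [("single", 1.0, t_single),
                            ("twin", 2.5, t_twin),
                            ("twin variant", 2.4, t_twin)]:
    total = 2000 * unit_price + 1000
    print(f"{name:12s} {total / t:.2f} euro per unit time")
```

The loop prints the three cost rates of the comparison (1.51, 1.53 and 1.48 € per unit time), so the effect of the reduced twin-bulb price is immediately visible.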

In fact, there is a systematic error in the normal distribution, as no negative values are possible for a lifetime. However, for the model used, the probability for values less than 0 amounts to about 10⁻²¹, which is negligible for all practical purposes. We will illustrate key ideas behind distributions in section 5; the hazard rate is one of them, which does not belong to the standard repertoire of concepts in probability at lower levels. Hazard is a tendency to fail. However, it is different from propensity (to fail), which relates only to the tendency of new items to fail. To look at an analogy: it is folklore that the tendency to fail (die) for human beings differs with age. The older we get, the higher the age-related tendency to die. Again, such a relation is complex to prove. However, a simulation study may be analysed accordingly, and the percentage of failures can be calculated in two ways: one based on all items in the experiment, the other based only on those which are still functioning at a specific point in time. This enhances the idea of an age-related (conditional) risk of failing, as compared to a general risk of failing, which implicitly relates to all items exposed to failure in the experiment. The normal distribution may be shown to follow an increasing hazard. Such an increasing risk of failing with increasing age is reasonable for the lifetime of bulbs – as engineers know. Thus, even if the normal model may not be interpreted directly by relative frequencies of failures, it may count as a reasonable model for the situation. The exact amount of net saving by installing the optimal system cannot be quantified


but the size of it may well be read from the scenario. These are the sort of modelling issues to discuss with students in order to enhance their understanding and interest.

Call Times and Cost – the Exponential and Poisson Distributions Combined

Example 4. The length Y of a telephone call is a random variable and can be modelled as exponentially distributed with parameter λ = 0.5, which corresponds to a mean call duration of 1/λ = 2. The cost k of a call is a function of its duration y and is given by a fixed amount of 10 for y ≤ 5, and amounts to 2y for y > 5.
a. Determine the expected cost of a telephone call.
b. Calculate the standard deviation (s.d.) of the cost of a telephone call.
c. Determine an interval for the cost of a single call with probability of 99%.
The number of calls during a month will be modelled by a Poisson distribution with μ = 100; this corresponds to an expected count of 100 calls. Determine
d. the probability that the number of phone calls does not exceed 130 per month;
e. a reasonable upper constraint for the maximum cost per month and a reasonable number depicting the risk that such a constraint does not hold – describe how you would proceed to determine such a limit more accurately.

[Spreadsheet simulation ‘Cost of single calls and bills of 130 calls’: call lengths are generated from uniform random numbers via the exponential distribution (λ = 0.5) and converted to costs; single calls (parts a., b.): expected cost E = 10.3283, s.d. σ = 1.5871 (tricky integrals, or simulation); part c.: with risk 0.01 a single call costs more than 18.25 (99% quantile of the simulated data); part d.: the Poisson (μ = 100) risk that the number of calls exceeds 130 is about 0.002, which seems negligible; part e.: bills of 130 calls have E130 = 130·E = 1342.68 and σ130 = √130·σ = 18.10; the normal approximation gives 1384.78 as the 99% limit, the 99% quantile of the simulated bills is 1393.78, and the risk that a bill exceeds 1387.5 is about 0.02. While the cost of single calls cannot be approximated by a normal distribution (exponential distributions are heavily skewed), the cost of 130 calls, as a sum, may be so approximated (CLT), though it is still slightly skewed. The simulation is based on only 714 random numbers; the results still fluctuate but would stabilize if more data were generated.]

Figure 4. Spreadsheet with a solution to Example 4.


The models suggested are quite reasonable. However, the analytic difficulties are considerable – even at university level. A solution to part e. may be found from a simulation scenario of the assumptions. The message of such examples is that not all models can be handled rigorously. The key idea here is to understand the assumptions implicit in the model, judge whether they form a plausible framework for the real situation to be modelled, and interpret the result accordingly.

In section 5, the model of the exponential distribution will be related to the idea of ‘pure’ random length and to the memory-less property, according to which the further duration of a call has the same distribution regardless of how long it has already gone on. If such a condition is rejected for the phoning behaviour of a person, the result derived here would not be relevant. The number of phone calls is modelled here by a Poisson distribution. The key idea behind this model is that events (phone calls) occur completely randomly in time with a specific rate per unit time; see section 5 for more details. The simulation is based on the following mathematical relation between distributions (which itself may be investigated by simulation):

If Z is a random number in the interval (0, 1), then Y = –ln(1 – Z)/λ is exponentially distributed with parameter λ.

(4)
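A minimal simulation sketch based on relation (4), using the parameters of Example 4 (the sample sizes are illustrative): it estimates the expected cost of a single call, which a short integration of the cost function against the exponential density shows to be 10 + 4·e^(–2.5) ≈ 10.33, matching the spreadsheet value, and it evaluates the Poisson tail of part d.

```python
import math
import random

random.seed(1)
LAM = 0.5  # rate: mean call length is 1/LAM = 2

def call_length() -> float:
    # relation (4): inverse-transform sampling of the exponential
    z = random.random()
    return -math.log(1.0 - z) / LAM

def cost(y: float) -> float:
    # fixed charge 10 up to 5 units of time, 2*y beyond that
    return 10.0 if y <= 5 else 2.0 * y

# a. Monte Carlo estimate of the expected cost of one call;
#    analytically E[K] = 10 + 4*exp(-2.5), about 10.3283
sample = [cost(call_length()) for _ in range(200_000)]
print(sum(sample) / len(sample))

# d. P(number of calls > 130) for a Poisson(100) month,
#    summed term by term to avoid overflow
term, cdf = math.exp(-100.0), 0.0
for k in range(131):
    if k > 0:
        term *= 100.0 / k
    cdf += term
print(1.0 - cdf)  # around 0.002, as in Figure 4
```

The same simulated sample can be reused for the s.d. in part b. and the 99% quantile in part c., exactly as the spreadsheet does.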

A further key idea used in this example is the normal approximation for the cost of the bill of 130 calls for a period, as it is the sum of the ‘independent’ single-call costs. The histogram of simulated bills ‘confirms’ this to some extent but still shows some skewness, which originates from the highly skewed exponential distribution. To apply the approximation, the mean and s.d. of the sum of independent and identically distributed (called iid, a property analogous to (9) later) variables have to be known. An estimate of the single-call cost may again be gained from a simulation study, as the integrals here are not easy to solve. To get from single calls Xi to bills of n calls, the following mathematical relation is needed:

Xi ~ X (iid), Tn := X1 + ... + Xn; then E(Tn) = n · E(X), σ(Tn) = √n · σ(X). (5)
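Relation (5) can be checked by simulation, as suggested; a sketch with a deliberately simple choice of X (uniform on (0, 1), an assumption for illustration, with n = 16 summands):

```python
import random
import statistics

# Checking relation (5) by simulation: for sums of n iid variables the
# mean scales with n, but the standard deviation only with sqrt(n).
random.seed(2)
n, reps = 16, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

mean_x, sd_x = 0.5, (1 / 12) ** 0.5          # E(X) and sigma(X) for U(0,1)
print(statistics.fmean(sums))                # close to n * E(X) = 8
print(statistics.stdev(sums))                # close to sqrt(n) * sigma(X), about 1.155
```

The sample s.d. of the sums comes out near √16 · σ(X) rather than 16 · σ(X), which makes the √n law in (5) plausible before any proof.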

The first part is intuitive – if E(X) is interpreted as the fair price of a game, then Tn describes the winnings of n repetitions of it; the fair price of these should be n times the price of a single game. The second part is harder and has to be investigated further. As it is fundamental, a simulation study can be used to clarify matters.

Spam Mail – Revising Probabilities with Bayes’ Formula

Example 5. Mails may be ‘ham’ or ‘spam’. To recognize this and build a filter into the mail system, one might scan for words that are contained in hams and in spams; e.g., 30% of all spams contain the word ‘free’, which occurs in hams with a frequency of only 1%. Such a word might therefore be used to discriminate between ham and spam mails. Assume that the basic frequency of spams in a specific mailbox is 10 (30)%.


a. If a mail arrives with the word ‘free’ in it, what is the probability that it is spam?
b. If a message passes such a mail filter, what is the probability that it is actually spam?
c. Suggest improvements to such a simple spam filter.

The easiest way to find the solution is to re-read all the probabilities as expected frequencies of a suitable number of mails. If we base our thoughts on 1000 mails, we expect 100 (300) to be spam. Of these, we expect 30 (90) to contain the word ‘free’. The imaginary data – natural frequencies in the jargon of Gigerenzer (2002) – are in Table 1, from which it is easy to derive answers to the questions: If the mail contains ‘free’, its conditional probability to be spam is 0.7692 (30/39 with 10% spam overall) and 0.9278 (90/97 with 30% spam). The filter, however, has limited power to discriminate between spam and ham, as a mail which does not contain the word ‘free’ still has a probability to be spam of 0.0728 (70/961) or 0.2326 (210/903), depending on the overall rate of spam mails.

Table 1. Probabilities derived from fictional numbers – natural frequencies

              Spam overall 10%              Spam overall 30%
        ‘free’  not ‘free’    all     ‘free’  not ‘free’    all
Ham          9         891    900          7         693    700
Spam        30          70    100         90         210    300
all         39         961   1000         97         903   1000

This shows that the filter is not suitable for practical use. Furthermore, the conditional probabilities are different for each person. The direction for further investigation seems clear: to find more words that separate ham and spam mails and let the filter learn from the user, who classifies several mails into these categories. The table with natural (expected) frequencies – sometimes also called a ‘statistical village’ – delivers the same probabilities as the Bayesian formula but is easier to understand and is accepted much better by laypeople. For further development, the inherent key concept of conditional probability should be made explicit. Updating probabilities according to new incoming information is a basic activity. It has a wide range of applications, such as in medical diagnosis or in court when indicative evidence has to be evaluated. The Bayesian formula to solve this example is

P(spam | ‘free’) = P(spam) · P(‘free’ | spam) / [P(spam) · P(‘free’ | spam) + P(ham) · P(‘free’ | ham)]

(6)

Gigerenzer (2002) gives several examples of how badly that formula is used by practitioners who apply it quite frequently (not knowing the details, of course). There are many issues to clarify, amongst them the tendency to confuse P(spam|'free') and P('free'|spam). Starting with two-way tables is advisable, as advocated by Gigerenzer (2002); later the structure of the process should be made explicit to promote the idea of continuously updating knowledge by new evidence.
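The natural-frequency answers of Table 1 and formula (6) can be cross-checked in a few lines (all values are those given in Example 5):

```python
# Bayes' formula (6) for the spam filter, checked against the natural
# frequencies of Table 1: P('free'|spam) = 0.30, P('free'|ham) = 0.01,
# overall spam rate 10% or 30%.

def p_spam_given_free(p_spam: float) -> float:
    p_free_spam, p_free_ham = 0.30, 0.01
    num = p_spam * p_free_spam
    return num / (num + (1.0 - p_spam) * p_free_ham)

print(p_spam_given_free(0.10))   # = 30/39, about 0.7692
print(p_spam_given_free(0.30))   # = 90/97, about 0.9278

# Limited power of the filter: a mail without 'free' can still be spam.
def p_spam_given_no_free(p_spam: float) -> float:
    p_nofree_spam, p_nofree_ham = 0.70, 0.99
    num = p_spam * p_nofree_spam
    return num / (num + (1.0 - p_spam) * p_nofree_ham)

print(p_spam_given_no_free(0.10))  # = 70/961, about 0.0728
print(p_spam_given_no_free(0.30))  # = 210/903, about 0.2326
```

The four printed values agree exactly with the ratios read off the natural-frequency table, illustrating that the two representations are equivalent.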


Summary

Overall, these four examples show the interplay between modelling and probability distributions, with simplifying assumptions made but then analysed in order for students to develop a deeper understanding of the role probability may play in making decisions more transparent. Probability models are not only part of glass bead games but may partially model situations from reality and help to compare alternatives for action rationally. In each case, there is an interesting and key question to explore within a real-life context. Simplifying assumptions are made in order to apply a probability distribution. The results are then explored in order to check the validity of the assumptions and find out the sensitivity of the parameters used. We have only given an outline of the process in each example; a longer time would be needed with students in the classroom or lecture hall.

THE USUAL APPROACH TOWARDS TEACHING PROBABILITY

The usual approach towards probability is dominated by games of chance, which per se is not wrong, as the concepts stem from such a context, or from early insurance agreements, which are essentially loaded games of chance. The notable exception concerns symmetry arguments: probabilities that would otherwise arise from symmetry considerations in games are replaced by estimates for the basic (subjective) probabilities. The axiomatic rules of probability are usually discussed cursorily and merely used as basic rules to obey when dealing with probabilities. Thus, no detailed proofs of simple properties are given, and if given they are simplified and backed up by didactical methods like tree diagrams, which may be applied as one deals primarily with finite or countably infinite probability spaces. The link from an axiomatically ‘determined’ probability to distributions is not really established¹ – so the many probability distributions develop their own ‘life’. Part of their complexity arises from their multitude. Normally, only a few are dealt with ‘paradigmatically’ in teaching. At the secondary stage these are mainly the binomial and normal distributions; in introductory courses at universities the hypergeometric, Poisson, or exponential distributions are added. The various distributions are dealt with rather mechanistically. A short description of an artificial problem is followed by the question for the probability of an event like ‘the number of successes (in a binomial situation) does not exceed a specified value’. The context plays only a shallow role here. Questions of modelling, e.g., what assumptions are necessary to apply the distribution in question, or in what respect such requirements are fulfilled in the context, are rarely addressed. The following examples illustrate the usual ‘approach’; Example 6 illustrates an attitude towards modelling which is not rare.

Example 6. For the production of specific screws, it is known that ‘on average’ 5% are defective.
If 10 screws are packed in a box, what is the probability that one finds two (or, not more than two) defective pieces in a box? The screws could be electric bulbs, or electronic devices, etc.; defective may be defined as ‘lasting less than 2000 hours’. The silent assumption in all these examples


is: ‘model’ the selected items as a random sample from the production. Sometimes the model is embodied by the paradigmatic situation of random selection from an urn with two sorts of marbles – marked 1 and 0 – predominantly with (sometimes without) replacement of the drawn marbles. The context is used as a fig leaf to tell different stories for very similar tasks, namely to drill skills in calculating probabilities from the right distribution. Neither a true question to solve, nor alternative models, nor a discussion of the validity of assumptions is involved. No clear view is given of why probabilistic modelling helps to improve one’s understanding of the context. The full potential of probability to serve as a means of modelling is missed by such an attitude; see Borovcnik (2011).

Modelling from a Sporting Context – the Nowitzki Task

The following example shows that such restricted views on probability modelling are not bound to single teachers, textbooks, or researchers in educational statistics. The example is taken from a centrally organized final exam in a federal state in Germany but could be taken from anywhere else. The required assumptions to solve the problems posed are clarified. This is – at the same time – a fundamental topic in modelling. The question, as we shall see later, aroused fierce controversy.

Example 7 (Nowitzki task). The German professional basketball player Dirk Nowitzki plays in the American professional league NBA. In the season 2006–07 he achieved a success rate of 90.4% in free throws. (For the original task, which was administered in 2008, see Schulministerium NRW, n.d.²)
Probabilistic part. Calculate the probability that he
a. scores exactly 8 points with 10 trials;
b. scores at the most 8 points with 10 trials;
c. is successful in free throws at the most 4 times in a series.
Statistical part. In home matches he scored 267 points with 288 free throws, in away matches the success rate was 231/263. A sports reporter [claimed that] Nowitzki has a considerably lower success rate away. At a significance level of 5%, analyse whether the number of scores in free throws away
a. lies significantly below the ‘expected value’ for home and away matches;
b. lies significantly below the ‘expected value’³ for home matches.

This example will be referred to as the Nowitzki task and will be used extensively below to illustrate the various modelling aspects both in probability and statistics. From the discussion it will become clearer what assumptions we have to rely on and how these are ‘fulfilled’ differently in probabilistic and statistical applications.

MODELLING THE NOWITZKI TASK

The Nowitzki task (Example 7) has a special history in Germany, as it was an item in a centrally administered exam. The public debate provoked a harsh critique: its probabilistic part was criticized as unsolvable in the form it was posed; its statistical


part was disputed as difficult, and the ministerial solution was ‘attacked’ for blurring the fundamental difference between ‘true’ values of parameters of models and estimates thereof. The probabilistic task shows essentially the same features as Example 6; the context could be taken from anywhere, it is arbitrary. The statistical part, however, allows for more discussion of the fundamental problem of empirical research, which has to deal with generalizing results from samples to populations. Various models are compared with respect to their quality in modelling the situation. Here we focus on modelling and then solving the probability part.

Basic Assumptions of Bernoulli Processes

This task is designed to be an exemplar of an application of the binomial distribution. It is worthwhile to repeat the basic features of the model involved. This distribution allows one to model experiments with a fixed number of trials and two outcomes for each repetition of the experiment (trial); one is typically named ‘success’ and the other ‘failure’. The basic assumptions are that
– the probability p of success is the same for all trials, and
– single trials do not ‘influence’ each other, which means – probabilistically speaking – the trials are independent.
Such assumptions are usually subsumed under the name of Bernoulli experiments (Bernoulli process of experiments). If the results of the single trials are denoted by random variables X1, ..., Xn with

Xi = 1 (success) or Xi = 0 (failure)

(7)

the random variables have to be independent, i.e., P(Xi = xi, Xj = xj) = P(Xi = xi) · P(Xj = xj) for i ≠ j.

(8)

Such a product rule holds also for more than two variables X i of the process. The assumption usually is denoted by: iid

X i ~ X ~ B(1, p) ,

(9)

where the ‘iid’ refers to independent, identically distributed random variables Xi. The model, however, is not uniquely determined by such a Bernoulli process. Still missing is information about the value of p. From the perspective of modelling, the decision on which distribution to apply is only one step towards a model for the situation in question. Usually, the model consists of a family of distributions, which differ by the values of one (or more) parameters. The next step is to find values for the parameters to fix one of the distributions as the model to use for further consideration. How to support the modeller in choosing such a family like the binomial or normal distributions is discussed from

BOROVCNIK AND KAPADIA

a modelling perspective in section 5. The step of modelling to get values for the parameters of an already chosen model is outlined here. The process to get numbers for the parameters (p here) is quite similar for all models. In this section matters are discussed for Bernoulli processes. Some features arise from the possibility to model the data as coming from a finite or an infinite population. The parameter p will be called the ‘strength’ of Nowitzki. The Nowitzki task was meant to go beyond an application within games of chance. The probability of success of a single trial of Nowitzki is not determined by a combinatorial consideration (justified by the symmetry of an experiment). The key idea to get numbers for the parameters is to incorporate some ‘knowledge’ or information. The reader is reminded that the problem involves 10 trials, and the task will be treated as an open task to explore rather than just an examination question. So one might form a hypothesis on the success rate subjectively, for example.

Model 1. The probability p could be determined by a hypothesis like: Nowitzki is equally good as in the last two years, when his success rate has been (e.g.) 0.910. Supposing that such a value also holds for the present season, the model is fixed. This value could well be a mere fiction – just to ‘develop a scenario’ and determine what its consequences would be. On the basis of this model, the random variable

Tn = X1 + ... + Xn ~ B(n = 10, p = 0.910),

(10)

i.e., the (overall) number of successes follows a binomial distribution with parameters 10 and 0.910.

Model 2. The probability p is estimated from the given data, i.e., p̂ = 0.904. In this case, the further calculations are usually based on

Tn ~ B(n = 10, p = 0.904).

(11)
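Under model (11), the probabilities asked for in parts a. and b. of the probabilistic part follow directly from the binomial formula; a minimal sketch (with model 1 one would simply set p = 0.910 instead):

```python
from math import comb

# Binomial probabilities for the probabilistic part, under model (11):
# T ~ B(n = 10, p = 0.904).
n, p = 10, 0.904

def pmf(k: int) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_exactly_8 = pmf(8)                          # part a.
p_at_most_8 = sum(pmf(k) for k in range(9))   # part b.
print(p_exactly_8, p_at_most_8)               # about 0.185 and 0.248
```

Part c. (at most 4 successes in a series) needs a different, run-based argument and is not covered by this sketch.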

This is somehow the ‘best’ model, as the estimation procedure leading to p̂ = 0.904 fulfils certain optimality criteria like unbiasedness, minimum variance, efficiency, and asymptotic normality.

Model 2 (variant). The approach in model 2 misses the fact that the estimate p̂ is not identical to the ‘true’⁴ value of p. There is some inaccuracy attached to the estimation of p. Thus, it might be better to derive a (95%) confidence interval for the unknown parameter p in a first approach, leading to an interval

[pL, pU] = [0.8792, 0.9284]

(12)

and only then calculate the required probabilities in Example 7 with the worst case pL and the best case pU. This procedure leads to a confidence interval for the required probability, reflecting the inaccuracy of the estimate p̂.
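A simple Wald (normal-approximation) confidence interval computed from the season totals given later in the text (498 successes in 551 throws) reproduces the interval (12) to the quoted precision:

```python
from statistics import NormalDist

# 95% Wald interval for p from the season data: 498 successes in 551
# free throws. This is one standard construction; the exam's own
# derivation of (12) is not spelled out in the text.
successes, n_throws = 498, 551
p_hat = successes / n_throws                      # about 0.904
z = NormalDist().inv_cdf(0.975)                   # about 1.96
half_width = z * (p_hat * (1 - p_hat) / n_throws) ** 0.5
print(p_hat - half_width, p_hat + half_width)     # about [0.8792, 0.9284]
```

Recomputing the binomial probabilities at both endpoints then brackets the answers of the probabilistic part, as the variant of model 2 suggests.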


Model 3. The value of p is equated (not estimated) to the success rate of the whole season, i.e., p := 0.904. This leads to the same model as in (11) but with a completely different connotation. The probability of success in a free throw may – after the end of the season – be viewed as factually known as the number of successes (498) divided by the number of trials (551). There is no need to view it any longer as unknown and treat the success rate of 0.904 as an estimate p̂ of an unknown p, which results from an imagined infinite series of Bernoulli trials.

Investigating and Modelling the Unknown Value of p

Three different ways may be pursued to provide the required information for the unknown parameter. With the binomial distribution, one needs information about the value of p, which may be interpreted as the success rate in the Bernoulli process in the background. The information used to fix the parameter has a different connotation in each case, as described here. In due consequence, the models applied inherit part of this meaning. The cases to differentiate are:
i. p is known
ii. p is estimated from data
iii. p is hypothesized from similar situations
With games of chance, symmetries of the involved random experiment allow one to derive a value for p; e.g., ½ for heads in tossing a ‘fair’ coin – case i. Most applications, however, lack such considerations and one has to invoke ii. or iii.

i. The assumption of equiprobability for all possible cases (of which one is called ‘success’) is – beyond games of chance – sometimes more a way of putting it, just to fix a model to work with. For coin tossing, this paves the way to avoid tedious data production (actually tossing the coin quite often) and to work with such a model to derive some consequences on the basis of ‘what would be if we suppose the coin to be fair (symmetric)’. Normally, for coins, a closer scrutiny would not deviate too much from the presupposition of p = ½ and would yield quite similar results.
With the basketball task, the value of p could be known from the past (model 1), which also relies on the further assumption that ‘things remain the same’⁵, i.e., there was no change from past to present. This is an assumption for which a substantial justification is usually lacking – those who do not rely too heavily on the information of the past might react more intelligently in the present.⁶

ii. Closer to the present situation in Example 7 is to use the latest data available (from the present season). The disadvantage of such a procedure is that the difference between the ‘true’ value of p and an estimate p̂ might be blurred, thereby forgetting that the estimate is inaccurate. One possibility to deal with this is the variant of model 2. The inaccuracy of estimates is best illustrated by confidence intervals. Varying the size of the underlying samples where the data stem from gives a clearer picture of the influence of this lack of information. An important assumption for the data base from which the estimate is calculated is: it has to be a random sample of the underlying Bernoulli process, which is


essentially the same as the parent Bernoulli process. Clearly, the assumption that a sample is random is rarely fulfilled and often is beyond scrutiny. Usually there are qualitative arguments to back up such an assumption. It is to be noted that the estimation of the probability is interwoven with the two key assumptions of Bernoulli experiments – the same success probability in all trials, and independence between them. Otherwise, probabilities, such as a probability of success with a free throw, have no meaning.

iii. A hypothesis about the success probability could be corroborated by knowledge about the past, as in i. However, the season is completed and, as a matter of fact, the success rate of Nowitzki was 0.904. Applying the factual knowledge about all games yields a value for the unknown parameter as

p := 0.904,

(13)

which amounts to much more concrete information than the estimation procedure leading to p ≈ p̂ = 0.904. The success probability in (13) may be justified and clarified by the following: As the season is completed, one knows all the data. There will be no more. A new season will have irretrievably different constellations. The success rate in 2006–07 is – as a matter of fact – 0.904. There could well be the question as to how to interpret this number and whether it is possible to interpret it as a success probability.

The key question is whether it makes sense to interpret this 0.904 as a success probability. This interpretation is bound to the assumption that the data stem from independent repetitions of the same Bernoulli experiment. This requires – taken literally – that for each free throw of Nowitzki the conditions have been exactly the same, independently of each other and independent of the actual score and the previous course of the match, etc. From this point of view one might question whether the data really are compatible with the pre-requisites of a Bernoulli process. One could, e.g., inspect the number of ‘runs’ (points or failures in a series) and evaluate whether it lies above or below the expected value for a Bernoulli process, in order to check the plausibility of its underlying assumptions.

The way in which information is used to get numerical values for the unknown parameter influences the character of the model, which is fixed by it. From a modelling perspective, this has deep consequences, as any interpretation of results from the model has to take such ‘restrictions’ into consideration.
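The runs check suggested above can be sketched in a few lines. The throw-by-throw record is not given in the task, so the sequences here are simulated under the Bernoulli assumption; a real check would compare Nowitzki's observed number of runs with the expectation 1 + 2(n − 1)p(1 − p), which is the standard formula for a Bernoulli sequence.

```python
import random

# Runs check for the Bernoulli assumption: a run is a maximal block of
# equal outcomes. Under independence with success probability p, the
# expected number of runs in n trials is 1 + 2*(n-1)*p*(1-p).
# Data are simulated; the task provides only season totals.

def count_runs(outcomes) -> int:
    return 1 + sum(a != b for a, b in zip(outcomes, outcomes[1:]))

random.seed(3)
n, p = 551, 0.904
expected_runs = 1 + 2 * (n - 1) * p * (1 - p)
simulated = [count_runs([random.random() < p for _ in range(n)])
             for _ in range(2_000)]
print(expected_runs, sum(simulated) / len(simulated))
# A markedly lower observed count than expected would hint at streaks,
# i.e. at a violation of the independence assumption.
```

The simulated mean agrees with the formula; comparing an actual sequence against this benchmark is exactly the kind of plausibility check described in the text.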
If the value of p is thought to be known – either by reference to a symmetric experiment, or by an unambiguous statement like 'from long-standing experience from the past we know p to be 0.910' in the wording of Example 7 – the probabilistic part becomes trivial from a modelling perspective and reduces to a direct application of binomial probabilities. The solution may be found by means of a hand-held calculator, a spreadsheet, or even old-fashioned probability tables – the answer is straightforward and undisputed.
The discussion about the various ways to deal with information about the success rate p might lead to the didactical conclusion that such questions have to be

MODELLING IN PROBABILITY AND STATISTICS

excluded from a final exam, especially if it is put forward centrally. The information in such tasks has to be conveyed clearly, and the models have to be precisely and explicitly determined by the very text (not the context) of the task. The question remains – under such circumstances – whether it would still be worthwhile to teach probability if it were reduced to a mere mechanistic application of formulae in such exams. What is interesting is how the process of modelling used allows for an answer to the problem, and in what respect such a model misses important features of the situation involved. To choose between various possible models and to critically appreciate the model finally chosen is a worthwhile standard to reach in studying probability. In only rare cases is there one distinct answer to the problem in question.
The assumptions of a Bernoulli process are not well fulfilled in sports and in many other areas where such methods are 'blindly' applied. Such assumptions establish (more or less well) a scenario (as opposed to a model that fits the real situation very well), which allows an inspection of the situation on the basis of a 'what would be – if we assume …'. Then, of course, situations have to be set out where such scenarios may deliver suitable orientations despite their lack of fit to the situation (for the idea of a scenario instead of a model, see Borovcnik, 2006).
If p is not known directly, there are various ways to fill the gap of information – the scale ranges from hypotheses of differing credibility to estimates from statistical data of differing relevance (depending on the 'grade of randomness' of the sample). Clearly, a true value of p has to be distinguished from an estimate p̂ of p. The whole of inferential statistics is based on a careful discrimination between true parameters and estimates thereof. However, again, issues are not so easy and clear-cut.
What may be viewed as a true parameter in one model may be viewed as an estimate in another model – see the ideas developed subsequently. If the option of a statistical estimate of the unknown parameter is chosen as in (11), then the data have to fulfil the assumption of a random sample – an independent repetition of the same basic experiment yielding each item of data. The accuracy linked to a specific sample may be best judged by a confidence interval as in (12). It might be tempting to reduce the length of such a confidence interval and to increase the precision of information about the unknown parameter by increasing the sample size. However, in practice, obtaining more data usually means a lower quality of data; i.e., the data no longer fulfil their fundamental property of being a random sample, which introduces a bias in the data with no obvious way to repair it.
If the option of hypothesizing values for the unknown parameter is chosen as in (10) or in (13), one might have trouble justifying such a hypothesis. In some cases, however, good arguments might be given. For the statistical part of Example 7, when it comes to an evaluation of whether Nowitzki is better in home than in away matches, a natural hypothesis emerges from the following modelling: split the Bernoulli process for home and away matches with different values of p, say pH and pA. The assumption of equal strength (home and away) leads to the hypothesis

pA = pH, or pA − pH = 0.    (14)


BOROVCNIK AND KAPADIA

Analysis of the data is then done under the auspices 'as if the difference of the success probabilities home and away were zero'. However, it is not straightforward to derive the distribution for the test statistic p̂A − p̂H.

More about Assumptions – A Homogenizing Idea 'Behind' the Binomial Distribution

In the context of sports, it is dubious to interpret relative frequencies as a probability and – vice versa – it is difficult to justify estimating an unknown probability by relative frequencies. What is different in the sports context from games of chance, where the idea of relative frequencies emerged? It is the comparability of single trials, the mutual independence of single trials – that is the hinge for transferring the ideas from games to other contexts. For probabilistic considerations such a transfer seems more crucial than for a statistical purpose, which focuses on a summary viewpoint.
In theory, the estimation of the success parameter p improves by increasing the sample size. Here, this requires combining several seasons together. However, the longer the series – especially in sports – the less plausible the assumptions for a Bernoulli process. And, if relative frequencies are used to estimate the underlying probabilities, condition (9) of a Bernoulli process has to be met. Only then do the estimates gain in precision with increasing sample size. However, for Nowitzki's scores, the assumptions have to be questioned. People and the sport change over time, making assumptions of random, independent trials as the basic modelling approach less tenable. Take the value of 0.904 as an estimate of his current competency to make a point with a free throw – formalized as a probability p.
This requires an unrealistic assumption of 'homogenizing': Nowitzki's capacity was constant for the whole season and independent of any accompanying circumstances – not even influenced by whether, in one game, everything is virtually decided with his team leading or trailing by a big gap close to the end of the match, or there is a draw and this free throw – the last event in the match – will decide about winning or not. For statements related to the whole entity of free throws, such a homogenization might be a suitable working hypothesis. Perhaps the deviations from the assumptions balance out over a longer series. To apply results derived on such a basis for the whole entity to a sequence of 10 specific throws and calculate the probability of 8 points at the most, however, makes hardly any sense – and even less so if it deals with the last 10 throws of an all-deciding match. To imbue p with a probabilistic sense, to apply the binomial distribution sensibly, one has to invent a scenario like the following: all free throws of the whole season have been recorded by video cameras. Now we randomly select 10 clips and ask: how often does Nowitzki make the point?

'SOLUTION' OF THE PROBABILISTIC PART OF THE NOWITZKI TASK

In this section, the solutions are derived from the various models and critically appraised. As the choice of model depends not only on assumptions but also on an


estimation of unknown parameters, the question arises which of the available models to choose – a fundamental issue in modelling. Several models are dealt with in the Nowitzki task and their relative merits are made clear. The methods of determining a good choice for the parameters also convey key features of pedagogy – some knowledge is taken from the context, some is added by statistical estimation.
While for parts a. and b. the results seem straightforward, part c. gives 'new' insights. This task was fiercely rejected by some experts as unsolvable. However, by highlighting a fundamental property of Bernoulli series, a solution of part c. becomes easier. If the chosen model is taken seriously, then the modeller is in the same situation as in any game of chance. In such games, the player can start at any time – it does not matter. The player can also eliminate any randomly chosen games without a general change in the result. That is the key idea involved.

Calculation of the Probability of Simple Events – Parts a. and b.

With model 1 and the assumption that p = 0.910, the distribution is given in (10). Using a spreadsheet gives the solution with this specific binomial distribution as set out in Table 2. Model 3 is handled in the same way.

[Figure 5: two panels plotting probability (%) against number of hits (0 to 10) – left panel 'Probability distribution for number of hits in 10 trials under various models', comparing model 1 with models 2 and 3; right panel 'Probability distribution for number of hits in 10 trials – best and worst case' for model 2 (variant).]
Figure 5. Probability distributions for the number of hits under the various models.

With model 2 and the estimate p̂ = 0.904, the distribution is fixed in (11). Using model 2 (variant), one derives the confidence interval (12) for Nowitzki's 'strength' and uses the binomial distribution with parameters corresponding to the worst and best cases for the playing strength. The distributions for the number of hits in 10 trials are depicted in Figure 5. While models 1 and 2 are similar, there is a huge difference between best and worst case in model 2 (variant).

Table 2. Probabilities calculated under the various models

              Model 1      Model 2 (3)    Model 2 (variant)
              p = 0.910    p̂ = 0.904     Worst case pL   Best case pU
P(T10 = 8)    0.1714       0.1854         0.2345          0.1273
P(T10 ≤ 8)    0.2254       0.2492         0.3449          0.1573
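The spreadsheet computation behind Table 2 takes only a few lines of code. The following sketch (in Python, as one possible tool besides a spreadsheet) reproduces the model 1 column with p = 0.910:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = 0.910  # model 1: success probability assumed known
pmf_8 = binom_pmf(8, 10, p)                          # P(T10 = 8)
cdf_8 = sum(binom_pmf(k, 10, p) for k in range(9))   # P(T10 <= 8)

print(round(pmf_8, 4), round(cdf_8, 4))  # 0.1714 0.2254, as in Table 2
```

The other columns follow by swapping in p̂ = 0.904 or the endpoints of the confidence interval (12).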


It is remarkable, and worthy of pedagogical discussion in the classroom, that the solutions differ so much when the assumptions seem to be very similar. The inaccuracy conveyed by the confidence interval (12) on p reflects a margin of just under 5 percentage points. Nevertheless, the variant of model 2 gives a margin of 0.1573 to 0.3449 for the probability in question b. Thus, it is crucial to remember that one has only estimates of the unknown parameter and imperfect knowledge.

The Question 'Nowitzki Scores at Most Four Times in a Series'

Task c. was disputed in a public discussion, in which statisticians were also involved. It was claimed that it cannot be solved without an explicit number of observations given. Suggestions to fix the task are reported, and a correct solution using a key property of Bernoulli processes is given below. The following passage is taken from an open letter to the ministry of education (Davies et al., 2008): "The media reported that one part of the [Nowitzki task] was not solvable because the number of trials is missing. This – in fact – is true and therefore several interpretations of the task are admissible, which lead to differing solutions."
Davies (2009, p. 4) illustrates three ways to cope with the missing number of trials. The first is to take n = 10, as it was used in the first part of the task. This suggestion comes out of 'student logic' but leads into an almost intractable combinatorial problem: one has to inspect all 2^10 = 1024 series of 0's and 1's (for failures and successes) to see whether they contain a segment of five or more 1's (indicating the complement of the event in question) or not.
A second possibility is to take n = 5, which makes the problem very simple: the only sequence not favourable to the event in question is 1 1 1 1 1. Thus the probability of the complementary series, which is what one is looking for here, equals

1 − p^5    (15)

and the result is dependent on the chosen model (see Table 3). Again it is surprising that the change from model 2 to its variant reveals such a high degree of imprecision implicit in the task: the required probability is known only within a range of 0.31 to 0.47 if one refers to an estimate of the unknown probability of Nowitzki's strength. The reader is reminded that model 3 does not have this problem; its result coincides with model 2 (without the variant), as the parameter is set to be known by (13).

Table 3. Probabilities of the series 'less than 5' with n = 5 under the various models

           Model 1      Model 2 (3)    Model 2 (variant)
           p = 0.910    p̂ = 0.904     Worst case pL   Best case pU
p^5        0.6240       0.6031         0.5253          0.6898
1 − p^5    0.3760       0.3969         0.4747          0.3102
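The 'almost intractable' combinatorial problem for n = 10 mentioned above is in fact easily handled by brute force on a computer. This sketch (Python, illustrative) enumerates all 2^n binary sequences, keeps those without a run of five or more successes, and sums their probabilities; for n = 5 it reproduces 1 − p^5 as in Table 3, and it shows that the answer for n = 10 is strictly smaller:

```python
from itertools import product

def prob_no_run(n: int, p: float, run: int = 5) -> float:
    """Probability that n Bernoulli(p) trials contain no run of `run` successes."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        longest = current = 0
        for outcome in seq:
            current = current + 1 if outcome else 0
            longest = max(longest, current)
        if longest < run:
            k = sum(seq)
            total += p**k * (1 - p)**(n - k)
    return total

p = 0.910  # model 1
print(f"{prob_no_run(5, p):.4f}")              # 0.3760, i.e. 1 - p^5 as in Table 3
print(prob_no_run(10, p) < prob_no_run(5, p))  # True: a longer series makes a run more likely
```

This also makes concrete why the solution depends on the number of trials observed.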


The third and final remedy for the missing number of trials, which Davies (2009) offers, refers to an artificial new gaming situation: "We imagine a game where two players perform free throws. One of the players begins and continues to throw as long as his ball passes correctly through the basket and he scores. If Nowitzki starts this game, what is his probability that he scores at the most four times in a series in his first try?" Now, the possible sequences of this game are

0   10   110   1110   11110    (16)

The solution emerging from (16) coincides exactly with that where the number of trials is fixed at 5, which numerically was the officially accepted solution (though it was derived without the assumption of n = 5 trials). Factoring out the common factor 1 − p from the single probabilities involved in (16), we get the solution by:

(1 + p + p^2 + p^3 + p^4)(1 − p) = 1 − p^5.    (17)

The third attempt to solve task c. without the missing number of trials yields the solution. However, it implies an artificial new gaming situation, which makes things unnecessarily complicated. In fact, the task is solvable without supplementing the missing number of trials and without this artificial game. One only has to recall what really amounts to a Bernoulli process and which properties are fundamental to such processes. The property in question will – once recalled – lead to a deeper understanding of what Bernoulli processes are. The next section will illustrate this idea.
If one agrees with the assumption (9) of a Bernoulli process for the trials of Nowitzki, then part c. of the probabilistic task is trivial. If the conditions are always the same throughout, then it does not matter when one starts to collect the data. Mathematically speaking: if X1, X2, ... is a Bernoulli process with relation (9), then the following two sub-processes have essentially the same probabilistic features, i.e., they also follow property (9):

– random start of data collection at i0:

  X_{i0}, X_{i0+1}, ... iid ~ X ~ B(1, p);    (18a)

– random selection i0, i1, … of all the data:

  X_{i0}, X_{i1}, ... iid ~ X ~ B(1, p).    (18b)

This is a fundamental property of Bernoulli processes in particular and of random samples in general. One may start the data collection whenever one chooses; therefore, (18a) applies. One can also eliminate some data if the elimination is


undertaken randomly as in (18b). While this key property of Bernoulli processes should be explained intuitively to students, it could also be supported by simulation studies to address 'intuitive resistance' from students. Statisticians coin the term iid variables. One needs to explain to students that each single reading comes from a process that – at each stage (for each single reading) – has a distribution independent of the other stages and follows an identical (i.e., the same) distribution throughout (hence iid); this deceptively complex idea takes time for students to absorb. It has already been mentioned that in sports such an assumption is doubtful, but the example was put forward with this assumption for modelling, which therefore will not be challenged at this stage.
If it does not matter when we (randomly) start the data collection, we just go to the sports hall and wait for the next free throws. We note whether
– Nowitzki does not score a point more than four times in a series – event A, or
– he succeeds in scoring more than four times in a series – event Ā.
Clearly it holds:

P(Ā) = p^5 · 1 and P(A) = 1 − p^5.    (19)

The term p^5 in (19) stands for the first five 1's in the complementary event; this probability has to be multiplied by 1, as from the sixth trial onwards the outcome does not matter (and therefore constitutes the certain event). The result coincides with solution (16). However, there is no need to develop this imaginary game with an opponent as Davies (2009) does in his attempt to 'fix' the task. The task is easily solved using the fundamental property (18a) of Bernoulli processes. If one just goes to a match (maybe one is late; it would not matter!) and observes whether Nowitzki scores more than four times in a series right from the start, then everything beyond the fifth trial is redundant and the solution coincides with solution (15), where the number of observations is fixed at n = 5.

The Solution is Dependent on the Number of Trials Observed!

If the number n of trials is pre-determined, the probability of at most four successes in a series changes. If one observes the player only up to four times, he cannot have more than four successes, whence it holds: P(A) = 1. The longer one observes the player, the greater the chance of finally seeing him score more than four times in a series. It holds:

P(A | n) → 0 as n → ∞.    (20)
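The simulation studies suggested above can be kept very small. The following sketch (Python, purely illustrative; the sequence length and sample sizes are arbitrary choices) generates a long Bernoulli sequence, checks that a random start (18a) and a random selection (18b) leave the success rate essentially unchanged, and estimates the probability in (19) by observing five throws from random starting points:

```python
import random

random.seed(1)
p, N = 0.904, 200_000
seq = [1 if random.random() < p else 0 for _ in range(N)]

# (18a): success rate from a randomly chosen starting point onwards
start = random.randrange(N // 2)
rate_start = sum(seq[start:]) / (N - start)

# (18b): success rate in a random selection of the data
subsample = random.sample(seq, 50_000)
rate_sub = sum(subsample) / len(subsample)

# (19): from a random start, observe five throws; event A = not all five are hits
trials = 100_000
count_A = 0
for _ in range(trials):
    i = random.randrange(N - 5)
    if not all(seq[i:i + 5]):
        count_A += 1
p_A = count_A / trials

print(rate_start, rate_sub)  # both close to p = 0.904
print(p_A, 1 - p**5)         # the Monte Carlo estimate is close to 1 - p^5
```

Both sub-processes reproduce the original success rate, which is exactly the 'intuitive resistance' such a simulation is meant to address.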

KEY IDEA BEHIND VARIOUS DISTRIBUTIONS

In this section, we explain the underlying key pedagogical ideas of the following seven distributions.
a. Binomial distribution: repeated, independent trials, called Bernoulli process;
b. Hypergeometric distribution: repeated dependent trials;


c. Poisson distribution: completely random events in time – Poisson process;
d. Geometric distribution: waiting times in the Bernoulli process;
e. Exponential distribution: Poisson and memory-less waiting;
f. Weibull distribution: conditional failure rates or hazards;
g. Normal distribution: the hypothesis of independent elementary errors.

For students' understanding, and also for good modelling reasons, it is of advantage to have a key idea behind each of the distributions. Otherwise, it is hard to justify a specific distribution as a suitable model for the phenomenon under scrutiny. Why a Poisson distribution, or why a normal? The key idea of a distribution should convey a direct way of judging whether such a distribution could model the phenomenon in question. It allows one to check the necessary assumptions in the data-generating process and whether they are plausible. Such a fundamental idea behind a specific distribution is sometimes hidden; it is difficult to recognise it from discrete probabilities or density functions, which might also have complicated mathematical terms.
Other concepts related to a random variable might help to reveal to students 'the' idea behind a distribution. For example, a feature like the memory-less property is important for the phenomenon described by a distribution. However, this property is a mathematical consequence of the distribution and cannot directly be recognized from its shape or mathematical term. In the context of waiting time, the memory-less property means that the ongoing waiting time until the 'event' occurs has the same distribution throughout – regardless of the time already spent waiting for this event. Or, technical units might show (as human beings do) a phenomenon of wearing-out, i.e., the future lifetime has, amongst other things, an expected value decreasing with the age of the unit (or the human being). To describe such behaviour, further mathematical concepts have to be introduced, like the so-called hazard (see below). In technical applications, continuous servicing of units might postpone wearing-out. For human beings, insurance companies charge older people higher premiums for a life insurance policy.
While further mathematical concepts might be – at first sight – an obstacle for teaching, they help to shed light on key ideas behind a distribution, reveal 'internal mechanisms' lurking in the background, and also help one to understand the phenomena better. In contrast, the usual examination of whether a specific distribution is an adequate model for a situation is performed by a statistical test of whether the data are compatible with what is to be 'expected' from a random sample of this model. Such tests focus on the 'external' phenomenon of frequencies as observed in data.

a. Binomial Distribution – Repeated Independent Trials

A Bernoulli process may be represented by drawing balls from an urn in which a fixed proportion p of the balls is marked by a 1 (success) and the rest by a 0 (failure). If one draws a ball from that urn n times, always replacing the drawn ball, the number of successes follows a binomial distribution with parameters n and p.


There are characteristic features inherent to the depicted situation: repeated experiments with the same success probability p, and independent trials (mixing the balls before each draw), so that the success probability remains the same throughout. One could equally spin a wheel repeatedly with one sector marked as 1 and another marked as 0. This distribution was discussed at length with the Nowitzki task. The binomial distribution is intimately related to the Bernoulli process (9), which may also be analysed from the perspective of continuously observing its outcomes until the first event occurs – see the geometric distribution below.

b. Hypergeometric Distribution – Repeated Dependent Trials

This distribution, too, is best explained by the artificial but paradigmatic context of drawing balls from an urn with a fixed number of marked (by 1's) and non-marked (the 0's) balls as in the binomial situation; however, now the drawn balls are not replaced. Under this assumption, the number of marked balls among the n drawn follows a hypergeometric distribution. The characteristics are repeated experiments with the same success probability p, but dependent trials, so that the success probability remains the same only if one does not know the history of the process; otherwise there is a distinct dependence. The context of drawing balls also explains why – under special circumstances – the hypergeometric may be well approximated by the binomial distribution: if the number n of balls drawn from the urn is small compared to the number N of all balls in the urn, then the dependence between successive draws is weak and the conditions (9) of a Bernoulli process are nearly met.

c. Poisson Distribution – Pure Random Events in Time

It is customary to introduce the Poisson distribution as the – approximate – distribution of rare events in a Bernoulli process (p small); it is also advantageous to relate this distribution to the Poisson process, even if this is lengthy and more complex.
The process of generating 'events' (e.g., emitted radioactive particles), which occur in the course of time (or in space), should intuitively obey some laws that compare to the Bernoulli process:
– The start of the observations is not relevant for the probability to observe any event; see the fundamental property (18a) – this leads to A1 in (21) below.
– If one observes the process in non-overlapping intervals, the pertinent random variables have to be independent, which corresponds to the independence of the single observations Xi in the Bernoulli process – this leads to A4.
– The main difference between the processes lies in the fundamental frequency pulsing: in the Bernoulli process, there is definitely a new experiment Xi at 'time' i, whereas in the Poisson process time flows continuously with no apparent performing of an experiment (leading to an event or not) – events just occur.


– It remains to fix the probability of an event. As there is no distinct experiment with the outcome of the event (or its non-occurrence), we can speak only of an intensity λ of the process to bear events. This intensity has to be related to unit time; its mathematical treatment in A2 involves infinitesimal concepts.
– Paradoxically, a further requirement has to be demanded: even if two or more events may occur in an interval of time that is not too small, the probability of such coincidences should become negligible as the length of the observation interval becomes small – that leads to A3 below.
Mathematically speaking (compare, e.g., the classic text of Meyer, 1970, p. 166), a Poisson process has to meet the following conditions (the random variable Xt counts the number of events in the interval (0, t)):

A1  If Yt counts the events in (t0, t0 + t), then Yt ~ Xt
A2  P(XΔt = 1) = λ·Δt + o(Δt)
A3  P(XΔt ≥ 2) = o(Δt)
A4  Xt and Yt are independent random variables if they count events in non-overlapping time intervals.    (21)

Assumptions (21) represent pure randomness; they imply that such a process has no preference for any time sequence, has no coincidences as would occur by 'intention', and shows no dependencies on other events observed. The assumptions may also be represented locally by a grid square, as is done in Example 8. The main difference of the Poisson from the Bernoulli process lies in the fact that there is no definite unit of time, linked to trials 1, 2, 3, etc., which may lead to the event (1) or not. Here, the events just occur at a specific point of time, but one cannot trace when an 'experiment' is performed. The success probability p of the Bernoulli process associated with single experiments becomes an intensity λ per unit time. The independence of trials now becomes an independence of counting events in mutually exclusive intervals in the sense of A4. The Poisson process has a further analogue to the Bernoulli process in terms of waiting for the first event – the geometric and the exponential distribution (which describe waiting times in the pertinent situations) have similar properties (see below).
We present one example here to illustrate a modelling approach to the Poisson. This shows how discussions can be initiated with students on the theoretical ideas presented above, and helps students to understand how and when to apply the Poisson distribution.
Example 8. Are the bomb attacks on London during World War II the result of a planned bombardment, or may they be explained by pure random hitting? To compare the data to the scenario of a Poisson process, the area of South London is divided into square grids of ¼ square kilometre each. The statistics in Table 4 show, e.g., 93 squares with 2 impacts each, which amounts to 186 bomb hits. In sum, 537 impacts have been observed.


Table 4. Number of squares in South London with various numbers of bomb hits – Comparison to the frequencies under the assumption of a Poisson process with λ = 0.9323

No. of hits in a square                          0       1       2       3      4    5 and more    all
No. of grid squares with such a no. of hits    229     211      93      35      7        1         576
Expected numbers under Poisson process      226.74  211.39   98.54   30.62   7.14     1.57

If targeting is completely random, it follows the rules of a Poisson process (21), and the number of 'events' per grid square then follows a Poisson distribution. The parameter λ is estimated from the data to fix the model by

λ = 537/576 = 0.9323 hits per grid square.    (22)
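The expected counts in Table 4 follow directly from (22). The sketch below (Python, as one convenient tool) recomputes them from the Poisson probabilities, with the '5 and more' class taken as the remainder:

```python
from math import exp, factorial

lam = 537 / 576  # estimated intensity (22), about 0.9323 hits per grid square
squares = 576

def poisson_pmf(k: int) -> float:
    """Poisson probability of exactly k events, with intensity lam."""
    return exp(-lam) * lam**k / factorial(k)

expected = [squares * poisson_pmf(k) for k in range(5)]
expected.append(squares - sum(expected))  # '5 and more' as the remainder

print([round(e, 2) for e in expected])
# matches Table 4: [226.74, 211.39, 98.54, 30.62, 7.14, 1.57]
```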

As seen from Table 4, the fit of the Poisson distribution to the data is extremely good. Feller (1968, p. 161) highlights the basic property of the Poisson distribution as modelling pure randomness and contrasts it with wide-spread misconceptions: "[The outcome] indicates perfect randomness and homogeneity of the area; we have here an instructive illustration of the established fact that to the untrained eye randomness appears as regularity or tendency to cluster."
In any case of an application, one might inspect whether the process of data generation fulfils such conditions – which could justify or rule out this distribution as a candidate for modelling. The set of conditions, however, also structures thinking about phenomena that may be modelled by a Poisson distribution. All phenomena following internal rules that come close to the basic requirements of a Poisson process are open to such modelling.

d. Geometric Distribution – Memory-Less Waiting for an Event

Here, a Bernoulli process with success parameter p is observed. In contrast to the binomial distribution, the number of trials is not fixed. Instead, one counts the number of trials until the first event (which corresponds to the event 'marked' by a 1) occurs. The resulting distribution 'obeys' the following 'memory-less' property:

P(T > k0 + k | T > k0) = P(T > k).    (23)

This feature implies that the remaining waiting time for the first event is independent of the time k0 one has already waited for it – waiting gives no bonus. Such a characterization of the Bernoulli process helps in clarifying some basic misconceptions. The following example can be used to motivate students on the underlying features.


Example 9. Young children remember long waiting times for the six on a die to come. As waiting times of 12 and longer still have a probability of 0.1346, see also Figure 6, this induces them to 'think' that a six has less probability than the other numbers on the die, for which such a 'painful' experience is not internalised.

Figure 6. Waiting for the first six of a die – Bernoulli process with p = 1/6.

Figure 7. Exponential distribution (with λ = 6) has the same shape as the geometric distribution.
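The probability quoted in Example 9 and the memory-less property (23) can both be checked in a few lines (Python, illustrative):

```python
p = 1 / 6  # success probability for a six
q = 1 - p

def prob_wait_longer(k: int) -> float:
    """P(T > k): the first six has not yet appeared within k throws."""
    return q**k

# waiting times of 12 throws and longer, i.e. P(T > 11)
print(round(prob_wait_longer(11), 4))  # 0.1346, as in Example 9

# memory-less property (23): P(T > k0 + k | T > k0) = P(T > k)
k0, k = 7, 11
conditional = prob_wait_longer(k0 + k) / prob_wait_longer(k0)
print(abs(conditional - prob_wait_longer(k)) < 1e-12)  # True
```

Having already waited k0 = 7 throws (an arbitrary choice here) leaves the distribution of the remaining waiting time unchanged – waiting gives no bonus.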

e. Exponential Distribution – Memory-Less Waiting for Events in Time

The exponential distribution is connected to two key ideas: one links it to the Poisson process; the other uses the concept of a conditional failure rate. In a Poisson process, if one is waiting for the next event to occur and the data are the subsequent waiting times between the events, then the exponential distribution is the model of choice. This is due to a mathematical theorem (see Meyer, 1970, p. 191). It can also be illustrated by simulation studies. An important feature of the exponential distribution is its memory-less property:

P(t0 < T ≤ t0 + Δt | T > t0) = P(t0 < T ≤ t0 + Δt) / P(T > t0) = P(0 < T ≤ Δt).    (24)

Due to the memory-less property, the conditional probability to fail within Δt units of time for a device that has reached age t0 is the same as within the first Δt units for a new device. This implies that the future lifetime (or waiting time) is independent of the age reached (or the time already spent waiting), i.e., of t0, which amounts to a further characterization of 'pure' randomness. Exponential and geometric distributions share the memory-less property, which explains why the models have the same shape. If this conditional failure probability is calculated per unit time and the time length Δt is made smaller, one gets the conditional failure rate, or hazard h(t):

h(t) = lim_{Δt → 0} P(t0 < T ≤ t0 + Δt | T > t0) / Δt.    (25)

A hazard (rate) is just a different description of a distribution. Now it is possible to express the other key idea behind the exponential distribution, namely that its


related conditional failure rate (or hazard) is constant over the lifetime. If a (technical) unit's lifetime is analysed and the internal structure supports the view that the remaining lifetime is independent of the unit's age, then it may be argued that an exponential distribution is the model of choice. While such a property might seem paradoxical (old units are equally as good as new units), it is in fact well fulfilled for electronic devices over a long part of their ordinary lifetime. Mechanical units, by contrast, do show a wearing effect, so that their conditional lifetime gets worse with age. Similarly with human beings, with the exception of infant mortality, when – at the youngest ages – a human's lifetime as a probability distribution improves.

f. Weibull Distribution – Age-Related Hazards

Lifetimes are an important issue in technical applications (reliability issues and quality assurance); waiting times are important in describing the behaviour of systems. There are some families of distributions that may serve as suitable models. The drawback with these is that they require more mathematics to describe their density functions. Furthermore, the shape of their density gives no clue as to why they should yield a good model for the problem to be analysed. To view lifetimes (or waiting times) from the perspective of units that have already reached some specific age (have waited some specific time) sheds much more light on such phenomena than analysing the behaviour of new items (with no time spent waiting in the system). One would, of course, simulate such models first, explore the simulated data, and draw preliminary conclusions before starting to delve deeper into mathematical issues. It may pay to learn – informally – about hazards and use this concept instead of probability densities to study probability models. Hazards directly reflect the basic assumptions that have to be fulfilled in applications.
With the key idea of hazard or conditional failure rate, the discussion can relate to infant mortality (decreasing), purely random failures due to exponential lifetime (constant) and wearing-out effects (increasing). The power function is the simplest model to describe all these different types of hazard:

h(t) = (β/α) · (t/α)^(β−1),  α, β > 0.    (26)
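The three hazard regimes above can be made concrete with a short numerical sketch of (26); the function name and the chosen scale α = 10 are illustrative assumptions, not values from the text:

```python
def weibull_hazard(t, alpha, beta):
    """Power-function hazard h(t) = (beta/alpha) * (t/alpha)**(beta - 1)."""
    return (beta / alpha) * (t / alpha) ** (beta - 1)

# beta < 1: decreasing hazard (infant mortality)
# beta = 1: constant hazard (exponential lifetime, no ageing)
# beta > 1: increasing hazard (wearing out)
for beta in (0.5, 1.0, 2.0):
    early = weibull_hazard(1.0, 10.0, beta)
    late = weibull_hazard(9.0, 10.0, beta)
    print(beta, round(early, 4), round(late, 4))
```

Comparing the hazard early and late in the lifetime shows the direction of the ageing effect for each shape parameter β.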

MODELLING IN PROBABILITY AND STATISTICS

The parameter α is interpreted as the scale of time, while β influences the shape and thus the quality of the change of hazard over lifetime.

g. Normal Distribution – the Hypothesis of Independent Elementary Errors

Any random variable that might be split into a sum of other (hidden) variables is – according to the central limit theorem (CLT) – approximately normally distributed. This explains the key underlying idea and the ubiquity of the normal distribution. In the history of probability, the CLT prompted the ‘hypothesis of elementary errors’ (Gauss and earlier), where any measurement error was hypothesized to be the result (sum) of other, elementary errors. This supported the use of the normal distribution for modelling measurement errors in astronomy and geodesy. A generalization to the normal ‘law’ of distribution by Quételet and Galton is straightforward: it is an expression of God’s will (or Nature) that any biometric measurement of human beings and animals is normally distributed, as it emerges from a superposition of elementary ‘errors of nature’ (Borovcnik, 2006)9. An interesting article about the history and myth of the normal law is Goertzel (n.d.). The mathematics was first proved by de Moivre and Laplace; the single summands Xi had then been restricted to a Bernoulli process (9). In this way, the binomial distribution is approximately normally distributed and the approximation is good enough if there are enough elements in the sum:

Tn = X1 + X2 + ... + Xn.    (27)

To illustrate matters, the Galton board or an electronic quincunx (see, e.g., Pierce, R., n.d.) may be used in teaching. Such a board has several rows of pegs arranged in a shape similar to Pascal’s triangle. Marbles are dropped from the top and then bounce their way down. At the bottom they are collected in little bins. Each time a marble hits one of the pegs, it may bounce either left or right. If the board is set up symmetrically, the chances of bouncing either way are equal and the marbles in the bins follow the ‘bell shaped’ curve of the normal distribution. If it is inclined, a skewed distribution emerges, which normalizes, too, if enough rows are taken. Theoretically, one has to standardize the value of the sum Tn according to

Un = (Tn − E(Tn)) / √var(Tn)    (28)

and the CLT in its crudest form becomes:

If the Xi ~ X form an iid process with finite variance var(X) < ∞, then

lim n→∞ P(Un ≤ u) = Φ(u).    (29)
Here Φ(u) stands for the cumulative distribution function of the standard normal distribution, i.e., with parameters 0 and 1. Despite such mathematical intricacies, the result is so important that it has to be motivated in teaching. The method of simulation again is suitable not only to clarify the limiting behaviour of the sum (the distribution of its standardized form converges to the normal distribution), but also to get an orientation about the speed of convergence. Furthermore, this convergence behaviour is highly influenced by the shape of the distribution of the single Xi’s. A scenario of simulating 1000 different samples of size n = 20 and then n = 40 from two different distributions (see Figure 8) may be seen from Figure 9. The graph shows the frequency distributions of the mean of the single items of data


Figure 8. A symmetric and a skewed distribution for the single summands in the scenario.

Figure 9. Scenario of 1000 samples: distribution of the mean compared to the normal curve – left drawing from an equi-distribution, right drawing from a skewed distribution.
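A scenario like the one in Figure 9 can be re-created with a short simulation sketch; the skewed distribution, the weights and the seed below are assumed stand-ins for the single-data distribution of Figure 8:

```python
import random
import statistics

random.seed(1)

# An assumed skewed distribution for the single items of data
values, weights = [0, 1, 2, 3, 10], [5, 4, 3, 2, 1]

def sample_mean(n):
    """Mean of n independent draws from the skewed distribution."""
    return sum(random.choices(values, weights, k=n)) / n

# 1000 repeated samples of size 20 and of size 40, as in the scenario
means_20 = [sample_mean(20) for _ in range(1000)]
means_40 = [sample_mean(40) for _ in range(1000)]

# The means cluster around the expected value of a single draw,
# and their spread shrinks as the sample size grows
print(round(statistics.mean(means_20), 2), round(statistics.stdev(means_20), 2))
print(round(statistics.mean(means_40), 2), round(statistics.stdev(means_40), 2))
```

Plotting histograms of `means_20` and `means_40` against a normal curve reproduces the ‘normalizing’ effect shown in Figure 9.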

instead of the sum (27) – this is necessary to preserve scales, as the sums simply diverge. For the limit, the calculation of the mean still does not suffice, as the mean converges weakly to one number (the expected value of the Xi if all have the same distribution) – thus in the limit there would be no distribution at all. The simulation scenarios in Figure 9 illustrate that the calculated means of the repeated samples have a frequency distribution which comes quite close to a normal distribution for only 20 summands. If the items of data are skewed, the approximation is slightly worse, but with calculated means of 40 items of data in each sample the fit is sufficiently good again. The influence of the single summands (like those in Figure 8) on the convergence behaviour of the sum may be studied interactively: With a spreadsheet with slide


controls for the number of values in the equi-distribution for the single data, one could easily see that more values give a faster convergence to the fit. With a slide control for the skewness, e.g., to move the two highest values further away from the bulk of the data, one may illustrate a negative effect on convergence, as the fit would become worse this way. By changes of the slide controls, the effect on the distribution of an item of data Xi is seen from the bar graphs in Figure 8, and the effect on the ‘normalizing’ of the distribution of the mean of the repeatedly drawn samples may be studied from Figure 9 interactively.

Distributions Connected to the Normal Distribution

There are quite a few distributions which are intimately connected to the normal distribution. Their main usage is to describe the theoretical behaviour of certain test statistics based on a sample from a normal distribution. Amongst them are the t, the χ2 and the F distribution. They are mainly used for coping with the mathematical problems of statistical inference and not for modelling phenomena. The χ2 distribution is something of an exception to this, as the so-called Maxwell and Rayleigh distributions (the square root of a χ2) are also used by physicists to model the velocity of particles (like molecules) in two- or three-dimensional space (with 2 or 3 degrees of freedom); see also Meyer (1970, pp. 220).

SOLUTIONS TO THE STATISTICAL PART OF THE NOWITZKI TASK

This section returns to the statistical questions of the Nowitzki task. Is Nowitzki weaker away than at home? The usual way to answer such questions is a statistical test of significance. Such a modelling approach includes several steps to transfer the question from the context into a statistical framework, in which a null hypothesis reflects the situation of ‘no differences’ and alternative distributions depict situations of various degrees of difference. As always in empirical research, there is no unique way to arrive at a conclusion.
The chosen model might fit better for one expert and less well for another. Already the two questions posed in the formulation of the example (away weaker than at home, or away weaker than in all matches) give rise to disputes. The logic of a statistical test does not make things easier, as one always has to refer to fictional situations in the sense of ‘what would be if …’. Errors of type I and II, or p values, give rise to many misinterpretations by students. And there are many different test statistics that use information which could discriminate between the null and alternative hypotheses differently (not only in the sense of less precise and more precise but simply differently, with no direct way of comparison). Moreover, one has to estimate parameters, or use other information, to fix the hypotheses. If a Bernoulli process with success probability p is observed n times, then for the expected value of the number of successes Tn = X1 + ... + Xn it holds:

E(Tn) = n · p.    (30)



Here, the value of p usually is not known and is called the true value for the (success) probability.

Solutions to the First Statistical Part – Nowitzki Away Weaker than Home & Away?

Part a. of the statistical questions in Example 7 is ill-posed10 insofar as the comparison of away matches against all matches does not reflect what one really should be interested in. Is Nowitzki away weaker than in home matches? A comparison that includes all matches blurs the differences. Therefore, this question is omitted here; we will only discuss the inherent problems. With the three different success probabilities for home, away and all matches, it holds:

p0 = (nH / n) · pH + (nA / n) · pA    (31)
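Relation (31) can be verified directly with the season’s counts (a quick numerical check, using the hits and trials reported in Table 5):

```python
# Counts from Table 5
T_H, n_H = 267, 288   # home hits and trials
T_A, n_A = 231, 263   # away hits and trials
n = n_H + n_A         # 551 trials in all

p_H, p_A = T_H / n_H, T_A / n_A
# The weighted combination (31) reproduces the overall strength
p_0 = (n_H / n) * p_H + (n_A / n) * p_A
print(round(p_0, 3))
```

The combination simply recovers (TH + TA) / n, i.e. the overall success rate 498/551.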

If the season is regarded as a self-contained entity, all success rates are known. If they are perceived as probabilities, the next question to discuss is whether there is one common process or two or more with different probabilities; a covariate like the ‘location’ of the free throw (home, away) would explain the differences. If p0 is set as known, the problem may be handled in this way: ‘May the away free throws be modelled by a Bernoulli process with p0 from the overall strength?’ If the data is seen as a sample from an infinite (Bernoulli) process, p0 has to be estimated from it; however, there are drawbacks in question a. and its modelling. Firstly, by common sense, no one would compare away scores to all scores in order to find differences between the two groups of trials away and home. Secondly, as the overall strength is estimated, it could also be estimated from the separate scores of away and home matches using equation (31): p̂A and p̂H are combined to an estimate p̂0 of p0. And the test would be performed with the data on away matches, which coincide with 231 = nA · p̂A. Confusing here is that pA is dealt with as unknown (a test is performed whether it is lower than the overall strength), an estimate of it is used to get an estimate of the overall strength, and it is used as known data to perform the test.

Solution to the Second Statistical Part – Nowitzki Weaker Away Than at Home?

In this subsection, the away scores are compared to the home matches only (part b). Various steps are required to transform a question from the context to the statistical level. It is illustrated how these steps lead to hypotheses at the theoretical level, which correspond to and ‘answer’ the question at the level of context. Three different Bernoulli processes are considered: home, away, and all (home and away combined). After the end of the season, the related success probabilities are (factually) known from the statistics (see Table 5). Or one could at least estimate some of these probabilities from the data.


Table 5. Playing strength as success probabilities from hits and trials of the season

Matches   Hits       Trials      Strength ‘known’ or estimated
Home      TH = 267   nH = 288    pH = 267/288 = 0.927
Away      TA = 231   nA = 263    pA = 231/263 = 0.878
All       T = 498    n = 551     p0 = 498/551 = 0.904

The basis of reference for comparing the away results is the success in home matches. For home matches the success probability is estimated as in model 2, or ‘known’ from the completed season as in model 3:

pH = 0.927.    (32)

Formally, the question from the context can be transferred to a (statistical) test problem in several steps of choosing the statistical model and hypotheses:

TA ~ B(nA, π),    (33)

i.e., the number of hits (successes) in away matches is binomially distributed with unknown parameter π (despite reservations, this model is used subsequently). As null hypothesis,

H0: π = pH    (34)

will be chosen. This corresponds to ‘no difference of away to home matches’ from the context, with pH designating the success probability in home matches. For the alternative, a one-sided hypothesis

H1: π < pH    (35)

is suggested. As the advantage of the home team is strong folklore in sports in general, a one-sided11 hypothesis makes more sense than a two-sided alternative of π ≠ pH. The information about pH comes from the data; therefore it will be estimated by 0.927 to form the basis of the ‘model’ for the away matches. No other information about the ‘strength’ in home matches is available. Thus, the reference distribution for the number of successes in away matches is the following:

TA | H0 ~ B(nA, 0.927).    (36)

Relation (36) corresponds to the probabilistic modelling of the null effect that ‘away matches do not differ from home matches’. The alternative is chosen to be


one-sided as in (35). The question is whether the observed score of TA = 231 in away matches amounts to an event which is significantly too low for a Bernoulli process with 0.927 as success probability, which lies in the background of (36). Under this assumption, the expected value of successes in away matches is 263 · 0.927 = 243.8. The p value of the observed number of 231 successes is as small as 0.0033! Consequently, away matches differ significantly from home matches (at the 5% level of significance).
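The p value just quoted can be computed exactly from the binomial distribution in (36); a minimal sketch using only the Python standard library:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ B(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# P(T_A <= 231) under H0: T_A ~ B(263, 0.927)
p_value = binom_cdf(231, 263, 0.927)
print(round(p_value, 4))   # close to the 0.0033 reported above
```

The exact tail probability avoids the normal approximation entirely, at the price of summing 232 binomial terms.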

[Bar chart: number of hits in 263 trials with the home strength p = 0.927; the observed score lies below the upper limit of the smallest 5% of scores.]

Figure 10. Results of 2000 fictitious seasons with nA = 263 trials – based on an assumed strength of p = 0.927 (corresponding to home matches).
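A scenario like that of Figure 10 can be reproduced with a short simulation (the seed is an arbitrary choice; the 5% threshold is read off as the empirical quantile of the simulated scores):

```python
import random

random.seed(2)
# 2000 fictitious seasons of 263 free throws with the home strength 0.927
scores = sorted(sum(random.random() < 0.927 for _ in range(263))
                for _ in range(2000))

lower_5pct = scores[100]                       # empirical 5% quantile of 2000 values
share_at_most_231 = sum(s <= 231 for s in scores) / 2000
print(lower_5pct, share_at_most_231)
```

The observed score of 231 falls below the simulated 5% threshold, mirroring the judgement drawn from the figure.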

In Figure 10, the scores of Nowitzki in 2000 fictitious seasons are analysed. The scenario is based on his home strength with 263 free throws (the number of trials in away matches in the season 2006–07), i.e., on the distribution of the null hypothesis in (36). From the bar chart it is easily seen that the observed score of 231 is far out in the distribution; it belongs to the smallest results of these fictitious seasons. In fact, the p value of the observation is 0.3%. A simulation study gives concrete data; the lower 5% quantile of the artificial data separates the 5% extremely low values from the rest. It is easy to understand that if actual scores are smaller than this threshold, they may be judged as not ‘compatible’ with the underlying assumptions (of the simulated data, especially the strength of 0.927). To handle a rejection limit (‘critical value’) from simulated data is easier than to derive a 5% quantile from an ‘abstract’ probability distribution.

Validity of Assumptions – Contrasting Probabilistic and Statistical Points of View

The scenario of a Bernoulli process is more compelling for evaluating the question of whether Nowitzki is weaker in away matches than for the calculation of single probabilities of specific short sequences. In fact, whole blocks of trials are compared. This is not to ask for the probability of a number of successes in short periods of the process but to ask whether there are differences in large blocks on the whole. Homogenization means a balancing-out of short-term dependencies, or of fluctuations of the scoring probability over the season due to changes in the form of the player,


or due to social conditions like quarrels in the team after an unlucky loss. In this way, the homogenization idea seems more convincing. The compensation of effects across single elements of a universe is essentially a fundamental constituent of a statistical point of view. For a statistical evaluation of the problem from the context, the scenario of a Bernoulli process – even though it does not apply really well – might allow for relevant results. For whole blocks of data which are to be compared against each other, a homogenization argument is much more compelling, as the violations of the assumptions might balance out ‘equally’. The situation seems to be different from the probabilistic part of the Nowitzki problem, where it was doomed to failure to find suitable situations to which this scenario could reasonably be applied.

Alternative Solutions – Nowitzki Weaker Away than at Home?

Some alternatives to deal with this question are discussed; they avoid the ‘confusion’ arising from the different treatment of the parameters (some are estimated from the data and some are not). Not all of them are in the school curricula.

Table 6. Number of hits and trials

Matches       Hits   Failures   Trials
Home          267    21         288
Away          231    32         263
All matches   498    53         551

Fisher’s exact test is based on the hypergeometric distribution. It is remarkable that it relies on fewer assumptions than the Bernoulli series and it uses nearly all information about the difference between away and home matches. The (one-sided) test problem is now depicted by the pair of hypotheses:

H0: pA − pH = 0 against H1: pA − pH < 0    (37)

‘No difference’ between away and home matches is modelled by an urn problem relative to the data in Table 6: All N = 551 trials are represented by balls in an urn; A = 498 are white and depict the hits, 53 are black and model the failures. The balls are well mixed and then one draws n = 263 balls for the away matches (without replacement). The test is based on the number of white balls NW among the drawn balls. Here, a hypergeometric distribution serves as reference distribution:

NW | H0 ~ Hyp(N = 551, A = 498, n = 263).    (38)

Small values of NW indicate that the alternative hypothesis H1 might hold. The observation of 231 white balls has a p value of 3.6%. Therefore, at a level of 5% (one-sided), Nowitzki is significantly weaker away than in home matches.
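The hypergeometric p value in (38) can also be obtained by exact counting rather than simulation (a sketch; note that math.comb returns 0 whenever the number of black balls required would exceed the 53 available):

```python
import math

def hypergeom_pmf(k, N, A, n):
    """P(N_W = k): k white balls among n drawn from N balls of which A are white."""
    return math.comb(A, k) * math.comb(N - A, n - k) / math.comb(N, n)

# P(N_W <= 231) for Hyp(N=551, A=498, n=263) as in (38)
p_value = sum(hypergeom_pmf(k, 551, 498, 263) for k in range(232))
print(round(p_value, 3))   # about 0.036, the 3.6% quoted above
```

Python’s exact integer binomial coefficients make this feasible even for N = 551.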


With this approach, neither the success probability for away nor for home matches was estimated. The hypergeometric distribution need not be tackled with all the mathematical details. It is sufficient to describe the situation structurally and get (estimates of) the probabilities by a simulation study.

Test for the difference of two proportions. This test treats the two proportions for the success in away and home matches in the same manner: both are estimated from the data, which are modelled as separate Bernoulli processes according to (9):

p̂A estimates the away strength pA; p̂H the home strength pH.    (39)

The (one-sided) test problem is again depicted by (37). However, the test statistic now is directly based on the estimated difference in success rates p̂A − p̂H. By the central limit theorem, this difference (as a random variable) – normalized by its standard error – is approximately normally distributed; it holds:

U := (p̂A − p̂H) / √( p̂A·(1 − p̂A)/nA + p̂H·(1 − p̂H)/nH )  ~  N(0, 1) approximately.    (40)
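The statistic (40) can be evaluated directly with the season’s counts; Φ is computed here via the error function from the standard library:

```python
import math

def phi(z):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_A, n_A = 231 / 263, 263   # away strength and trials
p_H, n_H = 267 / 288, 288   # home strength and trials

# Standard error and test statistic as in (40)
se = math.sqrt(p_A * (1 - p_A) / n_A + p_H * (1 - p_H) / n_H)
u = (p_A - p_H) / se
print(round(u, 4), round(phi(u), 3))   # about -1.9257 and 0.027
```

The one-sided p value is simply Φ(U), as small values of the difference speak for the alternative.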

There is now a direct way to derive rejection values to decide whether the observed difference between the success rates of the two Bernoulli processes is significant or not: the estimated value of the difference of the success rates, −0.04876, gives rise to a U of −1.9257, which amounts to a p value of 2.7%. In this setting, a simulation of the conditions under the null hypothesis is not straightforward. If one tests mean values from two different samples for difference, one can use the so-called Welch test. Both situations – testing two proportions or two means for significant differences – are too complex for introductory probability courses at university. One could motivate the distributions and focus on the problem of judging the difference between the two samples. However, the simpler Fisher test may be seen as the better alternative for proportions.

Some Conclusions on the Statistical Modelling of the Nowitzki Task

Inferential statistics means evaluating hypotheses by data (and mathematical techniques). Is the strength of Nowitzki away equal to p = 0.927? An answer to this question depends on whether we search for deviations from this hypothesized value in both directions (two-tailed) or only in the direction of lower values (one-tailed). From the context, the focus may well be on lower values of p for the alternative, as the advantage of the home team is a well-known matter in sport. The null hypothesis forms the reference basis for the given data. For its formulation, further knowledge is required, either from the context or from the data. Such knowledge should never be mixed with data which is used in the subsequent test procedure; this is a crucial problem of task a., which asks to compare the away


scores to the score in all matches. A value for all matches – if estimated from the data – also contains the away matches. However, this should be avoided, not only for methodological reasons but by common sense too. If the season is seen as self-contained, the value of p = 0.927 is known. A test of 0.927 against alternative values of the strength less than 0.927 corresponds to the question ‘Is Nowitzki away weaker than at home?’ The value 0.927 might as well be seen as an estimate from a larger imaginary season. An evaluation of its accuracy (as in section 3) is usually not pursued. A drawback might be seen in the unequal treatment of the scores: home scores are used to estimate a parameter pH while the away scores are treated as a random variable. Note that in this test situation no ‘overlap’ occurs between data used to fix the null hypothesis and data used to perform the test. The alternative tests discussed here treat the home and away probabilities in a symmetric manner: both are assumed to be unknown; either both are estimated from the data, or estimation is avoided for both. These tests express the correspondence between the question from the context and their test statistics differently. They view the situation as a two-sample problem. Such a view paves the way to more general types of questions in empirical research, which will be dealt with below.

STATISTICAL ASPECTS OF PROBABILITY MODELLING

Data have to be seen in a setting of model and context. If two groups are compared – be it a treatment group receiving some medical treatment and a control group receiving only a placebo (a pretended treatment) – two success rates might be judged for difference as in the Nowitzki task. Is treatment more effective than placebo? The assumption of Bernoulli processes ‘remains’ in the background (at least if we measure success only on a 0–1 scale). However, such an assumption requires a heuristic argument like the homogenization of data in larger blocks. The independence assumption for the Bernoulli model is not really open to scrutiny, as it leads to methodological problems (a null hypothesis cannot be statistically confirmed). The idea of searching for covariates serves as a strategy to make the two groups as equal as they can be. Data may be interpreted sensibly and used for statistical inference – in order to generalize findings from the concrete data – only by carefully checking whether the groups are homogenous. Only then do the models lead to relevant conclusions beyond the concrete data. If success is measured on a continuous scale, the mathematics becomes more complicated but the general gist of this heuristic still applies.

Dealing with the Inherent Assumptions

A further example illustrates the role of confounders.

Example 10. Table 7 shows the proportions of girl births in three hospitals. Can they be interpreted as estimates of the same probability for a female birth?

Table 7. Proportion of girls among new-borns

Hospital         Births    Proportion of girl births
A                514       0.492
B                358       0.450
C                          0.508
‘World stats’              0.489

Assume such a proportion equals 0.489 worldwide; with hospital B (and 358 births) the proportion of girls would lie between 0.437 and 0.541 (with a probability of approximately 95%). The observed value of 0.450 is quite close to the lower end of this interval. This consideration casts some doubt on whether the data have been ‘produced’ by mere randomness, i.e., by a Bernoulli process with p = 0.489. It may well be that there are three different processes hidden in the background and the data do not emerge from one and the same source. Such ‘phenomena’ are quite frequent in practice. However, it is not so simple that one could just go to hospital B in order to give birth to a boy. One may explain the big variation of girl births between hospitals by reference to a so-called covariate: one speculation refers to location – hospital A in Germany, C in Turkey, and B in China. In cases where covariates are not open to scrutiny, as information about them is missing, they might blur the results – in such cases these variables are called confounders. For the probabilistic part of Nowitzki, it might be better to search for confounders (whether he is in good form, or had a quarrel with his trainer or with his partner) in order to arrive at a probability that he will score at most 8 times in 10 trials, instead of modelling the problem by a Bernoulli process with the seasonal strength as success probability. Such an approach cannot be part of a formal examination but should feature in classroom discussion.

Empirical Research – Generalizing Results from Limited Data

Data always have to be interpreted by the use of models and by the knowledge of context, which influences not only the thinking about potential confounders but also guides the evaluation of the practical relevance of conclusions drawn. A homogenization idea was used to support a probabilistic model. For the Bernoulli process the differing success probabilities should even out, and departures from independence should play less of a role when series of data are observed which form a greater entity – as is done from a statistical perspective.
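The 95% interval quoted for hospital B in Example 10 can be checked with the usual normal approximation for a sample proportion (a sketch; the function name is illustrative):

```python
import math

def interval_95(p, n):
    """Approximate central 95% range for a sample proportion when the true probability is p."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = interval_95(0.489, 358)
print(round(lo, 3), round(hi, 3))   # 0.437 and 0.541, as in Example 10
```

With the observed 0.450 near the lower end of this range, the doubt about a single common Bernoulli process becomes tangible.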
This may be the case for the series of away matches as a block and – in comparison to it – for the home matches as a block. One might also be tempted to reject such a homogenization idea for the statistical part of the Nowitzki task. However, a slight twist of the context, leaving the data unchanged, brings us into the midst of empirical research; see the data in Table 8, which are identical to Table 5.

Table 8. Success probabilities for treatment and control group

Group        Success    Size        Success probability
Treatment    267        nT = 288    pT = 0.927
Control      231        nC = 263    pC = 0.878
All          498        N = 551     p0 = 0.904


This is a two-sample problem: we are faced with judging a (medical) treatment for effectiveness (only with a 0, 1 outcome, not with a continuous response). The Nowitzki question now reads: Was the actual treatment more effective than the placebo treatment applied to the persons in the control group? How can we justify the statistical inference point of view here? We have to model success in the two groups by different Bernoulli processes. This modelling includes the same success probability throughout for all people included in the treatment group, as well as the independence of success between different people. Usually, such a random model is introduced by the design of the study. Of course, the people are not selected randomly from a larger population but are chosen by convenience – they are primarily patients of the doctors who are involved in the study. However, they are randomly attributed to one of the groups, i.e., a random experiment like coin tossing decides whether they are treated by the medical treatment under scrutiny or receive a placebo, which looks the same from the outside but has no expected treatment effect – apart from the person’s psychological expectation that it could have one. Neither the patient, nor the doctors, nor the persons who measure the effect of the treatment should know to which group a person is attributed – the gold standard of empirical research is the so-called double-blind randomized treatment and control group design. The random attribution of persons to either group should make the two groups as comparable as they can be – it should balance all known covariates and all unknown confounders which might interfere with the effect of treatment. Despite all precautions, patients will differ by age, gender, stage of the disease, etc. Thus, they do not have a common success probability that the treatment is effective.
All one can say is that one has undertaken the usual precautions, and one hopes that the groups are now homogenous enough to apply the model in the manner of a scenario: ‘what does the data tell us if we think that the groups meet a ceteris paribus condition’. A homogenization argument is generally applied to justify drawing conclusions out of empirical data. It is backed by the random attribution of persons to the groups which are to be compared. The goal of randomizing is to get two homogenous groups that differ only with respect to what has really been administered to them: medication or placebo.

CONCLUSIONS

The two main roles for probability are to serve as a genuine tool for modelling and to prepare for and understand statistical inference.
– Probability provides an important set of concepts for modelling phenomena from the real world. Uncertainty or risk, which combines uncertainty with impact (win or loss, as measured by utility), is either implicitly inherent to reality or emerges from our partial knowledge about it.
– Probability is the key to understanding much empirical research and how to generalize findings from samples to populations. Random samples play an eminent role in that process. The Bernoulli process is a special case of random sampling. Moreover, inferential statistical methods draw heavily on a sound understanding of conditional probabilities.


The present trend in teaching is towards simpler concepts, focusing on a (barely) adequate understanding thereof. In line with this, a problem posed to the students has to be clear-cut, with no ambiguities involved – neither about the context nor about the questions. Such a trend runs counter to any sensible modelling approach. The discussion about the huge role intuitions play in the perception of randomness was initiated by Fischbein (1975). Kapadia and Borovcnik (1991) focused their deliberations on ‘chance encounters’ on the interplay between intuitions and mathematical concepts, which might influence and enhance each other mutually. Various psychologically oriented approaches have been seen in the pertinent research. Kahneman and Tversky (1972) showed the persistent bias of popular heuristics people use in random situations; Falk and Konold (1992) deal with causal strategies and the so-called outcome approach, a tendency to re-formulate probability statements into a direct, clear prediction. Borovcnik and Peard (1996) have described some specificities which are peculiar to probability and not to other mathematical concepts, and which might account for the special position of probability within the historic development of mathematics. The research on understanding probability is still ongoing, as may be seen from Borovcnik and Kapadia (2009). Lysø (2008) makes some suggestions to take up the challenge of intuitions right from the beginning of teaching. All these endeavours to understand probability more deeply, however, seem to have had limited success. On the contrary, the more the educational community became aware of the difficulties, the more it tried to suggest cutting out critical passages, which means that probability is slowly but silently disappearing from the content being taught.
This somewhat resembles the solution of the mathematicians when they teach probability courses at university: they hurry to reach sound mathematical concepts and leave all ambiguities behind. The approach of modelling offers a striking opportunity to counterbalance the trend. Arguments that probability should be reduced in curricula at schools and at universities in favour of more data-handling and statistical inference might be met by the examples of this chapter; they connect approaches towards context and applications like those of Kapadia and Andersson (1987). Probability serves to model reality, to impose a specific structure upon it. In such an approach, key ideas for understanding probability distributions turn out to be a fundamental tool to convey the specific implicit thinking about the models used and the reality modelled thereby. Contrary to the current trend, the position of probability within mathematics curricula should be reinforced instead of reduced. We may have to develop innovative ways to deal with the mathematics involved. To learn more about the mathematics of probability might not serve the purpose, as may be seen from studies on understanding probabilistic concepts by Díaz and Batanero (2009). The perspective of modelling seems more promising: a modeller never understands all mathematical relations between the concepts. However, a modeller ‘knows’ about the inherent assumptions of the models and the restrictions they impose upon a real situation. Indirectly, modelling was also supported by Chaput, Girard, and Henry (2008, p. 6). They suggest the use of simulation to construct mental images of randomness.

MODELLING IN PROBABILITY AND STATISTICS

Real applications are suggested by various authors to overcome the magic ingredients in randomness, e.g. Garuti, Orlandoni, and Ricci (2008, p. 5). Personal conceptions about probability are characterized by an overlap between objective and subjective conceptions. In teaching, subjective views are usually precluded; Carranza and Kuzniak (2008, p. 3) note the resulting consequences: “Thus the concept […] is truncated: the frequentist definition is the only one approach taught, while the students are confronted with frequentist and Bayesian problem situations.” In modelling real problems, the two aspects of probability are always present; it is not possible to reduce the situation to only one of these aspects, as it might then lose its sense. Modelling thus might lead to a more balanced way of teaching probability. From the perspective of modelling, the overlap between probabilistic and deterministic reasoning is a further source of complications. Ottaviani (2008, p. 1) stresses that probability and statistics belong to a line of thought which is essentially different from deterministic reasoning: “It is not enough to show random phenomena. […] it is necessary to draw the distinction between what is random and what is chaos.” Simulation or interactive animations may be used to reduce the need for mathematical sophistication. The idea of a scenario helps to explore a real situation, as shown in section 1. The case of taking out an insurance policy for a car is analysed in some detail in Borovcnik (2006). There is a need for a reference concept wider than the frequentist approach. The perspective of modelling will help to explore such issues. In modelling, it is rarely the case that one already knows all relevant facts from the context. Several models have to be used in parallel until one may compare the results and their inherent assumptions. A final overview of the results might help to answer some of the questions posed, but it also raises new ones.
Modelling is an iterative cycle, which leads to more insights step by step. Of course, such a modelling approach is not easy to teach, and it is not easy for the students to acquire the flexibility needed to apply the basic concepts in exploring various contexts. Examinations are a further hindrance. What can and should be examined, and how should the results of such an exam be marked? The problems are multiplied by the need for centrally set examinations. Such examination procedures are intended to achieve comparability of the results of final exams throughout a country. They also form the basis for interventions in the school system: if a high percentage fail such an exam in one class, the teacher might be blamed, while if such low achievement is found across a greater region, the exam papers have to be revised, etc. However, higher-order attainment is hard to assess. While control over the results of schooling via central examinations ‘guarantees’ standards, such a procedure also has a levelling effect in the end. The difficult questions about comparing different probability models, evaluating the relative advantages of these models, and giving justifications for the choice of one or two of them – genuine modelling aspects involving ambiguity – might not be given enough scope in the future classroom of probability.

BOROVCNIK AND KAPADIA

As a consequence of such trends, teaching will focus even more on developing basic competencies. Of applying probabilistic models in the sense of modelling contextual problems, only remnants may remain – mainly in the sense of a mechanistic ‘application’ of rules or ready-made models. Using one single model at a time does not clarify what a modelling approach can achieve. Even when one model is finally used, there is still much room for further modelling activities, like tuning the model’s parameters to improve a specific outcome that corresponds to one’s benefit in the context. A wider perspective on modelling presents much more potential for students to really understand probability.

NOTES

1. Kolmogorov’s axioms are rarely well-connected to the concept of distribution functions.
2. All texts from German are translated by the authors.
3. ‘What is to be expected?’ would be less misleading than ‘significantly below the expected value’.
4. A ‘true’ value is nothing more than a façon de parler.
5. This is quite similar to the ‘ceteris paribus’ condition in economic models.
6. In fact, the challenge is to detect a break between past and present; see the recent financial crisis.
7. Model 3 yields identical solutions to model 2. However, its connotation is completely different.
8. The final probability need not be monotonically related to the input probability p, as in this case.
9. Quetelet coined the idea of ‘l’homme moyen’. Small errors superimposed on the (ideal value of) l’homme moyen ‘lead directly’ to the normal distribution.
10. Beyond common-sense issues, the ill-posed comparison of scores in away matches against all matches has several statistical and methodological drawbacks.
11. The use of a one-sided alternative has to be considered very carefully. An unjustified use could lead to a rejection of the null hypothesis and to a statistical ‘proof’ of this pre-assumption.

REFERENCES

Borovcnik, M. (2006). Probabilistic and statistical thinking. In M. Bosch (Ed.), European research in mathematics education IV (pp. 484–506). Barcelona: ERME. Online: ermeweb.free.fr/CERME4/
Borovcnik, M. (2011). Strengthening the role of probability within statistics curricula. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Teaching statistics in school mathematics. Challenges for teaching and teacher education: A joint ICMI/IASE study. New York: Springer.
Borovcnik, M., & Kapadia, R. (2009). Special issue on “Research and Developments in Probability Education”. International Electronic Journal of Mathematics Education, 4(3).
Borovcnik, M., & Peard, R. (1996). Probability. In A. Bishop, K. Clements, C. Keitel, J. Kilpatrick, & C. Laborde (Eds.), International handbook of mathematics education (pp. 239–288). Dordrecht: Kluwer.
Carranza, P., & Kuzniak, A. (2008). Duality of probability and statistics teaching in French education. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Chaput, B., Girard, J. C., & Henry, M. (2008). Modeling and simulations in statistics education. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Davies, P. L. (2009). Einige grundsätzliche Überlegungen zu zwei Abituraufgaben (Some basic considerations on two tasks of the final exam). Stochastik in der Schule, 29(2), 2–7.
Davies, L., Dette, H., Diepenbrock, F. R., & Krämer, W. (2008). Ministerium bei der Erstellung von Mathe-Aufgaben im Zentralabitur überfordert? (Ministry of Education overcharged with preparing the exam paper in mathematics for the centrally administered final exam?) Bildungsklick. Online: http://bildungsklick.de/a/61216/ministerium-bei-der-erstellung-von-mathe-aufgaben-im-zentralabiturueberfordert/

Díaz, C., & Batanero, C. (2009). University students’ knowledge and biases in conditional probability reasoning. International Electronic Journal of Mathematics Education, 4(3), 131–162. Online: www.iejme.com/
Falk, R., & Konold, C. (1992). The psychology of learning probability. In F. Sheldon & G. Sheldon (Eds.), Statistics for the twenty-first century, MAA Notes 26 (pp. 151–164). Washington, DC: The Mathematical Association of America.
Feller, W. (1968). An introduction to probability theory and its applications (Vol. 1, 3rd ed.). New York: J. Wiley.
Fischbein, E. (1975). The intuitive sources of probabilistic thinking in children. Dordrecht: D. Reidel.
Garuti, R., Orlandoni, A., & Ricci, R. (2008). Which probability do we have to meet? A case study about statistical and classical approach to probability in students’ behaviour. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. New York: Simon & Schuster.
Girard, J. C. (2008). The interplay of probability and statistics in teaching and in training the teachers in France. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Goertzel, T. (n.d.). The myth of the Bell curve. Online: crab.rutgers.edu/~goertzel/normalcurve.htm
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgement of representativeness. Cognitive Psychology, 3, 430–454.
Kapadia, R., & Andersson, G. (1987). Statistics explained. Basic concepts and methods. Chichester: Ellis Horwood.
Kapadia, R., & Borovcnik, M. (1991). Chance encounters: Probability in education. Dordrecht: Kluwer.
Lysø, K. (2008). Strengths and limitations of informal conceptions in introductory probability courses for future lower secondary teachers. In Eleventh International Congress on Mathematics Education, Topic Study Group 13 “Research and development in the teaching and learning of probability”. Monterrey, México. Online: tsg.icme11.org/tsg/show/14
Meyer, P. L. (1970). Introductory probability and statistical applications. Reading, Massachusetts: Addison-Wesley.
Ottaviani, M. G. (2008). The interplay of probability and statistics in teaching and in training the teachers. In C. Batanero, G. Burrill, C. Reading, & A. Rossman (Eds.), Joint ICMI/IASE study: Teaching statistics in school mathematics. Challenges for teaching and teacher education. Monterrey: ICMI and IASE. Online: www.stat.auckland.ac.nz/~iase/publications
Pierce, R. (n.d.). Quincunx. In Math is Fun – Maths Resources. Online: www.mathsisfun.com/data/quincunx.html
Schulministerium NRW. (n.d.). Zentralabitur NRW. Online: www.standardsicherung.nrw.de/abiturgost/fach.php?fach=2. Ex 7: http://www.standardsicherung.nrw.de/abitur-gost/getfile.php?file=1800

Manfred Borovcnik
Institute of Statistics
Alps-Adria University Klagenfurt
Austria

Ramesh Kapadia
Institute of Education
University of London
London WC1H OAL
United Kingdom


ASTRID BRINKMANN AND KLAUS BRINKMANN

2. PROBLEMS FOR THE SECONDARY MATHEMATICS CLASSROOMS ON THE TOPIC OF FUTURE ENERGY ISSUES

INTRODUCTION

Students’ interest and motivation in the mathematics classroom, and towards the subject as a whole, may be increased by using and applying mathematics. “The application of mathematics in contexts which have relevance and interest is an important means of developing students’ understanding and appreciation of the subject and of those contexts.” (National Curriculum Council, 1989, para. F1.4). Such contexts might be, for example, environmental issues that are of general interest to everyone. Hudson (1995) states that “it seems quite clear that the consideration of environmental issues is desirable, necessary and also very relevant to the motivation of effective learning in the mathematics classroom”. One of the most important environmental impacts is that of energy conversion systems. Unfortunately this theme is hardly treated in mathematics education. Dealing with this subject may not only offer advantages for the mathematics classroom, but also provide a valuable contribution to the education of our children. The younger generation especially will be confronted with the environmental consequences of the extensive usage of fossil fuels, and thus a sustainable change from our currently existing power supply system to a system based on renewable energy conversion has to be achieved. The decentralised character of this future kind of energy supply will surely require more personal effort from everyone, and thus it is indispensable for young people to become familiar with renewable energies. However, at the beginning of the 21st century there was a great lack of suitable school mathematical problems concerning environmental issues, especially problems strongly connected with future energy issues. An added problem is that the development of such mathematical problems requires the co-operation of experts in future energy matters, with their specialist knowledge, and mathematics educators, with their pedagogical content knowledge.
The authors, working in such a collaboration, have developed a special didactical concept to open the field of future energy issues to students, as well as to their teachers, and this is presented below. On the basis of this didactical concept we have created several series of problems for the secondary mathematics classroom on the topics of rational usage of energy, photovoltaics, thermal solar energy, biomass, traffic, transport, wind energy and hydro power. The collection of worked-out problems, with an extensive solution to each problem, has been published in a book in the J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 45–66. © 2011 Sense Publishers. All rights reserved.

BRINKMANN AND BRINKMANN

German language (Brinkmann & Brinkmann, 2005). Further problems dealing with so-called energy hybrid systems i.e., combinations of several energy types, will be developed (see Brinkmann & Brinkmann, 2009). Some problem examples are presented in paragraph 3 of this article. DIDACTICAL CONCEPT

The cornerstones of the didactical concept developed by the authors in order to promote renewable energy issues in mathematics classrooms are: – The problems are chosen in such a way that the mathematical contents needed to solve them are part of mathematics school curricula. – Ideally, every problem should concentrate on a special mathematical topic so that it can be integrated into an existing teaching unit, as project-oriented problems referring to several mathematical topics are seldom taken up by teachers. – The problems should be more extensive than usual text problems, in order to enable the students and also their teachers to engage more intensively with the subject. – The problems should not require special knowledge from teachers concerning future energy issues, and especially not concerning physical matters. For this reason, all non-mathematical information and explanations concerning a problem’s foundations are included in separate text frames. – In this way, information about future energy issues is provided for both teachers and students, helping them to concentrate on the topic. Thus, a basis for interdisciplinary discussion, argumentation and interpretation is given. EXAMPLES OF MATHEMATICAL PROBLEMS

The Problem of CO2 Emission This is an inter-disciplinary problem linked to the subjects of mathematics as well as chemistry, physics, biology, geography, and social sciences. Nevertheless, it may be treated in lower secondary classrooms. With respect to mathematics, the conversion of quantities is practised; knowledge of the rule of three and of percentage calculation is required. The amount of CO2 produced annually in Germany, especially by transport and traffic, is illustrated vividly so that students become aware of it. Information: In Germany, each inhabitant produces an annual average of nearly 13 t of CO2 (carbon dioxide). Combustion processes (for example in power plants or vehicle combustion engines) are responsible for this emission into the atmosphere. Assume now that this CO2 builds up a gaseous layer which stays directly above the ground. a) What height would this CO2 layer reach in Germany after one year?

PROBLEMS FOR THE SECONDARY MATHEMATICS

Hints: – Knowledge from chemistry lessons is helpful for your calculations. There you learned that amounts of substance can be measured with the help of the unit ‘mole’. 1 mole of CO2 weighs 44 g and takes a volume of 22.4 l under standard conditions (pressure 1013 hPa and temperature 0°C). With these values you can calculate approximately. – You will find the surface area and the number of inhabitants of Germany in a lexicon. Help: Find the answers to the following partial questions in the given order. i) How many tons of CO2 are produced in total in Germany every year? ii) What volume in l (litres) does this amount of CO2 take? (Regard the hint!) iii) How many m3 of CO2 are therefore produced annually in Germany? Express this in km3! iv) Assuming the CO2 produced annually in Germany forms a low layer of gas directly above the ground, what height would it have? Information: – In Germany the amount of waste is nearly 1 t per inhabitant (private households as well as industry) every year; the average amount of CO2 produced per inhabitant is therefore 13 times this amount. – The CO2, which is produced during combustion processes and emitted into the atmosphere, distributes itself in the air. One part is absorbed by plants with the help of photosynthesis; a much greater part goes into solution in the oceans’ waters. But the potential for CO2 absorption is limited. – In the 1990s, 20% of the total CO2 emissions in Germany came from the combustion engines engaged in traffic activities alone. b) What height would the CO2 layer over Germany have, if this layer resulted only from the annual emissions of individual vehicles? How many km3 of CO2 is this?
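A short script can retrace the intended calculation. Note that the surface area (about 357,000 km²) and population (about 82 million) of Germany used below are assumed look-up values, not taken from the chapter:

```python
# Sketch of tasks a) and b), using assumed look-up values:
# surface area of Germany ~357,000 km^2, population ~82 million.
MOLAR_MASS_G = 44.0      # g per mole of CO2
MOLAR_VOLUME_L = 22.4    # litres per mole at standard conditions

population = 82_000_000
area_m2 = 357_000 * 1e6          # 357,000 km^2 expressed in m^2
co2_per_person_t = 13.0

# i) total CO2 in tons, then ii) its volume in litres
total_t = population * co2_per_person_t
total_g = total_t * 1e6
volume_l = total_g / MOLAR_MASS_G * MOLAR_VOLUME_L

# iii) volume in m^3 and km^3 (1 m^3 = 1000 l, 1 km^3 = 1e9 m^3)
volume_m3 = volume_l / 1000
volume_km3 = volume_m3 / 1e9

# iv) height of the layer spread evenly over Germany
height_m = volume_m3 / area_m2

# b) share caused by traffic (20% in the 1990s)
height_traffic_m = 0.20 * height_m

print(round(volume_km3), "km^3; layer height", round(height_m, 2), "m")
```

With these assumed figures, the CO2 produced in one year would form a layer roughly 1.5 m high over Germany; the traffic share alone would account for about 30 cm of it.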

Usable Solar Energy This problem deals with the heating of water in private households using solar energy. It can be treated in lessons involving the topic of percentage calculation and the rule of three. It requires the understanding and usage of data representations. Information: In private households, the warm water required can be partly heated up by solar thermal collectors. These convert the solar radiation energy into thermal energy. This helps us to decrease the usage of fossil fuels, which leads to environmental problems.



Private households need heated water preferably in the temperature range 45°–55°C. Because of seasonal variation, the usable thermal energy from the sun in our region is not permanently sufficient to bring water to this temperature. Thus, an input of supplementary energy is necessary.

Figure 1. A solar thermal energy plant (source: DGS LV Berlin BRB).

The following figure (Figure 2) shows how much of the energy needed for heating up water to a temperature of 45°C in private households can be covered by solar thermal energy and how much supplementary energy is needed.

Figure 2. Usable solar energy and additional energy (energy coverage in per cent, 0–100, by month from March (1) to February (12)).


The following problems refer to the values shown by Figure 2. a) What percentage of the thermal energy needed for one year can be provided by solar thermal energy? Information: – Energy is measured by the unit kWh. An average household in central Europe consumes nearly 4000 kWh energy for the heating of water per year. – 1 l of fossil oil provides approximately 10 kWh thermal energy. The combustion of 1 l of oil produces nearly 68 l of CO2. – The great amount of CO2 produced worldwide at present by the combustion of fossil fuels damages the environment. b) How many kWh may be provided in one year by solar thermal energy? c) How many litres of oil have to be bought to supply the supplementary thermal energy needed for one year for a private household? d) How many litres of oil would be needed without solar thermal energy? e) How many litres of CO2 could be saved by an average household in Germany during one year by using solar collectors?
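Tasks b) to e) depend on the monthly coverage values read off Figure 2, which cannot be reproduced here. As a sketch only, the script below assumes an annual solar coverage of 60%; students would replace this with the value obtained in task a):

```python
# Illustrative sketch of tasks b)-e). The 60% annual solar coverage is an
# assumed stand-in for the value to be read off Figure 2, not a book value.
ANNUAL_DEMAND_KWH = 4000        # average household, water heating per year
KWH_PER_LITRE_OIL = 10
CO2_L_PER_LITRE_OIL = 68

solar_share = 0.60              # assumption, replace with result of task a)

solar_kwh = solar_share * ANNUAL_DEMAND_KWH                    # b)
supplementary_kwh = ANNUAL_DEMAND_KWH - solar_kwh
oil_with_solar_l = supplementary_kwh / KWH_PER_LITRE_OIL       # c)
oil_without_solar_l = ANNUAL_DEMAND_KWH / KWH_PER_LITRE_OIL    # d)
co2_saved_l = (oil_without_solar_l - oil_with_solar_l) * CO2_L_PER_LITRE_OIL  # e)

print(solar_kwh, oil_with_solar_l, oil_without_solar_l, co2_saved_l)
```

Under the assumed 60% coverage, 2400 kWh would come from the sun, 160 l of oil would still be needed instead of 400 l, saving 16,320 l of CO2 per year.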

The Problem of Planning Solar Collector Systems This problem deals with calculations for planning solar collector systems by using linear functions. The understanding and usage of graphical representations is practised. Information: – In private households, the heating of warm water can partly be done by solar collector systems. Solar collector systems convert radiation energy into thermal energy. This process is called solar heat. – The usage of solar heat helps to save fossil fuels like natural gas, fuel oil or coal, whose combustion damages the environment. – Energy is measured by the unit kWh. An average household in central Europe consumes nearly 4000 kWh thermal energy per year for the heating of warm water. – Private households need heated water preferably in the temperature range 45°–55°C. Because of seasonal variation, the usable thermal energy from the sun at our latitude is not permanently sufficient to bring water to this temperature. Thus, an input of supplementary energy is necessary. – The warm water requirement per day per person can be reckoned at about 50 l. By energy-conscious living this value can easily be reduced to 30 l per person and day. – 1 l of fossil oil provides approximately 10 kWh of thermal energy. The combustion of 1 l of oil produces nearly 68 l of CO2. The diagram in Figure 3 provides data for planning a solar collector system for a private household. It shows how the collector area needed depends, for example, on the part of Germany where the house is situated, on the number of persons living in the respective household, on the desired amount of warm water per day per person, as well as on the desired share of the thermal energy to be provided by solar thermal energy (in per cent).


Figure 3. Dimensioning diagram (storage capacity 100–400 l and collector area 2–8 m2, plotted against the number of persons, 2–6).

Example: In a household in central Germany with 4 persons and a consumption of 50 l of warm water per day for each one, a collector area of 4 m2 is needed for a reservoir of 300 l and an energy coverage of 50%. a) What would be the collector area needed for the household you are living in? What assumptions do you need to make first? What would be the minimal possible collector area, and what the maximal one? b) A collector area of 6 m2 that covers 50% of the thermal energy needed is installed on a house in southern Germany. How many persons could be supplied with warm water in this household? c) Describe, using a linear function, the dependence of the storage capacity on the number of persons in a private household. Assume first a consumption of 50 l of warm water per day per person, and then a consumption of 30 l. Compare the two function terms and their graphical representations. d) Show a graphical representation of the dependence of the collector area on a chosen storage capacity, assuming a thermal energy output of 50% for a house in central Germany. Sun Collectors This problem can be integrated in lessons about quadratic parabolas and uses their focus property as an application in the field of sun collectors.


Information: – Direct solar radiation may be concentrated in a focus by means of parabolic sun collectors (Figure 4). These use the focus property of quadratic parabolas. – Sun collectors of this kind are figures with rotational symmetry; they evolve by rotation of a quadratic parabola. Their inner surface is covered with a reflective mirror surface; that is why they are named parabolic mirrors. – Sun beams may be assumed to be parallel. Thus, if they fall on such a collector parallel to its axis of rotation, the beams are reflected so that they all pass through the focus of the parabola. The thermal energy radiation may be focused this way in one point. – The temperature of a heating medium, which is led through this point, becomes very high relative to the environment. This is used for heating purposes, but also for the production of electric energy.

Figure 4. Parabolic sun collectors (source: DLR).

a) A parabolic mirror was constructed by rotation of the parabola y = 0.125x². Determine its focal length (x and y are measured in metres). b) A parabolic mirror has a focal length of 8 m. Which quadratic parabola was used for its construction? c) Does the parabolic mirror with y = 0.0625x² have a greater or a smaller focal length than the one in b)? Generalize your result. d) A parabolic mirror shall be constructed with a width of 2.40 m and a focal length of 1.25 m. How great is its arch, i.e., how much does the vertex lie deeper than the border? e) In Figure 5 you see a parabolic mirror, the EuroDish, with a diameter of 8.5 m. Determine from the figure, neglecting errors resulting from the projection, its approximate focal length and the associated quadratic parabola.
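For y = a·x², the focus lies at height f = 1/(4a) above the vertex (the standard focal-length relation for a parabola). A short script can check tasks a) to d):

```python
# Focal length of y = a*x^2 is f = 1/(4a); these checks mirror tasks a)-d).
def focal_length(a):
    return 1 / (4 * a)

def coefficient(f):
    return 1 / (4 * f)

f_a = focal_length(0.125)        # task a)
a_b = coefficient(8)             # task b): parabola y = a_b * x^2
f_c = focal_length(0.0625)       # task c): compare with f = 8 m from b)

# task d): width 2.40 m means half-width 1.20 m; depth of the dish at the rim
a_d = coefficient(1.25)
depth = a_d * 1.2 ** 2

print(f_a, a_b, f_c, round(depth, 3))
```

With these values, the mirror in a) has a focal length of 2 m, the mirror in c) one of 4 m (smaller than the 8 m in b)), and the dish in d) lies 0.288 m deeper at the border than at the vertex.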


Figure 5. EuroDish system (source: Wikimedia Commons).

Information: Other focussing sun collectors are figures with translational symmetry; they evolve by shifting a quadratic parabola along the direction of one axis. They are named parabolic trough solar collectors (Figure 6).

Figure 6. Parabolic trough solar collectors in Almería, Spain and California, USA (source: DLR and FVEE/PSA/DLR).


f) The underlying function of a parabolic trough solar collector is given by y = 0.35x² (1 unit = 1 m). Where does the heating pipe have to be installed?

Photovoltaic Plant and Series Connected Efficiencies The aim of this problem is to make students familiar with the principle of series connected efficiencies, as they occur in complex energy conversion devices. As an example, an off-grid photovoltaic plant for the conversion of solar energy to AC current as a self-sufficient energy supply is considered. The problem can be treated in a teaching unit on the topic of fractions. Figure 7 shows the components of an interconnected energy conversion system that builds up a self-sufficient electrical energy supply. This kind of supply system is of special interest for developing countries, and also for buildings in rural off-grid areas (Figure 8). Figure 7 shows in schematic form the production of electrical energy from solar radiation with the help of a solar generator for off-grid applications. In order to guarantee a gap-free energy supply for times without sufficient solar radiation, a battery as an additional storage device is included.

Figure 7. Off-grid photovoltaic plant (conversion chain: solar generator (ηPV) → charge control (ηCC) → battery (ηB) → inverter (ηI) → consumer).

Figure 8. Illustration of an off-grid photovoltaic plant on the mountain hut “Starkenburger Hütte” (Source: Wikipedia).


Information: The components of an off-grid photovoltaic (PV) plant are 1) a solar generator, 2) a charge control, 3) an accumulator and 4) an inverter (optional, for AC applications). The solar generator converts the energy of the solar radiation into electrical energy as direct current (DC). The electricity is passed to a battery via a charge control. From there it can be transformed directly, or later after storage in the battery, into alternating current (AC), which is needed by most electric devices. Unfortunately, it is not possible to use the radiation energy without losses. Every component of the conversion chain produces losses, so only a fraction of the energy input of each component becomes the energy input of the following component. The efficiency η of a component is defined by η = (power going out) / (power coming in). Power is the energy converted in 1 second. It is measured by the unit W or kW (1 kW = 1000 W). For comparison, standard electric bulbs need a power of 40 W, 60 W or 100 W; a hair dryer consumes up to 2 kW. Assume in tasks a), b) and c) that all the electric current is first stored in the battery before it reaches the consumer. a) Consider that the momentary radiation on the solar generator is 20 kW. Calculate the outgoing power for every component of the chain, if:

ηPV = 3/25, ηCC = 19/20, ηB = 4/5 and ηI = 23/25.

b) What is the total system efficiency ηtotal = (gained power for the consumer) / (insolated power)?

How can you calculate ηtotal by using only the values ηPV, ηCC, ηB and ηI? Give a formula for this calculation. c) Transform the efficiency values given in a) into decimal numbers and per cents. Check your result obtained in a) with these numbers. d) How do the battery efficiency and the total system efficiency change, if only 1/3 of the electric power delivered by the charge control is stored in the battery and the rest of 2/3 goes directly to the inverter? What is your conclusion from this? Wind Energy Converter This problem deals with wind energy converters. It can be treated in lessons on geometry, especially calculations of circles, or in lessons on quadratic parabolas. The conversion of quantities is practised. Information: The nominal power of a wind energy converter depends upon the rotor area A with the diameter D, as shown in Figure 9 below.


Figure 9. Nominal power related to the rotor area (rotor diameter → nominal power): 27 m → 225 kW, 33 m → 300 kW, 40 m → 500 kW, 44 m → 600 kW, 48 m → 750 kW, 54 m → 1000 kW, 64 m → 1500 kW, 72 m → 2000 kW, 80 m → 2500 kW.

a) Interpret the meaning of Figure 9. b) Show the dependence of the nominal power of the wind energy converter on the rotor diameter D and on the rotor area A, respectively, by graphs in co-ordinate systems. c) Find the formula which gives the nominal power of the wind energy converter as a function of the rotor area and of the rotor diameter, respectively. d) What rotor area would you expect to need for a wind energy converter with a nominal power of 3 MW? Give reasons for your answer. (Note: 1 MW = 1000 kW.) What length should the rotor blades have in this case? Information: – The energy that is converted in one hour [h] at a power of one kilowatt [kW] is 1 kWh. – In central Europe, wind energy converters produce their nominal power on average for 2000 hours a year, when wind conditions are sufficient. e) Calculate the average amount of energy in kWh which would be produced by a wind energy converter with a nominal power of 1.5 MW during one year in central Europe. Information: – An average household in central Europe consumes nearly 4000 kWh electrical energy per year.


f) In theory, how many average private households in central Europe could be supplied with electrical energy by a 1.5 MW wind energy converter? Why do you think that this could only be a theoretical calculation? g) Assume the nominal power of a 600 kW energy converter is reached at a wind speed of 15 m/s, measured at the hub height. How many km/h is this? How fast do the tips of the blades move, if the rotation speed is 15/min? Give the solutions in m/s and km/h, respectively. Compare the result with the wind speed.

Wind Energy Development This problem requires the usage and interpretation of data and statistics, which is done in the context of wind energy development in Germany. Information: At the end of 1990 the installed wind energy converters in Germany had a total nominal power of 56 MW. At the end of 2000 this amount had increased to a total of 6113 MW. Power is the energy converted in a time unit; it is measured by the unit Watt [W]. 10^6 W are one Megawatt [MW]. The following table shows the development of the newly installed wind power in Germany in the years 1991–2000.

Table 1. Development of new installed wind energy in Germany

Year | Number of new installed wind energy converters | Total of new installed nominal power [MW] | Total of nominal power [MW]
1991 | 300  | 48   |
1992 | 405  | 74   |
1993 | 608  | 155  |
1994 | 834  | 309  |
1995 | 911  | 505  |
1996 | 804  | 426  |
1997 | 849  | 534  |
1998 | 1010 | 733  |
1999 | 1676 | 1568 |
2000 | 1495 | 1665 |

a) Fill in the missing data in the 4th column. b) Show the development of the annually new installed nominal power and of the cumulative total nominal power in graphical representations.


c) Considering only the data of the years 1991–1998, what development with respect to the installation of new wind power in Germany could be expected, in your opinion? Give a well-founded answer! Compare your answer with the real data given for 1999 and 2000 and comment on it. What is your projection for 2005? Why? d) Using the data in Table 1, calculate for each of the years 1991–2000 the average size of the newly installed wind energy converters in kW. (Note: 1 MW = 1000 kW.) Show the respective development graphically and comment on it. Can you offer a projection for the average size of a wind energy converter installed in 2010? e) Comment on the graphical representation in Figure 10. Also take into account political and economic statements and possible arguments.
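Tasks a) and d) can be retraced with a short script, taking the 56 MW installed at the end of 1990 from the information text as the starting value:

```python
# Cumulative totals (task a) and average converter size (task d),
# computed from Table 1; baseline: 56 MW installed at the end of 1990.
new_converters = {1991: 300, 1992: 405, 1993: 608, 1994: 834, 1995: 911,
                  1996: 804, 1997: 849, 1998: 1010, 1999: 1676, 2000: 1495}
new_power_mw = {1991: 48, 1992: 74, 1993: 155, 1994: 309, 1995: 505,
                1996: 426, 1997: 534, 1998: 733, 1999: 1568, 2000: 1665}

total_mw = 56                              # installed at the end of 1990
for year in sorted(new_power_mw):
    total_mw += new_power_mw[year]         # entry for the 4th column of Table 1
    avg_kw = new_power_mw[year] * 1000 / new_converters[year]
    print(year, total_mw, round(avg_kw))
```

With the table's figures, the average size of a newly installed converter grows from about 160 kW in 1991 to over 1100 kW in 2000. The computed end-2000 total (6073 MW) is close to, though not exactly, the 6113 MW quoted in the information text, presumably because of rounding in the table.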

Figure 10. Development of wind energy use in Europe: installed capacity in gigawatt, predictions and reality. (The chart plots the actual development for 1990–2020 against predictions from the European Commission (Advanced Scenario 1996, White Paper 1997, PRIMES 1998), EWEA 1997, Greenpeace/EWEA 2002 and IEA 2002; values between about 2.5 and 64.2 gigawatt appear on the chart.)

Betz’ Law and Differentiation

This problem deals with the efficiency of a wind energy converter; it can be treated in lessons on differentiation and the determination of local extreme values.

BRINKMANN AND BRINKMANN

Information: 2% of the radiated solar energy is converted into kinetic energy of air molecules. In combination with the earth’s rotation, this results in the production of wind. The kinetic energy of an air mass Δm is E = ½·Δm·v², in which v denotes the velocity of the air mass. The kinetic energy can be written as E = ½·ρ·ΔV·v², given the density of air ρ = 1.2 g/l and the relation Δm = ρ·ΔV with the volume element ΔV. The power is defined as the ratio of energy to time: P = E/Δt.

a) A wind volume ΔV flows through a rotor area A and needs the time Δt to travel the distance Δs. Therefore the speed is v = Δs/Δt. Determine the general formula for the volume element which passes the rotor area A during the time interval Δt as a function of the wind speed.
b) Give the formula for the amount of wind power Pwind which passes through the rotor area A as a function of the wind velocity. Show that the power increases with the third power of the wind velocity.

Information:

A rotor of a wind energy converter with area A slows down the incoming wind speed from v1 in front of the rotor to the lower speed v2 behind the rotor (Figure 11). The wind speed in the rotor area itself can be shown to be the average of v1 and v2, i.e., v = (v1 + v2)/2. The converted power is then given by:

Pc = P1 − P2 = ½·ρ·(ΔV/Δt)·(v1² − v2²).

Figure 11. Wind flow through the rotor area: wind speed v1 before, v in, and v2 behind the rotor plane, with the corresponding stream cross-sections A1, A and A2.

c) Express the formula for the determination of Pc as a function of A, v1 and v2.
d) Describe the converted power Pc in c) as a function of the variable x = v2/v1 and of v1.


Information: The efficiency of a wind energy converter (power coefficient) is defined as the ratio of the converted power to the wind power input: cP = Pc/Pwind.

e) Express the power coefficient as a function of the variable x = v2/v1. Draw the graph of this function. Note that x ∈ [0; 1], why?

f) Determine the value xmax which corresponds to the maximum value of the power coefficient, the so-called Betz’ efficiency. This is the value of x = v2/v1 which gives the best energy conversion.
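Working tasks c) to e) through with the formulas above gives cP(x) = ½·(1 + x)·(1 − x²). The sketch below locates its maximum numerically; the analytic answer of task f) is the well-known Betz limit cP = 16/27 ≈ 0.593 at x = 1/3.

```python
def c_p(x):
    # Power coefficient c_P = P_c / P_wind; inserting v = (v1 + v2)/2 and
    # x = v2/v1 into the formulas above gives c_P(x) = 0.5*(1 + x)*(1 - x^2).
    return 0.5 * (1 + x) * (1 - x * x)

# Scan x in [0, 1] on a fine grid and pick the maximum.
n = 100_000
best_x = max((i / n for i in range(n + 1)), key=c_p)
print(best_x, c_p(best_x))  # ~0.3333 and ~0.5926 (= 16/27)
```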

Biomass and Reduction of CO2 Emissions

This problem deals with fossil fuels and biomass, especially with the production of CO2 emissions and possibilities for their reduction. The conversion of quantities is practised, and knowledge of the rule of three and of percentage calculation is required.

Information: In Germany, for example, an average private household consumes nearly 18000 kWh of energy annually. 80% of this amount is used for heating purposes and 20% for electrical energy. The energy demand for heating and hot water is mainly covered by the use of fossil fuels like natural gas, fuel oil or coal. Assume that the calorific values of gas, oil and coal can be converted to useable heating energy with a boiler efficiency of 85%. This means that 15% is lost in each case.

a) The following typical specific calorific values are given:
Natural gas: 9.8 kWh/m³
Fuel oil: 11.7 kWh/kg
Coal: 8.25 kWh/kg
(That is, the combustion of 1 m³ natural gas supplies 9.8 kWh, 1 kg fuel oil supplies 11.7 kWh and 1 kg coal supplies 8.25 kWh.)
What amount of each of these fuels does a private household need annually?

b) The specific CO2-emissions are approximately:
Natural gas: 2.38 kg CO2/kg
Fuel oil: 3.18 kg CO2/kg
Coal: 2.90 kg CO2/kg
The density of natural gas is nearly 0.77 kg/m³. How many m³ of CO2 does a private household in Germany emit each year in each case?
Hint: Amounts of material can be measured with the help of the unit ‘mole’. 1 mole of CO2 weighs 44 g and has a volume of approximately 22.4 l.
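Tasks a) and b) are rule-of-three calculations. A Python sketch using only the values stated in the text (results are rounded; ‘units’ means m³ for gas and kg for oil and coal):

```python
# Annual heating energy of an average German household (from the text).
heat_energy = 18000 * 0.80          # kWh of useful heat per year (80% of 18000 kWh)
fuel_energy = heat_energy / 0.85    # kWh the fuel must supply (85% boiler efficiency)

calorific = {"gas": 9.8, "oil": 11.7, "coal": 8.25}    # kWh per m^3 (gas) or per kg
co2_per_kg = {"gas": 2.38, "oil": 3.18, "coal": 2.90}  # kg CO2 per kg of fuel
gas_density = 0.77                  # kg/m^3

amounts = {f: fuel_energy / cv for f, cv in calorific.items()}  # m^3 or kg per year
masses = dict(amounts)
masses["gas"] = amounts["gas"] * gas_density  # convert gas volume to mass

co2_volumes = {}
for fuel in calorific:
    co2_kg = masses[fuel] * co2_per_kg[fuel]
    co2_volumes[fuel] = co2_kg / 0.044 * 22.4 / 1000  # 44 g and 22.4 l per mole -> m^3

for fuel, v in co2_volumes.items():
    print(f"{fuel}: {amounts[fuel]:.0f} units of fuel, {v:.0f} m^3 CO2 per year")
```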


Information: Wood is essentially made from CO2 taken from the atmosphere and from water. The bound CO2 is released when the wood is burned and is used again in the growth of plants. This is the so-called CO2-circuit. Spruce wood has a specific calorific value of nearly 5.61 kWh/kg. The specific CO2-emissions are approximately 1.73 kg CO2/kg.

c) How many kg of spruce wood would be needed annually by a private household instead of gas, oil or coal? (Assume again a boiler efficiency of 85%.) How many m³ of fossil CO2-emissions could be saved in this case?

Information: Spruce wood as piled-up split firewood has a storage density of 310 kg/m³.

d) How much space has to be set aside in an average household for a fuel storage room which contains a whole year’s supply of wood? Compare this with your own room!

e) Discuss the need for saving heat energy with the help of heat insulation.
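Tasks c) and d) follow the same pattern. A short sketch with the values from the text:

```python
heat_energy = 18000 * 0.80        # kWh of useful heat per year (from the text)
fuel_energy = heat_energy / 0.85  # kWh the wood must supply at 85% boiler efficiency

wood_kg = fuel_energy / 5.61      # spruce wood: 5.61 kWh/kg
wood_m3 = wood_kg / 310           # storage density of split firewood: 310 kg/m^3
print(round(wood_kg), round(wood_m3, 1))  # roughly 3020 kg, i.e. about 9.7 m^3
```

So a whole year’s supply of firewood occupies on the order of ten cubic metres, which is the comparison task d) invites.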

Automobile Energy Consumption

This problem can be treated in lessons on trigonometry. Its solution requires knowledge of the rule of three. The problem makes clear the dependence of an automobile’s energy consumption on the distance-height-profile, the moved mass and the velocity.

Tim and Lisa make a journey through Europe. Just before the frontier to Luxembourg their fuel tank is empty. Fortunately they have a reserve tank filled with 5 l of fuel. “Let’s hope it will be enough to reach the first filling station in Luxembourg. There, the fuel is cheaper than here,” Tim says. “It would be good if we had an exact description of the route, then we would be able to calculate our range,” answers Lisa.

Information:
– In order to drive, the resisting forces have to be overcome. Therefore a sufficient driving force Fdrive is needed. For an average standard car, the law for this force (in N) is given by the following formula:

Fdrive = (0.2 + 9.81·sin α)·m + 0.3·v² for Fdrive ≥ 0,

where m, the moving mass (in kg), is the mass of the vehicle, passengers and packages; v is the velocity (in m/s); and α is the angle relative to the horizontal line. α is positive in the uphill direction and negative in the downhill case (Figure 12).
– The energy E (in Nm) which is necessary for driving can be calculated, in the case of a constant driving force, by E = Fdrive·s, with s the actual distance driven (in m).
– The primary energy contained in the fuel amounts to about 9 kWh for each litre of fuel. (kWh is the symbol for the energy unit ‘kilowatt-hour’; 1 kWh = 3 600 000 Nm.)


– The efficiency of standard combustion engines in cars for average driving conditions is between 10% and 20% nowadays; this means only 10%–20% of the primary energy in the fuel is available to generate the driving forces.

Figure 12. Definition of the angle α.

The distance which Tim and Lisa have to drive to the first filling station in Luxembourg can be approximated by a graphical representation like the one given in Figure 13. (Attention: note the different scaling of the co-ordinate axes.) The technical data sheet of their vehicle gives the unladen weight of their car as about 950 kg. Tim and Lisa together weigh c. 130 kg, and their packages nearly 170 kg. The efficiency of the engine can be assumed to be c. 16%.

Figure 13. Distance-height-diagram to the next filling station (h is the height above mean sea level; the route consists of three sections I, II and III, with heights between 140 m and 350 m).

a) Can Tim and Lisa take the risk of not searching for a filling station before the frontier? Assume at first that they drive at a speed of 100 km/h.
b) Would Tim and Lisa have less trouble if they had only 50 kg of packages instead of 170 kg?
c) Would the answer to a) change if Tim and Lisa chose their speed to be only 50 km/h?

Help for a):
i) Note that the speed in the formula for Fdrive has to be measured in m/s.
ii) The value for sin α can be calculated with the information given in Figure 13.
iii) Determine for each section the force Fdrive and the energy needed. The distance s has to be measured in m. Convert the energy from Nm to kWh.


iv) Determine the total energy which is needed for the whole distance as the sum over the three different sections.
v) How many kWh of driving energy are provided by the 5 l of reserve fuel? Consider the efficiency of the motor.
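The structure of the solution to a) can be sketched in code. Since the section data of Figure 13 are not reproduced here, the section lengths and height changes below are illustrative placeholders; the energy available from the reserve fuel, however, follows directly from the text: 5 l × 9 kWh/l × 16% = 7.2 kWh.

```python
KWH = 3_600_000  # 1 kWh in Nm (from the text)

def drive_energy(distance_m, height_gain_m, mass_kg, v_ms):
    """Energy (Nm) for one section: F_drive = (0.2 + 9.81*sin(a))*m + 0.3*v^2,
    E = F_drive * s. Negative forces (steep downhill) are clipped to zero,
    as required by the condition F_drive >= 0 in the text."""
    sin_a = height_gain_m / distance_m  # small angles: sin(a) ~ rise over run
    force = (0.2 + 9.81 * sin_a) * mass_kg + 0.3 * v_ms**2
    return max(force, 0.0) * distance_m

mass = 950 + 130 + 170   # car + passengers + packages (kg, from the text)
v = 100 / 3.6            # 100 km/h in m/s

# Hypothetical sections (length in m, height change in m) standing in for Fig. 13:
sections = [(20000, 60), (15000, 150), (10000, -210)]

total_nm = sum(drive_energy(s, h, mass, v) for s, h in sections)
available_kwh = 5 * 9 * 0.16  # reserve fuel through a 16%-efficient engine
print(total_nm / KWH, available_kwh)
```

With these placeholder sections the trip needs about 6 kWh, so the 7.2 kWh on board would suffice; with the real data of Figure 13 the comparison is done in exactly the same way.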

Automobiles: Forces, Energy and Power

This is a problem that can be treated in higher secondary mathematics education, in the context of differential and integral calculus. It shows the dependence of an automobile’s power and energy consumption on the distance-height-profile, the moved mass and the velocity.

Kay has a new electric vehicle, of which he is very proud. He wants to drive his girlfriend Ann from the disco to her home. Ann jokes: “You will never get over the hill to my home with this car!” “I bet that I will”, says Kay. The distance-height-characteristic of the street from the disco (D) to the house (H) in which Ann lives is shown in Figure 14, and it can be described by the following function:

h(x) = (1/1000)·(−4x² + 104x + 300) for x ∈ [0; 20],

where x and h are measured in kilometres (km).

Figure 14. Distance-height-diagram between D and H.

a) Show that it is possible to calculate the real distance s corresponding to a given height function h over the interval [x1, x2] with the help of the following formula:

s = ∫ from x1 to x2 of √(1 + (h′(x))²) dx.


Help: Consider the right-angled triangle with legs Δx and Δh and hypotenuse Δs, as shown in Figure 15.

Figure 15. Geometrical representation.

b) How long is the real distance which Kay has to drive from the disco to the house where Ann is living?
Help: Show that
∫ √(1 + x²) dx = ½·(x·√(1 + x²) + ln(x + √(1 + x²))) + const.
with the help of the derivative of ½·(x·√(1 + x²) + ln(x + √(1 + x²))).
(Hint: This partial result is helpful for solving problem f).)

c) Let α be the angle between the tangent to the curve h at the point x0 and the x-axis. Prove that
sin α = h′(x0) / √(1 + (h′(x0))²).
(Note that h′(x0) = tan α.)
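The ‘Help’ in b) can be verified numerically before proving it by hand: a central difference quotient of F(x) = ½·(x·√(1 + x²) + ln(x + √(1 + x²))) should reproduce √(1 + x²).

```python
import math

def F(x):
    # Candidate antiderivative from the hint in task b).
    r = math.sqrt(1 + x * x)
    return 0.5 * (x * r + math.log(x + r))

def dF(x, h=1e-6):
    # Central difference approximation of F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [0.0, 0.5, 1.3, 2.0]:
    print(x, dF(x), math.sqrt(1 + x * x))  # the last two columns agree
```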

d) Assume Kay wants to drive at a constant speed of 110 km/h. Determine the driving power necessary at the top of the hill (maximum of h) and at the points with h(x) = 0.4 and h(x) = 0.8. For this purpose you need the following data: Kay’s electric vehicle has an empty weight of 620 kg; Kay and Ann together weigh nearly 130 kg.

Information:
– In order to drive, the resisting forces have to be overcome. Therefore a sufficient driving force Fdrive is needed. For an average standard car, the law for this force (in N) is given by the following formula:

Fdrive = (0.2 + 9.81·sin α)·m + 0.3·v² for Fdrive ≥ 0,

where m, the moving mass (in kg), is the mass of the vehicle, passengers and packages; v is the velocity (in m/s); and α is the angle relative to the horizontal line. α is positive in the uphill direction and negative in the downhill case (Figure 16).


– The driving power P (in Nm/s), which is needed to hold the constant speed, can be calculated as the product of the driving force (in N) and the velocity (in m/s): P = Fdrive·v. The power P is measured in the unit [kW], with 1 kW = 1 000 Nm/s.

Figure 16. Angle α depending on the function h(x).
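For tasks d) and e), once the slope h′ at a point is known, the power check is a few lines. The sketch below (the function name is the editor’s own) shows the hilltop case, where h′ = 0; the slopes at the points with h(x) = 0.4 and h(x) = 0.8 would be passed in the same way.

```python
import math

def driving_power(v_kmh, mass_kg, h_prime):
    """P = F_drive * v with F_drive = (0.2 + 9.81*sin(a))*m + 0.3*v^2,
    where sin(a) = h' / sqrt(1 + h'^2) (task c). Returns power in kW."""
    v = v_kmh / 3.6  # convert km/h to m/s
    sin_a = h_prime / math.sqrt(1 + h_prime**2)
    force = max((0.2 + 9.81 * sin_a) * mass_kg + 0.3 * v**2, 0.0)
    return force * v / 1000

mass = 620 + 130  # vehicle + Kay and Ann (kg, from the text)
print(driving_power(110, mass, 0.0))  # at the top of the hill, h' = 0
```

At the hilltop roughly 13 kW are needed, comfortably below the 25 kW nominal power; on sufficiently steep sections the required power can exceed 25 kW, which is what task e) asks students to investigate.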

e) Kay’s electric vehicle has a nominal power of 25 kW. Is it possible for him to bring Ann home?

f) Determine the driving energy which has to be consumed for the route from the disco to Ann’s home. Assume that Kay drives uphill with a speed of 80 km/h and downhill with a speed of 110 km/h. (Attention: Because h(x) as well as x are expressed in km in the function equation of h, the resulting energy in the following equation is obtained in N·km = 1 000 Nm.)

Information:
– The pure driving energy E (in Nm), which is necessary for driving, can be calculated as:

E = ∫ from 0 to s0 of Fdrive ds = ∫ from 0 to x0 of Fdrive·√(1 + (h′(x))²) dx,

where s is the actual distance (in m) driven.
– The energy E is usually measured in kilowatt-hours [kWh]; 1 kWh = 3 600 000 Nm.

g) The actual charged electrical energy in the batteries of Kay’s vehicle is 6 kWh. The driving efficiency of his electric vehicle is nearly 70%; this means only 70% of the stored energy can be used for driving. Is the charging status of Kay’s batteries sufficient to bring Ann to her home, under the assumptions in f)? Will Kay make it home afterwards?



CLASSROOM IMPLEMENTATION

The problems on environmental issues developed by the authors should be seen as an offer of teaching material. In each case, the students’ abilities have to be considered. In lower-achieving classes it might be advisable not to present every problem at full length. In addition, lower achievers need a lot of help in solving complex problems that require several calculation steps. The help given in some problems, as in example 1 above, addresses such students. For higher achievers, the problems should be presented without much of the included help. It might even be of benefit not to present the given hints from the beginning. Students would thus have to find out which quantities are still needed in order to solve the problem. The problem would become more open and the students would be more involved in modelling processes.

As the intention of the authors is also an informational one, namely to give more insight into the field of future energy issues, the mathematical models and formulas are mostly given in the problem texts. Students are generally not expected to work out the often complex contexts by themselves; these are already presented, thus guaranteeing realistic situation descriptions. The emphasis in the modelling activities lies rather in the argumentation and interpretation processes demanded, recognising that mathematical solutions lead to a deeper understanding of the studied contents.

In the context of an evaluation of lessons dealing with problems like those presented in this paper, students were asked, among other things, to express what they had mainly learned. The given answers can be divided into three groups: mathematical concepts, contents concerning renewable energy topics, and convenient problem-solving strategies. As regards the last point, students stressed especially that they had learned that it is necessary to read the texts very carefully, and also to consider the figures and tables very carefully.
Almost all students expressed that they would like to work on many more problems of this kind in mathematics classes, as the problems are interesting, relevant for life, and more fun than pure mathematics. Classroom experiences show that students react in different ways to the problem topics. While some are horrified to recognise, for example, that the worldwide oil reserves will already be running low during their lifetime, others are unmoved by this fact, as twenty or forty years in the future is not a time they worry about. In school lessons there are, again and again, situations in which students drift off into political and social discussions related to the problem contexts. Although desirable, this would sometimes cost too much of the time needed for mathematical education itself. Cooperation with teachers of other school subjects would be profitable where possible.

OUTLOOK AND FINAL REMARKS

In order to integrate future energy issues into the curricula of public schools, several initiatives have already been started in Germany, supported by and in co-operation with the ‘Deutsche Gesellschaft für Sonnenenergie e.V. (DGS)’, the German section of the ISES (International Solar Energy Society). There exists a European project, named “SolarSchools Forum”, that aims to integrate future energy issues into the curricula of public schools. In the context of this


project the German society for solar energy DGS highlights the teaching material the authors created (http://www.dgs.de/747.0.html). Most of this material is available only in German. This article is a contribution towards making these materials accessible in English as well. (The English publications up to now (see e.g. Brinkmann & Brinkmann, 2007) present only edited versions of some of the problems.)

Although education in the field of future energy issues is of general interest, the project that we presented in this paper seems to be the only major activity focusing especially on mathematics lessons. The number of problems should thus be increased, especially with problems which deal with a combination of different renewable energy converters, like hybrid systems, to give an insight into the complexity of system technology. Additionally, the sample mathematical problems on renewable energy conversion and usage have to be continually adjusted to new developments because of the dynamic evolution of the technology in this field.

REFERENCES

Brinkmann, A., & Brinkmann, K. (2005). Mathematikaufgaben zum Themenbereich Rationelle Energienutzung und Erneuerbare Energien. Hildesheim, Berlin: Franzbecker.

Brinkmann, A., & Brinkmann, K. (2007). Integration of future energy issues in the secondary mathematics classroom. In C. Haines, P. Galbraith, W. Blum, & S. Chan (Eds.), Mathematical modelling (ICTMA 12): Education, engineering and economics (pp. 304–313). Chichester: Horwood Publishing.

Brinkmann, A., & Brinkmann, K. (2009). Energie-Hybridsysteme – Mit Mathematik Fotovoltaik und Windkraft effizient kombinieren. In A. Brinkmann & R. Oldenburg (Eds.), Schriftenreihe der ISTRON-Gruppe: Materialien für einen realitätsbezogenen Mathematikunterricht, Band 14 (pp. 39–48). Hildesheim, Berlin: Franzbecker.

Hudson, B. (1995). Environmental issues in the secondary mathematics classroom. Zentralblatt für Didaktik der Mathematik, 27(1), 13–18.
National Curriculum Council. (1989). Mathematics non-statutory guidance. York: National Curriculum Council.

Astrid Brinkmann Institute of Mathematics Education University of Münster, Germany Klaus Brinkmann Umwelt-Campus Birkenfeld University of Applied Science Trier, Germany


TIM BROPHY

3. CODING THEORY

INTRODUCTION

Throughout the world teachers of mathematics instruct their pupils in the wonders of numbers. It is always a challenge to find areas where the topics covered at school intersect the world of the student. The search for such intersection points is well rewarded by arousing the interest of the student in the topic. Teachers already have a very heavy workload and may simply not have the time to do such research. This chapter attempts to link the students’ world of shopping, curiosity and music to modular arithmetic, trigonometry and complex numbers. It is hoped that the busy teacher will find here ideas to enliven classwork and to use the students’ natural curiosity as a pedagogical tool in the exploration of numbers.

In the world today we rely on digital information for many things. Without digital storage there would be no such thing as a CD or a DVD, satellite television or an mp3 player. NASA would not have been able to receive pictures from Mars. Mobile phones would not work.

Figure 1. Asteroid ice (Courtesy NASA/JPL-Caltech).

Information stored in digital form can, indeed will, become corrupted. This leads to errors in the information stored or transmitted. This article is about the two processes of first detecting and then correcting these errors. Sometimes the errors are unimportant. You may be speaking to someone on a telephone line with a faint crackle in the background. This crackle, while it may be irritating, does not prevent the transmission or reception of information: your voices. Sometimes the errors that could occur are very important. You may be making a purchase with a credit card. If there is an error in transmitting or receiving the amount of money involved it could seriously upset either you or your bank. J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 67–85. © 2011 Sense Publishers. All rights reserved.

BROPHY

The simplest way of storing information for use in digital media is in binary form. All the information will be encoded as a sequence of the two digits 0 and 1. A computer is a machine that contains banks of switches that can be either on or off. A switch that is on can be represented by the number 1. A switch that is off can be represented by the number 0. To store and transmit information in this format means working with only two integers, 0 and 1. These are called binary digits, from which we get the word bit. When we limit the integers available for arithmetic the process is called modular arithmetic. This is the key to much error detection and correction. We will look first at errors in bar codes.

BAR CODES

Modular Arithmetic

Prime numbers are numbers that are divisible only by themselves and one. Prime numbers are involved in coding information so that it can be used in digital media. It turns out that many of the uses of prime numbers depend on modular arithmetic. What is this?

While the numbers go on forever, human beings don’t. A way to begin looking at the endless rise of the numbers is to look at them in cycles. We do not count the hours of the day as rising forever. In fact we have two different systems for keeping track of time, a twelve-hour and a twenty-four-hour system, and they are both cyclic. The older of the two takes a fundamental cycle of twelve and begins again after twelve is reached. An addition table would look something like the following (Table 1):

Table 1. Addition on a clock

 +  |  1  2  3  4  5  6  7  8  9 10 11 12
 1  |  2  3  4  5  6  7  8  9 10 11 12  1
 2  |  3  4  5  6  7  8  9 10 11 12  1  2
 3  |  4  5  6  7  8  9 10 11 12  1  2  3
 4  |  5  6  7  8  9 10 11 12  1  2  3  4
 5  |  6  7  8  9 10 11 12  1  2  3  4  5
 6  |  7  8  9 10 11 12  1  2  3  4  5  6
 7  |  8  9 10 11 12  1  2  3  4  5  6  7
 8  |  9 10 11 12  1  2  3  4  5  6  7  8
 9  | 10 11 12  1  2  3  4  5  6  7  8  9
10  | 11 12  1  2  3  4  5  6  7  8  9 10
11  | 12  1  2  3  4  5  6  7  8  9 10 11
12  |  1  2  3  4  5  6  7  8  9 10 11 12

Notice that the number 12 plays the part that is normally taken by zero. No matter what we add to 12 it remains the same:

12 + 2 = 2


12 + 7 = 7
12 + 12 = 12

What this means is that two hours after 12 noon is 2 pm. Seven hours after 12 noon is 7 pm and twelve hours after 12 noon is 12 midnight. Adding hours is clock arithmetic. To calculate the sum of two numbers in this system we add them together and then, if the total is greater than twelve, subtract twelve from the total. We learn to do this at a very early age and do not see any problems with it. The method is illustrated below:

7 + 2 = 9
7 + 6 = 13 and 13 – 12 = 1
5 + 11 = 16 and 16 – 12 = 4

Since the number 12 plays the part of zero we will use zero whenever the number 12 appears. This gives rise to Table 2, which is addition using only the numbers from zero to eleven. We call this addition modulo 12 and it is an example of modular arithmetic.

Table 2. Addition modulo 12

 +  |  1  2  3  4  5  6  7  8  9 10 11  0
 1  |  2  3  4  5  6  7  8  9 10 11  0  1
 2  |  3  4  5  6  7  8  9 10 11  0  1  2
 3  |  4  5  6  7  8  9 10 11  0  1  2  3
 4  |  5  6  7  8  9 10 11  0  1  2  3  4
 5  |  6  7  8  9 10 11  0  1  2  3  4  5
 6  |  7  8  9 10 11  0  1  2  3  4  5  6
 7  |  8  9 10 11  0  1  2  3  4  5  6  7
 8  |  9 10 11  0  1  2  3  4  5  6  7  8
 9  | 10 11  0  1  2  3  4  5  6  7  8  9
10  | 11  0  1  2  3  4  5  6  7  8  9 10
11  |  0  1  2  3  4  5  6  7  8  9 10 11
 0  |  1  2  3  4  5  6  7  8  9 10 11  0

Barcodes contain information that can be read electronically by scanners. This eliminates certain errors and minimises others. The barcode, as shown in Figure 2, consists of various groups of digits. The first two or three digits give the country to which the barcode has been issued. This is not the same as the country in which the product is manufactured. In Figure 2 the first three digits, 539, indicate that the barcode was issued to some group in Ireland. The remaining digits in this example, except for the final one, have no meaning. The final digit is called a check digit. It is in evaluating the check digit that modular arithmetic is used. When a barcode is scanned errors can occur. These errors can


Figure 2. Barcode.

be caused by a multitude of events, from dirt blocking either a bar or a space to random electronic errors. While no automatic system can trap all errors, a check digit can catch quite a few.

Check Digit

In the EAN-13 (European Article Number) system (Figure 3) the first two (or sometimes three) digits give the country to which the barcode has been issued. The next five (or four, if the country used three) digits belong to a particular company. The following five digits identify the actual product. There is one more digit to go. This is the check digit. It is calculated from all the digits already present.

Figure 3. Meaning of the numbers.

The barcode reader will do the same calculation and if a different digit is arrived at then an error has occurred. Some errors can even be corrected. The process of calculating the check digit is quite simple to carry out. Indeed, a description of the process is much more complicated than actually doing it. Consider the fictitious barcode above: 5391234512342. The final digit, 2, is the check digit. It is calculated using the following steps in Table 3:

Table 3. How to calculate the check digit

1. Write the number in a row.
2. Label the digits alternately Odd (O) or Even (E), starting at the right and moving left.
3. Make out a table with 3 below each O and 1 below each E. These are the weights.
4. Multiply the digits by their weights mod 10.
5. Add up the new row mod 10.
6. If the total is t then the check digit, c, is the solution to t + c = 0 mod 10.
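The six steps translate directly into a few lines of code. The sketch below (function names are the editor’s own) also recovers a single smudged digit by trying all ten possibilities, which anticipates the worked correction example later in this section:

```python
def ean13_check_digit(digits12):
    """Check digit for a 12-digit EAN-13 body, following Table 3.
    Weights alternate 1, 3 from the left, so the O positions (counted
    from the right) get weight 3 and the E positions get weight 1."""
    total = sum(int(d) * w for d, w in zip(digits12, [1, 3] * 6)) % 10
    return (10 - total) % 10  # the % 10 handles the case total = 0

def recover_smudge(code13):
    """Replace the single '#' in a scanned 13-digit code by trying 0-9."""
    for d in "0123456789":
        candidate = code13.replace("#", d)
        if ean13_check_digit(candidate[:12]) == int(candidate[12]):
            return d
    return None

print(ean13_check_digit("539123451234"))  # 2
print(recover_smudge("4#02030145620"))    # 9
```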


All this may seem very complicated but an example should make it clear. The barcode we invented above, before a check digit was assigned to it, was 539123451234. We will go through the process of calculating the check digit.

Table 4. Calculation of check digit

Barcode     |   5    3    9    1    2    3    4    5    1    2    3    4
Position    |   E    O    E    O    E    O    E    O    E    O    E    O
Weights     |   1    3    1    3    1    3    1    3    1    3    1    3
Calculation | 5×1  3×3  9×1  1×3  2×1  3×3  4×1  5×3  1×1  2×3  3×1  4×3
Result      |   5    9    9    3    2    9    4    5    1    6    3    2

Adding these results together modulo 10 gives us

5 + 9 + 9 + 3 + 2 + 9 + 4 + 5 + 1 + 6 + 3 + 2 = 8 (mod 10)

This is the value of t above. We calculate the check digit by subtracting this number from 10 to get

c = 10 – 8 = 2

When the barcode reader analyses this barcode it takes only a fraction of a second to calculate the check digit and compare it to the one on the barcode. However, the check digit can do more than this. Quite often a particular bar can be smudged and be unreadable to the scanner. If there is only one error and the check digit is legible then the barcode reader can calculate the correct digit by reversing the process, and shopping can proceed. For example, if the scanner reads the number 4#02030145620, where # represents an unreadable smudge, the calculation would proceed as follows.

Table 5. Correction of an error

Barcode     |   4    x    0    2    0    3    0    1    4    5    6    2
Position    |   E    O    E    O    E    O    E    O    E    O    E    O
Weights     |   1    3    1    3    1    3    1    3    1    3    1    3
Calculation | 4×1  x×3  0×1  2×3  0×1  3×3  0×1  1×3  4×1  5×3  6×1  2×3
Result      |   4   3x    0    6    0    9    0    3    4    5    6    6

Adding these results together modulo 10 gives us

4 + 3x + 0 + 6 + 0 + 9 + 0 + 3 + 4 + 5 + 6 + 6 = 3x + 3 (mod 10)

Since the scanned check digit is 0, we need 3x + 3 = 0 (mod 10), i.e. 3x = 7 (mod 10). We are looking for a digit x which, when multiplied by 3, leaves a remainder of 7 on division by 10: 7 is not divisible by 3, neither is 17, but 3 × 9 = 27. This tells us that the unreadable digit was 9. The error has been both detected and corrected.

How good is this method at picking up errors? Studies have shown that slightly more than 79% of errors are the replacement of one digit by a different digit. Suppose a digit d, whose weight is 1, has been replaced by the digit e. The weighted sum will then change by the amount d – e (mod 10). The error goes undetected only if d – e = 0 (mod 10), which means that d = e, so there was no error. Similarly, if a digit whose weight is 3 was altered, an undetected error would require 3(d – e) = 0 (mod 10), which again forces d = e because 3 has no common factor with 10. The check digit approach therefore traps all of the commonest type of error.

DEEP SPACE

Figure 4. Saturn (Courtesy NASA/JPL-Caltech).

We have all looked in wonder at the photographs sent back from deep space by NASA (Figure 4). How are they sent? How are they received? How do we know that the information has not been corrupted? Mathematics is, of course, the answer to all these questions. To begin with, the information must be recorded. In other words a digital photograph has to be taken. This process stores all the required information as a sequence of the digits 0 and 1. This information, a sequence of bits, is sent to Earth as a bitstream.

Images as Bits

The grid drawn in Figure 5 has certain squares coloured black. It represents a fairly badly drawn smiley face. The only colours used here are black and white. This is very easy to translate into the digits that are used by all computers. If we represent a white square by the digit 1 and a black square by the digit 0 then the whole picture is represented by the number


1001001110010011111111111100011101101110101111011101101111100111

Figure 5. Smiley face.

The receiver of the above number can reconstruct the face only because it is known that the face is drawn on a grid with 8 squares horizontally and 8 squares vertically. Each of the eight squares in a row can be white or not. Eight squares will have 256 possible combinations of white or black (1 or 0). All black will be 0 and all white will be 255. A collection of eight bits is called a byte. Going from left to right, the decimal place values of a 1 are

128  64  32  16  8  4  2  1

while the decimal value of a zero is always zero. For example the number

10010011 = 128 + 0 + 0 + 16 + 0 + 0 + 2 + 1 = 147

The big sequence of zeros and ones written above can be thought of as the sequence of the following eight numbers:

10010011 = 147
10010011 = 147
11111111 = 255
11000111 = 199
01101110 = 110
10111101 = 189


11011011 = 219
11100111 = 231

Can you see that

0110110001101100000000000011100010010001010000100010010000011000

will give the inverse smiley face of Figure 6?

Figure 6. Inverse smiley.
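The byte arithmetic above, and the bit-flip that produces the inverse face of Figure 6, can be sketched as follows (Python’s int(..., 2) parses a binary string):

```python
face = ("10010011" "10010011" "11111111" "11000111"
        "01101110" "10111101" "11011011" "11100111")

# Split the 64-bit string into 8 bytes and convert each to its decimal value.
rows = [face[i:i + 8] for i in range(0, 64, 8)]
values = [int(r, 2) for r in rows]
print(values)  # [147, 147, 255, 199, 110, 189, 219, 231]

# Swapping 0s and 1s gives the inverse smiley face.
inverse = face.translate(str.maketrans("01", "10"))
print(inverse)
```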

This shows how simple images can be translated into a sequence of binary digits. Depending on the information that is going to be transmitted, the details of the coding will differ a lot. This does not matter here. You can see how a sequence of binary digits can carry even pictures. The next question is how to get the binary sequence from outer space to Earth.

Phase Modulation

You have probably seen the graph of y = sin x many times before (Figure 7). This graph, and modifications that can be carried out on it, is the secret to transmitting information from deep space back to our own planet.

Figure 7. Sin(x).


Electromagnetic radiation travels through space at about 300 000 km/s. Nothing can travel any faster. The distances to be covered are so vast that this speed is needed. Electromagnetic radiation, depending on its frequency, is perceived as light, radio, x-rays, γ-rays etc. It is as radio waves that NASA’s spacecraft transmit information back to their receivers. Radio waves have the advantage of requiring little energy to produce and passing easily through the Earth’s atmosphere. How can these radio waves carry information? Specifically, how can these waves carry a stream of the digits 0 and 1? To answer this question we need to look more closely at the graph of Sin(x). Figure 8 shows two sin waves with different phases.

Figure 8. Illustration of phase.

The red curve is the graph of Sin(x). The blue curve has exactly the same shape but is in a different place. It is the graph of Sin(x – θ). θ displaces the original wave by a certain amount. This is called the phase of the wave. There are three different methods by which waves can be used to carry information. The maximum height of the wave is called its amplitude. This can be modified to give Amplitude Modulation or AM, which is common with certain radio signals. The number of wave fronts that pass a given point in a certain time is called the frequency of the wave. This gives rise to Frequency Modulation or FM, which is also used in the transmission of radio waves. The third method, Phase Modulation, is the method used to get signals across the Solar System. Recall that all we need is to transmit a sequence of the digits 0 and 1. This is done in the following manner. Figure 9 shows three different graphs of waves. When the spacecraft is ready to transmit information back to Earth it begins the process by broadcasting a simple wave. This is received some time later at the Earth’s surface and allows the receiver to detect the frequency and phase of the transmitted wave. For our purposes we can regard this as a phase of zero. The point P is a typical point on the wave. Once communication is established between the transmitter on the spacecraft and the receiver on the ground, the transmitter begins to modify the wave. It does this by changing the phase of the wave. The point P′ is on a wave that has its phase shifted

BROPHY

Figure 9. Phase shift.

by 90°, which might represent the digit 0. The point P″ is on a wave that has its phase shifted by −90°, which might represent the digit 1. So by modulating the phase of the wave a sequence of the two digits 0 and 1 can be transmitted back to Earth. This requires very little energy. This is just as well because, as NASA points out, by the time the signal reaches Earth it is so weak that it would take 40,000 years to collect enough energy from it to light a Christmas tree bulb for one millionth of a second.

Coding the Information

If the signal is very weak and the distances to be covered are very large then there is obviously a strong possibility of errors occurring. These can occur in creating the signal, transmitting the signal or receiving the signal. While the mathematicians at the Jet Propulsion Laboratory (JPL) have invented many ways of detecting and correcting errors, they all involve redundant information. In its simplest form this means that every binary digit (bit) is sent three times. Suppose that a very small part of the information being transmitted is 1011; the transmitting computer will first transform this into 111 000 111 111. Figure 10 shows the sequence of waves that would then be transmitted to Earth. Each digit 1 in our system is sent as Sin(x + π/2), which gives a phase shift of −90°. Each digit 0 is sent as Sin(x − π/2), which gives a phase shift of 90°. Naturally this is just one continuous signal that is received at Earth. By looking at the phases of these waves the scientists at the receiving end are able to retrieve the message 111 000 111 111. Hence, knowing the redundancy that was applied, they can deduce that the original message was 1011.

Figure 10. 111 000 111 111.
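The tripling scheme, together with the correction of a single flipped bit discussed below, can be sketched in a few lines of Python. This is an illustration only; the function names and the majority-vote shortcut are the author's of this sketch, not JPL's actual software.

```python
def triple_encode(bits):
    """Repeat each bit three times: '1011' -> '111000111111'."""
    return "".join(b * 3 for b in bits)

def triple_decode(received):
    """Majority-vote each block of three received bits.

    A block other than 000 or 111 reveals that an error occurred;
    the digit occurring at least twice is taken as the correction."""
    decoded, bad_blocks = [], []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        if block not in ("000", "111"):
            bad_blocks.append(block)
        decoded.append("1" if block.count("1") >= 2 else "0")
    return "".join(decoded), bad_blocks

print(triple_encode("1011"))          # 111000111111
print(triple_decode("111000111011"))  # ('1011', ['011']): error found and fixed
```

Running the decoder on the corrupted message 111 000 111 011 both flags the faulty block and recovers the original sequence 1011.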

CODING THEORY

Suppose that instead of the sequence of waves that are shown in Figure 10 a slightly different set arrived at the receiver. Look at the sequence of waves shown in Figure 11. An error has occurred and one wave has a different phase than those above.

Figure 11. 111 000 111 011.

At some point one of the waves had its phase changed. There are many ways this could have happened. The result is that the message received is not a string of triples of identical digits: it is 111 000 111 011. The fact that not every block of three is a triple alerts the receiver immediately that an error has occurred. The receiver even knows where the error is: 011. The fact that there are two occurrences of the digit 1 and only one occurrence of the digit 0 strongly implies that it is the 0 that is incorrect, so the message should have been 111 000 111 111, which means that the original sequence was 1011.

This method, with its built-in redundancy, increases the length of the message by a factor of three. This is very wasteful, and the methods actually used by NASA are much more efficient. This example merely demonstrates the principle that errors can be both detected and corrected. Hamming codes are much more efficient and are the basis of many of the codes used by NASA and, indeed, in most types of digital coding. The Hamming code also uses redundancy, but it is nothing like as wasteful as the tripling method described above. Hamming codes work by a very clever use of geometry.

Geometries

The word “Geometry” conjures up for us ideas of points and lines and probably certain properties of triangles. Euclid, about 300 BC, assembled all that was known of geometry into the thirteen books of the Elements. For many centuries the theorems in these books were regarded as being as much a part of the physical world as the mathematical one. During the Renaissance artists discovered the secrets of perspective drawing, the method of drawing far things smaller than near ones. In such drawings parallel lines will meet: think of the appearance of railway tracks as they disappear into the distance.
Mathematicians, particularly the French mathematician Gérard Desargues, became very interested in these ideas and realized, after some resistance, that the artists were using a geometry other than that of Euclid. If one other geometry could exist, then why not more? The very meaning of the words point, line and distance can change. To see how this can be used in error correction we need to look at some definitions, particularly distance. In the geometry of Euclid the distance between two points is the length of the line segment connecting them. In coordinate geometry we use a formula involving squares and square roots to calculate this:

d = √((x₁ − x₂)² + (y₁ − y₂)²)

In ordinary life you may use quite different definitions. If a motorist stopped you to inquire how far he was from his destination you might reply: “It is 5 km as the crow flies, but the road winds a lot and you will have to drive 7 km”. If you saw a signpost indicating that the distance to a city was 156 km, you would know that this meant there were 156 km of road to cover; the actual length of the shortest line segment might be only 120 km. As you see, “distance” means what we want it to mean. In Hamming codes only the digits 0 and 1 are used. A word is defined as a particular sequence of these digits, and we regard each word as a point in a space. The “distance” between two words is defined as the number of positions in which the two words differ. Thus the distance between 1100101 and 1010110 is 4, since there are exactly four positions where the strings of digits differ. The geometric properties of the space where the words live are then used in error detection and correction. The mathematics required is quite advanced and uses tools from linear algebra, especially vectors and matrices. Vectors can be thought of as line segments pointing in specific directions. Matrices can be thought of as objects that do things to vectors.
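This definition of distance is easy to compute. A small sketch (the function name is illustrative):

```python
def hamming_distance(u, v):
    """Number of positions at which two equal-length words differ."""
    if len(u) != len(v):
        raise ValueError("words must have equal length")
    return sum(a != b for a, b in zip(u, v))

print(hamming_distance("1100101", "1010110"))  # 4, as in the text
```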

Figure 12. Vectors.


The blue vector (closest to the horizontal axis) in Figure 12 has been transformed by a matrix into the green one; here the matrix is a rotation matrix. In a certain sense, the vectors representing valid code words can only point in specific directions, and this makes error detection possible. The distance function then makes error correction possible as well. We will illustrate this with a simple example.

We will use just two valid code words: 000 and 111. It is not too difficult to imagine a situation where just two words are needed: suppose that 000 signifies Off and 111 signifies On, and we have our binary system back again. Three digits give us eight possible words:

000 001 010 011 100 101 110 111

Only the first and last of these are valid; the other six are errors. We can regard each of these words as a point in 3D space, where together they form a cube. In two dimensions you are familiar with representing points by two coordinates. We refer to these pairs as (x, y), where the number x tells how far to move in the horizontal direction and the number y gives the distance in the vertical direction. Similarly, once we move into three dimensions another direction becomes possible, and so we need three numbers to represent each point; the triple is usually written (x, y, z). As we increase the number of dimensions we increase the number of coordinates. We lose the ability to draw pictures, but the principles are the same, and we can work out distances and equations of curves as easily in seven dimensions as in two.

To see how error correction with Hamming codes works we will stay in three dimensions to get a feel for the process. In Figure 13 we show all the possible code words as the coordinates of a cube. Each side of the cube is one unit long, as measured using the Hamming definition of distance. Here is how we use this geometry to correct words with one error. If the transmitted word is 000 then the possible errors lead to 001, 010, 100.
These are the only words that can be obtained from 000 with just one error. You can think of them as lying on a sphere of radius 1 in this space, with centre 000, as in Figure 14.


Figure 13. Cube of words.

Figure 14. Errors on sphere.

All these words are also on a sphere of radius 2 centred at 111. The sphere of radius 1 centred at 111 is shown in Figure 15 and contains the triples 110, 101 and 011. The two spheres divide the six error words into two sets with no elements in common, as shown in Figure 16. To correct a word that has only one error we simply find the nearest code word to the error word. Other methods allow the detection and correction of more errors but are beyond the scope of this introduction.
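Correction by “nearest code word” for the two-word code {000, 111} can be sketched as follows; this is an illustrative decoder, using the Hamming distance defined earlier:

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length words."""
    return sum(a != b for a, b in zip(u, v))

CODEWORDS = ("000", "111")

def correct(word):
    """Return the valid code word nearest to the received word."""
    return min(CODEWORDS, key=lambda c: hamming(word, c))

# Every 3-bit word lies on exactly one of the two spheres:
for bits in product("01", repeat=3):
    word = "".join(bits)
    print(word, "->", correct(word))
```

Since the distances from any 3-bit word to 000 and to 111 always add up to 3, one of the two is strictly nearer, so the correction is always unique.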


Figure 15. More errors.

Figure 16. Spheres of the Hamming code.

COMPACT DISCS

Digital Music

Compact discs (CDs) are used to store many different types of information today. They were originally used to store sound, particularly music. How can music be encoded on the surface of a disc, and how can it be retrieved? You will not be surprised to discover that, once again, trigonometry and binary digits form the key. All sound is transmitted as waves of some form. The sound needs a medium to carry it; the medium, usually air, vibrates, and this vibration is carried from one place to another as a wave. If certain regular patterns are present then humans call it music. From this point of view there is no difference between grand opera and heavy metal: physically both are transmitted as waves with certain patterns.

Figure 17. A sound wave.


Figure 17 shows a sound wave. It is composed of a combination of various sine waves of different frequencies and amplitudes. At any instant between the start and end of the sound the wave has some particular value; this is what we mean by saying that the sound is continuous. Technically it is an analog signal. This type of signal cannot be represented just by using the two digits 0 and 1. The first thing to be done is to sample the signal: record its values at specific instants, but not everywhere.

Figure 18. Sampling a wave.

Figure 18 shows a series of blue lines superimposed on the sound wave illustrated in Figure 17. These are the values of the sound wave at specific points. To obtain a sample that is close to the original sound there must be a large number of sample points. For a CD the sound is sampled 44,100 times each second. This gives rise to a sample such as that shown in Figure 19.

Figure 19. Sampled sound.
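The sampling step can be imitated numerically. The sketch below uses the CD values given in this section (44,100 samples per second and 16-bit amplitudes); the function name and the choice of a 440 Hz tone are illustrative:

```python
import math

SAMPLE_RATE = 44_100   # samples per second, as on a CD
LEVELS = 2 ** 16       # 65,536 amplitude levels (16 bits)

def sample(frequency_hz, duration_s):
    """Sample a pure sine tone and quantise each value to 16 bits."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        value = math.sin(2 * math.pi * frequency_hz * t)  # in [-1, 1]
        quantised = round(value * (LEVELS // 2 - 1))      # in [-32767, 32767]
        samples.append(quantised)
    return samples

one_ms = sample(440, 0.001)   # one millisecond of a 440 Hz tone
print(len(one_ms))            # 44 samples in one millisecond
```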

Each blue line represents a particular amplitude. The sampling rate is 44,100 Hz (Hz means ‘per second’), and each sample is stored using 16 bits, which allows 65,536 different amplitude levels to be used in reconstructing the sound. What use are all these amplitudes if digital media can only store the digits 0 and 1? These digits can be used to build up numbers of any size. The digit 0 represents a switch in the Off position and 1 represents the switch in the On position; each digit therefore distinguishes between two states. Hence a sequence of 16 digits can be combined to give 2¹⁶ = 65,536 values. Since one of these combinations has all the switches turned off, sets of sixteen bits (two bytes) can be used for all the numbers between 0 and 65,535. The process is the same as we saw in constructing the smiley face from sets of eight bits.

Physical Structure

The CD itself is a plastic disc (Figure 20). During its manufacture the plastic is shaped with very small bumps arranged in a continuous spiral. This is next covered with a



Figure 20. Structure of a CD.

thin layer of highly reflective aluminium. An acrylic layer protects the aluminium, and the label is printed onto the acrylic. A laser beam is reflected from the aluminium layer, where the change in reflectivity caused by the bumps is interpreted as a sequence of the digits 0 and 1. Thus we have come via trigonometry to a bitstream yet again. A sophisticated piece of electronics then converts the digital signal back into analog form, which is used to recreate the sound. The very high sample rate means that the loss in quality is undetectable to most human ears, although some musicians claim that they can distinguish between the sound of a CD and the analog sound of vinyl records. In Figure 21, because the sample rate is low, either the green or the red curve, or indeed many others, could be reconstructed from the samples.

Figure 21. Low sample rate.

In Figure 22, however, it would be very difficult to reconstruct the wrong wave as the high sample rate leaves hardly any freedom.

Figure 22. High sample rate.
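The ambiguity of Figure 21 can be checked numerically: at a sample rate of fs samples per second, a sine of frequency f and a sine of frequency f + fs produce exactly the same samples, so the samples alone cannot distinguish them (the frequencies below are illustrative):

```python
import math

def samples(freq, rate, count):
    """Values of sin(2*pi*freq*t) at t = 0, 1/rate, 2/rate, ..."""
    return [math.sin(2 * math.pi * freq * n / rate) for n in range(count)]

rate = 8                            # a deliberately low sample rate
low = samples(3, rate, 16)          # a 3 Hz tone
high = samples(3 + rate, rate, 16)  # an 11 Hz tone

# The two different waves are indistinguishable from their samples:
print(all(abs(a - b) < 1e-9 for a, b in zip(low, high)))  # True
```

At 44,100 samples per second this ambiguity only arises for frequencies far above the audible range, which is why the high rate pins down the wave so tightly.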

This is the reason why such a high sampling rate is needed in the transformation of an analog sound wave into a set of digital data.

Error Detection and Correction

Errors, of course, can occur either in the manufacturing process or from physical damage to the CD. In the section on barcodes we saw how check digits can detect


and correct simple errors. A very sophisticated extension of this method is used in detecting and correcting errors on a CD. The methods used involve complex numbers. These are numbers of the form a + bi where i is defined by the equation

i² = −1

Using complex numbers it is possible to find various roots of the number 1.

Figure 23. Complex roots of 1.

The red points in Figure 23 are the eight eighth roots of 1. These are

1/√2 + (1/√2)i,  i,  −1/√2 + (1/√2)i,  −1,  −1/√2 − (1/√2)i,  −i,  1/√2 − (1/√2)i,  1
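These roots can be computed directly with complex arithmetic; each one is e^(2πik/8) for k = 0, …, 7. A quick numerical check:

```python
import cmath

# the eight eighth roots of 1: e^(2*pi*i*k/8) for k = 0, ..., 7
roots = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]

# each root raised to the power of eight returns (essentially) 1
for z in roots:
    assert abs(z ** 8 - 1) < 1e-9

print(roots[1])  # the root 1/sqrt(2) + i/sqrt(2)
```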

This means that any of these numbers raised to the power of eight gives the number 1. The data points can be regarded as the coefficients of a certain polynomial, p. The complex roots of 1 are used to create a second polynomial, g. The product of these two polynomials is analysed by the CD player, which divides the result by g. If this division leaves a remainder then there is an error in the received data, and by evaluating the remainder at the complex roots of 1 the error can be corrected. All this happens far too quickly to be detected by the ear, so the human listener hears the continuous sound of a melody thanks to some very complicated electronics and mathematics. This method is called a Reed–Solomon code and was first described in 1960. The particular roots of 1 to be used depend on the length of each word and the number of errors to be checked. It is almost incredible to think that nearly 50% of errors can be corrected by this method. This explains why a CD, unlike a vinyl record, does not exhibit gradual signs of decay. As long as the scratches on the CD remain below a critical level, the error-correcting methods will be able to reconstruct the sound perfectly. If the faults rise above that level then correction is impossible, and it seems to us that the CD has suddenly become corrupted.

TO DO IN CLASS

Art Work

In this activity the teacher will get different groups in the class to draw pictures on graph paper and transmit the information in code. Time constraints probably mean that the grid on the paper should be fairly large. A simple piece of art should be drawn with each square clearly either blank or covered, as in the smiley face drawn earlier. Divide the students into different groups; each group will have two sheets of graph paper. After a discussion each group will draw a simple figure on one sheet of graph paper. Using binary arithmetic this figure should then be written as a binary number and then translated into decimal. The decimal number is written on the blank sheet of graph paper. The blank sheets are now passed to different groups and the figures reconstructed. Remember, if a figure is drawn incorrectly the fault may lie with the translation into a decimal number or with the translation from the decimal number back to the corresponding binary number. How do these correspond to the errors discussed above?

Tim Brophy National Centre for Excellence in Mathematics and Science Teaching and Learning (NCE-MSTL) University of Limerick


JEAN CHARPIN

4. TRAVELLING TO MARS: A VERY LONG JOURNEY Mathematical Modelling in Space Travelling

INTRODUCTION

Just over forty years ago, Neil Armstrong, Edwin ‘Buzz’ Aldrin and Michael Collins became the first people to travel to the Moon. This was, in the words of Neil Armstrong, ‘one small step for man, one giant leap for mankind’. The next big milestone in space travel is to reach our next closest neighbour in the solar system: Mars. This planet is much further away than the Moon, and the trip raises many challenges; mathematics will be key to solving them. This chapter introduces a few activities related to this trip, focussing on two aspects of the school curriculum:
1. Geometry: circles and ellipses. The activities proposed in the first section of this chapter involve some simple geometry: drawing circles and ellipses, studying the distance between two points belonging to the circles, and using the properties of aligned points and diameters to determine the minimum and maximum distance between the Earth and Mars.
2. Large numbers: space travel involves large numbers, and understanding what they represent is difficult for everyone. The simplest way to make sense of these values is a careful choice of units: large numbers are then transformed into much smaller ones which are much easier to interpret. The activity presented in the second section shows a simple way to achieve this.
These activities offer mathematics teachers interesting and rewarding ways to engage secondary school students, and are accessible to students and teachers alike.

CIRCLES, ELLIPSES AND DISTANCE BETWEEN THE EARTH AND MARS

In this first part, the orbits of Earth and Mars will be studied. In the Solar system, the planets move around the Sun, each describing a curve known as an ellipse. The properties of orbits and ellipses will be briefly reviewed first, and two possible activities will then be presented.

Some Background

Some properties of ellipses and circles. Figure 1 shows some of the properties of ellipses. They are flattened versions of a circle and have a lot of common properties

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 87–98.
© 2011 Sense Publishers. All rights reserved.

CHARPIN

with this well known curve. A point belongs to a circle if its distance to the centre is equal to the radius. An ellipse also has a centre, denoted O on the figure, but this is only a symmetry point. A point P belongs to the ellipse if the sum of its distances to the two points F1 and F2, known as the foci, equals a constant:

P ∈ Ellipse ⇔ PF1 + PF2 = d1 + d2 = C

Figure 1. Geometry of an ellipse.

If the value of the constant distance d1 + d2 is just above the distance between the two foci, F1F2, the ellipse will be very flat. Conversely, if d1 + d2 is much larger than F1F2, the ellipse will be close to a circle.
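The focal-distance definition above translates directly into a numerical test. The foci and the constant C below are illustrative values, chosen so that the ellipse has semi-axes 5 and 4:

```python
import math

def dist(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def on_ellipse(p, f1, f2, c, tol=1e-9):
    """True if dist(P, F1) + dist(P, F2) equals the constant C."""
    return abs(dist(p, f1) + dist(p, f2) - c) < tol

# An ellipse with foci (-3, 0) and (3, 0) and constant C = 10:
f1, f2 = (-3.0, 0.0), (3.0, 0.0)
print(on_ellipse((5.0, 0.0), f1, f2, 10.0))  # True  (end of major axis)
print(on_ellipse((0.0, 4.0), f1, f2, 10.0))  # True  (end of minor axis)
print(on_ellipse((5.0, 1.0), f1, f2, 10.0))  # False
```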

TEACHING ASPECTS OF SCHOOL GEOMETRY

Similarly let BK″ meet the circle again at D′. Then
∠AKB = ∠AD′B (angles in the same segment)
= ∠AK″B + ∠K″AD′ (exterior angle equal to the sum of the opposite interior angles)
> ∠AK″B

Thus K is the required position. Now let AB = a, BC = b and CK = x. Then
CK² = CB·CA
x² = b(a + b)
x = √(b(a + b))
which gives a formula for the distance x. AB is actually 5.6 m, so for example if b = 20 m then
x = √(20 × (5.6 + 20)) = 22.63 m (2 d.p.)
and if b = 30 m you get 32.68 m (2 d.p.). So in practice the kicker should come out a bit more than the distance BC.

A Different Approach (Using Calculus)

Figure 6. Using calculus interpretation.

Let K be an arbitrary point on the perpendicular at C to AB. Let ∠AKB = θ and ∠BKC = φ (Figure 6), and let AB = a, BC = b and CK = x. Then
tan(θ + φ) = (a + b)/x and tan φ = b/x

LEAHY

By the addition formula,
tan(θ + φ) = (tan θ + tan φ)/(1 − tan θ tan φ)
so
(tan θ + b/x)/(1 − (b/x) tan θ) = (a + b)/x

Let tan θ = t. Then
(t + b/x)/(1 − bt/x) = (a + b)/x
⇒ x(t + b/x) = (a + b)(1 − bt/x)
⇒ xt + b = (a + b) − (a + b)bt/x
⇒ t[x + b(a + b)/x] = (a + b) − b = a
⇒ t = a/(x + b(a + b)/x) = ax/(x² + b(a + b))

Differentiating t with respect to x yields

dt/dx = {[x² + b(a + b)]·a − ax·2x}/[x² + b(a + b)]² = [ab(a + b) − ax²]/[x² + b(a + b)]²

Since the denominator on the right-hand side is positive,

TEACHING ASPECTS OF SCHOOL GEOMETRY

dt/dx = 0 if and only if ab(a + b) − ax² = 0
⇒ b(a + b) = x² since a ≠ 0

Since x > 0, x = √(b(a + b)).

To show this turning point is a maximum, note:
x < √(b(a + b)) ⇒ ax² < ab(a + b) ⇒ ab(a + b) − ax² > 0 ⇒ dt/dx > 0
Similarly
x > √(b(a + b)) ⇒ dt/dx < 0
i.e., t = tan θ is a maximum at x = √(b(a + b)).
⇒ θ is a maximum at x = √(b(a + b)), since θ increases when tan θ increases.

ANGLES IN SNOOKER

The game of Pool, popular among young people, and similar games like Snooker and Billiards lend themselves to interesting uses of elementary geometry. The main objective in these games is to strike a billiard ball with another ball in order to direct it into one of the pockets at the side of the table. Problems arise when the ball to be struck is hidden behind another ball, so that the cue ball must be deflected off the side of the table into the path of the ball to be struck. The cue ball may need to be deflected off more than one side depending on the configuration of the balls on the table. Any text on transformation geometry would prove useful during this phase, e.g. Jeger (1968). The simplest case is illustrated in Figure 7. The sides of the table are labelled a, b, c, d. C is the cue ball and O is the ball to be struck by the cue ball. The other balls are not shown in the diagram. The cue ball C must strike the side a at some point P so that, when deflected, it strikes the object ball O. Now in physics there is a law of reflection which states that, under ideal conditions, when an object is reflected the angle of incidence is equal to the angle of reflection.


Figure 7. Snooker shot off one side cushion.

Figure 8. Law of reflection.

In Figure 8 the angle of incidence is ∠ APC and the angle of reflection is ∠ BPC where CP ⊥ E so that ∠ APC = ∠ BPC. This implies that ∠ APD = ∠ BPE which is more useful for our purpose.
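This reflection construction is easy to verify numerically. The sketch below places the cushion along the x-axis and uses illustrative coordinates for C and O (both above the cushion); P is found by reflecting O in the cushion and intersecting the line from C with the axis:

```python
def reflection_point(c, o):
    """Point P on the x-axis (the cushion) at which a ball from C,
    with angle of incidence equal to angle of reflection, reaches O.

    Found by reflecting O in the x-axis to Oa and intersecting the
    line C-Oa with the axis. Assumes C and O lie above the axis."""
    cx, cy = c
    ox, oy = o
    oa = (ox, -oy)                 # reflection of O in the cushion
    # parametrise C + t*(Oa - C) and solve for y = 0
    t = cy / (cy - oa[1])
    px = cx + t * (oa[0] - cx)
    return (px, 0.0)

C, O = (0.0, 3.0), (8.0, 5.0)
print(reflection_point(C, O))  # (3.0, 0.0)
```

The result (3.0, 0.0) divides the horizontal run from C to O in the ratio 3 : 5, the same ratio as the two heights, which is exactly the equal-angle condition.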

Figure 9. Construction to locate position of P.


The problem now is to find P given that ∠CPA = ∠BPO (Figure 9). Extend CP to meet, at Oa, the perpendicular from O to the side a, this perpendicular meeting AB at E. Then

∠Oa PB = ∠CPA = ∠BPO

⇒ ΔOaPE ≡ ΔOPE ⇒ OaE = OE

i.e., Oa can be thought of as the reflection of O in side a. This gives another method to find P: simply find Oa first and then join C to Oa, meeting AB at the required point P. In practice you can imagine a virtual ball at Oa which must be struck by the cue ball C; this will result in striking ball O. An alternative analysis is as follows. Draw CF ⊥ AB. Then ΔCFP is similar to ΔOEP, so that
FP/PE = CF/OE

i.e., FE is divided in the ratio CF : OE. This method is not practical in actual play, but for teaching purposes it does introduce the relationships between similar triangles.

Exercise: In Figure 9, if CF = 96 cm, OE = 75 cm and FE = 228 cm, calculate the distance FP.

Teaching Note

Professional snooker players often make shots using multiple cushions. ‘What if’ considerations may now be employed by students to extend the analysis in interesting ways, e.g. consider similar shots involving 2, 3 or more cushions. Other questions may occur to students that can be investigated. For example, students might come to conjecture that the cue ball travels the least possible distance in all such cases. Is this true? Can you prove it? These situations are dealt with in the following sections for the benefit of the mathematics teacher.

Two Cushion Shots

Now consider the case where the cue ball must be reflected off two sides a and b (Figure 10).


Figure 10. Snooker shot off two side cushions.

To analyse this case we reduce it to two cases of reflection off one side. First imagine the cue ball is at P1 (yet to be determined), needing to strike ball O by reflecting off side b. As in section 1 above, find the reflection Ob of O in side b. Then joining P1 to the virtual ball at Ob gives the point P2. But of course you do not know the point P1 on side a. However, the problem is now reduced to reflecting the cue ball C off side a to strike the virtual ball at Ob, and again this has been solved in section 1. So find the image Oa by reflecting the virtual ball Ob in side a. Then joining C to Oa gives the point P1, which in turn leads to the point P2 and thence to O. To construct the path of the cue ball C, therefore: first find Ob, the reflection of O in side b; then find Oa, the reflection of Ob in side a; join C to Oa, meeting side a at P1; then join P1 to Ob, meeting side b at P2; finally, joining P2 to C completes the path.

Exercise: Prove CP1 ∥ OP2.

n Cushion Shots

The method of the preceding section can now be extended to deal with reflections off any number of sides. The path for reflections off three sides a, b and c in turn is shown in Figure 11. Find Oc, Ob and Oa in turn; then join C to Oa giving P1, P1 to Ob giving P2, and P2 to Oc giving P3.

Exercise: Sketch the path for reflections in turn off sides a, b, c and d.


Figure 11. Snooker shot off three side cushions.

Shortest Path (Cue Ball Travels Least Possible Distance)

An interesting aspect of the problem is that in any given situation the cue ball C travels the least possible distance to reach the object ball O. In Figure 12 let P′ be any point on side a other than P. Join CP′, OP′ and OaP′.

Figure 12. Shortest path calculations.

From section 1,

ΔOaPE ≡ ΔOPE ⇒ OaP = OP


Similarly, OaP′ = OP′. So
CP + PO = CP + OaP
< CP′ + OaP′ (triangle inequality)
= CP′ + P′O
i.e., CP + PO < CP′ + P′O.

Further Investigations

Students will find that GeoGebra is a very suitable vehicle for pursuing these and other geometric investigations.
1. Sketch the path for reflecting off sides a and c in turn.
2. Sketch the path for reflecting off sides a, c, a and c in turn.
3. Try sketching the path that will return the cue ball to its starting position (ignoring O) under different sequences of reflections.
4. Investigate whether it is always possible to construct a path from C to O under specific given initial conditions.

FINAL REMARKS

These problems and investigations offer real opportunities for student engagement in the mathematics classroom. I have no doubt that mathematics teachers and students who use them will find interesting extensions and a lot of enjoyment working with them. This approach exploits students’ cultural propensity to play and, after a fashion, to engage in mathematical activity, together with the motivational power of context for teaching and learning mathematics. By working through the investigations and problems students are invited to marshal their resources, investigate, experiment, specialize and generalize, conjecture and prove, and in other ways experience mathematics as mathematicians do. And there were surprises: an optimum angle for the kicker, and a path of least possible length for the cue ball!

REFERENCES

Bishop, A. J. (1991). Mathematical enculturation: A cultural perspective on mathematical education. Dordrecht: Kluwer Academic Publishers.
Jeger, M. (1968). Transformation geometry. London: Allen and Unwin.

Jim Leahy Department of Mathematics and Statistics University of Limerick


JUERGEN MAASZ

13. INCREASING TURNOVER? STREAMLINING WORKING CONDITIONS? A POSSIBLE WAY TO OPTIMIZE PRODUCTION PROCESSES AS A TOPIC IN MATHEMATICS LESSONS

INTRODUCTION

Mathematics is applied in various ways in everyday life as well as in the working world. In this chapter, I aim to bring parts of the working world into the classroom by means of Multi-Moment Recording. More precisely, the students should become acquainted with Multi-Moment Recording as a tool for understanding and recording production processes more effectively. The main focus of this chapter is not on deriving the formula used, but simply on using it. For teachers who are uneasy with this degree of reality in a mathematics lesson there are two supporting excursus, one on checking the formula’s plausibility and practicality, and one on its mathematical origin. Furthermore, I plan on simulating and optimizing a simple production process (the production of paper planes) in order to learn about mathematics as a means of presenting and communicating facts, aspects that are rarely addressed at school.

OUTLINE: SUGGESTED LESSON

I will start this contribution with a short outline of the proposed teaching sequence and some preliminary remarks.

Preliminary Remarks

First preliminary remark: Since it has often been shown that learning is facilitated by active participation in class, I would like to suggest a lesson that might seem somewhat unusual at first glance. The following description is meant to provide stimulus and support for the actual implementation of the suggested lesson. My proposal includes hints on teaching methods which I think are appropriate. This does not mean that I claim to know better than you how to teach: you, as the teacher, are the best expert on your own teaching. I simply want to show how this unusual proposal could actually be realized.

The second preliminary remark refers to the question of the appropriateness of the topic for the classroom. Should the optimization of production processes be addressed at school? Wouldn’t such a topic mean meddling in social or trade-union related affairs? The answer to the first question is a clear YES,

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 221–237.
© 2011 Sense Publishers. All rights reserved.

MAASZ

because practical relevance is a basic requirement in all current school curricula. The second question relates to a crucial issue for lessons which are close to reality: one of the major curricular objectives concerns the development of students’ ability to judge, that is, the ability to form a personal opinion based on critical reflection and the ability to justify that opinion successfully. Effective instruction meets these objectives. In other words, “effective” does not mean that the teacher briefly summarizes his or her own opinion so as to keep this non-mathematical part of the mathematics lesson very short; rather, the teacher helps the students learn to make their own decisions, using mathematics to make those decisions more rational.

The third preliminary remark concerns an objection frequently voiced by teachers: instruction close to real life is useful but time-consuming and hardly practical; it does not fit into day-to-day school life, which is dominated by time pressure and other curricular demands. However, traditional mathematics lessons, which dedicate much time to operative work such as practicing calculation methods, have proven ineffective. Time spent fostering active project work during lessons is therefore time well spent.

OUTLINE OF THE LESSON

Presenting the development of industrial production, i.e., the transition from handicraft to manufacture and on to current car production by means of industrial robots and production islands, is a good lesson opener and motivation. A film such as Charlie Chaplin’s ‘Modern Times’, which presents assembly line work from a very specific perspective, could complement such a lesson opener. If an interdisciplinary approach in combination with (economic) history is adopted, the students will get an insight into the development of productivity. At the same time, students will realize that assembling a car single-handedly is impossible today: division of labour and specialization are fundamental prerequisites for the mass production of automobiles.

The second step is a paper plane construction contest. Who can produce the best paper plane? Who is fastest? How long does it take? Of course, other simple objects can be produced as well. Paper planes, however, are particularly suitable because they require very little material (a piece of waste paper or spare photocopies, which are easily available in schools), they are easy to fold (no scissors, glue or other material to fix component parts are required) and students can discuss the best folding instructions. Google offers 12,600 sites related to paper plane construction, one of which, for example, shows a youtube video: http://www.tippsundtricks24.de/heimwerken/do-it-yourself/anleitung-papierflieger-saebelzahntiger-basteln.html. Furthermore, there is a simple quality check for every paper plane: a test flight! At this point it seems useful to note that students should decide on a code of conduct, for example, how to cooperate with others, how to use the finished products, and where in the classroom to establish the designated zone for testing the paper planes.
After evaluating the results, the teacher sets an additional assignment: How can we all together produce the largest number of paper planes in the shortest possible time? Imagine we want to produce and sell them. The knowledge gained in the initial phase of the lesson suggests that the key to success is division of labour. However, another important precondition needs to be

A POSSIBLE WAY TO OPTIMIZE PRODUCTION PROCESSES

addressed first. What might an optimal design look like? What is optimal under the given circumstances? First, we need to set appropriate criteria. Such criteria might include:

1. The paper plane needs to be capable of flying.
2. It should look good.
3. Folding it should be easy.
4. The production requires only folding, no gluing.

From this, the first conclusions can be drawn, which can in turn serve as arguments for such criteria. For example, crumpled paper is not effective because it neither looks nice nor gives the plane the desired flight qualities. How do the students in the class find the optimal design? Let's organize a competition. Groups of three develop a proposal. All proposals are gathered, presented in class and discussed. Finally, students try to reach a consensus and jointly agree on one proposal (however, nobody is allowed to vote for their own proposal). In order to illustrate the suggested lesson, I have decided to describe the following part of the lesson by means of a specific model of a paper plane, using the internet, which offers a large number of step-by-step instructions as well as videos on youtube, for example this one: http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded. My understanding is that the paper plane model 'arrow' meets the four criteria mentioned above. Ultimately, however, I have chosen this model because it requires a relatively small number of work steps. The video, which runs 1 minute and 26 seconds, is considerably shorter than other videos, which usually run for three minutes. This new criterion (length of the video) is mentioned here to indicate that considerations taken in the initial phases of the lesson might need revision in the course of the project. Let's consider the chosen paper plane in more detail (Figure 1):

Figure 1. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded

MAASZ

The third step is a test run: How many paper planes of this type can we produce together in five or ten minutes? The test run will probably raise a problem of quality control: Do the folded paper planes actually resemble the model? At this point we have reached the question of improving the speed and quality of the production process. That is, we are back to organization and division of labour. Let's do a simple test in class. After having agreed on a model and construction plan, we fold as many planes as possible for about ten to fifteen minutes. Then we plan the division of labour: every student makes one fold and hands the paper over to the next station, where the next fold is made, until the plane is finished at the last workstation. Then another test run and we're done! In order to plan the division of labour, a precise analysis of the instructions is needed. I will demonstrate this using the 'arrow' model. We'll watch the video carefully, take a piece of paper and fold along with the video.

1) Take a piece of paper.
2) Fold down the centre.
3) Make sure the crease is straight and sharp.
4) Open it out again.
5) Fold in the top left hand corner to the centre line.

Figure 2. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded

6) Fold in the second top corner to the centre line (Figure 2).
7) Fold the first corner again.


Figure 3. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded

8) Fold the second corner again.
9) Fold the two sides again along the centre line (Figure 3).
10) Indicate the web width.
11) Fold the left wing (Figure 4).
12) Fold the right wing.

Figure 4. Screenshot from http://www.youtube.com/watch?v=y5ebnviXegc&feature=player_embedded


13) Fold the edge of the left wing.
14) Fold the edge of the right wing.
15) Flight test.

Why does this plane fly but others don't? Or at least not as well as this design? At this stage, I would like to point out that interdisciplinary and cross-curricular lessons in cooperation with physics might be useful. We could, for example, investigate why we should fold the front section of the paper more than once.

The lesson is now beginning to get exciting. Once the division of labour has been discussed and organized by the students (who is sitting where, doing what?), another five-minute production phase follows. Can we produce a larger number of airworthy planes?

In step four, the optimization of the production based on division of labour begins. How can we become even faster and more efficient? This question is raised every day in every real assembly hall. We are looking for answers and find many of them on the internet. As we are talking about mathematics lessons here, I have chosen those approaches which are strongly connected to mathematics. (Motivation or pressure would be examples of alternative approaches.) I am going to focus on Multi Moment Recording here, a process to investigate how long every work step takes in relation to the total labour time. Multi Moment Recording is a sampling procedure to determine the frequency of occurrence of predefined phenomena. A number of momentary observations of work systems are collected without involving the observed person in an active way, for example by asking for information or interrupting in other ways. The organization manual of the German Federal Ministry of the Interior describes the Multi Moment Method (http://www.orghandbuch.de/nn_414926/OrganisationsHandbuch/DE/6__MethodenTechniken/61__Erhebungstechniken/616__Multimomentaufnahme/multimomentaufnahme-node.html?__nnn=true).
I translate and paraphrase it as follows: the Multi Moment Method is a statistical method for finding out how often the different parts of a working process occur. This is done by observing the working process after defining and separating single work steps. If the work in progress is observed sufficiently often, this gives an accurate picture of the working process, which is what you need in order to optimize it. This organisation manual provides a process description which can be used for a classroom research design based on the Multi Moment Method to investigate the production of paper planes. The following plan could be implemented: the class is divided into several groups, most of which produce paper planes. Two groups conduct Multi Moment Recordings of the ongoing production by describing the various work steps in detail. Then the students create an observation plan, define the number of observations and plan the observation tour, i.e., a list of times at which the various work stations will be observed. What does that mean in our example, i.e., the production of 'arrow' paper planes? I have identified and listed a total of 15 work processes. It seems obvious to take


these 15 processes as a starting point. One question that arises fairly quickly is how to record the handing over of the unfinished planes. Let's assume that the workstations are arranged in a line (the tables should be arranged in a row, if possible). In this way, the planes can readily be handed over to the adjacent workstation. The usefulness of this arrangement becomes obvious when the distances between two tables are increased so that transporting a plane to the nearest workstation takes at least 10 seconds: 14 transports would amount to 140 seconds, or 2 minutes and 20 seconds of transit time. This is significantly longer than the time needed to fold a plane. We would like to keep the experiment rather simple and assume a transit time of 0 seconds: all students working at the assembly line simply hand the plane over to the next student. However, another problem will soon become apparent: not all processes take equally long. The station that takes longest will cause a blockage. Which one is it? Are the differences so great that we need to provide intermediate storage facilities or a different plan of work steps? We are going to solve this problem by recording the video time:

Process number                                      Time in seconds
 1) Take a piece of paper                            6
 2) Fold down the centre                             7
 3) Make sure the crease is straight and sharp       3
 4) Open it out again                                2
 5) Fold in the top left hand corner                 6
 6) Fold in the second top corner                    7
 7) Fold the first corner again                      6
 8) Fold the second corner again                     9
 9) Fold the two sides along the centre line         6
10) Indicate web width                               1
11) Fold the left wing                               6
12) Fold the right wing                              8
13) Fold the edge of the left wing                   5
14) Fold the edge of the right wing                  5
15) Flight test                                     10 (?)
Total:                                              77 seconds

plus 10 seconds testing time for the trial run. The remaining time, adding up to the total length of the video, is used to present the current state of production. Now it's about time for a moment of reflection. The various work steps obviously take different amounts of time. Steps 4 and 10, for example, are relatively short. In addition, even similar work steps differ in the required amount of time, which may also be due to measurement error (imprecise observation). I will therefore merge some work steps in order to improve the conditions for the first attempt at Multi Moment Recording. The merging of individual work steps aims to approximate equal production times for every work station.


Station (new)    Processes (old)    Duration
I                1 to 5             24 seconds
II               6 to 8             22 seconds
III              9 to 12            21 seconds
IV               13 to 15           20 seconds

In this first attempt I have merged subsequent processes in order to obtain work phases of roughly equal length. I hope that the differences remain small and suggest that the students have a first trial run. This corresponds with the standard procedure of modelling: the model assumptions can easily be modified or refined in a second attempt (Siller, 2008). For step five we need more information about the method and procedure of Multi Moment Recording. We have four stations and plan to measure the proportion of working time at each station in relation to the total working time. This relative proportion should be determined with precision e with a probability of 95% (that is, a 95% confidence interval with a range of 2e). How many measurements (observations) do we need? The following formula has been found:

n = z² · p · q / e²   (with q = 1 − p)

(Source: http://bios-bremerhaven.de/cms/upload/Dokumente/Erfa-PBE/mma_methode.pdf)

At this point we have arrived at a very important decision:
– If we teach as usual, we now start to analyze this formula and turn this into a statistics lesson.
– If we behave more like people in reality do, we run some plausibility tests on this formula and use it afterwards.
– If we do what most people in reality do, we simply believe that this formula is correct and use it.
I ask you to choose the third way: teaching real-world mathematics should include this way, too. This is not an argument for always doing so. My didactical argument is that students should also learn at school how mathematics is actually used in the real world. This will give them a better view of mathematics itself and of the relation between mathematics and the world around them.
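Before going deeper into the mathematics, the sampling idea itself can be tried out in a short simulation. The sketch below (a minimal illustration, not part of the cited manual; the function names and the uniform random choice of observation moments are my assumptions) uses the step durations measured from the video and checks that random momentary observations recover the time proportions:

```python
import random

# Durations of the 15 work steps of the 'arrow' plane (seconds, from the video)
DURATIONS = [6, 7, 3, 2, 6, 7, 6, 9, 6, 1, 6, 8, 5, 5, 10]
TOTAL = sum(DURATIONS)  # 87 seconds per plane

def step_at(t):
    """Index (0-based) of the work step active at second t of the folding cycle."""
    t = t % TOTAL
    for i, d in enumerate(DURATIONS):
        if t < d:
            return i
        t -= d

def multi_moment(n_obs, seed=1):
    """Observe the cyclic process at n_obs random moments; return estimated proportions."""
    rng = random.Random(seed)
    counts = [0] * len(DURATIONS)
    for _ in range(n_obs):
        counts[step_at(rng.uniform(0, TOTAL))] += 1
    return [c / n_obs for c in counts]

estimates = multi_moment(10_000)
# With enough observations each estimate approaches duration/TOTAL,
# e.g. step 1 ('take a piece of paper'): 6/87, roughly 0.069.
```

With only a handful of observations the estimates scatter widely, which is exactly the question the formula above answers: how many observations are enough for a given precision.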


Excursion 1: Some Hints About the Mathematical Background

For all teachers who prefer to go into a statistics lesson, I will now give some hints about the background of this formula. Where does this formula come from and what does it consist of? It is a formula from probability theory, or more precisely, from the normal distribution. Let's take a specific work process or a work station (I will use the term work process from now on). Suppose we want to determine the current percentage of time that a specific work process takes. How many times should we observe the process in order to be 95% confident that the estimated (sample) proportion is within e percentage points of the true proportion of time that this work process takes? To answer this, we make random observations of the construction process of n different planes and check whether what we observe is the process we have defined as our focus or a different one. Let's define the variables:
– p … the true proportion of time a specific work process takes in relation to the whole production (= the probability of success in a single trial)
– q = 1 − p
– n … the total number of observations (number of repeated trials)
– X … random variable, counting the number of times out of n observations we find the paper plane in this particular process
– The random variable X is a binomial random variable with parameters n and p: X ~ B(n, p)
– X is approximately normally distributed; that is, B(n, p) is approximated by the normal distribution with mean E(X) = n·p = μ and standard deviation σ(X) = √(n·p·q)
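The normal approximation can be checked empirically. The following sketch (the sample sizes are arbitrary illustrative choices of mine) draws many binomial proportions and compares their spread with √(p·q/n):

```python
import random
from math import sqrt

n, p = 400, 0.069      # observations per experiment, true proportion (illustrative)
trials = 2000          # number of repeated experiments
rng = random.Random(42)

# Each experiment: count successes in n Bernoulli(p) observations, form X/n
props = [sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)]

mean = sum(props) / trials
sd = sqrt(sum((x - mean) ** 2 for x in props) / trials)
# mean is close to p = 0.069 and sd close to sqrt(p*q/n), about 0.0127,
# as the normal approximation predicts
```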

Confidence intervals can be calculated for the true proportion; the underlying distribution is binomial. We estimate p with 95% confidence (a 95% confidence interval). For ρ = 0.95 it follows that α = 1 − ρ = 0.05, so we get the area

1 − α/2 = 0.975

For a value of 0.975 the z-score is equal to 1.96. 1.96 is the approximate value of the 97.5 percentile point of the normal distribution. 95% of the area under a normal curve lies within roughly 1.96 standard deviations of the mean. This number is therefore used in the construction of approximately 95% confidence intervals.
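The value 1.96 can be verified directly, for example with Python's standard library:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # 97.5th percentile of the standard normal
coverage = 2 * NormalDist().cdf(z) - 1
# z is approximately 1.96 and coverage approximately 0.95
```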



To form an estimated proportion, we take the random variable X and divide it by n, the number of trials (observations); this gives an approximation for p. If we divide the random variable X by n, its mean by n, and its standard deviation by n, we obtain the normal distribution of proportions. The random variable is this estimated proportion:

(1/n)·X

For a proportion, the mean is

E((1/n)·X) = (1/n)·E(X) = (1/n)·n·p = p,   denoted E((1/n)·X) = p = μ̂

For a proportion, the standard deviation is

σ((1/n)·X) = (1/n)·σ(X) = (1/n)·√(n·p·q) = √(p·q/n)

where σ(c·X) = c·σ(X) for c ≥ 0, denoted

σ((1/n)·X) = √(p·q/n) = σ̂

Thus (1/n)·X approximately follows a normal distribution for proportions:

(1/n)·X ~ N(p, p·q/n)

For a confidence level of ρ = 95%, the confidence interval has the form

[μ̂ − σ̂·z , μ̂ + σ̂·z]

with

P(μ̂ − σ̂·z ≤ (1/n)·X ≤ μ̂ + σ̂·z) = Φ(z) − Φ(−z) = 2·Φ(z) − 1 = 1 − α = ρ

The range of the interval is 2·σ̂·z. We want the estimated proportion to deviate from μ̂ by at most e, so we require

σ̂·z = e


Solving for n gives us an equation for the sample size:

σ̂·z = e
√(p·q/n)·z = e
√(p·q/n) = e/z
p·q/n = e²/z²
n = z²·p·q/e²

For a confidence level of 95%, z = 1.96:

n = 1.96²·p·q/e²
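The sample-size formula can be wrapped in a small helper (a sketch; the function name is my own) that reproduces the numbers used in the next excursion:

```python
from statistics import NormalDist

def sample_size(p, e, confidence=0.95):
    """Observations needed so the estimated proportion lies within e of p."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95%
    return z ** 2 * p * (1 - p) / e ** 2

# p = 6/87 (about 0.069, work process no. 5); e = 5%, 10% and 1%:
n_5 = sample_size(0.069, 0.05)   # about 98.7
n_10 = sample_size(0.069, 0.10)  # about 24.7
n_1 = sample_size(0.069, 0.01)   # about 2468
```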

This gives us a sample size n large enough that we can be 95% confident that the estimated (sample) proportion is within e percentage points of the true proportion of time that a specific work process takes.

Excursion 2: Some Ideas About the Plausibility of the Formula

Did we find a suitable formula? How can we examine the formula in more detail? We will study the formula with a software program. As we are working with percentages, we take p, the true proportion of time a specific work process takes in relation to the whole production, times 100:

100·p = x


We plot the graph to see how the number of observations changes as p increases from 0 to 1, i.e., as x increases from 0 to 100 (we know, of course, that less than 0% or more than 100% would not make sense):
1st axis … percentage x
2nd axis … number of observations

We obtain a parabola with its maximum at p = 50%. If a work process takes nearly all or nearly none of the time, the number of needed observations shrinks to zero (p = 0% or p = 100%). This is obviously correct: if a work process takes none or all of the time, we don't need any observations. What effect does our demand for more or less accuracy of measurement have? For example, in our video we measured a time of 6 seconds for work process number 5. 6 out of 87 seconds is 6.9%, so p = 0.069. For the accuracy of measurement we demand a 5% margin of error (or 10%, or 1%); that is, e = 0.05 (or 0.1 or 0.01). Now we use these numbers in our formula:
n = 1.96² · 0.069 · (1 − 0.069) / 0.05² = 98.7 (for an accuracy of measurement of 5%)
n = 1.96² · 0.069 · (1 − 0.069) / 0.1² = 24.68


(that is, 25 for an accuracy of measurement of 10%)
n = 1.96² · 0.069 · (1 − 0.069) / 0.01² = 2468 (for an accuracy of measurement of 1%)

We plot the graph again, choosing p = 0.069 and letting e vary:
1st axis … accuracy of measurement e
2nd axis … number of observations

Again, our graph seems plausible: if we ask for greater accuracy, the number of needed observations grows, without bound. We end our excursion with the conclusion that our formula seems plausible. Ad 3) Back to step five: What now? Almost 2500 observations in one lesson, that's impossible. Even 25 observations, which allow a precision of only 10%, are quite a lot. It seems we have to plan more precisely! How long does an observation take? We have to consider what is to be observed and in which manner. One group of students should plan observations of the work stations. If, for example, there are 26 students in a class working in six groups of four (one student per station), 2 students remain to make observations from a suitable position. One of the two observes and dictates (for example, "group 6, station III") and the other


one takes notes. In a class of 28 students, two teams can make observations independently of each other. If more teams observe simultaneously, higher precision can be achieved. Furthermore, the interval between two observations should be long enough to allow the completion of one paper plane. However, as the brevity of one lesson does not allow this, we will choose a shorter interval. At the end of step five we make a first attempt: paper plane production and observations. This means 10 minutes of production with an observation every 10 seconds (i.e., 6 times per minute), 6 times 10 in total, which makes 60 observations, then discussion of the results. That's the plan. In reality, however, there will always be complications and errors, which still have to be taken into account. For the next step we obtain the following measurement report:

Number of measurement    Group    Station    Work process
1                        1        I          1
2                        2        I          2
3                        3        II         6
4                        4        II         7
...
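Conversely, the sample-size formula can be inverted to ask what precision the 60 planned observations actually buy (a sketch under the same assumptions as before; the function name is mine):

```python
from math import sqrt
from statistics import NormalDist

def precision(p, n, confidence=0.95):
    """Half-width e of the confidence interval achievable with n observations."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * sqrt(p * (1 - p) / n)

e_60 = precision(0.069, 60)
# e_60 is about 0.064: with 60 observations a proportion of 6.9% is only
# pinned down to roughly plus/minus 6.4 percentage points, consistent with
# the earlier remark that even 25 observations allow a precision of only 10%.
```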

In step six, the results of the first measurement will be analyzed. For the purpose of this chapter I have compiled the following table and chosen the values in such a way that they can be used effectively in the next step.

Work process number                      Duration in      Percent    Percent
                                         seconds (video)  (video)    (measured)
 1 Take a piece of paper                  6                8          7
 2 Fold down the centre                   7                8          8
 3 Sharpening the crease                  3                3          2
 4 Re-opening                             2                2          2
 5 Fold the first corner                  6                7          6
 6 Fold the second corner                 7                8          8
 7 Fold the first corner again            6                7          9
 8 Fold the second corner again           9               10         10
 9 Fold up the two sections               6                7          8
10 Determine web width                    1                1          3
11 Fold the left wing                     6                7          8
12 Fold the right wing                    8                9          8
13 Fold the edge of the left wing         5                6          6
14 Fold the edge of the right wing        5                6          6
15 Flight test (not shown in the video)  10               11          8
TOTAL                                    87              100        100


What does the table indicate? The results correspond approximately to the values obtained from the video as well as to the expected values. Their meaning is in the detail. Step 10, for example, takes three percent of the total time instead of one. Such seemingly small differences can have severe effects on production. The measurement should be as precise as possible because the results will have implications for work organization and improvement of the output. If such measurement and reorganisation in real-life production can lead to an increase in profit of one or more percent, it is well worth the measuring effort. There are a number of options for the remaining lesson. One option is to end the lesson with the conclusion that the division of labour obviously results in an increase in output; production becomes faster and more efficient. To increase the efficiency it is important to know exactly how long each work process takes. One suitable method to find out the exact length of each working step in the process makes use of statistics and is called Multi Moment Recording. If the students decide on yet another step (step seven), the question of how this instrument is used in practice to determine labour time could serve as the motivation for this step. The starting point for such considerations, using internet research (discussion forums), could be the question of how to deal with breaks during work. In our example, breaks have been ignored so far; everyone works as fast as possible, right? However, everyone knows that in real life breaks are essential – and there is often disagreement about when and how long breaks should be. I would like to include these considerations in our suggested lesson. One possible option is to take a 30 (or 10) second break after completion of every single paper plane. Thus, work process number 16 is a 30-second break. This will probably be reflected in another round of measurement. Let's check the paper plane production in class. What's going to happen?
As before, the work processes are measured again, this time including breaks. One thing is crucial: the workers whose work processes are measured can influence the measurement results by intentionally taking (or not taking) a break exactly while being measured. However, this kind of intentional influence is possible only if the time of measurement is known or predictable. What can be done if this form of influence is to be avoided? After all, the purpose of this measurement is to increase the efficiency of production. Do you think students will come up with the idea of random measurement? This idea does not seem too far-fetched – it is one of many opportunities for students to discover and try out for themselves. How can this idea be realized? Obviously, the moments of measurement play an important role. If measurement takes place at irregular intervals, chance comes into play: the moments of measurement will be determined by a random generator (such as a die). If initially observations were made every 10 seconds, you can try the following this time: throw a die and alternately add the score to and subtract it from 10. Furthermore, we agree to take the value 10 every time the die shows a 6. In this way we obtain a sequence of numbers such as 12, 8, 15, 11, ... Now the class can discuss, try out and decide. Finding a series of random measurement moments can be exciting. Will the measurement of work processes be effective


this time? Will the measurements be accurate? And to what extent will attempts to influence the process actually be restricted? The practice of real-life measurement in manufacturing shows that random variables are indeed used for this purpose. This provides a further argument for reflection and discussion in class. In the final, eighth step, the entire teaching sequence ought to be reflected upon. What have we learned? Which questions remain unanswered? What follows from it? In any case, students have gained a brief insight into the real working world. The paper plane production has given the students some understanding of the historical development of the production of goods, ranging from manual production via manufacture to assembly line production supported and optimized by mathematics. Perhaps students or parents will have critical questions concerning this interdisciplinary and cross-curricular mathematics lesson: Is this really mathematics? There are hardly any calculations! My answer is YES! Mathematics is much more comprehensive than just solving a given arithmetic problem; it is a means of describing and shaping the world. Mathematics can help to organize and communicate interrelations (see R. Fischer). I consider it essential that mathematics lessons at school address these socially relevant aspects of mathematics. The application of mathematics in social contexts clearly shows that it is not at all neutral and value-free, particularly with respect to the selection of aspects to be modelled, the setting of goals, and the application of the results (Maaß 1990 and 2008). Maybe the reflection in class takes a different course and there is disagreement on whether a production process should be optimized in the first place. From my point of view, increases in productivity and in the number and quality of manufactured goods per time unit are the basis for growing social prosperity and wealth.
How this prosperity is distributed and who benefits from it is a different question. The evaluation of this increase in productivity will depend on whether or not somebody benefits from it. Those who believe that this kind of progress can only be achieved at the expense of personal health (due to the high intensity of labour or the kind of work itself) or through staff reduction resulting in unemployment will certainly view this increase in productivity negatively.

REFERENCES

http://en.wikipedia.org/wiki/Methods-time_measurement
http://www.rsscse.org.uk/ts/
Fischer, R. (2007). Technology, mathematics and consciousness of society. In U. Gellert & E. Jablonka (Eds.), Mathematisation and demathematisation: Social, philosophical and educational ramifications (pp. 67–80). Rotterdam: Sense Publishers.
Fischer, R. (2006). Materialisierung und Organisation. Zur kulturellen Bedeutung der Mathematik. Wien/München: Profil.
Maasz, J. (1990). Mathematische Technologie = sozialverträgliche Technologie? Zur mathematischen Modellierung der gesellschaftlichen “Wirklichkeit” und ihren Folgen. In R. Tschiedel (Hrsg.), Die technische Konstruktion der gesellschaftlichen Wirklichkeit. München: Profil-Verlag.


Maasz, J. (2008). Manipulated by mathematics? Some answers that might be useful for teachers. In V. Seabright et al. (Eds.), Crossing borders – research, reflection and practice in adults learning mathematics. Belfast/Limerick.
Siller, H.-St. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. Aachen: Shaker Verlag. ISBN: 978-383227211.

A. Univ. Prof. Univ. Doz. Dr. Juergen Maasz
Universitaet Linz
Institut fuer Didaktik der Mathematik
Altenberger Str. 69
A-4040 Linz


JUERGEN MAASZ AND HANS-STEFAN SILLER

14. MATHEMATICS AND EGGS

Does this Topic Make Sense in Education?

INTRODUCTION

Eating “Easter eggs” is something many young people like very much. When we enjoyed eating eggs at Easter in the year 2010, we started to think about eggs from a mathematical point of view: eggs are very simple objects, but it is not simple to calculate the measures of interest. What is the volume and what is the surface area of a given egg? This is not typical mathematics lesson content in Austrian schools. Our first approach to calculating an egg was a “scientific” one. We went to the library and found a very nice book by Hortsch (1990) with a lot of formulas. We tried to understand them, and then we used them. This is the content of the first part of this chapter. The second part of the chapter gives a much easier approach to a more practical solution. We started with the old and simple idea of Cavalieri: we divided the egg into several slices and made a model of the volume and the surface area. Each slice is similar to a cylinder, and the tower of several cylinders approximates the egg. With the help of a spreadsheet, we think that younger students should be able to find a good approximation for the volume and the surface area. Our first question is: Why should students be motivated (and not forced) to do this? Thinking about motivation led us to some research on the internet about the biggest eggs and other events involving eggs. This is the third part of our chapter, answering didactical questions around our proposal to learn modelling by looking at objects of daily life, like eggs.

LEARNING MATHEMATICS AND MODELLING BY CALCULATING EGGS - SOME ARGUMENTS FROM MATHEMATICS EDUCATION

Modelling objects of daily life is a topic which could be a motivating and fascinating access point to mathematics education. For this reason this chapter presents one such possibility: how an object which is common to all students in school, the egg, can be an item for an exciting discussion in schools. Based on two mathematical definitions a figure is constructed; from these, descriptions in polar coordinates and implicit Cartesian equations are developed. This stepwise modelling cycle shows how the modelling of daily objects could be done in class while satisfying the demands of the curricula. The concept of modelling in education has been discussed for a long time. It is a basic concept in all parts of science and in particular in mathematics. This concept is a well-accepted fundamental idea (cf. Siller, 2008); in this case the preliminary definition

J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 239–256. © 2011 Sense Publishers. All rights reserved.

MAASZ AND SILLER

of Schweiger (1992) is used: “A fundamental idea is a bundle of activities, strategies or techniques, which
1. can be shown in the historical development of mathematics,
2. are sustainable in ordering curricular concepts vertically,
3. offer ideas for the question of what mathematics is and for communicating about mathematics,
4. allow mathematical education to be more flexible and clear,
5. have a corresponding linguistic or activity-based archetype in language and thinking.”
Therefore it is not remarkable that the concept of modelling can be found in many different curricula all over the world. For example, in the mathematics curriculum for Austrian grammar schools one can find many references to it (cf. BMUKK, 2004). Interpreting those references, modelling can be seen as a process-related competence. That means:
– translating the area or situation to be modelled into mathematical ideas, structures and relations,
– working with a given or constructed mathematical model,
– interpreting and testing results.
Such process-related competencies have been described by many people, e.g. Pollak (1977), Müller and Wittmann (1984), Schupp (1987), Blum (1985). With regard to all the developments in modelling, Blum and Leiß (2007) have designed a modelling cycle which takes a more cognitive point of view.

PROBLEMS IN THE REALM OF STUDENTS’ EXPERIENCES IN MATHEMATICS EDUCATION

Problems of real life, like problems related to the environment, sports or traffic, are often a starting point for calculations and applications of mathematics. But before using mathematics in such fields it is necessary that the problem is well understood. This requires a lot of time and dedication, because it is necessary to translate the problem from reality to mathematics and back to reality. Therefore models are used as an adequate description of the given situation. Modelling problems based on students’ experiences means creating a picture of reality which allows us to describe complex procedures in a common way. Creating such an image has to observe two directions, as Krauthausen (2003) suggests:
– using knowledge for developing mathematical ideas,
– developing knowledge about reality by its reliance on mathematics.
If such problems are discussed in mathematics education, it is possible that students will be more motivated towards mathematics. But there are many other arguments why such problems should be discussed. They
– help students to understand and cope with situations in their everyday life and environment,
– help students to achieve necessary qualifications, like translating from reality to mathematics,

MATHEMATICS AND EGGS

– help students to get a clear and straightforward picture of mathematics, so that they are able to recognize that this subject is necessary for living,
– motivate students to think about mathematics in an in-depth manner, so that they can recall important concepts even if they were taught them a long time ago.
A teacher who is concerned with the listed points will be able to find a lot of interesting topics to discuss with students. As a useful example we want to present a problem in the realm of students' experiences by observing an egg.

THE EGG - MOTIVATION AND STARTING EXAMPLES

The teaching and learning of mathematics works much better if students are intrinsically motivated to learn. Real-world problems should bring this type of motivation into the classroom. So we start with a little collection of real-world questions concerning eggs. You will find the answers at the end of this chapter, presented as examples for school. An easy approach is to look at eggs made of chocolate. If we get a chocolate egg that seems to be as big as a hen's egg - how much chocolate is it made of? Is it cheaper to buy a normal chocolate bar instead? What is the price of the same type of chocolate in different forms? If teachers and students open their ears and eyes for "egg" questions they will find many of them. Here is one we heard in the news. A German company produces "Children Surprise Eggs" (Figure 1). Inside each of these eggs is a hidden toy, the surprise. The shell consists of chocolate.

Figure 1. “Children Surprise Egg”.

How much chocolate is the shell made from? If we compare the costs of chocolate and toys: is it a good idea to buy such surprise eggs? A lot of real-world problems concerning eggs that are good for motivation in class can be found in the daily media and on the internet. For example: What is the biggest egg in the world? (Figure 2) Where can you find it? Is there any mathematics involved in its construction?

MAASZ AND SILLER

Figure 2. Biggest Easter-egg 2008 (http://www.vol.at/news/tp:vol:special_ostern_aktuell/artikel/das-groesste-osterei-der-welt/cn/news-20080312-03572171)

Another interesting question might be the following: What is the biggest egg made of glass? (Figure 3) What is its weight? Is it solid?

Figure 3. Biggest egg made of glass (http://www.joska.com/news/news_ostern_2008)

THE EGG - STARTING POINTS FOR CALCULATIONS

If we have a closer look at an egg, we will see that its shape is very harmonious and impressive. Considering hen's eggs it is obvious that they all share essentially the same shape. Because of this fascinating shape we tried to think about a method to describe the shape of such an egg with mathematical methods. Searching the literature we found material by Münger (1894), Schmidt (1907), Malina (1907), Wieleitner (1908), Loria (1911), Timmerding (1928) and Hortsch (1990). The book by Hortsch is a very interesting summary of the most important results on 'egg-curves'. He also finds a new way of describing egg-curves by experimenting with known parts of 'egg-curves'. The approaches used by these authors to derive the 'egg-curves' are fascinating. But none of the above listed authors has thought


about a way to create such a curve using elementary mathematical methods. The ways in which these authors describe such curves are not suitable for mathematics education in schools. So we thought about a way to find such curves with the help of concepts well known in education. Our first starting point is a quotation found in Hortsch (1990): "The located ovals were the (astonishing) results of analytical-geometrical problems inside of circles." Another place to start is a definition of 'egg-curves' given by Schmidt (1907) and presented in Hortsch (1990).

Definition 1

Schmidt (1907) writes: "An 'egg-curve' can be found as the geometrical position of the base point of all perpendiculars to secants, dropped from the intersection points of the abscissa with the bisectrix, which divides the (obtuse) angle between the secant and the parallel line through the intersection point of the secant with the circle circumference in halves. The calculated formula is r = 2⋅a⋅cos²ϕ or (x²+y²)³ = 4⋅a²⋅x⁴."

In education the role of technology is more and more important. Different systems, like computer algebra systems (CAS), dynamic geometry software (DGS) or spreadsheets, are now commonly used in education. With the help of technology it is possible to draw a picture of the given definition immediately. In the first part we use a DGS because with its help it is possible to draw a dynamic picture of the given definition (Figure 4). First of all we construct one point P of such an egg-curve as given in the definition.

Figure 4. Translating the definition of Schmidt to the DGS.

According to the given instructions we first construct a circle (centre and radius arbitrary) and then the secant from C to A (points arbitrary). After that a parallel line to the x-axis through the point A (= intersection point of secant and circle) is drawn, and the line that bisects the angle CAD can be determined, which crosses the x-axis.


So we get the point S. Now we are able to draw the perpendicular to the secant through S. The intersection point of the secant and the perpendicular is called P and is a point of the 'egg-curve'. We activate the "Trace on" function and use the dynamic aspect of the construction. By moving A along the circle the 'egg-curve' is drawn as Schmidt has described it. This can be seen in the following figure (Figure 5):

Figure 5. Egg-curve constructed with the DGS.
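For readers who want to check the construction numerically rather than in a DGS, here is a small Python sketch (our own, not part of the original material). It uses the fact, derivable from the alternate angles at the bisector, that S lies on the x-axis with CS = CA:

```python
import math

def schmidt_egg_point(a, phi):
    """One point P of Schmidt's egg-curve for a circle of radius a
    (centre (a, 0), C in the origin) and secant angle phi at C."""
    ca = 2 * a * math.cos(phi)             # chord length CA
    # S on the x-axis with CS = CA (where the angle bisector meets the axis)
    S = (ca, 0.0)
    # P = foot of the perpendicular from S onto the secant through C and A
    u = (math.cos(phi), math.sin(phi))     # unit vector along the secant
    t = S[0] * u[0] + S[1] * u[1]          # projection length = CP
    return (t * u[0], t * u[1])

a = 1.0
for phi in (0.3, 0.7, 1.1):
    P = schmidt_egg_point(a, phi)
    r = math.hypot(P[0], P[1])
    # the constructed point satisfies the closed form r = 2*a*cos^2(phi)
    assert abs(r - 2 * a * math.cos(phi) ** 2) < 1e-12
print("construction matches r = 2*a*cos^2(phi)")
```

Sampling ϕ densely and plotting the resulting points reproduces the traced curve of Figure 5.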

Now we have to find a way to derive the formulas r = 2⋅a⋅cos²ϕ or (x²+y²)³ = 4⋅a²⋅x⁴ mentioned above. Let us start with the following figure (Figure 6), which shows the picture constructed through the given definition:

Figure 6. Initial situation for calculating the equations of the egg-curve.


We know (from the construction) that the triangle CPS is right-angled. Furthermore we recognize that the distances CP and PS are the same and that the triangle CAB is also right-angled, because it is inscribed in a semicircle. This is shown in Figure 7 (where the real 'egg-curve' is shown as a dashed line).

Figure 7. Similarity of the triangles.

From the known points C (0, 0), A (x, y) and B (2⋅a, 0) and through the instructions provided, the coordinates of the points S and P can be calculated. Only a little bit of vector analysis is necessary, and the calculation itself can be done with a CAS. First of all we have to define the points and the direction vector of the bisecting line w:



Now we calculate the equation of the normal form of the bisecting line, intersect it with the x-axis and define the intersection point S.

Then we calculate the intersection point P of the secant and the perpendicular through S.

Now all the important elements for finding the 'egg-curve' are calculated. Let's have a closer look at Figure 7 again. It is easy to recognize that there are two similar triangles – triangle CPS and triangle CAB. The distance CP is r and the radius of the circle is a, so the distance CB = 2⋅a. The other two distances which are needed are CA and CS. The (calculated) length of CA is √(x²+y²), and since the triangle ACS is isosceles (the bisected angle at A equals the alternate angle at S), CS = CA. In a next step the similarity theorem (of triangles) can be applied:

r : √(x²+y²) = √(x²+y²) : 2⋅a

Transforming this equation produces 2⋅a⋅r = x²+y². By using the characteristic of the right-angled triangle CAB and calling ϕ the angle ACB we know that

cos ϕ = √(x²+y²) / (2⋅a).

Transforming this equation yields √(x²+y²) = 2⋅a⋅cos ϕ. Inserting this connection in the equation above yields another equation, 2⋅a⋅r = 4⋅a²⋅cos²ϕ. Simplifying this equation by cancelling common terms gives the result r = 2⋅a⋅cos²ϕ. By substituting r = √(x²+y²) and cos ϕ = x/√(x²+y²) it is possible to get the implicit Cartesian form mentioned in the definition:

(x²+y²)^(3/2) = 2⋅a⋅x²

respectively

(x²+y²)³ = 4⋅a²⋅x⁴

As we have seen, the 'egg-curve' is modelled through elementary mathematical methods. By using technology teachers and students get the chance to explore such calculations through the pivotal concept of modelling.

Definition 2

Another approach for constructing an egg-curve is formulated by Münger (1894). He writes: "Given is a circle with radius a and a point C on the circumference. CP1 is an arbitrary position vector, P1Q1 the perpendicular to the x-axis, Q1P the perpendicular to the vector. While rotating the position vector around C, point P describes an egg-curve. The equation of this curve is r = a⋅cos²ϕ, or in Cartesian form (x²+y²)³ = a²⋅x⁴."

As stated in the construction instructions, a circle (radius arbitrary) and a point C on its circumference are constructed. Then an arbitrary point P1 on the circumference of the circle is chosen. The perpendicular to the x-axis is constructed through P1, and its intersection point with the x-axis, Q1, is found.


After that the perpendicular to the secant CP1 through Q1 is constructed. All these facts are shown in Figure 8:

Figure 8. Translating the definition of Münger to the DGS.

If the point P1 is moved around the circle, P will move along the 'egg-curve'. Again this is easier to see if the "Trace on" option is activated (Figure 9).

Figure 9. Egg-curve constructed with the DGS.

The formula given by Münger can be derived in a similar way to the first one. The most important fact to notice is that in this picture two right-angled triangles CPQ1 and CP1A exist. Those triangles are similar (Figure 10).


Figure 10. Similarity of the triangles.

The coordinates of the points can be found mentally – without any calculation: C (0, 0), P1 (x, y), A (2⋅a, 0), Q1 (x, 0). If the distance CP is called r, the coordinates of P are not needed. They can nevertheless be calculated analytically; for the sake of completeness we write them down: P (x³/(x²+y²), x²⋅y/(x²+y²)). Using the fact that both triangles are similar, the following equation is obvious:

r : √(x²+y²) = x : 2⋅a

Through elementary transformation, and substituting the term √(x²+y²) by 2⋅a⋅cos ϕ (which follows from the right-angled triangle CP1A), the following equation is found:

2⋅a⋅x⋅cos ϕ = 2⋅a⋅r

This result yields r = x⋅cos ϕ. Because of the assumption (in the calculation) that x is part of our circle – it is the x-coordinate of P1 – it can be substituted by x = a⋅cos ϕ, with a as the radius of the starting circle. So the formula of Münger (r = a⋅cos²ϕ) is found in polar coordinates. If the implicit Cartesian form is desired, another substitution has to be done. The result in this case is:

(x²+y²)³ = a²⋅x⁴


CALCULATING EGGS WITH COMPUTERS

Our first approach to calculating the volume of an egg is guided by our academic education. If we try to find a solution for a mathematical problem that we cannot remember or have not seen before, we will walk into a library and start looking for literature such as that of Narushin (2005) or Zhou et al. (2009). In the first part of this chapter we explained how useful this way can be – as in many other situations. We found a lot of formulas for calculating eggs that have been proved in the course of history. We tried to understand these formulas and to use them. Our mathematical knowledge was trained enough to search for these formulas (we knew what we were looking for and could decide which book or article would help us) and to understand the literature. On reflection, the approach outlined above follows the typical and traditional academic way of solving problems. This is great and useful - but it needs a lot of mathematical training to start it and to bring it to a successful conclusion. Continuing this reflection we started to look for an easier way, which we want to describe now. It starts with a basic idea going back to Bonaventura Cavalieri (1598–1647): we can find the volume of an object if we divide it into slices and add up the volumes of the slices. If the slices become thinner and thinner we enter a typical process that we know from analysis – the limit of this process should be the exact volume. Indeed this is true under certain conditions. Thinking about this we made a decision: we want to calculate the volume of an egg more or less exactly, but we do not want to go into an infinite process. In simple words, we want to answer the questions without analysis this time. How exactly do we need the volume of a hen's egg or an egg made of chocolate? Maybe 1 g of chocolate is good enough, maybe 1 mg? We will reach this degree of exactness after a while if we use a finite process.
Hence we divide the egg into some (or many) slices, and we know that we can count the number of slices. We can use a computer to add up the volumes of the slices. When this basic idea is clear (or found by the students themselves) the rest of the work is fun, with a little bit of a trial-and-error strategy. We will show this now.

Figure 11. Taking a photograph.


We would like to have a general idea about the volume at the beginning, so our first step is a simple method to estimate it. We take the chocolate egg and put it under water (Figure 11). We look at the scale on the wall of the container and see a volume of about 70 cm³. How is it possible to estimate the volume of a slice of the egg? It looks like a cylinder, but we know that we make a small mistake by saying so. Let us have a look at Figure 12:

Figure 12. A picture of an “egg-slicer”.

Do you know an egg slicer like the one in the picture? All slices together have the same volume as the egg had before we used the slicer. Now we have to find the volume of one slice and, in the next step, the sum of the volumes of all slices. A typical slice somewhere in the middle is similar to a cylinder. The volume of a cylinder is V = r²·π·h (r is the radius of the base circle; h the height of the cylinder). Now we have to find r and h. If we cut the egg with a knife into halves and put one half on a sheet of paper we can take a pencil and draw a line around that half. By adding some coordinates a picture like Figure 13 can be drawn:

Figure 13. Profile of an egg.

We decide to start with six slices along the x-axis, which is scaled in centimetres. Each slice should have a height of 1 cm. What is the radius r? We take a close look at


one slice. We select the second one, from 1 cm to 2 cm. Looking at its profile we see something like Figure 14:

Figure 14. Finding one slice.

Looking at this draft we see at least two lines that could be the radius, one on the left side and one on the right side. In the left portion of the egg the left radius is always shorter, whereas in the right portion the right one is shorter. Now we are facing an important point, a good chance for good ideas or inventions by the students. Students should learn to invent or discover new mathematics, and we propose to give them the chance to learn or practise it here. What are possible ideas? We show two main ways – both well known: one is known from the introduction of integrals (upper sum and lower sum), the other one is known as interpolation. In the first approach we take the bigger and the smaller radius and estimate their difference. As the slices become thinner the difference will fall until we have reached the degree of exactness we want. For the second idea we take the radius at the point in the middle between the left side and the right side; for the slice above, this is x = 1.5 cm. If the slices become thinner the radius in the middle will come closer and closer to the border radii. Again it is a question of the degree of exactness we want to have. Now we will explain both ideas with concrete numbers and results. We name the left radius r and the right radius R. Figure 15 shows the situation.

Figure 15. Looking for r and R.


Looking for concrete numbers for the length of the radius for cm 0 to 6 we come back to Figure 13. A spreadsheet is used to calculate the volumes of the cylinders and to sum them up (Table 1):

Table 1. Calculating first results

x     y (measured)   Smaller radius   Volume cylinders   Bigger radius   Volume cylinders
0.0   0.0            0.0              0.00               1.7             9.08
1.0   1.7            1.7              9.08               2.1             13.85
2.0   2.1            2.1              13.85              2.3             16.62
3.0   2.3            2.3              16.62              2.3             16.62
4.0   2.3            2.0              12.57              2.3             16.62
5.0   2.0            0.0              0.00               2.0             12.57
6.0   0.0            0.0              0.00               0.0             0.00
Sum                                   52.12                              85.36
Arithmetic middle: 68.74
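The spreadsheet computation can also be sketched in a few lines of Python (function names are our own), reproducing the lower and upper sums of Table 1:

```python
import math

def cylinder_sums(y, h=1.0):
    """Lower and upper sums of cylinder volumes for measured radii y,
    one slice of height h between consecutive measuring points."""
    lower = sum(math.pi * min(y[i], y[i + 1]) ** 2 * h for i in range(len(y) - 1))
    upper = sum(math.pi * max(y[i], y[i + 1]) ** 2 * h for i in range(len(y) - 1))
    return lower, upper

# radii measured at x = 0, 1, ..., 6 cm (Table 1)
y = [0.0, 1.7, 2.1, 2.3, 2.3, 2.0, 0.0]
lower, upper = cylinder_sums(y)
print(round(lower, 2), round(upper, 2), round((lower + upper) / 2, 2))
# → 52.12 85.36 68.74
```

Calling cylinder_sums with the shifted measurements of Tables 2 and 3 reproduces their sums in the same way.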

The result is really surprising! We made a first trial with little effort and little exactness, and we got a result that is near the empirical result we obtained with the measurement of water displacement at the beginning (68.74 is near 70.00). Are we ready now? NO – not at all! The way we got the lengths of the radii is not very exact. We made a draft on the paper and took a simple ruler to measure the distance from the x-axis to the outline of the egg. If we did this very well, maybe the length is correct within a little measurement error – let us estimate 1 mm. What would happen if the real length were 1 mm more in each case? (see Tables 2 and 3)

Table 2. Influence of inexact measurement 1: plus 1 mm

x     y     Smaller radius   Volume cylinders   Bigger radius   Volume cylinders
0.0   0.0   0.0              0.00               1.8             10.18
1.0   1.8   1.8              10.18              2.2             15.21
2.0   2.2   2.2              15.21              2.4             18.10
3.0   2.4   2.4              18.10              2.4             18.10
4.0   2.4   2.1              13.85              2.4             18.10
5.0   2.1   0.0              0.00               2.1             13.85
6.0   0.0   0.0              0.00               0.0             0.00
Sum                          57.33                              93.53
Arithmetic middle: 75.43


Table 3. Influence of inexact measurement 2: minus 1 mm

x     y     Smaller radius   Volume cylinders   Bigger radius   Volume cylinders
0.0   0.0   0.0              0.00               1.6             8.04
1.0   1.6   1.6              8.04               2.0             12.57
2.0   2.0   2.0              12.57              2.2             15.21
3.0   2.2   2.2              15.21              2.2             15.21
4.0   2.2   1.9              11.34              2.2             15.21
5.0   1.9   0.0              0.00               1.9             11.34
6.0   0.0   0.0              0.00               0.0             0.00
Sum                          47.16                              77.57
Arithmetic middle: 62.36

What is the result of our experiment with small errors? The length which was measured is about 20 mm. We estimated an error of 1 mm, that means about 5 percent. The result changes from 68.74 to 75.43 (plus 9.7 percent) or to 62.36 (minus 9 percent). The border line itself is about 1 mm thick. So we are really in danger of getting wrong results because we cannot measure the length exactly. One result of this reflection is that it is not useful to make the slices thinner before the method of length measurement is improved. This is similar to many situations in physics: without exact measurement it is often useless to think about more calculation. What about the second option we proposed – following the idea of interpolation? We take the midpoint between the left and right side of each slice and measure the length at these points on the x-axis. The results are shown in Table 4.

Table 4. Calculation through interpolation

x     y     Cylinder volume
0.5   1.2   4.52
1.5   1.9   11.34
2.5   2.2   15.21
3.5   2.4   18.10
4.5   2.2   15.21
5.5   1.6   8.04
Sum         72.41
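The interpolation variant of Table 4 is even shorter; this sketch (again our own) just sums π·y²·h over the midpoint radii:

```python
import math

# radii measured at the slice midpoints x = 0.5, 1.5, ..., 5.5 cm (Table 4)
y_mid = [1.2, 1.9, 2.2, 2.4, 2.2, 1.6]
h = 1.0  # slice height in cm

volume = sum(math.pi * r ** 2 * h for r in y_mid)
print(round(volume, 2))
# → 72.41
```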

We are satisfied to see that the calculated result is nearly the same as the empirical one. Having reached this result, we think we have shown how we can use Cavalieri's idea and a spreadsheet to reach the empirical result. Many further steps are possible to be more precise and to go further, like calculating the surface area. But we think it is not necessary to explain this here because teachers know what to do and how to do it.


EPILOGUE

Looking at objects from our daily life through the lens of mathematics can produce interesting and motivating (mathematical) results. The degree of complexity is not an essential criterion when discussing objects from the realm of students' experiences. It is necessary to show students that mathematics provides methods and instruments to analyse objects in daily life. For this reason Austrian and German mathematics educators founded the ISTRON group in 1990. The aim of this group is to look at problems in daily life and to show teachers how they can improve their teaching by implementing such examples. A more detailed look at mathematical aspects of an egg can be found in the article by Siller, Maaß & Fuchs (2009). All in all it is necessary that adequate problems are prepared for education. Problems which are already known in research should be adapted and constructed for education. The fields for educational research in this area should be expanded and strengthened.

SOME LAST REMARKS AND HINTS

For the sake of completeness we want to provide hints and solutions for the possible examples listed in this article that shall motivate students:
1. "Children Surprise Egg" (Figure 1): The old version of this egg is similar to the egg of a chicken. The mass of the chocolate is about 20 g, with a volume of about 15 cm³. The EU has decided that the toys inside these eggs are so small that little children are in danger of swallowing them. The next generation of the eggs is bigger. The mass of chocolate will be about 100 g and the estimated size of the egg is 12.3 cm for the height and 8.3 cm for the diameter. The volume of this egg can be calculated as V = 72.26 cm³ (cf. Siller, Maaß & Fuchs, 2009, p. 106).
2. Biggest Easter egg 2008 (Figure 2): It has a surface of about 130 m².
3. World's biggest glass egg (Figure 3): The volume of this egg is about 0.15 m³. The information on the internet says that this egg weighs about 20 kg. If it were made of solid glass without air inside it would weigh about 375 kg (density of glass about 2.5 g/cm³). The shell has a thickness of about 0.58 cm.

REFERENCES

Blum, W. (1985). Anwendungsorientierter Mathematikunterricht in der didaktischen Diskussion. Mathematische Semesterberichte, 32(2), 195–232.
Blum, W., & Leiss, D. (2007). How do students and teachers deal with mathematical modelling problems? The example "Filling up". In Haines, et al. (Eds.), Mathematical modelling (ICTMA 12): Education, engineering and economics. Chichester: Horwood Publishing.
BMUKK (2004). AHS-Lehrplan Mathematik (Oberstufe). Wien: Bundesministerium für Unterricht und Kultur. Retrieved from http://www.bmukk.gv.at/schulen/unterricht/lp/lp_ahs_oberstufe.xml
Hortsch, W. (1990). Alte und neue Eiformeln in der Geschichte der Mathematik. München: Selbstverlag Hortsch.
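The quoted shell thickness can be checked with a rough back-of-the-envelope model: approximate the egg by a sphere of the same volume and spread the glass volume over its surface. This sketch is our own, not the calculation from the cited source:

```python
import math

V_egg = 0.15 * 100 ** 3        # egg volume: 0.15 m³ converted to cm³
mass = 20_000                  # reported mass of the glass egg in g
density = 2.5                  # density of glass in g/cm³

V_glass = mass / density       # volume of the glass itself in cm³
r = (3 * V_egg / (4 * math.pi)) ** (1 / 3)   # radius of a sphere of equal volume
A = 4 * math.pi * r ** 2       # surface area of that sphere in cm²
print(round(V_glass / A, 2))   # shell thickness in cm
# → 0.59
```

This is close to the 0.58 cm quoted above; the small difference comes from the sphere approximation.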

Krauthausen, G., & Scherer, P. (2003). Einführung in die Mathematikdidaktik. Heidelberg: Spektrum.
Loria, G. (1911). Ebene Kurven I und II. Berlin.
Malina, J. (1907). Über Sternenbahnen und Kurven mit mehreren Brennpunkten. Wien.
Mathematik Lehrplan AHS-Oberstufe. (2004). Wien. Retrieved from http://www.bmukk.gv.at/schulen/unterricht/lp/lp_ahs_oberstufe.xml (as of 27.08.2008)
Müller, G., & Wittmann, E. Ch. (1984). Der Mathematikunterricht in der Primarstufe. Braunschweig/Wiesbaden: Vieweg.
Münger, F. (1894). Die eiförmigen Kurven. Dissertation, Universität Bern, Bern.
Narushin, V. G. (2005). Egg geometry calculation using the measurements of length and breadth. Poultry Science, 84, 482–484.
Pollak, H. O. (1977). The interaction between mathematics and other school subjects (including integrated courses). In Proceedings of the third international congress on mathematical education (pp. 255–264). Karlsruhe.
Schmidt, C. H. L. (1907). Über einige Kurven höherer Ordnung. Zeitschrift für mathematischen und naturwissenschaftlichen Unterricht, 38, 485.
Schweiger, F. (1992). Fundamentale Ideen – Eine geistesgeschichtliche Studie zur Mathematikdidaktik. Journal für Mathematik-Didaktik, 13(2/3), 199–214.
Schupp, H. (1987). Applied mathematics instruction in the lower secondary level: Between traditional and new approaches. In W. Blum, et al. (Eds.), Applications and modelling in learning and teaching mathematics (Vol. 37). Chichester: Horwood.
Siller, H.-St. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. In K. Fuchs (Ed.), Schriften zur Didaktik der Mathematik und Informatik. Aachen: Shaker Verlag.
Siller, H.-St., Maaß, J., & Fuchs, K. J. (2009). Wie aus einem alltäglichen Gegenstand viele mathematische Modellierungen entstehen – Das Ei als Thema des Mathematikunterrichts. In H.-St. Siller & J. Maaß (Eds.), ISTRON-Materialien für einen realitätsbezogenen Mathematikunterricht. Hildesheim: Franzbecker (to appear).
Timmerding, H. E. (1928). Zeichnerische Geometrie. Leipzig: Akademische Verlagsgesellschaft.
Wieleitner, H. (1908). Spezielle ebene Kurven. Leipzig: Sammlung Schubert.
Zhou, P., Zheng, W., Zhao, C., Shen, C., & Sun, G. (2009). Egg volume and surface area calculations based on machine vision. Computer and Computing Technologies in Agriculture II, 3, 1647–1653.

Juergen Maasz
IDM, University of Linz

Hans-Stefan Siller
IFFB – Dept. for Mathematics and Informatics Education, University of Salzburg


THOMAS SCHILLER

15. DIGITAL IMAGES
Filters and Edge Detection

INTRODUCTION

Interdisciplinary work based on examples with reference to reality is essential for improving the teaching of mathematics. Through such examples pupils learn the proper use of mathematical methods, and their motivation to learn increases. Therefore I have chosen to explain important fields of the topic "digital image processing" in this article, which can be dealt with in an interdisciplinary and project-based way in mathematics and informatics. Pupils are increasingly confronted with digital images, e.g. by using modern smartphones with integrated cameras. The quality of images, especially of digital photos, may need to be improved during or after taking the photo so that the objects in the image can be recognized better, e.g. by adjusting the contrast. Principles of digital image processing have been prepared for the classroom in this chapter. You will see that not many special requirements are needed in class; often simple knowledge of spreadsheets is all that is required. Among the prepared examples is the use of linear filtering, which can, for example, be used for smoothing images, and a short explanation of how you can find positions of possible edges in an image. Edges – rapid transitions between light and dark (or differently coloured) areas – play an important role in the automatic recognition of objects in an image, as they define objects. By working with these examples pupils can see that the basic ideas behind the processing of image data – the values representing the colour or brightness of individual pixels – are relatively simple, but that in order to actually handle the complex reality and get useful results much more work is required.

IMAGES IN INFORMATICS

In practice there are many types of digital raster images, such as photos, colour and greyscale images, screenshots, fax documents, radar images, ultrasound images etc. Raster images are usually rectangular and made of regularly arranged elements, the pixels (short for picture element); the different types differ mainly by the values stored in the pixels. In addition to raster images, there are vector graphics. [Burger et al., 2006, p. 5] This article only deals with raster images, so I will not talk in detail about vector graphics. One can interpret a recorded image as a two-dimensional matrix of numbers. A digital image I is, considered more formally, a two-dimensional function of non-negative integer coordinates to a set of image values, therefore I: N x N → V (N … natural numbers; V … set of pixel values). In this way, images can be represented, stored, processed, compressed or transmitted using computers. It doesn't actually matter how an image emerged; we simply understand it as numeric data. [Burger et al., 2006, p. 10]

J. Maasz and J. O'Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 257–271.
© 2011 Sense Publishers. All rights reserved.

Figures 1 and 2. Part of an image (magnified) and related greyscale function (comp. figure 2.5 in [Burger et al., 2006, p. 11]).

The values of individual pixels are binary words of length k. Therefore a pixel can assume 2^k different values; k is frequently called the "depth" of the image. The exact coding of the pixel values depends, for example, on the type of the image (RGB colour image, binary image, greyscale image etc.). [Burger et al., 2006, p. 12] Greyscale images consist of only one channel representing the image brightness. As they represent the intensity of the light energy, which cannot be negative, normally only non-negative values are stored. Therefore image data in greyscale images are usually integer values from the interval [0, 2^k – 1], e.g. values representing the intensity in the interval [0, 255] at a depth of 8 bits. In this case, 256 different grey levels are possible, in which usually 0 represents a black pixel (= pixel with minimum brightness) and 255 a white one (= pixel with maximum brightness). [Burger et al., 2006, p. 12]

FILTERS IN DIGITAL IMAGE PROCESSING

Many effects such as the sharpening or smoothing of an image can be realized with the help of filters. [Burger et al., 2006, p. 89]

Smoothing in a Spreadsheet

Regions in images with locally strong intensity changes, i.e. large differences between neighbouring pixels, are perceived as sharp. In contrast, the image looks blurry or fuzzy if the brightness changes only a little. If one wants to smooth an image, one can replace each pixel value by the average of the neighbouring pixels: one creates a new image I' from the source image I by taking the pixel value I'(u, v) to be the arithmetic mean of the pixel value of I(u, v) and those of the 8 neighbouring pixels. [Burger et al., 2006, p. 89f]


Let’s try to implement the smoothing of a greyscale image in a spreadsheet. There are several methods to import pixel values into a spreadsheet. After manipulating the values, they can be exported and viewed as an image again. In Figure 3 the values of the individual pixels of the source image (consisting of 256 grey levels) are represented. Figure 4 represents the pixel values formed by the calculation of the arithmetic mean values.

Figure 3. Screenshot with individual pixels of the source image.

Figure 4. Screenshot with pixel values formed by the calculation of the arithmetic mean values.

The filter process using arithmetic mean values can be implemented relatively quickly in a spreadsheet using the mean function (such as AVERAGE) that is included in the program. In cell B2, for example, this requires writing the name of the averaging function and the range of cells from A1 to C3 used for calculating the mean value. Of course, you can also enter the sum of the values (and the subsequent division) without using a predefined function for calculating the mean, e.g. with the help of a sum function (e.g. SUM) or through repeated additions. Regardless of the chosen way, the references to other cells have to be relative, because afterwards the formula has to be copied into (almost) all other cells that represent pixel values; each copy should access the neighbouring pixels from its own cell's point of view while calculating the mean value. Since only integer pixel values are allowed, the value afterwards has to be converted into an integer number, e.g. by using the function INT or ROUND.
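Outside the spreadsheet, the same 3×3 mean filter can be sketched in a few lines of Python (our own sketch; plain lists of grey values, no image library; as in the worksheet, the fringe pixels simply keep their original values):

```python
def smooth(image):
    """3x3 mean filter on a greyscale image given as a list of rows.
    Writes the results into a NEW image; fringe pixels are copied unchanged."""
    h, w = len(image), len(image[0])
    result = [row[:] for row in image]           # new image, fringes kept
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            s = sum(image[v + dv][u + du]
                    for dv in (-1, 0, 1) for du in (-1, 0, 1))
            result[v][u] = round(s / 9)          # integer grey value (ROUND)
    return result

source = [[100, 100, 100, 100],
          [100, 200, 200, 100],
          [100, 200, 200, 100],
          [100, 100, 100, 100]]
print(smooth(source)[1][1])   # → 144 (mean of 5×100 and 4×200, rounded)
```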


Figure 5 shows how the table with the results from Figure 4 looks in formula view:

Figure 5. Figure 4 in formula view (Ausgangsbild = source image, Graustufen = greyscale, ganze Zahl = integer number, Mittelwert = mean value).

As you can see in Figure 5, the mean values on the fringes of the image are not calculated and the original image values are used instead. But why? The students will find this out by implementing the filtering on their own: either they will try to create the formula for calculating the mean value in cell A1 and fail because there is no value to the left of cell A1, or they will create the formula in another cell not lying on a fringe of the image. In this case they will face problems when copying the formula to the fringes of the image, due to an error message (Figures 6, 7 and 8):

Figure 6. Error messages on the fringes (Bezug = reference).

Figure 7. Error messages on the fringes.

Figure 8. Error messages on the fringes.

As can be seen e.g. in Figures 4 and 5, the formulae are not written in the same worksheet that holds the pixel values of the source image. The old values are therefore not changed; instead a new image with new values, based on the pixel values of the old image, is created, as described in the theory above. The fact that in each case a new image, i.e. a new number matrix, is created and the manipulations are not performed directly in the source matrix is essential for such filter processes.

DIGITAL IMAGES

Pupils will realize that this is the only correct way when they try the wrong one. They will fail, because the spreadsheet shows a circular reference warning (provided iterative calculation of formulas is turned off). In our case this means that while the value of a cell is being calculated, that same cell is accessed. This cannot work properly, because at the moment the value is calculated, the value of the cell changes; and since the cell's value directly influences the calculation, the cell would be updated again to take the new value into account. This could (at least in theory) continue endlessly, but makes no sense in our case. The warning can be avoided by activating iterative calculation (and, for example, setting the desired number of iterations), as described in the spreadsheet's help. Incidentally, this is a suitable example for introducing the topic of "recursion and iteration" (both in mathematics and in computer science education), which I will not describe in detail here.1 The problem of the circular reference that arises when a filter "accidentally" writes into the source image should be discussed in class, because if such a filter is implemented not in a spreadsheet but in (any) programming language, there will be no circular reference warning to draw one's attention to the error. The value of each pixel is then calculated and changed only once, so no problems occur while the program is executing, although wrong pixel values are used. Such an error is hard to find if nothing warns you that it occurred: in our case it is hidden not in the obvious core of the programmed method where the filtering takes place, but earlier, where a new (empty) image should have been created but the source image was set up for editing instead.

Figures 9 and 10. Before and after smoothing in a spreadsheet.

Figure 9 is a section of the source image before smoothing in a spreadsheet; Figure 10 shows the corresponding image after smoothing. You can see clearly (especially on the concrete slabs) that the smoothing has blurred the transitions between the different shades of grey, so the image appears out of focus. Another important aspect: in this example, the pixel values on the fringes of the image were simply taken from the source image. Depending on the function of the filter, e.g. differentiation2, that does not usually make sense. The calculation of the arithmetic means in the presented smoothing could instead rely on just those neighbouring pixels that are available: in the corners only four (instead of 9) pixels can be used, and on the fringes (without the corners) the remaining six pixels. With other filters, too, the treatment of the pixels on the fringes of the image has to be considered separately. In doing so, it is up to the pupils' creativity to handle these special cases in a sensible way. In this article I demonstrate the effect of (linear) filters on different examples. It is particularly interesting how the image itself changes, i.e., the large interior of the image and not the (thin) fringes.3 Pupils can later develop new creative filters themselves and visualize them on the computer to observe the effects they have achieved; whenever a new filter is devised, they can still work out a suitable treatment of the pixels on the fringes.

Filters in General

The smoothing filter with local averaging discussed above includes all the elements that are typical of a filter; it belongs to the group of linear filters. In general, filters use several pixels of the source image to calculate a new pixel value. The size of the filter, or more precisely the region of the filter, determines how many pixels are used to calculate a new pixel value. The smoothing filter introduced above had a size of 3 x 3 pixels. Filters of sizes 5 x 5, 7 x 7 or 21 x 21 would also be possible and would produce a stronger smoothing. The filter need not necessarily have a square shape; with a round filter, the filter effect would occur uniformly in all image directions. Additionally, a different weighting of the involved pixel values is possible, so that more distant pixels count less than those lying next to the pixel whose value is being calculated. The filter size theoretically does not have to be finite, and the filter certainly does not have to include the original pixel. Because of the high number of possibilities, filters are classified systematically. They are, for example, divided into linear and nonlinear filters, which can be recognized from the mathematical expression for calculating a new pixel value. [Burger et al., 2006, p. 90f] For linear filters, the values of the pixels are combined in a linear form, e.g. by a weighted sum. In the smoothing example, all 9 pixel values were each allocated a weight of one ninth and added up. [Burger et al., 2006, p. 91] When the AVERAGE function is used, the pupils may not be directly aware of that. Therefore, in the example above, one can enter the 9 summands and the subsequent division in the spreadsheet individually instead of using the AVERAGE function. If the division by 9 is read as multiplication by one ninth and the brackets are multiplied out, pupils see immediately that each summand, i.e. each pixel value, is weighted by one ninth before the weighted values are added up to the new one. Depending on the choice of the individual weights, many different filters with completely different behaviour are possible. Linear filters can be represented by matrices in which the weights are given. The matrix for the smoothing filter of size 3 x 3 is

        | 1  1  1 |
(1/9) · | 1  1  1 |
        | 1  1  1 |

[Burger et al., 2006, p. 91]
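The alternative fringe treatment mentioned earlier, averaging only over the neighbours that actually exist (four in the corners, six on the edges, nine inside), can also be sketched in Python. Again this is my own illustrative code, not the spreadsheet itself:

```python
def smooth_adaptive(image):
    """Mean filter that averages only over the neighbours that exist:
    4 values in the corners, 6 on the edges, 9 in the interior."""
    rows, cols = len(image), len(image[0])
    result = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            # clip the neighbourhood window to the image boundaries
            values = [image[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            result[r][c] = round(sum(values) / len(values))
    return result

image = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(smooth_adaptive(image))  # [[2, 2, 2], [2, 1, 2], [2, 2, 2]]
```

The corners average 4 values, the edge pixels 6, and the centre all 9, so every pixel of the image is filtered and no untreated fringe remains.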


Filters are applied to images by positioning the centre of the filter matrix over the currently considered pixel of the source image, multiplying the pixel values by their respective weights in the filter, and adding up these products. The result of this calculation is then inserted at the corresponding position in the resulting image. [Burger et al., 2006, p. 92f]

Sharpening an Image

Sharpening an image is possible in the same way as smoothing; the only difference is the matrix used for the filter operation. For sharpening an image, in [Pehla, 2003] the matrix

is used.
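The general procedure just described (centre the matrix over a pixel, multiply, sum, write the result into the new image) can be sketched as follows. Since the sharpening matrix from [Pehla, 2003] is not reproduced here, the example uses a commonly seen 3 x 3 sharpening kernel as a stand-in; it is an assumption for illustration, not necessarily the matrix used in that source.

```python
def apply_filter(image, kernel):
    """Linear filter: position the centre of the weight matrix over each
    interior pixel, multiply the covered pixel values by their weights,
    add the products, and clamp the result to the valid range 0..255.
    Fringe pixels keep their source values."""
    k = len(kernel) // 2
    rows, cols = len(image), len(image[0])
    result = [row[:] for row in image]
    for r in range(k, rows - k):
        for c in range(k, cols - k):
            s = sum(image[r + dr][c + dc] * kernel[k + dr][k + dc]
                    for dr in range(-k, k + 1) for dc in range(-k, k + 1))
            result[r][c] = min(255, max(0, round(s)))
    return result

# Hypothetical stand-in kernel; the matrix actually used in [Pehla, 2003]
# may differ.
SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

image = [[10, 10, 10],
         [10, 50, 10],
         [10, 10, 10]]
print(apply_filter(image, SHARPEN))  # centre: 5*50 - 4*10 = 210
```

Because the weights sum to 1, uniform areas are unchanged, while local contrast (as at the centre pixel here) is amplified: the essence of sharpening.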

Figures 11 and 12. Before and after sharpening.

Figure 12 is obtained by applying the above filter matrix to Figure 11.

Pupils' Experiments (Further (Linear) Filters)

Linear filters offer the chance to experiment. Starting from a commonly known filter, pupils can deliberately change individual entries in the filter matrix. They can even design their own new matrix and consider the impact of the new entries. In experiments one should first use simpler (greyscale) images, such as the little smiley in Figures 1 and 2. The image should, especially if it is small, be spatially extended by adding additional (empty) fringes. This prevents the situation in which all the pixels of a too-small image lie on the fringes, so that no filtering happens at all and no effects can be seen. The pupils can carry out experiments guided by specific questions. For example, they should consider what will happen if they use a filter matrix consisting of only zeros. The pupils should not try to find out the answer4 just by experimenting, but


by considering beforehand what will happen, and then confirming or disproving their suspicion by trying it out. Because of the considerations required, the filter process is internalized more easily and the experimentation does not degenerate into wild, unstructured and thoughtless tinkering.

EDGE DETECTION

What is it, What is it Needed for and How Can it be Done?

In human vision, edges play an important role: figures can be reconstructed as a whole from a few striking lines. Edges can be roughly described by the fact that, at an edge, the intensity of the image changes greatly within a small neighbourhood and along a distinct direction. A stronger change of intensity is a stronger indication of an edge at the observed position. The strength of the intensity change corresponds to the first derivative, which is thus one approach to determining the strength of an edge. [Burger et al., 2006, p. 117] The steepness of the greyscale function therefore represents the strength of an edge; it is nothing but the magnitude of the gradient, and the direction of the edge is perpendicular to the direction of the gradient. [Tönnies, 2005, p. 174f] Since edges also separate objects from the background, knowledge of edges may be useful, for example, in the detection of objects (e.g. characters (OCR), faces). As mentioned above, a greyscale image can be interpreted as a function I(u, v). In order not to overwhelm pupils with two-dimensional differentiation, it is useful to look at the whole process in a one-dimensional way first, by restricting attention to one image line.5 One might draw a function that holds a high value (e.g. 250) relatively constantly, then at some point falls relatively suddenly (e.g. to 10) within a short interval, and afterwards again remains relatively constant at the value to which it has just fallen. In addition, the derivative function is needed. One can then see clearly that the derivative function remains approximately at the value 0 wherever the function itself is relatively constant, and only at the relatively rapid drop from 250 to 10 does it suddenly show a large negative deflection.
So at the position where the observed image line changes from a nearly perfect white to an almost perfect black, i.e. at the position of an edge, the derivative function has a remarkably low value. In a change from a darker to a brighter area, however, the deflection would be positive, because the greyscale function increases. Such swings in the derivative function can thus be interpreted as positions of edges. What the pupils will notice is that the images to be analyzed are not based on continuous functions, so the functions cannot be differentiated. But the derivatives can be approximated by differences: the derivative in the x-direction can be approximated by the difference between the grey value of the pixel after and the grey value of the pixel before the pixel currently under consideration. This can be achieved by convolution with the matrix

Dx = ( -1   0   1 ).

[Tönnies, 2005, p. 175]


The differentiation in the y-direction can be approximated with the help of the matrix

       | -1 |
Dy  =  |  0 |
       |  1 |

[Tönnies, 2005, p. 175]
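The difference approximation in the x-direction, together with the two ways of catching out-of-range values discussed below (clipping to 0..255 or taking the absolute value), can be sketched like this (my own illustration; the fringe columns are simply left at 0 here):

```python
def dx_filter(image, catch="clip"):
    """Approximate the x-derivative: for each pixel, subtract the grey
    value of the pixel to its left from the pixel to its right
    (kernel (-1, 0, 1)). Raw results lie in -255..255; they are caught
    either by clipping to 0..255 or by taking the absolute value."""
    rows, cols = len(image), len(image[0])
    result = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(1, cols - 1):
            d = image[r][c + 1] - image[r][c - 1]
            result[r][c] = abs(d) if catch == "abs" else min(255, max(0, d))
    return result

line = [[0, 0, 255, 255, 0, 0]]          # one image line: black -> white -> black
print(dx_filter(line, catch="clip"))     # [[0, 255, 255, 0, 0, 0]]
print(dx_filter(line, catch="abs"))      # [[0, 255, 255, 255, 255, 0]]
```

With clipping, the falling (white-to-black) edge disappears, because its negative difference is set to 0; with the absolute value, both edges are found, which is exactly the effect discussed with Figures 14 and 15 below.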

It is important to note that the background of differentiation is not strictly required here. In class it is enough if the teacher explains that big changes of the values between closely spaced pixels are indications of edges. As mentioned above, according to [Burger et al., 2006, p. 117] an edge is a place of great change in intensity. Thus, the basic idea behind edge detection can be implemented in the classroom much earlier, because the necessary differences are also suitable for the lower grades.

Example: Differentiation in the x-direction and Catching Inadmissible Values

Let us look at filtering with the matrix Dx in a practical way, using the following image consisting of three different grey values as the source (Figure 13):

Figure 13. Test image with three different grey values.

Differentiation in x-direction provides us with the following result (Figure 14):

Figure 14. Result of differentiation in x-direction (clipping to the range 0 to 255).

At first glance, the result corresponds to our expectations: at a change from a dark shade of grey to a light one, the difference is significant, so the edge pixels are displayed. The strengths of the edges can also be seen in the resulting image,


because the difference between grey and white is not as great as that between black and white; these edge pixels therefore appear darker. At second glance, we notice that edge transitions from lighter to darker shades of grey do not appear in the resulting image at all. This is not surprising: in the transformation of the matrix of pixel values into an image, the values lying outside the interval 0 to 255 were clipped to this range, since they are not valid grey values, and subtracting the value of a bright pixel from that of a dark one yields a negative value. Due to the formation of differences, we leave the value range of 0 to 255 that is admissible for greyscale images. Pupils should consider at this point what range of values is generated by differentiation of this kind. This is the basic requirement for finding a reasonable solution that also plots the edge transitions from light to dark areas in the resulting image. The answer is easily found: the most negative number is obtained by subtracting the largest number (255) from the smallest one (0), i.e., -255. The largest positive number is obtained by subtracting nothing (0) from the largest number (255), so it remains 255. The new range after a single differentiation of the kind discussed here is thus the interval from -255 to +255. Clipping negative values by setting them to 0 does not make sense in this case, because then the corresponding existing edges are not displayed. Negative values must be caught in a different way, so that all possible edge pixels are found without leaving the permitted range of values. For instance, negative values can be caught by taking the absolute value (see [TU-Chemnitz, 2008, p. 8]). Let us examine that (Figure 15):

Figure 15. As Figure 14, but catching by the absolute value.

We actually find all the edges we were searching for.

PUPILS' EXPERIMENTS WITH A GENERAL SPREADSHEET

Implementation and its Problems

As mentioned above, it makes sense to let the pupils experiment with filters independently. They can choose the entries of a filter matrix themselves and quickly test the effects of the filter procedure on an image. For this they can create a spreadsheet with three tables: one with the (greyscale) pixel values of the source image, one for entering the filter matrix, and one for the pixel values of the resulting image.


To make the document suitable for experiments, it makes sense to keep it as general as possible. It is therefore recommended to use a filter matrix that is as large as possible, e.g. of size 21 x 21, as can be seen in Figure 16. If a smaller filter matrix is required, the outer cells are simply filled with zeros, and the entire spreadsheet can in principle be used unchanged. In Figure 16 you can see two matrices. The reason is that it is often useful to divide all matrix entries by a certain value: if we calculate the mean, for example, each pixel has to be weighted with 1/(number of entries). For the sake of clarity it is useful to factor out such common divisors. Therefore, in my file the divisor by which all the values have to be divided is entered between the two matrices. The right matrix is filled with the (not yet divided) entries; the left matrix contains formulas that take the values of the right matrix and divide them by the divisor. In the classroom implementation, make sure that the column widths are selected appropriately, so that (for clarity) the matrix fits on the screen and the entries with fewer digits are not separated by so much space that the matrix is torn apart optically. But the real challenge for the students when creating such a generally usable spreadsheet is not the handling of the matrix, but the actual filter procedure. As before, the values of the pixels on the fringes of the image remain unaffected; other boundary treatments can of course be introduced later. For simplicity, one starts with the largest fixed filter matrix, in my case of size 21 x 21, even if later only a matrix of size 7 x 7 is used and the other matrix entries are set to 0. This means that, in my case, a fringe with a width of 10 pixels is excluded from the actual filtering, even though for filtering with a matrix of size 3 x 3 a width of one pixel would be enough. As the fringes of the image are negligible, this does not matter; but if one wants to, a more accurate dynamic determination of the fringes is possible, e.g. with the help of appropriate IF conditions. You can start the actual filtering at the top left cell that no longer belongs to the fringes. Here a (long) formula has to be entered in which the new pixel value is calculated. This formula can then be copied to the other cells representing pixels to be filtered, and we get the desired result. But for several reasons this is not as easy as it sounds. The formula for the filter process with a matrix of size 21 x 21 is very long: after all, a sum of 21² = 441 products, each of a matrix entry with a pixel value, has to be built.

Figure 16. Screenshot of worksheet with the filter matrix.

For matrices of size 3 x 3 this would be accomplished relatively quickly, but in a formula which refers to 882 different cells (matrix entries and pixel values), the probability of input, typing or clicking errors is very high. However, since in building the formula one (almost) always iterates over neighbouring cells, you can implement a small utility program that creates the formula, and then paste the formula generated by this program into the spreadsheet cell. To write such a utility program quickly, it is important to get an idea of the formula and begin to write it down: =RUNDEN(‘source image (greyscale)’!A1*’filter matrix’!$A$1+’source image (greyscale)’!A2*’filter matrix’!$A$2+’source image (greyscale)’!A3*’filter matrix’!$A$3+’source image (greyscale)’!A4*’filter matrix’!$A$4+’source image […] (greyscale)’!A20*’filter matrix’!$A$20+’source image (greyscale)’!A21*’filter matrix’!$A$21+’source image (greyscale)’!B1*’filter matrix’!$B$1+’source image (greyscale)’!B2*’filter matrix’!$B$2+’source image […];0) (RUNDEN is the German name of the ROUND function.) The distinction between absolute and relative references is very important: the references to the surrounding pixels have to change when the formula is copied to the cells of neighbouring pixels, and must therefore be relative, while the entries in the filter matrix always stay in the same place and must therefore be addressed absolutely. When you try to copy the formula produced by the utility program into the spreadsheet, you may face another problem: for example, an error message saying that the maximal number of characters for a formula (8192) was exceeded. Now you must think about how to shorten the formula. A possible solution is to shorten the names of the worksheets, because "source image (greyscale)" and "filter matrix" already take up a lot of space.
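Such a utility program can be sketched in Python. This is my own version, written under stated assumptions: the worksheet names are parameters (short defaults are used here), and the English function name ROUND stands in for the German RUNDEN shown above.

```python
def col_name(n):
    """1-based column index -> spreadsheet column letters (1 -> A, 27 -> AA)."""
    name = ""
    while n:
        n, r = divmod(n - 1, 26)
        name = chr(ord("A") + r) + name
    return name

def build_formula(row, col, size=21, src="src", mat="m"):
    """Spreadsheet formula for the pixel in (row, col), both 1-based:
    the weighted sum of the size x size neighbourhood in worksheet `src`
    with the weights in worksheet `mat`, wrapped in ROUND.
    References to the filter matrix are absolute ($), those to the
    pixel values relative, so the formula can be copied around."""
    k = size // 2
    terms = []
    for dc in range(-k, k + 1):          # column by column, as in the text
        for dr in range(-k, k + 1):
            pixel = f"{src}!{col_name(col + dc)}{row + dr}"
            weight = f"{mat}!${col_name(dc + k + 1)}${dr + k + 1}"
            terms.append(f"{pixel}*{weight}")
    return "=ROUND(" + "+".join(terms) + ";0)"

formula = build_formula(11, 11, size=3)
print(formula)  # =ROUND(src!J10*m!$A$1+src!J11*m!$A$2+...;0)
```

Printing the formula for a 21 x 21 matrix and pasting it into the first interior cell replaces hundreds of error-prone manual clicks.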
But how long can the new short names of the worksheets be? Can they be made so short that the formula fits within the maximum allowable number of characters? Here a little arithmetic provides the answer. As already mentioned, there are 882 references to other cells. Of these, 441 refer to cells with pixel values and are therefore relative references. For the chosen image size, column names of 2 characters (letters) are needed to address the entire width of the image, and 3 characters (digits) for the rows; such a reference therefore needs (without the worksheet name) up to 5 characters. With the 441 references to the cells of the filter matrix it looks slightly different: here the row numbers need only 2 characters (digits), and 1 more character (letter) is enough for the column name to address a matrix of size 21 x 21. Since the references to the cells of the filter matrix must be absolute, 2 more characters (dollar signs) per reference are added, so again up to 5 characters for each such reference (without the worksheet name). Under these assumptions, up to 882 · 5 = 4410 characters are needed for the references themselves, and thus about 3782 characters are left for the names of the worksheets. When we consider that each of the 882 references names either the source image or the filter matrix, about 3782 / 882 ≈ 4 characters remain per reference for a worksheet name. Therefore


I have chosen the name "src" (source) for the worksheet with the pixel values of the source image and "m" for the worksheet with the filter matrix. For consistency, I changed the name of the worksheet of the resulting image to "dest". If the utility program that prints the formula is adapted accordingly, it is now easy to insert the formula into the spreadsheet. Another possible solution would be to reduce the number of references by reducing the size of the filter matrix, but this allows fewer experiments, since comparisons between smaller and larger matrices can be interesting. As mentioned earlier, you can paste the formula created by the utility program into the cell at the very top left that is no longer on the fringes of the image, and then copy it to the other cells. But that sounds easier than it is: after all, a formula of several thousand characters has to be copied into over 50000 cells! Depending on your computer, resource problems are unavoidable, and it will take a lot of patience. I can only recommend not copying the formula into all the other cells at once, but proceeding in stages, and saving the file after each stage of copying. I also recommend keeping the image size rather small. I created my file for images of size 335 x 224. Originally I had planned an image size about four times as large, but I gave up for lack of time, as copying the formulas and saving the document took a very long time on my computer. My file ended up over 100 MB in size, and opening it on my computer took about 10 minutes; from this you can estimate the resources necessary for an image four times as large! But these, after all, are also important experiences for pupils: the construction and use of such a spreadsheet demonstrates the complexity of an implementation of simple basic principles. Just as with possible timing studies when filtering with a self-implemented program, pupils can see that the basic principles work, although significant improvements may be needed before they can be used sensibly in real-time applications.

Example to Test the General Spreadsheet

Here is a little test of the general filter procedure implemented in a spreadsheet. As the source image I used the following greyscale image (Figure 17):

Figure 17. Test image (greyscale).


I entirely filled the (right) filter matrix with 1s and set the divisor to the number of filter entries, i.e., 21² = 441. Thus the mean of the values lying under the filter (that is, of the image region covered by it) is calculated. The smoothing effect mentioned above can be seen clearly in the resulting image (Figure 18):

Figure 18. Resulting image by smoothing using a filter matrix of size 21 x 21.

Due to the size of the filter matrix the smoothing effect is so strong that the source image is no longer really recognizable. Because of the differences in brightness you can still make out very rough structures, such as the bright area in the lower left part of the image containing some flowers, or the bright area in the upper part with the concrete slabs that are lighter than the plants beneath them. The fringes of the image are also striking: they are now 10 pixels wide, and this raw, untreated band is clearly visible. In all previous examples, in which the untreated fringes were only one pixel wide, this went unnoticed because of their thinness. But the very thickness of the fringes could now provide an extra incentive to deal with boundary treatment in detail.

Comments

With such a spreadsheet, pupils can experiment with different filters, varying not only the filter entries but also the size of the matrix. In addition, they gain experience with the available resources, which is important, because many pupils believe that resource problems are solved automatically by the next computer generation, which is normally not true: it usually depends on well-designed algorithms, which can save substantial resources. By working in a spreadsheet, as presented in this article, the filter procedure can be implemented in general, not only for linear filters, in the mathematics classroom (without the computer science teacher). The mathematics teacher only needs a ready-made program (e.g. implemented by a computer science teacher) to import the pixel values into a spreadsheet and convert the new pixel


values back into an image. In this way, the examples presented in this article can be worked on without the help of a computer science teacher or deeper knowledge of computer science.

NOTES

1 For literature suggestions see [Weigand, 1989] and [Weigand, 1993].
2 See the section "EDGE DETECTION".
3 Different suggestions for the treatment of the image fringes can be found in [Burger et al., 2006, p. 113] (e.g. keeping the original pixel values, or assuming that the image continues beyond the fringes). In [Burger et al., 2006, p. 91] it is noted that the treatment of the image fringes may require more effort than is needed for the large inner part of the image.
4 Since each pixel is weighted by zero, i.e. multiplied by zero, and a sum of zeros is again zero, the result is (apart from the untreated fringes) a completely black image: the image has been deleted.
5 The route via the one-dimensional case is also taken, e.g., in [Burger et al., 2006, p. 118ff].

REFERENCES

Burger, W., & Burge, M. J. (2006). Digitale Bildverarbeitung: Eine Einführung mit Java und ImageJ (2nd, revised ed.). Berlin/Heidelberg: Springer. ISBN 978-3-540-30940-6 (print), 978-3-540-30941-3 (online). (Also available in English: Burger, W., & Burge, M. J. (2008). Digital image processing: An algorithmic introduction using Java. Springer. ISBN 978-1-84628-379-6.)
Pehla, M. (2003, March 25). Beispiel aus dem Tutorial "Bildverarbeitung mit Java". Retrieved October 19, 2008, from http://www.dj-tecgen.com/downloads/Beispiele/Beispiel5.java
Tönnies, K. D. (2005). Grundlagen der Bildverarbeitung. Pearson Studium. ISBN 3-8273-7155-4.
TU-Chemnitz. Kapitel 1: Farbräume – Warum nachts alle Katzen grau sind. Retrieved December 13, 2008, from http://www-user.tu-chemnitz.de/~niko/biber/tutorial/tutorial.pdf
Weigand, H.-G. (1989). Zum Verständnis von Iterationen im Mathematikunterricht. Dissertation. Bad Salzdetfurth: Verlag Franzbecker. ISBN 3-88120-183-1.
Weigand, H.-G. (1993). Zur Didaktik des Folgenbegriffs (revised habilitation thesis). Mannheim: BI. ISBN 3-411-16221-X.

Thomas Schiller Linz (Austria)


HANS-STEFAN SILLER

16. MODELLING AND TECHNOLOGY Modelling in Mathematics Education Meets New Challenges

INTRODUCTION

Mathematics is a part of our daily life. Many (problem-)solving techniques were discovered through mathematical discussion or through mathematical activities involving real objects. Since the end of the 1970s, educational researchers have been examining real-life situations that can be solved by mathematical methods and applied in mathematics education. It is therefore necessary to show what real-life problems can offer mathematics education, and to think about didactically reflected and approved principles for teaching, so that the implementation of real-life situations in mathematics education can succeed. In thinking about this subject we have to ask ourselves certain questions: Where can mathematics be found in our daily life? Which (particular) educational principles allow it to be taught? How will mathematics be needed in the future? In the literature, certain accepted proposals can be found (see Siller, 2008). Very important skill requirements for students are knowing and understanding basic competencies in mathematics, such as
– representing, representing mathematically,
– calculating, operating,
– arguing, reasoning,
– interpreting,
– creative working.
These aims are not new. They can be found, for example, in Wittmann (1974), where the author describes general educational objectives and basic techniques which help to create knowledge and understanding of basic mathematical principles and basic mental activities, as Winter (1972, pp. 11–12) has stated. Following these formulated targets, consequences for mathematics education arise. Maaß & Schlöglmann (1992) have formulated central postulates that allow the implementation of real-life problems to be combined with the learning of basic mathematical principles:
– Mathematics education should convey an extensive and balanced picture of mathematics itself.
All aspects of the subject (historical, systematic as well as algorithmic) are important for students, so that they achieve a well-founded picture of mathematics: its relation to reality, its character as a science, its applicability to certain problems and its effects on society. J. Maasz and J. O’Donoghue (eds.), Real-World Problems for Secondary School Mathematics Students: Case Studies, 273–280. © 2011 Sense Publishers. All rights reserved.


– Learning for life! Mathematics education should help students to solve (simple) problems of daily life without having to study the subject itself. Mathematics should be seen as a construction of reality.
– Learning for a profession! Mathematics education should prepare students, to a certain extent, for higher education.
These requirements raise a lot of issues for dedicated teachers. They need guidelines which they can follow and which are prepared for practical use. The idea of mathematical modelling is an adequate one for such purposes. Other reasons for taking up the idea of mathematical modelling are arguments of general education, such as
– development of personality,
– opening up of the environment,
– equal participation in (all aspects of) society,
– establishment of rules and measures.
Following such arguments, (mathematical) modelling can be seen as a powerful method for mathematics education. It can also be found in many passages of curricula. For example, the Austrian curriculum for mathematics (2004) states: “Mathematics education should contribute to students being able to take on responsibility for lifelong learning. This can happen through an education in analytical, coherent thinking and through engagement with mathematical content which has a fundamental impact on many areas of life. Acquiring these competencies should help students to get to know the multifaceted aspects of mathematics and its contributions to several different topics. The description of structures and processes in real life with the help of mathematics makes it possible to understand connections and to solve problems through the resulting deepened access; this should be a central aim of mathematics education. […] An application-oriented context points out the usability of mathematics in different areas of life and motivates students to gain new knowledge and skills.
Integration of the various inner-mathematical topics should be striven for, both within mathematics and through adequate interdisciplinary teaching sequences. The minimal realization is broaching application-oriented contexts in selected mathematical topics; the maximal realization is the constant addressing of application-oriented problems, with discussion and reflection on the modelling cycle regarding its advantages and constraints.” The aims of modelling can be realized if a certain scheme, designed by Blum (2005), is followed (Figure 1).

Figure 1. Cycle for mathematical modelling by Blum (2005).

MODELLING AND TECHNOLOGY

MODELLING THROUGH THE HELP OF TECHNOLOGY

Using computers in education allows discussing problems taken from students’ life-world. Motivation for mathematics education benefits because students recognize that the subject is very important in their everyday life. If students can be motivated in such a way, it becomes easy to discuss and teach the necessary basic or advanced mathematical content. By using technology, difficult (mathematical) procedures in the modelling cycle can be delegated to the computer. Sometimes the use of technology is even indispensable, especially in
– computationally intensive or deterministic activities,
– working with, structuring or evaluating large data sets,
– visualizing processes and results,
– experimental work.
By using technology in education it is possible to teach traditional content in a manner different from conventional methods, and it is very easy to pick up new content for education. The centre of education should be discussion of open, process-oriented examples. They are characterized by the following points. Open, process-oriented problems are examples which
– are real applications, e.g. betting in sports (Siller & Maaß, 2009), not dressed-up exercises for mathematical calculations,
– develop out of situations that are thoroughly analyzed and discussed,
– can contain irrelevant information that must be eliminated, or missing information that must be found, so that students are able to discuss it,
– cannot be solved at first sight; the solution method differs from problem to problem,
– need competencies not only in mathematics; other competencies are also necessary for a successful treatment,
– motivate students to participate,
– provoke and open new questions for further, as well as alternative, solutions.
The teacher takes on a new role in his profession. He becomes a kind of tutor who advises and guides students. The students are able to discover the essential points on their own.
Major goals of education will be met if technology is used in such a way (cf. Möhringer, 2006):
– Pedagogical aims: With the help of modelling cycles it is possible to connect skills in problem-solving and argumentation. Students are able to acquire application competencies in elementary or complex situations.
– Psychological aims: With the help of modelling, the comprehension and retention of mathematical content are supported.
– Cultural aims: Modelling supports a balanced picture of mathematics as a science and its impact on culture and society (cf. Maaß, 2005a; Maaß, 2005b).


– Pragmatic aims: Modelling of problems helps us to understand, cope with and evaluate known situations.
The integration of technology into the modelling cycle can be helpful, leading to an intensive application of technology in education. How this could be implemented is shown in Figure 2.

Figure 2. Extended modelling cycle – regarding technology.

The “technology world” describes the “world” in which problems are solved with the help of technology. This could be a concept of modelling in mathematics as well as in an interdisciplinary context with computer science education. This extension meets the idea of Dörfler and Blum (1989, p. 184), who state: “With the help of computers used as mathematical tools it is possible to escape routine calculation and mechanical drawing, which can be a big advantage, in particular for the increasing orientation towards applications. Because it is possible to carry out all common methods taught in school with a computer, mathematics education meets a new challenge and (scientific) mathematics educators have to answer new questions.”

Example

Technology enables students to reflect on certain modelling activities which take place in traditional tasks. With the help of dynamic graphical software, traditional solutions can be checked and visualized in a modern way. I want to show one possibility using a traditional example from the area of optimization:

The owner of a Power-Drink company is looking at the costs of producing the tins in which the soft drink is sold. He recognizes that the costs are rising to a limit which he is not able to fund. So he is looking for alternatives and asks you, a student of mathematics, for a competitive solution. Think about an answer you could give to the owner of the company!

First step – Constructing a suitable model for the given situation. The given situation is a “real situation” from reality. The owner of a certain company wants to optimize the production costs for tins. First of all the students will look at the volume of a certain tin. To do so, they have to go to a supermarket and look for tins of a certain soft drink. For this
example I want to fix the volume of such a tin at V = 250 ml. After that, students have to think about the situation and realize that the surface of such tins should be minimized.

Second step – Reducing the parameters. Students should recognize that the form of the tin is too complicated for an elementary calculation. The seam at the top and the recess at the bottom are too difficult for an easy treatment. So they should decide to idealize the tin as a cylinder: no seams, no recesses and no tab at the top. With this idealized tin the surface can be optimized with the help of a mathematical model.

Third step – Constructing a mathematical model. Drawing on their knowledge, the students describe the surface of the tin with the help of a function in two variables, S(r, h) = 2r²π + 2rπh. But optimizing such a function by elementary calculation will not succeed, so the constraint 250 = r²πh is needed. Substituting h in terms of r into the function S(r, h) yields a function in one variable, S(r), which can be handled easily.

Fourth step – Solving the problem. The students will calculate the minimum of the function with the help of differential calculus and with the help of technology. The solution found can be visualized in a graph (Figure 3).
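The substitution and differentiation of steps three and four can be sketched in a few lines of code. This is a minimal illustration only, assuming V = 250 cm³ (250 ml) and the closed-form root obtained by hand; it is not part of the original classroom material:

```python
import math

V = 250.0  # volume in cm^3 (250 ml); the value assumed in the text

# Idealized cylinder: S(r, h) = 2*pi*r^2 + 2*pi*r*h with constraint V = pi*r^2*h.
# Substituting h = V/(pi*r^2) gives the one-variable function S(r) = 2*pi*r^2 + 2*V/r.
def S(r):
    return 2 * math.pi * r ** 2 + 2 * V / r

# Setting S'(r) = 4*pi*r - 2*V/r^2 = 0 yields r = (V / (2*pi))**(1/3).
r_opt = (V / (2 * math.pi)) ** (1 / 3)
h_opt = V / (math.pi * r_opt ** 2)

print(round(r_opt, 2), round(h_opt, 2))  # 3.41 6.83, i.e. h = 2r
```

Checking S at nearby radii confirms that the stationary point is indeed a minimum, which mirrors what the graphical tool shows.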

Figure 3. Graphical solution.

The minimum can be found graphically at the position marked by the ellipse. With the help of dynamic graphical software a tool was developed which is able to calculate the solution and to show it in one step. A student who knows the software well is able to implement this problem with its help. Students get the possibility to reflect on their mathematical
solutions and are able to visualize them in a way which shows the results of such an optimization very impressively (Figure 4):

Figure 4. Visualizing the solution.

We know that the gradient of the tangent is 0 when the minimal surface area is reached (Figure 5). Therefore the tangent at the point P is sketched and the minimal surface area can be determined. It is found at the point P (3.41, 4.39). The same value can be found by calculation.

Figure 5. The result of the optimization.

Fifth step – Interpreting and arguing about the solution. Through a detailed look at the results for the radius and the height of the tin it is easy to see that the minimal surface is reached when the height equals two times the radius (h = 2r), which can be shown through a simple analogous calculation – let us call it a pseudo-proof.
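The pseudo-proof mentioned above can be written out as a short derivation, using the substituted surface function from the third step (a sketch; the notation follows the example, with V the given volume):

```latex
S(r) = 2\pi r^2 + \frac{2V}{r}, \qquad
S'(r) = 4\pi r - \frac{2V}{r^2} = 0
\;\Longrightarrow\; V = 2\pi r^3 .
% Inserting this into the constraint V = \pi r^2 h:
\pi r^2 h = 2\pi r^3
\;\Longrightarrow\; h = 2r .
```

Note that the argument is independent of the particular value of V: every surface-minimal cylinder has a height equal to its diameter.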


Sixth step – Validating the solution. Students could search for such tins, for instance in a supermarket. They will find them quite easily. My hypothesis for finding such surface-optimized tins in reality is the following: “If the design is not important to the producers, the surface is optimized. If the design, or maybe a company logo, is important, then the tins are not optimized.” I have not proved this hypothesis, but if we have a look at tins in the supermarket it becomes obvious. Tins in which drinks are kept are not surface-optimized; tins in which foods are kept, like tins for vegetables, especially (sweet) corn, are surface-optimized, because the design is not important to the buyer. If we have a look at the pictured corn tin (see Figure 6) my conjecture appears reasonable.

Figure 6. Corn-tin.

Advancements in the model. As we have seen, the situation of modelling a tin and calculating the optimal surface area for a given volume has been extremely idealized. As a first modelling cycle it is a proper approach. But surely students will
ask what they have to do if other parameters, like the seam, are to be considered. For this reason a computer algebra system should be used, because it is difficult to implement such conditions with dynamic graphical software. For a better understanding of real tins it is also necessary to discuss them with seams, which was done by Deuber (2005).

CONCLUSION

By integrating technology into education, routine jobs – like differentiating or integrating functions or manipulating mathematical terms – can be handed over to the computer, so the focus on carrying out routine calculations is diminished. The importance of calculation, a typical attribute of conventional education, loses ground to the importance of interpreting and arguing. By including real-life problems through the aspect of modelling, these aspects can be reinforced with the help of technology. Thus the role of modelling in education is strengthened and students can become aware of the enormous prominence of modelling in mathematics education.

REFERENCES

Austrian curriculum for Mathematics. (2004). AHS-Lehrplan Mathematik. Wien: BMUKK. Retrieved from http://www.bmukk.gv.at
Blum, W. (2005). Modellieren im Unterricht mit der “Tanken”-Aufgabe. Mathematik lehren, 128, 18–21. Seelze: Friedrich Verlag.
Deuber, R. (2005). Lebenszyklus einer Weissblechdose – Vom Feinblech zur versandfertigen Dose. Baden. Retrieved from http://www.swisseduc.ch/chemie/wbd/docs/modul2.pdf
Dörfler, W., & Blum, W. (1989). Bericht über die Arbeitsgruppe “Auswirkungen auf die Schule”. In J. Maaß & W. Schlöglmann (Eds.), Mathematik als Technologie? – Wechselwirkungen zwischen Mathematik, Neuen Technologien, Aus- und Weiterbildung (pp. 174–189). Weinheim: Deutscher Studienverlag.
Maaß, J., & Schlöglmann, W. (1992). Mathematik als Technologie – Konsequenzen für den Mathematikunterricht. mathematica didactica, 15(2), 38–58.
Maaß, K. (2005a). Modellieren im Mathematikunterricht der S I. Journal für Mathematik-Didaktik (JMD), 26(2), 114–142.
Maaß, K. (2005b). Stau – eine Aufgabe für alle Jahrgänge! Praxis der Mathematik (PM), 47(3), 8–13.
Möhringer, J. (2006). Bildungstheoretische und entwicklungsadäquate Grundlagen als Kriterien für die Gestaltung von Mathematikunterricht am Gymnasium. Dissertation, LMU München.
Siller, H.-St. (2008). Modellbilden – eine zentrale Leitidee der Mathematik. Schriften zur Didaktik der Mathematik und Informatik an der Universität Salzburg. Aachen: Shaker Verlag.
Siller, H.-St., & Maaß, J. (2009). Fußball EM mit Sportwetten. In A. Brinkmann & R. Oldenburg (Eds.), ISTRON – Anwendung und Modellbildung im Mathematikunterricht (pp. 95–113). Hildesheim: Franzbecker.
Winter, H. (1972). Über den Nutzen der Mengenlehre für den Arithmetikunterricht in der Grundschule. Die Schulwarte, 25(9/10), 10–40.
Wittmann, E. Ch. (1974). Grundfragen des Mathematikunterrichts. Wiesbaden: Vieweg.

Hans-Stefan Siller IFFB – Dept. for Mathematics and Informatics Education University of Salzburg


LIST OF CONTRIBUTORS

Manfred Borovcnik

[email protected]

Ramesh Kapadia

[email protected]

Astrid Brinkmann

[email protected]

Klaus Brinkmann

[email protected]

Tim Brophy

[email protected]

Jean Charpin

[email protected]

Simone Göttlich

[email protected]

Thorsten Sickenberger

[email protected]

Günter Graumann

[email protected]

Ailish Hannigan

[email protected]

Herbert Henning

[email protected]

Benjamin John

[email protected]

Patrick Johnson

[email protected]

John O’Donoghue

[email protected]

Astrid Kubicek

[email protected]

Jim Leahy

[email protected]

Juergen Maasz

[email protected]

Hans-Stefan Siller

[email protected]

Thomas Schiller

[email protected]

