So now we have two models we can use to compute an average. Both are mathematically correct, and we expect the second model (the one in which we divide each number by the count of all the numbers before adding) to avoid some large numbers. This requires a suspension of disbelief, because the maximum number we can handle is 4 billion, and I know the testers out there have already asked, "But what if the average is expected to be over 4 billion?" We'll come back to that after we test the basic cases for the second model.
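Python's own integers grow as large as they need to, so that 4 billion ceiling really comes from fixed-width integers like a 32-bit unsigned int. Here's a minimal sketch (my own illustration, not part of either model) that fakes 32-bit arithmetic by masking, just to show what silent wraparound looks like:

MASK_32 = 0xFFFFFFFF  # 2**32 - 1, about 4.29 billion

a = 4_000_000_000
b = 1_000_000_000

exact = a + b                # Python keeps the full value: 5000000000
wrapped = (a + b) & MASK_32  # what a 32-bit unsigned add would actually store

print(exact)    # 5000000000
print(wrapped)  # 705032704 - the sum silently wrapped around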
Let's start with some Python code. Nothing special for the first model, the one in which we add all the numbers first and then divide by the count of the numbers:
total = 0  # this is the running total
k = 0      # this is how many items there are in total
for i in range(1, 100):  # this will go from 1 to 99; range() stops before the 100
    total += i  # add each number to the total: 1+2+3+4+5+...+99
    k += 1      # count how many numbers there are - 99, because Python skips the 100 from above
print(total / k)  # print the average
And when this is run, Python prints out 50.0. This is what we expected (except maybe we only expected 50, without the "point 0" at the end).
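As a quick cross-check (my addition, not part of either model), the standard library computes the same average:

import statistics

print(statistics.mean(range(1, 100)))  # prints 50, matching the loop above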
If we keep the k we had above (99), then we can add a second bit of code to get the second model working. Remember, the second model is the one in which we divide each number by the count of all the numbers and then add the result to a running total. That running total will be the average value when we're done.
total = 0  # reset the running total to zero
for i in range(1, 100):  # again, loop from 1 to 99
    total += i / k  # now add 1/99 + 2/99 + 3/99 + 4/99 + 5/99 + ... + 99/99
print(total)
And we get an incorrect result:

49.99999999999999
This is a short example of floating point errors cropping up. We haven't even gotten to the test to see if our numbers were getting too large, and we already need to deal with this.
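One way to confirm that rounding is the whole story (again, my own aside) is to redo the divide-first sum in exact rational arithmetic with the standard library's fractions module. With no rounding at all, the running total lands on exactly 50:

from fractions import Fraction

k = 99
total = Fraction(0)          # an exact rational running total
for i in range(1, 100):
    total += Fraction(i, k)  # each i/99 is held exactly, with no rounding
print(total)                 # 50 - the exact answer
print(float(total))          # 50.0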
Questions, comments, concerns and criticisms always welcome,
John