This notebook contains an excerpt from the Python Programming and Numerical Methods - A Guide for Engineers and Scientists; the content is also available at Berkeley Python Numerical Methods.

The copyright of the book belongs to Elsevier. We also have this interactive book online for a better learning experience. The code is released under the MIT license. If you find this content useful, please consider supporting the work on Elsevier or Amazon!

# Round-off Errors

In the previous section, we discussed how floating point numbers are represented in computers as base 2 fractions. A side effect is that floating point numbers cannot be stored with perfect precision; instead, each number is approximated using a finite number of bytes. The difference between an approximation of a number used in computation and its correct (true) value is called the round-off error. It is one of the two common errors in numerical calculations. The other is the truncation error, which we will introduce in Chapter 18; the difference is that a truncation error is the error made by truncating an infinite sum and approximating it by a finite sum.
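To make the distinction concrete, here is a small sketch (our addition, not from the book) that approximates $$e$$ by truncating its infinite series $$e = \sum_{k=0}^{\infty} 1/k!$$ after 10 terms. The gap between the truncated sum and math.e is a truncation error, not a round-off error:

```python
import math

# Truncate the infinite series e = sum over k of 1/k! after 10 terms
approx_e = sum(1 / math.factorial(k) for k in range(10))

# The discarded tail of the series is the truncation error
truncation_error = math.e - approx_e
print(approx_e, truncation_error)
```

Even with exact arithmetic, this error would remain, because it comes from cutting off the sum, not from how the terms are stored.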

## Representation error

The most common form of round-off error is the representation error in floating point numbers. A simple example is representing $$\pi$$. We know that $$\pi$$ has an infinite number of digits, but when we use it, we usually keep only finitely many. For example, if you use only 3.14159265, there will be an error between this approximation and the true value. Another example is 1/3: the true value is 0.333333333…, and no matter how many decimal digits we choose, there is a round-off error as well.
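We can see this representation error directly in Python. As a small illustration (our addition, not from the book), printing 1/3 with more digits than the default display shows that the stored value is not exactly 0.333…:

```python
# Python's default display of 1/3 hides the representation error
print(1/3)

# Asking for 20 decimal places reveals the stored approximation,
# which deviates from the true value 0.33333333333333333333...
print(f"{1/3:.20f}")
```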

Besides, when we round a number multiple times, the error accumulates. For instance, if 4.845 is rounded to two decimal places, it becomes 4.85. If we then round it again to one decimal place, it becomes 4.9, and the total error is 0.055. But if we round only once, to one decimal place, we get 4.8, for which the error is only 0.045.
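This double-rounding effect can be reproduced with Python's decimal module, which rounds exact decimal values and so avoids the binary representation issues discussed in the next section. This sketch is our addition, not from the book; ROUND_HALF_UP matches the usual "round half up" rule used above:

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("4.845")

# Round twice: first to two decimal places, then to one
twice = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)    # 4.85
twice = twice.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)  # 4.9

# Round once, directly to one decimal place
once = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)       # 4.8

print(twice, once)
```

Rounding in two stages lands on 4.9 (error 0.055), while a single rounding lands on 4.8 (error 0.045), as stated above.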

## Round-off error by floating-point arithmetic

From the above example, the error between 4.9 and 4.845 should be 0.055. But if you calculate it in Python, you will see that 4.9 - 4.845 is not equal to 0.055.

4.9 - 4.845 == 0.055

False


Why does this happen? If we have a look at 4.9 - 4.845, we can see that we actually get 0.055000000000000604 instead. This is because the floating point numbers cannot be represented exactly; they are only approximations, and when they are used in arithmetic, they cause a small error.

4.9 - 4.845

0.055000000000000604

4.8 - 4.845

-0.04499999999999993


Another example, shown below, is that 0.1 + 0.2 + 0.3 is not equal to 0.6, which has the same cause.

0.1 + 0.2 + 0.3 == 0.6

False
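One way to see why such comparisons fail is to inspect the exact value a float actually stores. Passing a float to decimal.Decimal exposes its full binary-fraction expansion (a quick diagnostic sketch, our addition to the book's example):

```python
from decimal import Decimal

# Decimal(float) shows the exact base-2 fraction stored for 0.1
print(Decimal(0.1))

# The sum 0.1 + 0.2 + 0.3 therefore carries a tiny excess over 0.6
print(0.1 + 0.2 + 0.3)
```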


Though the numbers cannot be made closer to their intended exact values, the round function can be useful for post-rounding so that results with inexact values become comparable to one another:

round(0.1 + 0.2 + 0.3, 5)  == round(0.6, 5)

True
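An alternative to post-rounding with round is the standard library's math.isclose, which compares two floats within a relative tolerance. This is a common idiom, though not the approach used in the book's example above:

```python
import math

# isclose tolerates the tiny round-off left by binary arithmetic
# (the default relative tolerance is 1e-09)
print(math.isclose(0.1 + 0.2 + 0.3, 0.6))   # True
print(math.isclose(0.1 + 0.2 + 0.3, 0.7))   # False: genuinely different values
```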


## Accumulation of round-off error

When we perform a sequence of calculations on an initial input that has round-off error due to inexact representation, the errors can be magnified or accumulated. The following is an example: if we take the number 1 and add and then subtract 1/3, we get back the same number 1. But what if we add 1/3 many times and then subtract 1/3 the same number of times, do we still get the number 1? No. As you can see in the example below, the more times you do this, the more error you accumulate.

# If we only do once
1 + 1/3 - 1/3

1.0

def add_and_subtract(iterations):
    result = 1

    for i in range(iterations):
        result += 1/3

    for i in range(iterations):
        result -= 1/3

    return result

# If we do this 100 times
add_and_subtract(100)

1.0000000000000002

# If we do this 1000 times
add_and_subtract(1000)

1.0000000000000064

# If we do this 10000 times
add_and_subtract(10000)

1.0000000000001166
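When many inexact terms must be added, the standard library's math.fsum tracks the lost low-order bits and returns a correctly rounded sum, which mitigates this kind of accumulation. This sketch of one common remedy is our addition, separate from the book's add_and_subtract example:

```python
import math

values = [0.1] * 10

# Plain left-to-right summation accumulates round-off error
print(sum(values))        # not exactly 1.0

# fsum compensates for the lost precision and returns
# the correctly rounded exact sum
print(math.fsum(values))  # exactly 1.0
```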