Floating-point numbers are inherently imprecise, which can be problematic when we unit test numerical algorithms. Let's look at an example (JVM/Scala, with ScalaTest as the testing framework):
This test works just fine… until someone decides to do a "harmless refactoring" and replaces
Now the test fails with the following message:
The expected and actual values differ by ~5.6E-17. A Double offers about 15 significant decimal digits of precision; any digits beyond the 15th are just noise that should be ignored.
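The original test isn't shown here, but this class of error is easy to reproduce with any sum whose result cannot be represented exactly. A minimal illustration (the values are my own, chosen because they produce a discrepancy of the same ~5.6E-17 magnitude):

```scala
val expected = 0.3
val actual   = 0.1 + 0.2   // neither operand nor result is exactly representable

println(actual)             // 0.30000000000000004
println(actual - expected)  // ≈ 5.6E-17 of pure noise
println(actual == expected) // false: exact comparison fails
```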
To make our unit tests more robust we have two strategies. The first is to know the precision guaranteed by the algorithm we are using, and to round the result to that precision before returning it to the client:
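A sketch of this first strategy; the helper name `roundTo` and the choice of 12 significant digits are my own illustration, not the article's code:

```scala
import java.math.MathContext

// Hypothetical helper: round a result to the number of significant
// digits the algorithm actually guarantees, before returning it.
def roundTo(x: Double, significantDigits: Int): Double =
  BigDecimal(x).round(new MathContext(significantDigits)).toDouble

val result = roundTo(0.1 + 0.2, 12)
println(result)        // 0.3
println(result == 0.3) // true: exact comparison is now safe
```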
The second strategy is to use assertions designed for floating-point numbers. Again, to use them correctly we need to know the precision of our algorithm:
In this case it is good to define the precision as a global constant (or as a constant per algorithm).
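The idea behind such assertions can be sketched in plain Scala as follows; the constant name and tolerance value are illustrative, and the ScalaTest equivalent is shown in a comment:

```scala
// Per-algorithm precision constant, as suggested above (value is illustrative)
val Precision = 1e-12

// Plain-Scala sketch of a tolerance check; ScalaTest offers the same idea
// built in, e.g.: assert(actual === expected +- Precision)
def approxEqual(actual: Double, expected: Double): Boolean =
  math.abs(actual - expected) <= Precision

assert(approxEqual(0.1 + 0.2, 0.3)) // passes despite the rounding noise
```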
Personally I prefer the first strategy, but with either of them our tests will be more robust and refactoring-friendly.
Troubles with NaN
A totally different set of problems is connected to
On the JVM, the operator
behaves inconsistently when comparing
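The inconsistency is easy to demonstrate: the primitive comparison follows IEEE 754, while the boxed `java.lang.Double` does not.

```scala
val nan: Double = Double.NaN

// Primitive comparison follows IEEE 754: NaN is not equal to anything,
// including itself.
println(nan == nan)      // false

// ...but boxed java.lang.Double.equals treats NaN as equal to itself.
println(nan.equals(nan)) // true
```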
Unit testing frameworks often do not help much here. For example, the following test:
will fail with a rather unhelpful message:
According to the ScalaTest guidelines, we should use
to check if a value is
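For instance (my own minimal example of such a check):

```scala
val result = math.sqrt(-1.0) // yields NaN

// result == Double.NaN would always be false, so test the value directly:
assert(result.isNaN)
```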
We experience similar troubles when we try to compare case classes containing Double fields with
I do not have a good solution for this problem.
We can either create a custom assertion for a given case class ourselves,
define a custom equality using
or we can use
None of the solutions is great.
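To make the trade-off concrete, here is a sketch of the first option, a hand-rolled tolerant comparison; the `Point` case class, the helper name, and the tolerance are all hypothetical:

```scala
// Hypothetical case class with Double fields
case class Point(x: Double, y: Double)

// Hand-rolled approximate equality for this one case class
def tolerantEquals(a: Point, b: Point, eps: Double = 1e-12): Boolean =
  math.abs(a.x - b.x) <= eps && math.abs(a.y - b.y) <= eps

val computed = Point(0.1 + 0.2, 1.0)
assert(computed != Point(0.3, 1.0))            // built-in equality fails
assert(tolerantEquals(computed, Point(0.3, 1.0))) // tolerant comparison passes
```

The obvious drawback, and one reason none of the options is great, is that such a helper has to be written and maintained separately for every case class.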
The last thing to remember is that we cannot