# In-depth: That's not normal - the performance of odd floats

In this reprinted <a href="http://altdevblogaday.com/">#altdevblogaday</a> in-depth piece, Valve Software's Bruce Dawson concludes his series on the floating point format by examining important values like denormals, NaNs, and infinities.

Bruce Dawson, Blogger

May 23, 2012

Denormals, NaNs, and infinities round out the set of standard floating-point values, and these important values can sometimes cause performance problems. The good news is, it's getting better, and there are diagnostics you can use to watch for problems. In this post I briefly explain what these special numbers are, why they exist, and what to watch out for. This article is the last of my series on floating-point. The complete list of articles in the series is:

The special float values include:

- **Infinities:** Positive and negative infinity round out the number line and are used to represent overflow and divide-by-zero. There are two of them.
- **NaNs:** NaN stands for Not a Number, and these encodings have no numerical value. They can be used to represent uninitialized data, and they are produced by operations that have no meaningful result, like infinity minus infinity or sqrt(-1). There are about sixteen million of them; they can be signaling or quiet, but there is otherwise usually no meaningful distinction between them.
- **Denormals:** Most IEEE floating-point numbers are normalized: they have an implied leading one at the beginning of the mantissa. However, this doesn't work for zero, so the float format specifies that when the exponent field is all zeroes there is no implied leading one. This also allows for other non-normalized numbers, evenly spread out between the smallest normalized float (FLT_MIN) and zero. There are about sixteen million of them, and they can be quite important.

If you start at 1.0 and walk through the floats towards zero, then initially the gap between numbers will be 0.5^24, or about 5.96e-8. After stepping through about eight million floats the gap will halve: adjacent floats will be closer together. This cycle repeats about every eight million floats until you reach FLT_MIN. At this point, what happens depends on whether denormal numbers are supported.

If denormal numbers are supported then the gap does not change. The next eight million numbers have the same gap as the previous eight million numbers, and then zero is reached. It looks something like the diagram below, which is simplified by assuming floats with a four-bit mantissa. With denormals supported the gap doesn't get any smaller when you go below FLT_MIN, but at least it doesn't get larger.

If denormal numbers are not supported then the last gap is the distance from FLT_MIN to zero. That final gap is then about eight million times larger than the previous gaps, and it defies the expectation of intervals getting smaller as numbers get smaller. In the not-to-scale diagram below, you can see what this would look like for floats with a four-bit mantissa. In this case, the final gap, between FLT_MIN and zero, is sixteen times larger than the previous gaps. With real floats the discrepancy is much larger.

In short: if we have denormals then the gap is filled, and floats behave sensibly. If we don't have denormals then the gap is empty and floats behave oddly near zero.

## The need for denormals

One easy example of when denormals are useful is the code below. Without denormals it is possible for this code to trigger a divide-by-zero exception:

```c
float GetInverseOfDiff(float a, float b)
{
    if (a != b)
        return 1.0f / (a - b);
    return 0.0f;
}
```

On Windows you can control how denormals are handled through the C runtime, flushing them to zero and later restoring the default behavior:

```c
#include <float.h>
// Flush denormals to zero, both operands and results.
_controlfp_s( NULL, _DN_FLUSH, _MCW_DN );
…
// Put denormal handling back to normal.
_controlfp_s( NULL, _DN_SAVE, _MCW_DN );
```