WTF ?

Andy Armstrong andy at hexten.net
Mon Jan 15 17:11:08 GMT 2007


On 15 Jan 2007, at 16:37, David Cantrell wrote:
>
> Determining whether a value can be precisely represented is left as an
> exercise for the reader, but to start with, any number constructed
> from 1/(2^n) is (again, provided it's in range, and for integer n (and
> I think for any n that is precisely represented in floating point ...)).

Generally it'll be any number that can be expressed as a sum of
powers of 2 with exponents between M and N, where N - M is less than
the number of bits in the mantissa and N is within the range of the
exponent.
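A quick illustration (untested sketch, but it should hold anywhere
Perl's NV is an IEEE 754 double): a short sum of powers of 2 prints
back exactly, while 1/10, which has no finite binary expansion, does
not.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 2^-1 + 2^-2 + 2^-3 fits easily in a 53-bit mantissa, so it's exact
    my $exact   = 0.5 + 0.25 + 0.125;
    # 1/10 is not a finite sum of powers of 2, so it gets rounded
    my $inexact = 0.1;

    printf "%.20f\n", $exact;    # 0.87500000000000000000
    printf "%.20f\n", $inexact;  # 0.10000000000000000555...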

A quick scan of this (which may or may not be authoritative)

http://webster.cs.ucr.edu/AoA/Windows/HTML/RealArithmetica2.html

suggests that Intel FPUs can detect loss of precision when they
attempt to store an 80-bit internal FPU value into a float or double
(32 or 64 bits), but probably can't detect loss of precision
(mantissa overflow) during the calculations themselves. As I say, I
don't know whether the article linked above is actually authoritative.
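Whatever the FPU flags do, you can at least see the narrowing effect
from Perl. Rough sketch (untested): round-trip a double through a
32-bit float with pack and compare.

    use strict;
    use warnings;

    my $d = 0.1;                        # stored as a 64-bit double (NV)
    my $f = unpack 'f', pack 'f', $d;   # round-trip through a 32-bit float
    printf "%.20f\n", $d;
    printf "%.20f\n", $f;
    print "precision lost in the narrowing\n" if $f != $d;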

Maybe I'm being too old skool about it, but I don't actually see much
problem with teaching people that conventional floating point can
represent only a small subset of the real numbers and advising them to
choose a representation that's appropriate to the task at hand. That
doesn't seem too big a price to pay for good performance and
fixed-size memory allocations.
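For instance (sketch, untested), if the task needs exact fractions
rather than raw speed, something like Math::BigRat - it ships with
core Perl these days - will do the job where floats would drift:

    use strict;
    use warnings;
    use Math::BigRat;

    # exact rational arithmetic instead of binary floating point
    my $tenth = Math::BigRat->new('1/10');
    my $sum   = $tenth + $tenth + $tenth;
    print "$sum\n";                     # 3/10, exactly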

-- 
Andy Armstrong, hexten.net


