[ExI] Floating-point math

Anders Sandberg anders at aleph.se
Tue Apr 26 11:24:02 UTC 2016


The classic article is "What Every Computer Scientist Should Know About 
Floating-Point Arithmetic" by David Goldberg:
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
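The canonical warm-up example from that territory, for anyone who has not hit it yet (a minimal illustration, not from the Goldberg article itself):

```python
import math

# Decimal 0.1 and 0.2 have no exact binary representation, so the sum
# drifts by one unit in the last place.
print(0.1 + 0.2 == 0.3)               # False
print(abs((0.1 + 0.2) - 0.3))         # ~5.5e-17

# Compare with a tolerance instead of exact equality:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```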

Whatever method you use, you will have to make trade-offs. Usually the 
trade-off is that you do not want to think about floating point (and how 
bad could errors get, anyway?), so you ignore the whole problem (trading 
debugging or disaster time for programming effort). Sometimes clever 
trade-offs, or recognizing the real requirements, give huge wins (deep 
learning turns out to require very limited precision weights, which makes 
software and hardware implementations far more efficient: 
http://arxiv.org/abs/1602.02830 ). Sometimes people obsess endlessly over 
numeric precision while ignoring the massive errors induced by the model 
itself (the 1991 Sleipner A disaster was due to software that gave lots 
of significant digits, but was run with the wrong grid resolution, 
producing an answer 47% off).

In my current project I ran into a result that turned out to be an 
artifact of numerical cancellation; drawing on my ancient numerics 
knowledge I managed to update the code so it no longer produces subtle 
nonsense when dealing with probabilities less than 10^-100 (Taylor 
expansions for the win!) But I would not have noticed it had a 
philosopher colleague not looked over my shoulder at my graph and said 
"That's funny..."
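That kind of cancellation is easy to reproduce. A sketch of the general fix (illustrative only, not the actual project code): for a tiny probability p, computing 1 - p rounds to exactly 1.0 and the information is gone, while a series-based routine like the standard library's log1p keeps it.

```python
import math

p = 1e-120  # a probability far below double-precision resolution near 1.0

# Naive: 1.0 - p rounds to exactly 1.0, silently discarding p.
naive = -math.log(1.0 - p)    # 0.0 -- the rare event has "vanished"

# Series-aware: log1p evaluates log(1 + x) accurately for tiny x
# (equivalent to using the Taylor expansion x - x**2/2 + ...).
careful = -math.log1p(-p)     # ~1e-120, correct to full precision
```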

(I love it when my papers force me to exponentiate something to the 
10^10th power).
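Exponents like that only work in log space, of course. A hypothetical illustration of the manoeuvre (the numbers are made up, not from the paper):

```python
import math

# 0.999999**(10**10) underflows double precision straight to 0.0 ...
direct = 0.999999 ** 1e10

# ... so keep the logarithm instead and only report the magnitude.
log_val = 1e10 * math.log(0.999999)   # about -10000.005 (natural log)
magnitude10 = log_val / math.log(10)  # about -4343, i.e. roughly 10**-4343
```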

On 2016-04-25 14:06, David Lubkin wrote:
> http://ubiquity.acm.org/article.cfm?id=2913029
> The End of (Numeric) Error: An interview with John L. Gustafson
>
> Walter Tichy (best-known for the Revision Control System) interviews 
> computer arithmetic expert John Gustafson on his new format for 
> encoding floating point.
>
> I haven't scrutinized it closely but the concept makes sense. At first 
> glance, he's taking some of the ideas behind recent Internet data 
> formats and applying them to number-crunching.
>
> In a nutshell, in the early decades of computing, each manufacturer 
> had their own way of representing a number like
>
> 1.6025698833263803956335865659143 × 10²²³
>
> and using it in calculations. This inconsistency made it very tedious 
> to be sure that an engineer or scientist was getting the correct 
> results. And, therefore, that planes don't crash, we have accurate 
> weather forecasts, an illness is diagnosed correctly, etc.
>
> The answer was an industry-wide standard, known as IEEE floating 
> point. But the answer has a few problems. Gustafson discusses what's 
> wrong and his idea for how to fix it.
>
>
> -- David.
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


-- 
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University



