
Novel and extended floating point

by John D. Cook

My first consulting project, right after I graduated college, was developing floating point algorithms for a microprocessor. It was fun work, coming up with ways to save a clock cycle or two, save a register, get an extra bit of precision. But nobody does that kind of work anymore. Or do they?

There is still demand for novel floating point work. Or maybe I should say there is once again demand for such work.

Companies are interested in low-precision arithmetic. They may want to save memory and are willing to trade precision for it. With deep neural networks, for example, quantity is more important than quality. That is, there are many weights to learn, but the individual weights do not need to be very precise.
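Here's a minimal sketch of the memory trade-off, using NumPy (my choice of illustration, not something from the post): a million weights stored in half precision take a quarter of the space of double precision, at the cost of a few decimal digits per weight.

```python
import numpy as np

# One million hypothetical network weights at two precisions.
weights64 = np.random.rand(1_000_000)        # float64: 8 bytes per weight
weights16 = weights64.astype(np.float16)     # float16: 2 bytes per weight

print(weights64.nbytes)  # 8000000 bytes
print(weights16.nbytes)  # 2000000 bytes

# Worst-case rounding error from the conversion, small relative to weights near 1.
print(np.max(np.abs(weights64 - weights16.astype(np.float64))))
```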

And while some clients want low precision, others want extra precision. I'm usually skeptical when someone tells me they need extended precision, because typically they just need a better algorithm. And yet some clients do have a practical need for extended precision.
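As one small illustration of where double precision runs out (using the mpmath library, which the post doesn't name, just as an example), a term of size 10^-20 added to 1 is absorbed completely in 64-bit arithmetic but survives when working with 50 significant digits:

```python
from mpmath import mp, mpf

# In 64-bit IEEE arithmetic the small term is absorbed entirely.
print((1.0 + 1e-20) - 1.0)                 # 0.0

# With 50 significant digits of working precision, it survives.
mp.dps = 50
print((mpf(1) + mpf("1e-20")) - mpf(1))    # 1.0e-20
```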

Some clients aren't primarily interested in precision; they're interested in ways to reduce energy consumption. They're more concerned with watts than clock cycles or ulps. I imagine this will become more common.

For a while it seemed that 64-bit IEEE floating point numbers had conquered the world. Now I'm seeing more interest in smaller and larger formats, and simply different formats. New formats require new math algorithms, and that's where I've helped clients.

If you'd like to discuss a novel floating point project, let's talk.

More floating point posts