Yesterday I mentioned that Haskell CSS Syntax uses a separate module to parse floating point numbers.

Decoding numbers involves converting from our base 10 to the computer's native base 2. For integers this is a simple multiply-and-add, as implemented by Haskell CSS Syntax.
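That multiply-and-add loop can be sketched like so (my own names, not the library's actual code):

```haskell
import Data.Char (ord)

-- Multiply-and-add decoding of a base-10 digit string into an
-- unbounded Integer: for each digit, shift the accumulator up one
-- decimal place and add the digit's value.
decodeInt :: String -> Integer
decodeInt = foldl step 0
  where step acc d = acc * 10 + fromIntegral (ord d - ord '0')

main :: IO ()
main = print (decodeInt "255")  -- prints 255
```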

But floating point raises a further issue. "Scientific notation" is coefficient*10^exponent, whereas computers use coefficient*2^exponent. So how does this work?
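For concreteness, a Double already is coefficient*2^exponent under the hood, and GHC's standard `decodeFloat`/`encodeFloat` methods (from `RealFloat`) let you inspect that pair directly:

```haskell
-- decodeFloat exposes a Double's native (mantissa, base-2 exponent)
-- pair; encodeFloat reassembles it.
main :: IO ()
main = do
  let (m, e) = decodeFloat (25.0 :: Double)
  print (m, e)                      -- some (m, e) with m * 2^e == 25
  print (encodeFloat m e :: Double) -- round-trips back to 25.0
</imports>
```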


Haskell CSS Syntax yields to my code either unbounded Integers or Scientific values, at its convenience, both of which I convert to CPU-native Floats. A Scientific in turn stores a fixed-bitsize base-10 exponent and an arbitrary-size coefficient.
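The shape of that representation can be sketched as follows (field names are mine; the scientific package exposes the real ones as `coefficient` and `base10Exponent`):

```haskell
-- Sketch of the representation described above: an arbitrary-precision
-- coefficient paired with a fixed-bitsize base-10 exponent.
data Sci = Sci
  { coeff :: Integer  -- unbounded, like Data.Scientific's coefficient
  , ex10  :: Int      -- fixed bitsize, like base10Exponent
  } deriving Show

main :: IO ()
main = print (Sci 15 (-1))  -- represents 15 * 10^(-1), i.e. 1.5
```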

To perform the conversion for small enough exponents, Scientific consults a lookup table to decide what to multiply the coefficient by (for positive exponents) or take the remainder by (for negative).
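A minimal sketch of that lookup-table conversion (my own names, not Scientific's internals; note that, per the correction later in this thread, the negative-exponent case is a division):

```haskell
-- Small lookup table of powers of ten, covering exponents the table
-- can handle directly.
magnitudes :: [Integer]
magnitudes = [10 ^ n | n <- [0 .. 17 :: Int]]

-- Convert coefficient*10^exponent to a Double: multiply by a tabulated
-- power of ten for positive exponents, divide for negative ones, and
-- fall back to general exponentiation outside the table.
sciToDouble :: Integer -> Int -> Double
sciToDouble c e
  | e >= 0, e < 18       = fromIntegral (c * magnitudes !! e)
  | e < 0, negate e < 18 = fromIntegral c / fromIntegral (magnitudes !! negate e)
  | otherwise            = fromIntegral c * 10 ^^ e

main :: IO ()
main = print (sciToDouble 15 (-1))  -- prints 1.5
```

This naive version can lose precision that the real library works harder to preserve, but it shows the table's role in the conversion.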



CORRECTION: It is of course division, not remainder, for negative exponents. I got confused by mathematics' somewhat more correct notation, as opposed to that of other programming languages.
