IEEE 754‐1985, Binary 64, Floating‐Point Examiner
Chris W. Johnson
2022‐10‐02
Imprecision is inevitable in numeric data types, because none can represent every possible number. It is especially common in floating‐point types, like IEEE 754, which are designed to preserve magnitude by sacrificing precision. (Conversely, integer data types preserve precision by sacrificing magnitude.) Loss of precision, for whatever reason, can have important and surprising effects, especially when people are unaware of the issue. Lack of that awareness is a common problem (even among computer programmers) in part because decimal representations of these numbers are pervasively created with the intent of hiding inaccuracies.
It is hoped this tool will aid understanding of the causes and effects of those issues.
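For example, a default decimal rendering hides the imprecision entirely. A small Java sketch (class name hypothetical) prints 0.1 as it is usually displayed, then the exact value actually stored:

```java
import java.math.BigDecimal;

public class HiddenImprecision {
    public static void main(String[] args) {
        double tenth = 0.1;
        // The default rendering is the shortest decimal that round-trips
        // back to the same bits — not the true stored value.
        System.out.println(tenth);                  // prints 0.1
        // BigDecimal's double constructor exposes the exact stored value:
        System.out.println(new BigDecimal(tenth));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```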
The “IEEE 754‐1985, binary 64” format is a very common floating‐point format. Java¹ uses it to represent numbers in its double primitives and Double objects, and for all floating‐point calculations; by extension, any other language executed in the Java Virtual Machine (JVM) uses it too. It is also used by languages unrelated to Java and the JVM. Notably, C and C++ use it wherever it is the host CPU’s native representation (Intel’s CPUs are prime examples, so it is a very common case), JavaScript uses it for essentially all numbers, C#’s double type uses it, Python uses it on most platforms, and so on.
TL;DR
Examine the values 0.1 (⅒) and 0.125 (⅛). Observe that the former is approximated, while the latter is exact. Why? 0.125 is 2⁻³, an exact power of two, but 0.1 has no finite binary expansion: this format stores every value as an integer quotient of a division by 2⁵², scaled by a power of two (hence the format’s “binary” designation), and no such value equals ⅒ exactly. Didn’t know all that? Don’t feel bad; most computer programmers don’t. Now, be one of the exceptions. (It’s important.)
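The difference is easy to demonstrate. A small Java sketch (class name hypothetical) accumulates each value repeatedly; 0.1 drifts, while 0.125 stays exact:

```java
public class ExactVsApprox {
    public static void main(String[] args) {
        // 0.1 has no finite binary expansion, so repeated addition drifts:
        double tenths = 0.0;
        for (int i = 0; i < 10; i++) tenths += 0.1;
        System.out.println(tenths == 1.0);   // false
        System.out.println(tenths);          // 0.9999999999999999

        // 0.125 is 2⁻³, so it is stored exactly and its sums stay exact:
        double eighths = 0.0;
        for (int i = 0; i < 8; i++) eighths += 0.125;
        System.out.println(eighths == 1.0);  // true
    }
}
```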
Input Formats
General rules:
- Leading and trailing whitespace is ignored.
- The sign, if explicit, is the first character. Supported signs: ‘+’, ‘-’ (U+002D, HYPHEN-MINUS), and ‘−’ (U+2212, MINUS SIGN).
- Input letter case is irrelevant. Uppercase, lowercase, and mixed‐case are accepted.
Input Example | Interpretation |
---|---|
INFINITY | Infinity. |
INF | Infinity. |
∞ | Infinity (U+221E). |
NaN | Not‐a‐Number. (Result of “0.0 / 0.0”, “0.0 * ∞”, and other problematic calculations.) |
? | Decimal number. Locale aware. |
? | Decimal number with digit grouping. Locale aware, and grouping with ‘_’ (U+005F) works in all locales. |
0x0123456789AbCdEf | Hexadecimal with “0x” prefix.² Interpreted as a big-endian representation of the IEEE 754-1985 binary 64 format (not an integer). 1–16 hexadecimal digits, optionally grouped with ‘_’ (U+005F). Optional “L” suffix. |
10602c1299 | Hexadecimal, probably. Guess based on the presence of a character in ‘A’–‘F’, the absence of a fraction separator, and ≤ 16 digits. Interpreted as a big-endian representation of the IEEE 754-1985 binary 64 format (not an integer). Optional grouping with ‘_’ (U+005F). Optional “L” suffix. |
04 | Hexadecimal, potentially. Guess based on the presence of a leading zero, the absence of a fraction separator, and ≤ 16 digits. Interpreted as a big-endian representation of the IEEE 754-1985 binary 64 format (not an integer). Optional grouping with ‘_’ (U+005F). Optional “L” suffix. |
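The “bits, not an integer” interpretation above can be sketched in Java (class and method names hypothetical): strip the grouping underscores, the “0x” prefix, and any “L” suffix, then hand the raw bit pattern to Double.longBitsToDouble:

```java
public class HexBitsInput {
    static double parseBits(String s) {
        s = s.trim().replace("_", "");
        if (s.regionMatches(true, 0, "0x", 0, 2)) s = s.substring(2);
        if (s.endsWith("L") || s.endsWith("l")) s = s.substring(0, s.length() - 1);
        // parseUnsignedLong also accepts patterns with the sign bit (bit 63) set
        long bits = Long.parseUnsignedLong(s, 16);
        return Double.longBitsToDouble(bits);
    }

    public static void main(String[] args) {
        System.out.println(parseBits("0x3FC0_0000_0000_0000"));  // 0.125
        System.out.println(parseBits("0x3FB999999999999AL"));    // 0.1
    }
}
```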
Floating‐Point Format | IEEE 754‐1985, binary 64 | |
---|---|---|
Value, Typical Representation, Decimal | | |
Value, Actual, Binary | Bits 63–0 | |
Value, Actual, Hexadecimal | Bits 63–0 | |
Sign | Bit 63 | |
Exponent | Bits 62–52 | |
Mantissa | Bits 51–0 | |
Value Calculation | = Full‐Precision Result ⩰ (low‐precision) | |
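The “Value Calculation” row can be reproduced by hand. A minimal Java sketch (class name hypothetical), valid for normal numbers only (subnormals, ±∞, and NaN use the reserved exponent values 0 and 2047), splits the 64 bits into the three fields above and recombines them:

```java
public class FieldDecode {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        long sign     = (bits >>> 63) & 0x1L;            // bit 63
        long exponent = (bits >>> 52) & 0x7FFL;          // bits 62–52, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;         // bits 51–0

        // Normal-number formula:
        //   (-1)^sign × (1 + mantissa ⁄ 2⁵²) × 2^(exponent − 1023)
        double value = Math.pow(-1, sign)
                     * (1.0 + mantissa / Math.pow(2, 52))
                     * Math.pow(2, exponent - 1023);

        System.out.println(value == 0.1);  // true: the fields reassemble exactly
    }
}
```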
Change the Bits, See What Happens…
Click on the bits to change their values. This page updates immediately.
[Interactive bit editor: sign bit (±), 11 exponent bits, 52 mantissa bits; click any of the 64 bits to toggle it.]
Conventions
- All hexadecimal and binary representations are big‐endian.
- Binary (base 2) representations are suffixed with a subscripted “2”. Their bits are numbered 63 down to 0, assigned from left to right.
- Hexadecimal (base 16) representations are suffixed with a subscripted “16”.
- Hexadecimal representations are used when shorter than decimal.
Footnotes
1. Java Virtual Machine versions prior to 15 use IEEE 754‐1985. Versions 15 and above use IEEE 754‐2019, which changes neither the binary format, nor its interpretation, as examined here.
2. In Java, for example, such values are obtained using “System.out.printf("0x%x", Double.doubleToLongBits(value));”, where value is the double being examined.