Bit-Level Floating-Point Conversion
IEEE 754 floating-point is the foundation of modern floating-point computation. I'm not going to explain the format itself here. This page provides an interactive utility to unpack and pack 16-bit 'half', 32-bit 'float', and 64-bit 'double' floating-point values.
Converter
IEEE 754 16-Bit 'half'
| Field | Bits | Width |
|---|---|---|
| Sign | 15 | 1 |
| Biased Exp | 14–10 | 5 |
| Mantissa[1] | 9–0 | 1+10 |

Exponent bias: +15. The converter shows the value in decimal, binary, and hexadecimal, and in scientific notation (sign ⨯ significand ⨯ 2^exponent), along with its type (normal, subnormal, zero, infinity, or NaN).
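As a quick illustration of the layout above, the half-precision fields can be pulled apart with bit shifts. A minimal Python sketch (the `"e"` struct format requires Python ≥ 3.6):

```python
import struct

# Reinterpret 1.0 as its 16-bit pattern, then slice out the fields.
bits = struct.unpack("<H", struct.pack("<e", 1.0))[0]
sign     = bits >> 15           # 0: positive
biased_e = (bits >> 10) & 0x1F  # 15, i.e. unbiased exponent 15 - 15 = 0
fraction = bits & 0x3FF         # 0, so the significand is the implicit 1.0
print(hex(bits))  # 0x3c00
```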
IEEE 754 32-Bit 'float'
| Field | Bits | Width |
|---|---|---|
| Sign | 31 | 1 |
| Biased Exp | 30–23 | 8 |
| Mantissa[1] | 22–0 | 1+23 |

Exponent bias: +127.
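A worked 32-bit example (a sketch, not part of the converter): 0.15625 = 1.25 ⨯ 2^-3, so the stored biased exponent is 127 − 3 = 124 and the fraction bits encode the 1.25 significand:

```python
import struct

bits = struct.unpack("<I", struct.pack("<f", 0.15625))[0]
assert bits == 0x3E200000
assert bits >> 31 == 0                 # sign: positive
assert (bits >> 23) & 0xFF == 124      # biased exponent 124 -> 2**(124 - 127)
assert bits & 0x7FFFFF == 0b01 << 21   # fraction bits: significand 1.25
# Reconstruct the value from the fields: (1 + fraction/2**23) * 2**(exp - 127)
assert (1 + (bits & 0x7FFFFF) / 2**23) * 2.0**(124 - 127) == 0.15625
```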
IEEE 754 64-Bit 'double'
| Field | Bits | Width |
|---|---|---|
| Sign | 63 | 1 |
| Biased Exponent | 62–52 | 11 |
| Mantissa[1] | 51–0 | 1+52 |

Exponent bias: +1023.
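The same unpacking works for doubles, just with the wider fields and the +1023 bias (again a sketch using `struct`):

```python
import struct

bits = struct.unpack("<Q", struct.pack("<d", 1.0))[0]
assert bits == 0x3FF0000000000000    # sign 0, biased exponent 1023, fraction 0
assert (bits >> 52) & 0x7FF == 1023  # unbiased exponent: 1023 - 1023 = 0
```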
Load a Special Value
| Value | 16-bit | 32-bit | 64-bit |
|---|---|---|---|
| Smallest Subnormal: | 2^-24 ≈ 5.96e-8 | 2^-149 ≈ 1.40e-45 | 2^-1074 ≈ 4.94e-324 |
| Largest Subnormal: | ≈ 6.10e-5 | ≈ 1.18e-38 | ≈ 2.23e-308 |
| Smallest Normal: | 2^-14 ≈ 6.10e-5 | 2^-126 ≈ 1.18e-38 | 2^-1022 ≈ 2.23e-308 |
| Largest Normal: | 65504 | ≈ 3.40e38 | ≈ 1.80e308 |
| Exact Integers Through[2]: | 2^11 = 2048 | 2^24 = 16777216 | 2^53 = 9007199254740992 |
The NaNs provided are examples (there are many possible NaN bit patterns)[3].
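The special values above are easy to regenerate from their integer representations. For the 32-bit column, a minimal sketch:

```python
import struct

def intrep_as_f32(intrep: int) -> float:
    """Reinterpret a 32-bit integer's bit pattern as a float32."""
    return struct.unpack("<f", struct.pack("<I", intrep))[0]

smallest_subnormal = intrep_as_f32(0x00000001)  # 2**-149
largest_subnormal  = intrep_as_f32(0x007FFFFF)  # just below the smallest normal
smallest_normal    = intrep_as_f32(0x00800000)  # 2**-126
largest_normal     = intrep_as_f32(0x7F7FFFFF)  # (2 - 2**-23) * 2**127

assert smallest_subnormal == 2.0**-149
assert smallest_normal == 2.0**-126
assert largest_normal == (2 - 2.0**-23) * 2.0**127
```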
Loose Ends
Please contact me with suggestions or bug reports.
Thanks to h-schmidt.net's converter for inspiring this project; it served my needs for a long time.
Simple code for converting among floating-point, (hex) integer, and byte representations, in Python:
```python
import struct

# Explicit "<" (little-endian, standard sizes) makes the int formats portable;
# the bare native "I"/"Q" of the original can vary in size across platforms.

def f32_as_intrep(val: float) -> int:
    return struct.unpack("<I", struct.pack("<f", val))[0]

def f32_as_intrep_str(val: float) -> str:
    return hex(f32_as_intrep(val))

def intrep_as_f32(intrep: int) -> float:
    return struct.unpack("<f", struct.pack("<I", intrep))[0]

def f32_to_B(val: float, big_endian: bool = True) -> bytes:
    return struct.pack(">f" if big_endian else "<f", val)

def B_to_f32(B: bytes, big_endian: bool = True) -> float:
    return struct.unpack(">f" if big_endian else "<f", B)[0]

def f64_as_intrep(val: float) -> int:
    return struct.unpack("<Q", struct.pack("<d", val))[0]

def f64_as_intrep_str(val: float) -> str:
    return hex(f64_as_intrep(val))

def intrep_as_f64(intrep: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", intrep))[0]

def f64_to_B(val: float, big_endian: bool = True) -> bytes:
    return struct.pack(">d" if big_endian else "<d", val)

def B_to_f64(B: bytes, big_endian: bool = True) -> float:
    return struct.unpack(">d" if big_endian else "<d", B)[0]
```
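A few sanity checks on these conversions (my sketch, using `struct` directly); note that byte order only matters when serializing to bytes, while the integer representation itself is just a bit pattern:

```python
import struct

# Big- vs little-endian bytes of the same float32 pattern 0x3F800000 (1.0).
assert struct.pack(">f", 1.0) == b"\x3f\x80\x00\x00"
assert struct.pack("<f", 1.0) == b"\x00\x00\x80\x3f"

# Round trip through the integer representation.
i = struct.unpack("<I", struct.pack("<f", -2.5))[0]
assert i == 0xC0200000  # sign 1, biased exponent 128, significand 1.25
assert struct.unpack("<f", struct.pack("<I", i))[0] == -2.5
```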
The same conversions in C++20, using `std::bit_cast`:
```cpp
#include <cstdint>
#include <bit>

[[nodiscard]] constexpr uint32_t f32_to_intrep( float val ) noexcept
{
    return std::bit_cast<uint32_t>(val);
}
[[nodiscard]] constexpr float intrep_to_f32( uint32_t intrep ) noexcept
{
    return std::bit_cast<float>(intrep);
}
[[nodiscard]] constexpr uint64_t f64_to_intrep( double val ) noexcept
{
    return std::bit_cast<uint64_t>(val);
}
[[nodiscard]] constexpr double intrep_to_f64( uint64_t intrep ) noexcept
{
    return std::bit_cast<double>(intrep);
}
```
