math-float32-bits
Returns a string giving the literal bit representation of a single-precision floating-point number.
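The host language of this API isn't shown; as a minimal sketch of the described behavior in Python, the helper name `float32_bits` being my own, the bit string can be recovered by round-tripping the value through a packed single-precision representation:

```python
import struct

def float32_bits(x):
    """Return the 32-character bit string of x as an IEEE 754 single."""
    # Pack as a big-endian float32, then format the resulting word as bits:
    # 1 sign bit, 8 exponent bits, 23 significand bits.
    [word] = struct.unpack('>I', struct.pack('>f', x))
    return format(word, '032b')

print(float32_bits(1.0))  # sign 0, biased exponent 127, zero significand
```

Note that packing to `float32` rounds the input first, so the string reflects the nearest representable single, not the original double.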
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
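Assuming C-style `frexp` semantics (the returned `y` satisfies `0.5 <= |y| < 1`), Python's standard library exposes the same decomposition:

```python
import math

# x = y * 2**exp, with y a normal number in [0.5, 1).
y, exp = math.frexp(8.0)
print(y, exp)  # 0.5 4, since 8 = 0.5 * 2**4
assert y * 2 ** exp == 8.0
```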
Sets the more significant 32 bits of a double-precision floating-point number.
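A sketch of what setting the more significant word means, in Python; the helper name `set_high_word` is my own, and the bytes are handled big-endian so the high word comes first:

```python
import struct

def set_high_word(x, high):
    """Replace the more significant 32 bits of double x with `high`."""
    [word] = struct.unpack('>Q', struct.pack('>d', x))
    word = (high << 32) | (word & 0xFFFFFFFF)  # keep the low 32 bits
    [y] = struct.unpack('>d', struct.pack('>Q', word))
    return y

# 1.0 is 0x3FF0000000000000, so setting the high word of 0.0
# to 0x3FF00000 yields exactly 1.0.
print(set_high_word(0.0, 0x3FF00000))
```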
Returns a string giving the literal bit representation of a double-precision floating-point number.
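The double-precision analogue follows the same pattern as the single-precision case, now with a 64-bit word (1 sign bit, 11 exponent bits, 52 significand bits); the helper name is again my own:

```python
import struct

def float64_bits(x):
    """Return the 64-character bit string of x as an IEEE 754 double."""
    [word] = struct.unpack('>Q', struct.pack('>d', x))
    return format(word, '064b')

print(float64_bits(1.0))  # sign 0, biased exponent 1023, zero significand
```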
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
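For the unsigned integer variants no repacking is needed; zero-padded binary formatting gives the literal bit string directly (helper names mine):

```python
def uint16_bits(n):
    """Return the 16-character bit string of n as an unsigned 16-bit integer."""
    return format(n & 0xFFFF, '016b')

def uint32_bits(n):
    """Return the 32-character bit string of n as an unsigned 32-bit integer."""
    return format(n & 0xFFFFFFFF, '032b')

print(uint16_bits(5))  # '0000000000000101'
print(uint32_bits(1))  # '00000000000000000000000000000001'
```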
Euler's number.
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Difference between one and the smallest value greater than one that can be represented as a single-precision floating-point number.
Difference between one and the smallest value greater than one that can be represented as a double-precision floating-point number.
Natural logarithm of the square root of 2π.
Natural logarithm of 10.
Natural logarithm of 2.
Base 10 logarithm of Euler's number.
Base 2 logarithm of Euler's number.
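The five logarithmic constants above can be computed (and cross-checked) with the standard library; note that the base-10 and base-2 logarithms of e are the reciprocals of ln(10) and ln(2):

```python
import math

LN_SQRT_TWO_PI = 0.5 * math.log(2.0 * math.pi)  # ln(sqrt(2*pi)) ~ 0.918939
LN10 = math.log(10.0)                           # ~ 2.302585
LN2 = math.log(2.0)                             # ~ 0.693147
LOG10E = math.log10(math.e)                     # = 1 / LN10
LOG2E = math.log2(math.e)                       # = 1 / LN2

assert abs(LOG10E * LN10 - 1.0) < 1e-15
assert abs(LOG2E * LN2 - 1.0) < 1e-15
```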
Maximum single-precision floating-point number.
Maximum double-precision floating-point number.
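Both maxima have closed forms: the largest finite value is `(2 - 2^-p) * 2^emax` with `p = 23, emax = 127` for singles and `p = 52, emax = 1023` for doubles. In Python, where floats are doubles:

```python
import sys

FLOAT64_MAX = sys.float_info.max  # ~ 1.7976931348623157e+308
FLOAT32_MAX = (2.0 - 2.0 ** -23) * 2.0 ** 127  # ~ 3.4028235e+38

print(FLOAT64_MAX)
print(FLOAT32_MAX)
```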
Effective number of bits in the significand of a double-precision floating-point number.
Square root of 1/2.
Square root of 2.
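The last three entries can likewise be verified in Python: a double carries 53 effective significand bits (52 stored plus one implicit), and the two square-root constants are reciprocals of each other:

```python
import math
import sys

PRECISION = sys.float_info.mant_dig  # 53 effective significand bits
SQRT_HALF = math.sqrt(0.5)           # ~ 0.707107
SQRT2 = math.sqrt(2.0)               # ~ 1.414214

assert PRECISION == 53
assert abs(SQRT_HALF * SQRT2 - 1.0) < 1e-15
```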