@stdlib/number-float64-base-assert
Base double-precision floating-point number assert functions.
Return an integer corresponding to the unbiased exponent of a double-precision floating-point number.
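The unbiased exponent can be read directly from the bit pattern: the 11 bits below the sign bit hold the biased exponent, and subtracting the bias (1023) yields the unbiased value. A minimal sketch in plain JavaScript, illustrating the technique rather than the stdlib package's API (the `exponent` helper name and DataView approach are assumptions):

```javascript
// Read the unbiased exponent from a double's bit pattern.
// Note: returns -1023 for zero/subnormals and 1024 for infinities/NaN.
function exponent( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );                      // big-endian byte layout
    var high = view.getUint32( 0 );               // more significant 32 bits
    return ( ( high >>> 20 ) & 0x7ff ) - 1023;    // 11 exponent bits minus the bias
}

// exponent( 3.14 ) => 1 (since 3.14 = 1.57 * 2^1)
```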
Create a double-precision floating-point number from a literal bit representation.
Convert a signed 64-bit integer byte array to a double-precision floating-point number.
Create a double-precision floating-point number from a higher order word and a lower order word.
Return an unsigned 32-bit integer corresponding to the more significant 32 bits of a double-precision floating-point number.
Return an unsigned 32-bit integer corresponding to the less significant 32 bits of a double-precision floating-point number.
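The word-oriented functions all reinterpret the same 8 bytes as either one double or two unsigned 32-bit words; the same split also underlies the "split into a higher order word and a lower order word" entry below. A plain-JavaScript sketch of that round trip (the `toWords`/`fromWords` names are illustrative, not the stdlib API):

```javascript
// Split a double into [ high, low ] words and build a double back from them.
function toWords( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );                             // big-endian byte layout
    return [ view.getUint32( 0 ), view.getUint32( 4 ) ]; // [ high, low ]
}

function fromWords( high, low ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setUint32( 0, high >>> 0 );
    view.setUint32( 4, low >>> 0 );
    return view.getFloat64( 0 );
}

// toWords( 3.14 )                     => [ 1074339512, 1374389535 ]
// fromWords( 1074339512, 1374389535 ) => 3.14
```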
Return a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
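For subnormal inputs this amounts to scaling by 2^52 so the significand becomes normal and compensating in the returned exponent. A rough plain-JavaScript sketch under that assumption (not the stdlib API; special values are simply passed through):

```javascript
// Return [ y, exp ] such that x = y * 2^exp and y is a normal number.
function normalize( x ) {
    var SMALLEST_NORMAL = 2.2250738585072014e-308; // 2^-1022
    var SCALAR = 4503599627370496;                 // 2^52
    if ( x !== 0 && Math.abs( x ) < SMALLEST_NORMAL ) {
        return [ x * SCALAR, -52 ];                // subnormal: rescale
    }
    return [ x, 0 ];                               // zero, normal, ±infinity, NaN
}

// normalize( 3.14e-319 ) => [ ~1.4141e-303, -52 ]
```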
Convert a double-precision floating-point number to the nearest single-precision floating-point number.
Convert a double-precision floating-point number to a signed 32-bit integer.
Convert a double-precision floating-point number to an unsigned 32-bit integer.
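These three narrowing conversions map onto operations JavaScript already exposes: rounding to the nearest single-precision value and the ECMAScript ToInt32/ToUint32 (modulo 2^32) conversions. A brief sketch (illustrative names, not the stdlib API):

```javascript
// Round to the nearest single-precision (binary32) value.
var toFloat32 = Math.fround;

// Wrap to a signed / unsigned 32-bit integer (modulo 2^32 semantics).
function toInt32( x ) { return x | 0; }
function toUint32( x ) { return x >>> 0; }

// toFloat32( 1.337 )      => 1.3370000123977661
// toInt32( 4294967297.0 ) => 1
// toUint32( -1.0 )        => 4294967295
```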
Split a double-precision floating-point number into a higher order word and a lower order word.
Convert a signed 32-bit integer to an unsigned 32-bit integer.
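In JavaScript this reinterpretation is a single unsigned right shift by zero, which keeps the 32-bit pattern but drops the signed interpretation. A one-line sketch (illustrative name):

```javascript
// Reinterpret a two's complement bit pattern as an unsigned value.
function int32ToUint32( x ) {
    return x >>> 0;
}

// int32ToUint32( -32 ) => 4294967264
// int32ToUint32( 3 )   => 3
```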
Base utilities for unsigned 16-bit integers.
Set the more significant 32 bits of a double-precision floating-point number.
Set the less significant 32 bits of a double-precision floating-point number.
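Setting a word is the write-side counterpart of the accessors above: serialize the double, overwrite one 32-bit half, and read the 8 bytes back as a double. A plain-JavaScript sketch (illustrative name, not the stdlib API; a `setLowWord` counterpart would be identical with byte offset 4):

```javascript
// Replace the more significant 32 bits of `x` and return the resulting double.
function setHighWord( x, high ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );             // big-endian layout: bytes 0-3 = high word
    view.setUint32( 0, high >>> 0 );
    return view.getFloat64( 0 );
}

// setHighWord( 0.0, 1072693248 ) => 1.0   (1072693248 === 0x3ff00000)
```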
Return a boolean indicating whether the sign bit for a double-precision floating-point number is on (true) or off (false).
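The sign bit is the most significant bit of the high word, so the check reduces to one shift. A minimal sketch (not the stdlib API):

```javascript
// True if the sign bit is set, which also distinguishes -0 from +0.
function signbit( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    view.setFloat64( 0, x );
    return ( view.getUint32( 0 ) >>> 31 ) === 1;
}

// signbit( -7.0 ) => true
// signbit( -0.0 ) => true
// signbit( 4.0 )  => false
```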
Return a string giving the literal bit representation of a double-precision floating-point number.
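Together with the constructor described earlier ("Create a double-precision floating-point number from a literal bit representation"), this forms a lossless round trip through a 64-character string of '0' and '1' characters. A hedged plain-JavaScript sketch of both directions (illustrative names, not the stdlib API):

```javascript
// Serialize a double to its 64-bit representation as a '0'/'1' string
// (sign bit first), and parse such a string back into a double.
function toBinaryString( x ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    var str = '';
    var i;
    view.setFloat64( 0, x );                                    // big-endian layout
    for ( i = 0; i < 8; i++ ) {
        str += view.getUint8( i ).toString( 2 ).padStart( 8, '0' );
    }
    return str;
}

function fromBinaryString( bstr ) {
    var view = new DataView( new ArrayBuffer( 8 ) );
    var i;
    for ( i = 0; i < 8; i++ ) {
        view.setUint8( i, parseInt( bstr.slice( i*8, (i+1)*8 ), 2 ) );
    }
    return view.getFloat64( 0 );
}

// fromBinaryString( toBinaryString( 3.14 ) ) === 3.14 => true
```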
Convert an integer-valued double-precision floating-point number to a signed 64-bit integer byte array according to host byte order.
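Paired with the earlier byte-array constructor, this is a round trip between an integer-valued double and eight bytes holding the equivalent signed 64-bit integer in host byte order. A sketch using BigInt64Array (illustrative names, not the stdlib API; the input must be integer-valued and within int64 range):

```javascript
// Integer-valued double -> 8-byte signed 64-bit representation (host byte order).
function toInt64Bytes( x ) {
    var buf = new BigInt64Array( 1 );
    buf[ 0 ] = BigInt( x );              // throws if x is not integer-valued
    return new Uint8Array( buf.buffer );
}

// 8-byte signed 64-bit representation (host byte order) -> double.
function fromInt64Bytes( bytes ) {
    var buf = new BigInt64Array( new Uint8Array( bytes ).buffer );
    return Number( buf[ 0 ] );
}

// fromInt64Bytes( toInt64Bytes( -1.0 ) ) => -1
```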
Base utilities for signed 32-bit integers.