@stdlib/number-float32-base-significand
Return an integer corresponding to the significand of a single-precision floating-point number.
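As a rough illustration of what "significand" means here, the 23 stored significand bits of a single-precision value can be read out in plain JavaScript by reinterpreting the float's bytes as an unsigned 32-bit word and masking the low 23 bits. This is a sketch using standard typed arrays, not the package's own implementation, and the function name is invented for the example.

```javascript
// Sketch only: read the 23 stored significand bits of a float32.
// `significandOf` is an illustrative name, not this package's export.
function significandOf( x ) {
    var f32 = new Float32Array( 1 );
    var u32 = new Uint32Array( f32.buffer );
    f32[ 0 ] = x;                   // round to single precision and store
    return u32[ 0 ] & 0x007fffff;   // keep bits 0..22 (the significand field)
}

console.log( significandOf( 1.5 ) );
// => 4194304 (i.e. 2^22; only the leading fraction bit is set)
```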
Return a string giving the literal bit representation of a single-precision floating-point number.
Convert a single-precision floating-point number to a signed 32-bit integer.
Convert a single-precision floating-point number to an unsigned 32-bit integer.
Create a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
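The next few entries revolve around moving between a single-precision value and its 32-bit integer representations. A hedged sketch of the underlying idea, again using standard JavaScript typed views rather than the packages' actual implementations (the helper names and the big-endian byte order are assumptions made for the example):

```javascript
// Sketch only: round-trip a float32 through its IEEE 754 bit pattern.
// Helper names are illustrative; byte order is fixed to big-endian here.
var view = new DataView( new ArrayBuffer( 4 ) );

function toWord( x ) {
    view.setFloat32( 0, x, false );     // store as single precision
    return view.getUint32( 0, false );  // read the same 4 bytes as a uint32
}

function fromWord( w ) {
    view.setUint32( 0, w, false );      // write the IEEE 754 bit pattern
    return view.getFloat32( 0, false ); // reinterpret as single precision
}

var w = toWord( 1.5 );
console.log( w.toString( 2 ) );         // bit representation (without leading zeros)
console.log( fromWord( w ) );           // => 1.5
```

A literal bit-representation string as described above would additionally be left-padded with zeros to a fixed width of 32 characters.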
Return a string giving the literal bit representation of an unsigned 32-bit integer.
Convert an unsigned 32-bit integer to a signed 32-bit integer.
Create an unsigned 8-bit integer from a literal bit representation.
Return a string giving the literal bit representation of an unsigned 8-bit integer.
Create an unsigned 16-bit integer from a literal bit representation.
Return a string giving the literal bit representation of an unsigned 16-bit integer.
Create an unsigned 32-bit integer from a literal bit representation.
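For the unsigned-integer entries, the literal bit representation is the base-2 string padded to the type's width, and the reverse direction is a base-2 parse. A minimal sketch under those assumptions (not the packages' implementations; the real packages are width-specific):

```javascript
// Sketch only: unsigned integer <-> literal bit string, parameterized by width.
// Names are illustrative.
function toBitString( x, width ) {
    var s = ( x >>> 0 ).toString( 2 );
    // Left-pad with zeros so the string always has `width` characters:
    while ( s.length < width ) {
        s = '0' + s;
    }
    return s;
}

function fromBitString( s ) {
    return parseInt( s, 2 ) >>> 0; // parse base 2 as an unsigned value
}

console.log( toBitString( 5, 8 ) );          // => '00000101'
console.log( fromBitString( '00000101' ) );  // => 5
console.log( toBitString( 3, 16 ) );         // => '0000000000000011'
```

For the unsigned-to-signed conversion, note that in plain JavaScript the expression `x | 0` reinterprets an integer in [0, 2^32) as a two's complement signed 32-bit value; whether a given package uses exactly this mechanism is not confirmed here.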
Size (in bytes) of a half-precision floating-point number.
Return a double-precision floating-point number with the magnitude of x and the sign of y.
Return a single-precision floating-point number with the magnitude of x and the sign of y.
Return a double-precision floating-point number with the magnitude of x and the sign of x*y.
Return a single-precision floating-point number with the magnitude of x and the sign of x*y.
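copysign and flipsign are classic bit-level operations: copy, or conditionally flip, only the sign bit while leaving the magnitude bits untouched. A single-precision sketch in plain JavaScript follows; it is illustrative only, and the helper names are not the packages' exports.

```javascript
// Sketch only: single-precision copysign via sign-bit manipulation.
var SIGN_MASK = 0x80000000; // bit 31: the sign bit
var ABS_MASK = 0x7fffffff;  // bits 0..30: exponent + significand

var view = new DataView( new ArrayBuffer( 8 ) );

function word( x, offset ) {
    view.setFloat32( offset, x );
    return view.getUint32( offset );
}

function copysignf( x, y ) {
    // Magnitude bits of x combined with the sign bit of y:
    var w = ( ( word( x, 0 ) & ABS_MASK ) | ( word( y, 4 ) & SIGN_MASK ) ) >>> 0;
    view.setUint32( 0, w );
    return view.getFloat32( 0 );
}

console.log( copysignf( -3.0, 10.0 ) ); // => 3.0
console.log( copysignf( 3.0, -1.0 ) );  // => -3.0
```

flipsign is the same idea with XOR in place of OR: XORing x's word with the sign bit of y flips x's sign exactly when y is negative, which yields the sign of x*y. The double-precision variants typically apply the same masks to the high 32-bit word of the 64-bit representation.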
Size (in bytes) of a single-precision floating-point number.
Mask for the sign bit of a single-precision floating-point number.
Mask for the significand of a single-precision floating-point number.
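The mask constants are fixed by the IEEE 754 single-precision layout (1 sign bit, 8 exponent bits, 23 significand bits), and the size constants likewise: 2 bytes for half precision, 4 bytes for single precision. A small sketch of how such masks are used; the constant values are standard IEEE 754 facts, while the variable and function names are illustrative.

```javascript
// Standard IEEE 754 single-precision layout:
//   [ 1 sign bit | 8 exponent bits | 23 significand bits ]
var FLOAT32_SIGN_MASK = 0x80000000 >>> 0;        // 10000000 00000000 00000000 00000000
var FLOAT32_SIGNIFICAND_MASK = 0x007fffff >>> 0; // 00000000 01111111 11111111 11111111

// Example: pull both fields out of a float32 word (illustrative helpers).
function isNegativeWord( w ) {
    return ( w & FLOAT32_SIGN_MASK ) !== 0;
}

function significandBits( w ) {
    return w & FLOAT32_SIGNIFICAND_MASK;
}

var w = 0xc0490fdb; // bit pattern of -pi rounded to single precision
console.log( isNegativeWord( w ) );  // => true
console.log( significandBits( w ) ); // => 4788187
```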