math-float32-bits
Returns a string giving the literal bit representation of a single-precision floating-point number.
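A minimal Python sketch of what such a bit-string routine does (the `float32_bits` helper name is ours, not this package's API): pack the value as an IEEE 754 binary32, reinterpret the four bytes as an unsigned integer, and format that integer as 32 bits (1 sign bit, 8 exponent bits, 23 significand bits).

```python
import struct

def float32_bits(x):
    """Return the IEEE 754 binary32 bit string of x (sketch, not the package API)."""
    # Pack as big-endian single precision, then reinterpret the same 4 bytes
    # as an unsigned 32-bit integer and format it with leading zeros.
    (word,) = struct.unpack(">I", struct.pack(">f", x))
    return format(word, "032b")

print(float32_bits(1.0))   # 00111111100000000000000000000000
print(float32_bits(-0.0))  # 10000000000000000000000000000000
```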
Returns a normal number `y` and exponent `exp` satisfying `x = y * 2^exp`.
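One common convention for this, sketched in Python under the assumption of double precision (the `normalize` helper and the fixed `2^52` scaling are illustrative, not necessarily this package's exact behavior): subnormal inputs are scaled up so the returned `y` is normal, and `exp` compensates.

```python
import sys

def normalize(x):
    """Return (y, exp) with x == y * 2**exp and y a normal double (sketch).

    Subnormal inputs are multiplied by 2**52 so that y becomes normal and
    exp is -52; zero and already-normal inputs pass through with exp = 0.
    """
    if x != 0.0 and abs(x) < sys.float_info.min:  # subnormal
        return x * 2.0**52, -52
    return x, 0

y, exp = normalize(5e-324)                 # smallest positive subnormal double
print(y, exp, y * 2.0**exp == 5e-324)      # 2.2250738585072014e-308 -52 True
```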
Computes the factorial function.
Sets the more significant 32 bits of a double-precision floating-point number.
Returns a string giving the literal bit representation of a double-precision floating-point number.
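The two double-precision entries above can be sketched the same way in Python (helper names are ours, not the package API): the 64-bit word splits into a more significant high word (sign, exponent, top 20 significand bits) and a less significant low word.

```python
import struct

def float64_bits(x):
    """Return the IEEE 754 binary64 bit string of x (sketch)."""
    (word,) = struct.unpack(">Q", struct.pack(">d", x))
    return format(word, "064b")

def set_high_word(high, x):
    """Replace the more significant 32 bits of x with `high`, keeping the low 32 bits (sketch)."""
    low = struct.unpack(">Q", struct.pack(">d", x))[0] & 0xFFFFFFFF
    word = ((high & 0xFFFFFFFF) << 32) | low
    return struct.unpack(">d", struct.pack(">Q", word))[0]

print(float64_bits(1.0))               # sign 0, exponent 01111111111, 52 zero significand bits
print(set_high_word(0x7FF00000, 0.0))  # inf: exponent bits all set, significand zero
```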
Gamma function.
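For positive integers the gamma function reproduces the factorial, Γ(n) = (n - 1)!, and extends it to non-integer arguments; a quick check using Python's standard library rather than this package:

```python
import math

# Gamma reproduces the factorial at positive integers: gamma(n) == (n - 1)!.
for n in range(1, 6):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

print(math.gamma(0.5))  # 1.7724538509055159, i.e. sqrt(pi)
```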
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
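The two unsigned-integer entries above amount to zero-padded binary formatting; a Python sketch (helper names are illustrative):

```python
def uint16_bits(n):
    """Return the 16-character bit string of an unsigned 16-bit integer (sketch)."""
    return format(n & 0xFFFF, "016b")

def uint32_bits(n):
    """Return the 32-character bit string of an unsigned 32-bit integer (sketch)."""
    return format(n & 0xFFFFFFFF, "032b")

print(uint16_bits(5))      # 0000000000000101
print(uint32_bits(2**31))  # 10000000000000000000000000000000
```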
Computes cos(πx).
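A dedicated cos(πx) routine can reduce the argument exactly before multiplying by π, so results at integers and half-integers come out exact; a simplified Python sketch of that idea (not this package's algorithm):

```python
import math

def cospi(x):
    """cos(pi * x) with exact values at integer and half-integer arguments (sketch)."""
    r = math.fmod(abs(x), 2.0)     # exact reduction modulo the period 2
    if r == 0.5 or r == 1.5:
        return 0.0                 # exactly zero at half-integers
    if r == 0.0:
        return 1.0
    if r == 1.0:
        return -1.0
    return math.cos(math.pi * r)

print(cospi(0.5), math.cos(math.pi * 0.5))  # 0.0 vs 6.123233995736766e-17
```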
Digamma function.
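The digamma function ψ(x) is the logarithmic derivative of the gamma function; a quick numerical check of the recurrence ψ(x + 1) = ψ(x) + 1/x, here using SciPy rather than this library:

```python
from scipy.special import digamma

x = 3.25
print(digamma(x + 1), digamma(x) + 1.0 / x)  # the two values agree to double precision
print(digamma(1.0))                          # -0.5772156649015329 (negative Euler-Mascheroni constant)
```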
Dirichlet eta function.
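For s > 1 the Dirichlet eta function relates to the Riemann zeta function by eta(s) = (1 - 2^(1-s)) * zeta(s); a sketch using SciPy's `zeta` (not this library):

```python
import math
from scipy.special import zeta

def eta(s):
    """Dirichlet eta via eta(s) = (1 - 2^(1 - s)) * zeta(s), valid here for s > 1 (sketch)."""
    return (1.0 - 2.0 ** (1.0 - s)) * zeta(s)

print(eta(2.0), math.pi ** 2 / 12)  # both ~0.8224670334241132
```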
Inverse complementary error function.
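The inverse complementary error function satisfies erfc(erfcinv(y)) = y for y in (0, 2); a round-trip check using SciPy (not this library):

```python
from scipy.special import erfc, erfcinv

y = 0.25
x = erfcinv(y)
print(x)        # ~0.8134
print(erfc(x))  # ~0.25, recovering y
```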
Returns an integer corresponding to the unbiased exponent of a single-precision floating-point number.
Splits a single-precision floating-point number into a normalized fraction and an integer power of two.
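Splitting into a fraction and a power of two is the classic `frexp` operation; the sketch below uses NumPy's single-precision support rather than this library, and also shows how the unbiased exponent from the previous entry relates to `frexp`'s result for normal numbers:

```python
import numpy as np

x = np.float32(12.5)
frac, exp = np.frexp(x)          # frac in [0.5, 1) with x == frac * 2^exp
print(frac, exp)                 # 0.78125 4
print(np.ldexp(frac, exp) == x)  # True: ldexp reverses frexp
print(int(exp) - 1)              # 3: the unbiased IEEE exponent (significand normalized to [1, 2))
```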
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
Returns the next representable single-precision floating-point number after x toward y.
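NumPy's `nextafter` illustrates the same idea in single precision (this is not the package API); the gap above 1.0 is 2^-23 and the gap below it is 2^-24:

```python
import numpy as np

x = np.float32(1.0)
up = np.nextafter(x, np.float32(2.0))    # next representable float32 toward 2
down = np.nextafter(x, np.float32(0.0))  # next representable float32 toward 0
print(up - x)    # 1.1920929e-07  (2^-23, the spacing just above 1.0)
print(x - down)  # 5.9604645e-08  (2^-24, the spacing just below 1.0)
```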
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
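Checking the sign bit is not the same as comparing against zero, because IEEE 754 has a signed zero; a quick NumPy illustration:

```python
import numpy as np

print(np.signbit(np.float32(-0.0)))         # True: the sign bit is set
print(np.signbit(np.float32(0.0)))          # False
print(np.signbit(np.float32(-3.5)))         # True
print(np.float32(-0.0) == np.float32(0.0))  # True: yet the two zeros compare equal
```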
Returns an integer corresponding to the significand of a single-precision floating-point number.
Returns an unsigned 32-bit integer corresponding to the IEEE 754 binary representation of a single-precision floating-point number.
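The word conversions in the entries above are plain reinterpretations of the same 32 bits; a NumPy sketch (helper names are ours, not the package API) that round-trips a value through its word and then picks out the significand and exponent fields:

```python
import numpy as np

def float32_to_word(x):
    """Reinterpret a single-precision value as its unsigned 32-bit word (sketch)."""
    return np.float32(x).view(np.uint32)

def float32_from_word(w):
    """Reinterpret an unsigned 32-bit word as a single-precision value (sketch)."""
    return np.uint32(w).view(np.float32)

w = float32_to_word(-0.5)
print(hex(int(w)))                    # 0xbf000000
print(float32_from_word(w))           # -0.5 (round trip)
print(int(w) & 0x7FFFFF)              # 0: the 23 stored significand bits
print(((int(w) >> 23) & 0xFF) - 127)  # -1: the unbiased exponent
```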
Computes exp(x) - 1.
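A dedicated expm1 avoids the cancellation that occurs when exp(x) - 1 is computed directly for tiny x; compare the two in Python:

```python
import math

x = 1e-10
print(math.exp(x) - 1.0)  # 1.000000082740371e-10  (catastrophic cancellation)
print(math.expm1(x))      # 1.00000000005e-10      (accurate for small x)
```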