math-gamma
Gamma function.
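One common way to compute the gamma function is the Lanczos approximation. The sketch below is illustrative only (the function name and the g = 7 coefficient set are conventional choices, not this package's actual API):

```javascript
// Lanczos approximation of the gamma function (g = 7, 9 coefficients).
// Accurate to roughly double precision for moderate real arguments.
var G = 7;
var C = [
  0.99999999999980993, 676.5203681218851, -1259.1392167224028,
  771.32342877765313, -176.61502916214059, 12.507343278686905,
  -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7
];

function gamma( z ) {
  if ( z < 0.5 ) {
    // Reflection formula: Γ(z)Γ(1-z) = π / sin(πz)
    return Math.PI / ( Math.sin( Math.PI * z ) * gamma( 1 - z ) );
  }
  z -= 1;
  var x = C[ 0 ];
  for ( var i = 1; i < G + 2; i++ ) {
    x += C[ i ] / ( z + i );
  }
  var t = z + G + 0.5;
  return Math.sqrt( 2 * Math.PI ) * Math.pow( t, z + 0.5 ) * Math.exp( -t ) * x;
}
```

For positive integers, `gamma( n )` agrees with `(n-1)!`, e.g. `gamma( 5 )` is approximately `24`.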
Returns a string giving the literal bit representation of an unsigned 16-bit integer.
Returns a string giving the literal bit representation of an unsigned 32-bit integer.
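A minimal sketch of how such a bit-string can be produced in plain JavaScript (the helper name is hypothetical; `padStart( 16, '0' )` gives the 16-bit variant):

```javascript
// Render an unsigned 32-bit integer as its literal 32-character bit string.
function toBinaryStringUint32( x ) {
  // `>>> 0` coerces to an unsigned 32-bit integer before formatting.
  return ( x >>> 0 ).toString( 2 ).padStart( 32, '0' );
}
```

For example, `toBinaryStringUint32( 5 )` yields `'00000000000000000000000000000101'`.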
Returns a normalized fraction `y` and integer exponent `exp` satisfying `x = y * 2^exp`.
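This is the classic `frexp` decomposition, with `0.5 <= |y| < 1` for finite nonzero `x`. A sketch that reads the biased exponent bits directly (the function name mirrors the C routine; it is not necessarily this package's API):

```javascript
// Split x into [y, exp] with x = y * 2^exp and 0.5 <= |y| < 1.
function frexp( x ) {
  if ( x === 0 || !Number.isFinite( x ) ) {
    return [ x, 0 ];
  }
  var view = new DataView( new ArrayBuffer( 8 ) );
  view.setFloat64( 0, x );
  // Biased exponent: bits 62..52 of the IEEE 754 representation.
  var e = ( view.getUint32( 0 ) >>> 20 ) & 0x7ff;
  if ( e === 0 ) {
    // Subnormal: scale into the normal range, then compensate.
    var r = frexp( x * Math.pow( 2, 64 ) );
    return [ r[ 0 ], r[ 1 ] - 64 ];
  }
  var exp = e - 1022; // bias 1023, shifted by 1 so |y| lands in [0.5, 1)
  return [ x / Math.pow( 2, exp ), exp ];
}
```

For example, `frexp( 8 )` returns `[ 0.5, 4 ]`, since `8 = 0.5 * 2^4`.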
Sets the more significant 32 bits of a double-precision floating-point number.
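The high word is the 32 bits holding the sign, the 11-bit exponent, and the top 20 mantissa bits. One way to sketch this with a `DataView` (helper name assumed, not the package's exported API):

```javascript
// Replace the more significant 32 bits of a double with `high`.
function setHighWord( x, high ) {
  var view = new DataView( new ArrayBuffer( 8 ) );
  view.setFloat64( 0, x );           // DataView defaults to big-endian:
  view.setUint32( 0, high >>> 0 );   // bytes 0-3 are the high word
  return view.getFloat64( 0 );
}
```

For example, `setHighWord( 0, 0x3FF00000 )` returns `1`, since `1.0` has the bit pattern `0x3FF0000000000000`.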
Returns a string giving the literal bit representation of a double-precision floating-point number.
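A minimal sketch of producing the 64-character bit string via a `DataView` (the helper name is illustrative):

```javascript
// Render a double as its literal 64-character IEEE 754 bit string,
// most significant bit first.
function toBinaryStringf64( x ) {
  var view = new DataView( new ArrayBuffer( 8 ) );
  view.setFloat64( 0, x ); // big-endian byte order by default
  var bits = '';
  for ( var i = 0; i < 8; i++ ) {
    bits += view.getUint8( i ).toString( 2 ).padStart( 8, '0' );
  }
  return bits;
}
```

For example, `toBinaryStringf64( 1 )` begins with `'0011111111110000'` (sign `0`, exponent `01111111111`, mantissa all zeros). The single-precision variant is identical in shape with a 4-byte buffer and `setFloat32`.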
Returns a string giving the literal bit representation of a single-precision floating-point number.
Computes a factorial.
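A straightforward sketch under common conventions (name and edge-case behavior assumed, not necessarily this package's):

```javascript
// Iterative factorial. Returns NaN for negative or non-integer input;
// overflows to Infinity for n > 170 (the double-precision limit).
function factorial( n ) {
  if ( !Number.isInteger( n ) || n < 0 ) {
    return NaN;
  }
  var out = 1;
  for ( var i = 2; i <= n; i++ ) {
    out *= i;
  }
  return out;
}
```

For example, `factorial( 5 )` returns `120`, and `factorial( 171 )` returns `Infinity`.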
Riemann zeta function.
Signum function.
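The signum function maps a number to its sign. A sketch following the usual IEEE 754 conventions (name assumed):

```javascript
// Sign of x: -1, +1, NaN, or x itself when x is ±0 (preserving signed zero).
function signum( x ) {
  if ( Number.isNaN( x ) ) {
    return NaN;
  }
  if ( x === 0 ) {
    return x; // preserves -0 and +0
  }
  return ( x > 0 ) ? 1 : -1;
}
```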
Computes the tangent of a number.
Computes cos(πx).
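A dedicated `cospi` beats the naive `Math.cos( Math.PI * x )` because `Math.PI` is itself rounded, so half-integer arguments never hit exact zeros. A simplified sketch of the idea (real implementations do a more careful argument reduction):

```javascript
// cos(πx) with exact results at integer and half-integer arguments.
function cospi( x ) {
  if ( !Number.isFinite( x ) ) {
    return NaN;
  }
  var r = Math.abs( x % 2 ); // `%` is exact for floating-point operands
  if ( r === 0.5 || r === 1.5 ) {
    return 0;  // cos(±π/2) is exactly zero
  }
  if ( r === 0 ) {
    return 1;
  }
  if ( r === 1 ) {
    return -1;
  }
  return Math.cos( Math.PI * ( x % 2 ) );
}
```

For example, `cospi( 100000.5 )` returns exactly `0`, whereas `Math.cos( Math.PI * 100000.5 )` does not.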
Digamma function.
Dirichlet eta function.
Beta function.
Natural logarithm of the beta function.
Creates a single-precision floating-point number from an unsigned integer corresponding to an IEEE 754 binary representation.
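A minimal sketch of reinterpreting a 32-bit word as a single-precision float via a shared buffer (helper name illustrative):

```javascript
// Reinterpret an unsigned 32-bit integer as the IEEE 754 binary32
// value with that bit pattern.
function fromWordf( word ) {
  var u32 = new Uint32Array( [ word >>> 0 ] );
  return new Float32Array( u32.buffer )[ 0 ];
}
```

For example, `fromWordf( 0x3F800000 )` returns `1` and `fromWordf( 0xC0000000 )` returns `-2`.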
Returns the next representable single-precision floating-point number after x toward y.
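This is the single-precision analogue of C's `nextafterf`. Because IEEE 754 bit patterns of same-signed floats are monotonic as unsigned integers, stepping to the adjacent representable value is an integer increment or decrement. A sketch (name and edge-case handling follow the C convention, not necessarily this package's exact semantics):

```javascript
// Next representable float32 after x in the direction of y.
function nextafterf( x, y ) {
  var f32 = new Float32Array( 1 );
  var u32 = new Uint32Array( f32.buffer );
  f32[ 0 ] = x; x = f32[ 0 ]; // round inputs to single precision
  f32[ 0 ] = y; y = f32[ 0 ];
  if ( Number.isNaN( x ) || Number.isNaN( y ) ) {
    return NaN;
  }
  if ( x === y ) {
    return y;
  }
  if ( x === 0 ) {
    u32[ 0 ] = 1;                        // smallest subnormal magnitude
    return ( y < 0 ) ? -f32[ 0 ] : f32[ 0 ];
  }
  f32[ 0 ] = x;
  // Moving away from zero increments the bit pattern; toward zero decrements.
  if ( ( x < y ) === ( x > 0 ) ) {
    u32[ 0 ] += 1;
  } else {
    u32[ 0 ] -= 1;
  }
  return f32[ 0 ];
}
```

For example, `nextafterf( 1, 2 )` returns `1 + 2^-23` (one float32 ulp above `1`), while `nextafterf( 1, 0 )` returns `1 - 2^-24`.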
Returns a boolean indicating if the sign bit for a single-precision floating-point number is on (true) or off (false).
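The sign bit is the most significant bit of the binary32 representation, so it can be read from the first byte in big-endian order. A sketch (helper name assumed):

```javascript
// True if the sign bit of the single-precision representation of x is set.
function signbitf( x ) {
  var view = new DataView( new ArrayBuffer( 4 ) );
  view.setFloat32( 0, x ); // big-endian: byte 0 holds the sign bit
  return ( view.getUint8( 0 ) & 0x80 ) !== 0;
}
```

Note this distinguishes signed zeros: `signbitf( -0 )` is `true` even though `-0 === 0`.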