Numbers appear frequently in computers. This article, along with all your documents, images, videos, and game saves, is fundamentally represented as binary code, a series of 0s and 1s. Computers use the binary, or base-2, system because a switch has just two states: on and off. A binary digit (0 or 1) is called a bit. As a result, powers of 2 appear frequently in computing, since 2^n is the number of distinct values that can be represented in n bits.
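As a small illustration, here is a minimal C sketch (not taken from any program discussed here) that computes the number of distinct values n bits can hold by shifting 1 left by n places:

```c
#include <stdio.h>

int main(void) {
    /* 1ULL << n is 2^n: the number of distinct values n bits can hold. */
    for (unsigned n = 1; n <= 16; n++) {
        printf("%2u bits -> %llu values\n", n, 1ULL << n);
    }
    return 0;
}
```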
The smallest addressable data type in most common programming languages is the byte, which contains 8 bits. A signed byte ranges from -128 to 127, while an unsigned byte ranges from 0 to 255; early arcade games typically used the latter. The most famous example of unsigned byte overflow is the Pac-Man kill screen: on level 256, the fruit counter oversteps its single allocated byte and the game attempts to draw 256 fruit, so half of the screen is taken up by internal data and parts of the maze itself. Dig Dug also had a kill screen at level 256, in which an enemy would be placed directly on top of the player. Donkey Kong's kill screen was similar: an unsigned byte overflow at level 22 sets the timer to about 4 seconds, making it impossible to complete the level.
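The wraparound behind these kill screens is easy to reproduce with a fixed-width unsigned byte; this is a minimal C sketch of the behavior, not the actual arcade code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t level = 255;          /* the highest value an unsigned byte can hold */
    printf("level %u\n", level);  /* 255 */

    level += 1;                   /* unsigned overflow wraps around to 0 */
    printf("level %u\n", level);  /* 0 */
    return 0;
}
```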
A color is typically represented using three bytes, one for red, another for green, and a third for blue, providing 16,777,216 distinct colors. Colors are more often written as hexadecimal codes, with the first two digits corresponding to the red value, the next two to green, and the final two to blue.
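As an illustrative sketch (the variable names and sample color are arbitrary), a 24-bit color can be packed into and unpacked from a single integer with shifts and masks:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t r = 0x4A, g = 0x90, b = 0xD9;   /* arbitrary sample color */

    /* Pack the three bytes into a single 24-bit value laid out as 0xRRGGBB. */
    uint32_t color = ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
    printf("#%06X\n", (unsigned)color);     /* #4A90D9 */

    /* Unpack the individual channels again. */
    printf("red=%u green=%u blue=%u\n",
           (unsigned)((color >> 16) & 0xFF),
           (unsigned)((color >> 8) & 0xFF),
           (unsigned)(color & 0xFF));
    return 0;
}
```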
16-bit integers can represent 65,536 distinct values; a signed 16-bit integer ranges from -32,768 to 32,767.
In programs such as video games, the most common integer datatype is the 32-bit integer. The number of values that can be represented in a 32-bit space is 2^32, which is 4,294,967,296. A signed 32-bit integer can represent values from -2,147,483,648 to 2,147,483,647. 32-bit integers are suitable for almost all discrete mechanics in everyday programs, as it is nigh inconceivable to reach 2.147 billion units of money, distance, or points in most games without cheating.
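These ranges are exposed directly by the fixed-width types in C's <stdint.h>; a short sketch:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Signed ranges follow the pattern -2^(n-1) to 2^(n-1) - 1. */
    printf("16-bit: %d to %d\n", INT16_MIN, INT16_MAX);
    printf("32-bit: %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);

    /* An unsigned 32-bit integer covers all 2^32 values, 0 to 4,294,967,295. */
    printf("32-bit unsigned max: %" PRIu32 "\n", UINT32_MAX);
    return 0;
}
```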
Unix time counts the number of seconds elapsed since 00:00:00 UTC on January 1, 1970. It will reach the latter value at 03:14:07 UTC on January 19, 2038, at which point 32-bit Unix time will wrap around to -2,147,483,648, corresponding to December 13, 1901. Systems using 32-bit Unix time will return incorrect results when attempting to perform calculations on dates beyond January 19, 2038. A 32-bit integer overflow also caused the "Far Lands" in the sandbox game Minecraft, which appeared beginning 12,550,821 blocks from the world origin: certain values were "clumped" to the nearest representable value, resulting in extreme interpolation values and a warped landscape.
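A minimal C sketch of that boundary, assuming a platform whose gmtime accepts timestamps before 1970 (not all do):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static void show(long long seconds) {
    time_t t = (time_t)seconds;
    struct tm *utc = gmtime(&t);   /* seconds since 1970-01-01 00:00:00 UTC -> calendar date */
    char buf[64];
    if (!utc) {
        printf("%lld -> out of range for this platform\n", seconds);
        return;
    }
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("%lld -> %s\n", seconds, buf);
}

int main(void) {
    show(INT32_MAX);   /* 2147483647  -> 2038-01-19 03:14:07 UTC */
    show(INT32_MIN);   /* -2147483648 -> 1901-12-13 20:45:52 UTC */
    return 0;
}
```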
A 64-bit space can hold 2^64, or 18,446,744,073,709,551,616 different values. A signed 64-bit integer datatype can represent values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
To represent fractional values instead of only integers, floating point numbers are commonly used. In a floating point number, a certain number of bits are allocated to an "exponent", one bit is reserved for the sign of the number itself, and all of the remaining bits are allocated to the "mantissa" (also called the significand), whose binary point is shifted by the exponent's number of places, or equivalently, which is scaled by 2 raised to the exponent. Two formats are commonly used: single-precision and double-precision.
A single-precision floating point number consists of 32 bits: one bit for the sign of the entire number, an 8-bit exponent, and a 23-bit fraction. The exponent field is not the binary representation of the actual exponent value; it is stored with a bias of 127, so an exponent field of 01001101, for instance, does not correspond to 77, and actually encodes 77 - 127 = -50. All integers from -16,777,216 to +16,777,216 can be represented exactly, and for a value x in the normal range, the gap between exactly representable values is given by 2^(floor(log2(x)) - 23). As such, single-precision floating point numbers are well suited to the mechanics of a program that involve only smaller values that are not always integers.
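Here is a sketch of extracting those three fields from a float's bit pattern, assuming the standard IEEE 754 binary32 layout:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float x = 0.75f;                 /* 0.75 = 1.5 * 2^-1 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);  /* reinterpret the 32 bits of the float */

    uint32_t sign     = bits >> 31;          /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, stored with a bias of 127 */
    uint32_t fraction = bits & 0x7FFFFF;     /* 23 bits */

    printf("sign=%u exponent=%u (actual %d) fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (int)exponent - 127, (unsigned)fraction);
    /* Prints: sign=0 exponent=126 (actual -1) fraction=0x400000 */
    return 0;
}
```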
The largest number that can be represented as a single-precision floating point number is 340,282,346,638,528,859,811,704,183,484,516,925,440, just under 2^128. The all-ones exponent field of 11111111, which would otherwise encode not -1 but 128, is instead reserved for infinities and NaN (not-a-number) values.
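A short sketch of that upper boundary using the standard <float.h> and <math.h> facilities:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    printf("FLT_MAX = %.0f\n", FLT_MAX);  /* 340282346638528859811704183484516925440 */

    float too_big = FLT_MAX * 2.0f;       /* overflows the 8-bit exponent range */
    printf("FLT_MAX * 2 is infinite? %d\n", isinf(too_big));      /* nonzero (true) */

    float undefined = INFINITY - INFINITY;   /* no meaningful result: NaN */
    printf("INFINITY - INFINITY is NaN? %d\n", isnan(undefined)); /* nonzero (true) */
    return 0;
}
```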
Meanwhile, a double-precision floating point number consists of 64 bits, with one sign bit, 11 bits allocated to the exponent, and a 52-bit fraction. All integers from -9,007,199,254,740,992 to +9,007,199,254,740,992 can be represented exactly. All JavaScript numbers are represented as doubles, which is why the prime factorization tool I use is limited to inputs just below the latter value. For mechanics in programs involving values in the millions or billions that may not be integers, double-precision floating point numbers are ideal.
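A sketch of where that exactness ends, using C doubles (the same IEEE 754 binary64 format that JavaScript numbers use):

```c
#include <stdio.h>

int main(void) {
    double exact = 9007199254740992.0;    /* 2^53: still exactly representable */
    double next  = 9007199254740993.0;    /* 2^53 + 1: rounds back down to 2^53 */

    printf("%.0f\n%.0f\n", exact, next);  /* both print 9007199254740992 */
    printf("equal? %d\n", exact == next); /* 1: the two literals collapse to one value */
    return 0;
}
```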
The largest number that can be represented as a double-precision floating point number is just under 2^1024, and is given below.
179,769,313,486,231,570,814,527,423,731,704,356,798,070,567,525,844,996,598,917,476,803,157,260,780,028,538,760,589,558,632,766,878,171,540,458,953,514,382,464,234,321,326,889,464,182,768,467,546,703,537,516,986,049,910,576,551,282,076,254,490,090,389,328,944,075,868,508,455,133,942,304,583,236,903,222,948,165,808,559,332,123,348,274,797,826,204,144,723,168,738,177,180,919,299,881,250,404,026,184,124,858,368
The value above is the largest number that many programs can represent. However, most applications specifically designed for mathematical calculations can go much further; Mathematica, for instance, does not reach its limit until about 1.6052167*10^1355718576299609 (roughly 2^4503599627370368), a number that would take on the order of a petabyte to store!