You may have heard that all data is ultimately stored as 0s and 1s. Computers store information in bits, which have exactly 2 possible values. (A lot like booleans!)
We typically think of them as 0s and 1s, but this is a bit (haha!) arbitrary. It helps because we often, though not always, interpret the values as binary numbers.
We're used to representing numbers as decimal values: we think of the rightmost digit as the "ones" place, with the "tens" place to the left of it, the "hundreds" place to the left of that, and so on.
In fact, we are using base 10 to represent a number, with the rightmost, least significant digit in the 10^0 place, the 10^1 place to its left, the 10^2 place to the left of that, and so on.
Binary numbers use base 2 instead of base 10 to represent a number. The same pattern applies, with the rightmost, least significant digit in the 2^0 place, the 2^1 place to its left, the 2^2 place to the left of that, and so on.
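For example, the decimal number 13 is 1101 in binary: 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 8 + 4 + 0 + 1 = 13. Here is a quick sketch (the variable names are just illustrative) that adds up those place values:

# The binary digits of 13, from most significant to least significant.
digits: list[int] = [1, 1, 0, 1]
total: int = 0
for d in digits:
    total = total * 2 + d  # shift everything left one place, then add the new digit
print(f"The binary digits {digits} represent the decimal value {total}.")

This prints out
The binary digits [1, 1, 0, 1] represent the decimal value 13.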
Since we humans tend to work in base 10, computer scientists need to get comfortable converting between the decimal and binary representations of a number. In fact, this is so common that Python gives us built-in functions to do it!
three: int = 3
binaryThree: str = bin(three)
print(f"The int value {three} in binary is {binaryThree}.")
This prints out
The int value 3 in binary is 0b11.
Note that the function bin returns a string.
It starts with 0b to signal that the string is displaying a binary number.
If we strip that off, we see just the binary number 11, whose decimal value is 2^1 + 2^0 = 3.
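Python can convert in the other direction, too: the built-in function int accepts an optional base, so it can parse a binary string back into an int. (It even tolerates the 0b prefix when we tell it the base is 2.)

binaryThree: str = "0b11"
three: int = int(binaryThree, 2)  # base 2: interpret the string as binary
print(f"The binary string {binaryThree} has the int value {three}.")

This prints out
The binary string 0b11 has the int value 3.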
Computer scientists really like powers of 2, and not just because they appear in binary numbers. We often want to "divide and conquer" a problem by splitting it in half. And since computers store data as a sequence of 0s and 1s, we have to decide how many bits represent a particular piece of data. You may have heard of 32-bit and 64-bit operating systems; you would learn more about this in the OS course, but suffice it to say that it has to do with the number of bits associated with a unit of data.
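As a small taste of that connection, every Python int can report how many bits it needs via the built-in bit_length method, and that count is exactly how many times we can halve the value before it reaches 0 (the halving loop below is just an illustration):

value: int = 200
print(f"The int value {value} needs {value.bit_length()} bits.")
halvings: int = 0
n: int = value
while n > 0:
    n = n // 2  # each halving discards one bit
    halvings = halvings + 1
print(f"Halving {value} repeatedly reaches 0 after {halvings} steps.")

This prints out
The int value 200 needs 8 bits.
Halving 200 repeatedly reaches 0 after 8 steps.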
You've probably also heard of a byte (or rather, of megabytes and gigabytes, and long ago kilobytes). We'll be working with bytes, which can store the ASCII values of common characters (letters, digits, etc.). A single Unicode character can require more than one byte, so we'll stick with ASCII when we can for simplicity.
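Python's built-in functions ord and chr translate between a character and its ASCII value (technically, its Unicode code point) and back. Every ASCII value fits in a single byte:

letter: str = "A"
code: int = ord(letter)  # the ASCII value of "A" is 65
print(f"The character {letter} has ASCII value {code}, or {bin(code)} in binary.")
print(f"Going the other way, chr({code}) gives back {chr(code)}.")

This prints out
The character A has ASCII value 65, or 0b1000001 in binary.
Going the other way, chr(65) gives back A.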
A byte is 8 bits.
A nibble is half a byte, or 4 bits.
A crumb is half a nibble, or 2 bits.
(And half a crumb is a bit!)
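As a minimal sketch of those sizes in action (the particular byte here is arbitrary), we can use standard bitwise shifts and masks to split one byte into its two nibbles:

byteValue: int = 0b10110100  # one byte: 8 bits
highNibble: int = (byteValue >> 4) & 0b1111  # the upper 4 bits
lowNibble: int = byteValue & 0b1111  # the lower 4 bits
# Note that bin drops leading zeros, so 0b0100 prints as 0b100.
print(f"The byte {bin(byteValue)} splits into nibbles {bin(highNibble)} and {bin(lowNibble)}.")

This prints out
The byte 0b10110100 splits into nibbles 0b1011 and 0b100.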