In computer science, data and instructions are ultimately represented as ones and zeroes. But this is an abstraction over the actual physical signals, which can take a variety of forms: electrical impulses over an Ethernet cable, light through a fiber-optic cable, or electromagnetic waves relayed by satellites. Whatever physical form the information takes, we can represent it as a series of ones and zeroes.
In this unit we will cover the basics of the binary number system and the impact of changing the number of bits used to represent a value.
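To make these ideas concrete, here is a minimal sketch in Python (one convenient choice; these notes do not assume a particular language) showing a number written as ones and zeroes and how the number of bits limits the range of values that can be represented:

```python
# Every value is ultimately a pattern of bits, and the number of
# bits determines how many distinct values can be represented.

n = 13
print(bin(n))  # '0b1101' -- 13 as a series of ones and zeroes

# With k bits we can distinguish 2**k patterns, so an unsigned
# k-bit value ranges from 0 to 2**k - 1.
for k in (4, 8, 16):
    print(f"{k} bits -> values 0 through {2**k - 1}")

# Adding 1 to the largest 4-bit value wraps around (overflow):
# the result needs a fifth bit, which 4 bits cannot hold.
print((0b1111 + 1) % 2**4)  # 0
```

Doubling the number of bits does not double the number of representable values; each extra bit doubles it, so the range grows exponentially with bit count.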
Here are the notes for this unit.