Understand the concept of image encoding, specifically black-and-white (BW) images.
Learn how RLE can be used to compress image data.
Practice encoding and decoding images using RLE (a short code sketch follows these objectives).
Recognize the importance of data compression in computer science.
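The objectives above call for practicing RLE, which is not spelled out in this section, so here is a minimal sketch of run-length encoding and decoding for a black-and-white image. It assumes the pixels have already been read, row by row, into one flat list of 0s and 1s; the names rle_encode and rle_decode are illustrative, not part of the lesson materials.

```python
# A minimal RLE sketch for a black-and-white image, assuming the pixels have
# already been read row by row into one flat list of bits (0 = white, 1 = black).
# The function names are illustrative, not part of the lesson materials.

def rle_encode(bits):
    """Turn a list of 0/1 pixel values into (value, run_length) pairs."""
    runs = []
    for bit in bits:
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([bit, 1])       # start a new run of this value
    return [(value, length) for value, length in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original bit list."""
    bits = []
    for value, length in runs:
        bits.extend([value] * length)
    return bits

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
encoded = rle_encode(row)
print(encoded)                          # [(0, 3), (1, 2), (0, 4), (1, 1)]
print(rle_decode(encoded) == row)       # True
```

RLE works well here because simple black-and-white images tend to contain long runs of identical pixels, and storing one (value, length) pair per run can take far less space than storing every pixel.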
Students explore how black and white images are represented, using the black and white pixelation widget to represent each pixel of an image with black or white light. They learn how to sample an analog image using small squares of uniform size (each represented with a black or white value) and reflect on the pros and cons of choosing a smaller or larger square size when sampling.
If we want to represent numbers, letters, words, text, or images on a computer, we must first convert them into binary (0s and 1s).
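As one small illustration of that idea (a sketch, not part of the lesson text), a tiny black-and-white image can be written as rows of bits, with 1 standing for black and 0 for white, and then flattened into the single binary string that is actually stored:

```python
# A tiny black-and-white image written as rows of bits, with 1 = black and
# 0 = white. This 4 x 4 example is purely illustrative.
image = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Flattening the rows gives the single string of 0s and 1s that is stored.
bit_string = "".join(str(bit) for row in image for bit in row)
print(bit_string)   # 0110100110010110
```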
To store an image on a computer, the image is broken down into tiny picture elements called pixels.
An image with a resolution of 11 x 7 pixels has 11 times 7 or 77 pixels.
An image composed of just 77 pixels is shown below. Resolution can also refer to the density of pixels in a display, measured in pixels per inch (PPI). Given a screen that is 250 pixels across and 5 inches wide, the pixel density is 250 / 5 = 50 PPI.
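Both calculations can be written out directly; the sketch below simply repeats the arithmetic from the text in Python.

```python
# The pixel-count and pixel-density calculations, using the values from the text.
width_px, height_px = 11, 7
total_pixels = width_px * height_px
print(total_pixels)             # 77 pixels in an 11 x 7 image

pixels_across = 250
screen_width_inches = 5
ppi = pixels_across / screen_width_inches
print(ppi)                      # 50.0 pixels per inch (PPI)
```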
The concept of a pixel is similar to a dot in the art form pointillism. The difference is that pixels fit neatly in a grid and are read in a specific sequence to display the image on screen.
Examine the two photos below. Notice how one picture has a lower pixel density than the other.
To create a two-dimensional image, each point in the image is assigned a color. A point in 2D can be identified by a pair of numerical coordinates. Colors can also be specified numerically. However, the assignment of numbers to points or colors is somewhat arbitrary. So we need to spend some time studying coordinate systems, which associate numbers to points, and color models, which associate numbers to colors.
In pixel coordinates, the origin (0,0) is the reference point: it is the top-left corner of the image. The x-axis extends horizontally to the right and the y-axis extends vertically downward, so x-coordinates increase as you move right from the origin and y-coordinates increase as you move down. The pixel at (0,0) is therefore in the top-left corner, and the location of any other pixel can be specified by its x and y coordinates relative to this origin.
A digital image is made up of rows and columns of pixels. A pixel in such an image can be specified by saying which column and which row contains it. In terms of coordinates, a pixel can be identified by a pair of integers giving the column number and the row number. For example, the pixel with coordinates (3,5) would lie in column number 3 and row number 5.
Conventionally, columns are numbered from left to right, starting with zero. Most graphics systems number rows from top to bottom, also starting from zero, but some, including OpenGL, number rows from bottom to top instead.
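Here is a short sketch of this addressing convention, assuming the same nested-list image representation used earlier and the top-down row numbering described above; the helper functions are illustrative only.

```python
# Addressing a pixel by (column, row) with the origin at the top-left, using the
# same nested-list image form as above. The helper names are illustrative only.
image = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

def get_pixel(img, column, row):
    """Row 0 is the top row and column 0 is the leftmost column."""
    return img[row][column]

print(get_pixel(image, 3, 1))   # the pixel in column 3, row 1 -> 1

# In a bottom-up convention such as OpenGL's, row r counted from the bottom is
# row (height - 1 - r) counted from the top.
def get_pixel_bottom_up(img, column, row_from_bottom):
    return img[len(img) - 1 - row_from_bottom][column]

print(get_pixel_bottom_up(image, 3, 2))   # same pixel as column 3, row 1 from the top
```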