Show understanding of the need for input, output, primary memory and secondary (including removable) storage
Show understanding of embedded systems
Including benefits and drawbacks of embedded systems
Describe the principal operations of hardware devices
Including: Laser printer, 3D printer, microphone, speakers, magnetic hard disk, solid state (flash) memory, optical disc reader/writer, touchscreen, virtual headset
Show understanding of the use of buffers
Explain the differences between Random Access Memory (RAM) and Read Only Memory (ROM)
including their use in a range of devices and systems
Explain the differences between Static RAM (SRAM) and Dynamic RAM (DRAM)
including their use in a range of devices and systems, and the reasons for using one instead of the other depending on the device and its use
Explain the difference between Programmable ROM (PROM), Erasable Programmable ROM (EPROM) and Electrically Erasable Programmable ROM (EEPROM)
Show an understanding of monitoring and control systems
including
difference between monitoring and control
use of sensors (including temperature, pressure, infra-red, sound) and actuators
importance of feedback
This is not part of the CIE specification, but an interesting video to see development of communications and protocols.
We have seen a model of a computer system before. We know that for a computer to do useful things, we need to get data into it. We use input devices to do this.
Input devices are often divided into two categories, manual input devices and automatic input devices. We will discuss manual input devices here.
In general terms, a microphone and a speaker are the same device working in opposite directions: a microphone detects vibrations in the air and registers them as an analogue electrical signal, while a speaker takes an electrical signal and produces sound waves.
At the heart of a microphone is a diaphragm, coil and magnet. The diaphragm is connected to a coil of electrical wire. This coil is wrapped around a stationary permanent magnet. As sound waves are detected by the diaphragm, the coil is moved back and forth over the magnet, causing a flow of current in the wire. This analogue signal can be sent to an amplifier to boost the signal before going to the computer, or into an ADC (analogue-to-digital converter). It is most common for the computer itself to convert the analogue signal into digital.
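The conversion from an analogue signal to digital values can be sketched in Python. The sine-wave input, 8-bit resolution and 44.1 kHz sample rate below are illustrative assumptions, not the behaviour of any particular sound card:

```python
# Sketch: quantising an analogue signal the way an ADC might.
# The sine wave and 8-bit resolution are illustrative assumptions.
import math

def quantise(value, bits=8):
    """Map an analogue value in [-1.0, 1.0] to an integer level."""
    levels = 2 ** bits
    # Scale to [0, levels - 1] and round to the nearest step
    return round((value + 1) / 2 * (levels - 1))

# Sample a 440 Hz tone at 44.1 kHz for a few samples
sample_rate = 44_100
samples = [math.sin(2 * math.pi * 440 * n / sample_rate) for n in range(5)]
digital = [quantise(s) for s in samples]
```

Each reading becomes a whole number the computer can store, which is the essence of analogue-to-digital conversion.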
There are different types of microphone, with the most common being a dynamic microphone.
We have said that an interface is often required in control and data logging applications. You certainly need one if you want to attach any analogue transducers to a computer. An interface may perform other functions:
An interface may convert voltage signals from analogue into digital (ADC) or digital into analogue (DAC).
An interface may be used like a switch, so that e.g. a small dc voltage from the computer can be used to switch on and off a large motor that uses 240Vac.
An interface may provide compatible physical connections for a computer and the devices that need to be connected to it. Devices may use plugs and sockets that are physically different to a computer’s and so they cannot be directly attached to it. They have to go via an interface.
The pupil wants to know how the temperature varies in a 24-hour period. A suitable output for this would be a graph. If the pupil set up the computer to take a reading every 10 minutes, then the computer would take 6 readings an hour, or 144 readings in 24 hours. This sample would be more than sufficient to produce a good graph. Of course, the pupil could have set up the computer to take hundreds of readings every second! This would have been unnecessary in this case because it wouldn't have told the pupil any extra information. Taking readings at appropriate times is known as ‘sampling’ or ‘taking a sample’. The trick is to be able to justify for any given problem what an appropriate time interval between sample readings is!
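The sampling arithmetic above can be checked with a short function (the function name is just for illustration):

```python
# Sketch: how many readings a data logger takes for a given
# sampling interval, as in the 24-hour temperature example.
def readings_taken(hours, interval_minutes):
    return (hours * 60) // interval_minutes

# One reading every 10 minutes over 24 hours
print(readings_taken(24, 10))  # 6 per hour * 24 = 144
```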
Suppose we have a data logging system set up on the top of a mountain, to record the temperature. The data logger might take a reading every hour and store the value. How can the recorded values be sent back without having to send a person up to collect the readings? It can be achieved using satellite technology. A transmitter attached to the data logger sends a microwave signal to one of a small number of geostationary satellites that cover the planet. The message is amplified and retransmitted back to Earth. The signal is possibly bounced back up to another satellite and down again, so that the data is passed around the planet. The data is then collected and analysed by computer and possibly converted into graphs so trends can be examined.
We can log data in any place where people couldn't realistically go for long periods, perhaps because they are dangerous, such as in a nuclear reactor, in space or at the bottom of an ocean. Readings can potentially be taken 24/7 and are likely to be far more accurate than readings people take. We might also need data logging equipment where things happen so quickly that people couldn't take enough readings to analyse, for example, in scientific experiments. Of course, the equipment might break down, so backup equipment may be necessary.
There are different forms of touchscreen, and it must be noted that the touch aspect is separate from the visual display unit.
Infrared
Capacitive (most common)
Surface Acoustic Wave (SAW)
Resistive
NOTE: Touchscreens are two different components - a screen and the touch technology. You should see the OUTPUT section to read up on common types of display.
Touch screens are everywhere, from smartphones to self-serve kiosks at airports. Their versatility is remarkable, making it no surprise that there are various types of touch monitors, each designed for specific applications with unique advantages.
Infrared
Infrared touch panels use an invisible grid of infrared beams projected across the screen. When an object interrupts the beams, the panel detects the touch. This technology allows for larger screen sizes, often up to 150 inches.
Advantages
Highly durable and resistant to wear and tear.
Supports multi-touch functionality.
Works with any input, such as fingers, gloves, or styluses.
Can be applied on top of an existing screen to provide the touch component, or embedded within the screen itself
High clarity of the image, as the touch technology does not interfere with the display output
Disadvantages
Poor performance in direct sunlight or high-temperature environments, as infrared beams can be disrupted by external light sources.
The cost can be higher
Applications
Infrared touch panels are often used in interactive displays and large-format touchscreens for conference rooms, education, and digital signage.
(Projected) Capacitive
Commonly utilised for industrial purposes and consumer devices, capacitive touch screens consist of a glass overlay coated with a conductive material such as Indium Tin Oxide. Touching the screen disturbs its electrostatic field, and the resulting change in capacitance is sent to the touch controller, which calculates the touch position. This type of touch screen has very good clarity and durability, but can only respond to the touch of a bare finger, a capacitive stylus, or special conductive gloves.
Advantages
Highly accurate and responsive, with excellent multi-touch support
Very durable — the glass surface has no moving parts to wear out
Supports gestures such as pinching and swiping
High optical clarity, as no additional layers are needed over the display
Disadvantages
Only works with a bare finger or a specially designed stylus, since it relies on the electrical conductivity of human skin
Cannot be used while wearing ordinary gloves
More expensive to manufacture than resistive screens
Performance can be affected by water or moisture on the screen
Applications
Smartphones and tablets (e.g. iPhones, iPads, Android devices)
Laptop trackpads
Self-service kiosks in retail environments
Surface Acoustic Wave (SAW)
Ultrasonic sound waves are transmitted across the surface of the screen using transducers. When the screen is touched, the finger absorbs some of the wave energy at that point. Receivers detect where the wave was disrupted, and the controller calculates the touch position from the change in signal.
Advantages
Excellent image clarity, as there is no film or coating over the screen — the display remains sharp and undistorted
Works with any object that can absorb acoustic energy: a finger, a gloved hand, or a soft stylus
Supports up to 256 levels of pressure sensitivity, making it suitable for applications requiring varied input force
Durable in environments with heavy, repeated use
Disadvantages
The transducers and the screen edges are vulnerable to damage from liquids, dirt, and dust — contaminants can trigger false inputs or block detection entirely
Does not work with hard, rigid objects such as a pen or credit card, because these do not absorb the acoustic wave effectively
More sensitive to environmental conditions than capacitive screens
Generally more expensive and complex to calibrate
Applications
ATMs and banking terminals, where durability and clarity are priorities
Industrial control panels
Public information kiosks, such as those found in museums or transport hubs
Medical equipment displays, where gloved use is essential
Resistive
Resistive touch panels are among the most cost-effective options. They operate by detecting pressure applied to the screen, which makes them compatible with various input methods, such as fingertips, styluses, or even gloved hands. These panels consist of two thin, flexible layers separated by a gap. When pressure is applied, the layers make contact, registering the touch.
Advantages
Affordable and widely accessible.
Functional even with non-conductive inputs (e.g., gloves, styluses).
Unaffected by water on the surface, making them reliable in wet environments.
Disadvantages
Limited to single-touch functionality.
Can reduce the clarity of the screen image due to the overlay and technology used. Often around 75% of the image clarity remains
Less durable compared to other technologies due to wear over time. You often have to press harder
Typically capped at smaller sizes (around 20 inches).
Applications
Resistive touch panels are commonly used in point-of-sale systems (like grocery store checkout screens) and industrial equipment, where cost-effectiveness and simplicity are key
There are lots of commonly used output devices available for a user to select from.
These types of printers are in widespread use. A laser beam scans across a drum inside the printer to 'paint' a pattern of static electricity corresponding to whatever you are printing out. The static electricity attracts powdered ink called toner onto the drum, which transfers it to the page, and then a fuser unit bonds the toner onto the paper so it is permanently fixed. (This is also how a photocopier works.)
A laser printer produces very high quality black and white hardcopy.
A laser printer costs more to buy and run than ink-jets although costs have been steadily falling in recent years.
The price of colour laser printers has been falling to make them within reach of individuals and small businesses. Colour laser printers work in a similar way to mono laser printers, but the paper travels through 4 different toner drums (CMYK)
Refills are expensive compared to ink-jets.
The following video discusses LED printers (which are rare), but as LED and laser printers work in very similar ways, this is still a valuable video to watch.
Monitors are ideal for displaying data and information to users. They come in a range of sizes. Larger ones such as 21-inch screens, for example, would be ideal for engineers using Computer Aided Design software applications. 15-inch screens are perfectly acceptable for users using a range of generic applications. CRT (Cathode Ray Tube) monitors (similar to televisions) do take up a lot of space on a desk. Flat panel liquid crystal display screens, often referred to as TFT screens, save a lot of space by comparison. They are not quite as sharp as CRT screens, however, and are more expensive. TFT screens produce less radiation than CRT monitors. Excessive exposure to radiation is seen as a potential risk to computer users. They also use about half the power a CRT screen uses. If you multiply up the savings in power use in an organisation with thousands of computers, the cost-savings do become significant.
Touch screens are an important part of modern interfaces, this useful website gives comparisons between the different types of touch screen technology.
There are various types of screen technology:
OLED
LCD (of differing designs such as QLED, TFT)
TFT-LCD (Thin Film Transistor Liquid Crystal Display)
How it works: A backlight (usually LEDs) shines through a layer of liquid crystals. Each pixel contains a tiny transistor that controls the alignment of the crystals. When the crystals align, they allow light through a colour filter (red, green, or blue); when they twist the other way, they block it. The combination of these sub-pixels produces the full range of colours you see.
Advantages
Cheap to manufacture, making them widely available and affordable
Produce very little heat compared to older CRT displays
Slim and lightweight — practical for laptops and desktop monitors
Consistent brightness across the entire screen
Disadvantages
Requires a constant backlight, so true black cannot be displayed — black pixels merely block the light rather than switch off entirely, resulting in lower contrast
Viewing angles can be limited; colours and brightness shift when the screen is viewed from the side
Thicker and heavier than OLED panels
Applications:
Desktop monitors, budget laptops, televisions, and most consumer displays where cost is a priority.
OLED (Organic Light Emitting Diode)
How it works: Each pixel contains an organic compound that emits its own light when an electric current passes through it. There is no backlight — every pixel is independently switched on or off. This is the fundamental technical difference from TFT-LCD.
Advantages
True blacks are achievable because pixels that should display black are simply switched off entirely, producing an effectively infinite contrast ratio
Faster response times than TFT-LCD, making motion appear smoother
Wide viewing angles with no colour distortion
Thinner and more flexible — enabling curved and foldable displays
More energy-efficient when displaying dark content, since dark pixels draw no power
Disadvantages
Organic materials degrade over time, meaning OLED displays have a shorter lifespan than TFT-LCD
Susceptible to burn-in: if a static image is displayed for a long time (e.g. a taskbar), the organic compounds in those pixels wear out faster, leaving a ghost image permanently visible
More expensive to produce
Can have lower peak brightness in very bright environments
Applications:
High-end smartphones, premium televisions, smartwatches, and professional monitors where colour accuracy and contrast are critical.
Some applications such as burglar alarms, factory warning systems and monitoring equipment make use of audio output. Some applications also require sound, such as video-conferencing, using your computer to make phone calls, listening to DVDs or CDs and playing games. There are different ways that audio output from a computer can be achieved.
The cheapest option is simply a pair of speakers powered by the computer. They will plug into the back of the computer, in the speaker output. The quality and level of sound will be perfectly adequate for many applications but they cannot produce a very loud output and cannot produce a very high quality sound.
You could also buy a pair of speakers that come with their own power supply. Although more expensive, they produce a higher quality sound and greater volume.
It is perfectly possible to connect an amplifier to the back of a computer and then pass the amplified signal to some speakers. This is a much more expensive proposition but does produce hi-fi quality sound.
In some noisy environments such as factories, klaxons (sometimes known as 'sirens') are common. These can be computer controlled and can produce a very loud sound that can be heard over noisy machinery.
Speakers work by sending an electrical current through a coil. The current varies according to the sound you want to play. A coil with a current flowing through it becomes an electromagnet, which is alternately attracted and repelled by a permanent magnet. These parts are all contained within the speaker cone, which has a thin membrane called a diaphragm. The constant magnetising and demagnetising of the coil causes the diaphragm to vibrate, 'pumping' sound out of the speaker.
There are situations where a user wants to listen to sound in a public place but doesn't want to disturb others. For example, a user in a library might want to listen to CDs on a computer, or a telesales operator might need to concentrate on what a customer is saying. Headphones are ideal in these situations, and they work in the same way as speakers.
There are lots of short but very interesting videos you can watch to see the possibilities of this technology, which involves designing an object using some software and then sending it to a 3D printer to print out the actual object. It does this by printing out very thin layers of the object one at a time, and binding each layer with the previous one.
Rather than replicate the content, please visit the Explaining the Future website that has a very interesting and informative page listing all of the current types of 3D printer.
Some examples of the use of technology include;
Shooting the world's first 3D-printed gun.
Printing out replacement legs.
Printing out replacement ears and replacement skin.
3D printing in the high street
Storage devices are known as 'non-volatile' devices. This is because when the power is removed from the device, the data files remain. These devices are long-term storage. They are perfect for backing-up files and applications, transporting them and sharing them. If you didn't have storage devices, you would have to reload applications and re-enter files each time you switched your computer back on! There are lots of different types of storage device to choose from.
Details on how each work are found in the 'internal operation' section below.
Hard disk drives (HDD), are direct access devices - you can go straight to a file on an area of disk without having to go through all the other files first. Hard disks are used to store applications and data files. They can hold huge amounts of data compared to a floppy disk. A typical hard disk today might hold 2-6 TB of data but bigger disks seem to come onto the market every few months!
All personal computers have hard disks, although it is possible for workstations on a network to exist without a hard disk - they make use of the server's hard disk to store applications and data. These types of computers can be referred to as 'thin clients'.
Hard disks can also be used as back-up devices. A computer can be fitted with a second hard disk known as a mirror hard disk, or with a RAID (Redundant Array of Independent Disks) storage system. As the name suggests, this second hard disk is used to keep an identical copy of the main hard disk. Then, if the main hard disk fails (and every hard disk, indeed every storage device, will fail sooner or later), you can use the mirror disk to recover your applications and data with the minimum of effort.
Operating System Usage: Accessing a file
Applications access the HDD via the operating system (OS). First the application runs code that needs to access (read or write) the storage device. The program passes its file request to the OS and then goes into a 'blocked' state (more on this in A2), meaning the application is paused until the operation is complete. Most HDDs spin constantly (although some drop into a low-power state), so if necessary the OS will spin up the disk platters. If a file is being read, the OS searches the File Allocation Table (FAT) for the track and sector where the first part of the file can be found. The head then moves to the correct track, and when the correct sector arrives under the read head, the data from the first cluster of sectors is written into the disk buffer; the disk continues to read successive clusters, writing this data to the buffer. When the file has been read, an interrupt is generated by the disk drive and the OS transfers the contents of the buffer to the application's data memory.
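From the application's point of view, all of that machinery is hidden behind a single blocking read call. A minimal Python sketch, assuming an illustrative 4096-byte chunk size (the OS and disk controller handle the FAT lookup, head movement and interrupt behind the scenes):

```python
# Sketch: the application's view of the buffered read described above.
# The chunk size of 4096 bytes is an illustrative assumption.
def read_in_chunks(path, chunk_size=4096):
    data = bytearray()
    with open(path, "rb") as f:          # OS resolves the path via the file system
        while True:
            chunk = f.read(chunk_size)   # blocks until the buffer is filled
            if not chunk:                # empty read means end of file
                break
            data.extend(chunk)
    return bytes(data)
```

Each `f.read()` call is where the application would sit in its blocked state while the OS and drive do the real work.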
Storage capacity issues
You may have noticed that sometimes your files take up more room on the device than their actual size. Why is this?
Take the FAT/FAT32 file system for example (NTFS and exFAT behave similarly with regard to allocation units). If you have a lot of small files, the space they take up on the storage device can certainly be far greater than their combined size. Consider this:
50,000 files
32 KB cluster size (allocation units), which is the max for FAT32
Ok, now the minimum space taken is 50,000 * 32,000 = 1.6 GB (using SI prefixes, not binary, to simplify the maths). The space each file takes on the disk is always a multiple of the allocation unit size – and here we’re assuming each file is actually small enough to fit within a single unit, with some (wasted) space left over.
If each file averaged 2 KB, you’d get about 100 MB total – but you’re also wasting 15x that (30 KB per file) on average due to the allocation unit size.
In-Depth Explanation
Why does this happen? Well, the FAT32 file system (and all other file systems) needs to keep track of where each file is stored. If it were to keep a list of every single byte, the table (like an address book) would grow at the same speed as the data – and waste a lot of space. So what they do is use “allocation units”, also known as the “cluster size”. The volume is divided into these allocation units, and as far as the file system is concerned, they cannot be subdivided – those are the smallest blocks it can address. Much like you have a house number, but your postman doesn’t care how many bedrooms you have or who lives in them.
So what happens if you have a very small file? Well, the file system doesn’t care if the file is 0 KB, 2 KB, or even 15 KB, it’ll give it the least space it can – in the example above, that’s 32 KB. Your file is only using a small amount of this space, and the rest is basically wasted, but still belongs to the file – much like a bedroom you leave unoccupied.
Why are there different allocation unit sizes? Well, it becomes a trade-off between having a bigger table (address book, e.g. saying John owns a house at 123 Fake Street, 124 Fake Street, 666 Satan Lane, etc.), or more wasted space in each unit (house). If you have larger files, it makes more sense to use larger allocation units – because a file doesn’t get a new unit (house) until all others are filled up. If you have lots of small files, well, you’re going to have a big table (address book) anyway, so may as well give them small units (houses).
Large allocation units, as a general rule, will waste a lot of space if you have lots of small files. There usually isn’t a good reason to go above 4 KB for general use.
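The allocation rule described above can be sketched as a small function. The figures match the FAT32 worked example (SI prefixes, 32 KB clusters); the function name is illustrative:

```python
# Sketch: space actually allocated on disk for a file, given the
# allocation unit (cluster) size. A file always occupies a whole
# number of clusters, so small files waste the remainder.
def allocated_size(file_size, cluster_size):
    clusters = max(1, -(-file_size // cluster_size))  # ceiling division
    return clusters * cluster_size

KB = 1000  # SI prefix, as in the worked example above

# A 2 KB file in a 32 KB cluster still occupies the full 32 KB
print(allocated_size(2 * KB, 32 * KB))        # 32000 (30 KB wasted)
print(50_000 * allocated_size(2 * KB, 32 * KB))  # 1600000000, i.e. 1.6 GB
```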
This media is ideal for distributing software because they hold a lot of data (650 Mbytes or more) compared to floppy disks (1.44 Mbytes) and nowadays, most computers have a CD-ROM drive. It is much more convenient having software on one CD than having lots of floppy disks because you don't have to keep changing the media when installing the software. One CD is also less bulky than lots of floppy disks. CD-ROMs are direct access media but they do not use magnetic technology! They are optical storage media. They store information on pits in the surface of the CD and then use a laser to scan over the pits. CD-ROMs are read-only devices. The basic CD unit is very cheap. However, you would need to use a CD-R/W device and a special type of CD if you want to write to as well as read from a CD, in much the same way as you would read and write to a floppy disk.
This kind of optical, direct access media is ideal for backing up personal files, especially those involving multimedia. It is suitable for this type of application because of the high storage capacity of CDs. You need to have a special CD-R/W device to write to CDs. The CDs themselves could be of the WORM type (Write Once Read Many times). This means that you can only write to the CD once (sometimes called 'burning a CD') but can read from it many times. This might be suitable if you wanted to make a back-up copy of some software you have bought or wanted to make and distribute some music you had recorded. CDs can also be Read-Write, which means they can be written to many times in the same way a floppy disk can be. A CD that has been created using a CD R/W device can be read from a standard CD-ROM device, usually after the installation of a small utility program. CD-R/W devices are now cheap to buy, of the order of twenty to thirty pounds.
DVDs are rapidly replacing CDs (and video tapes)! This optical, direct access, very fast media can hold approximately 17 Gbytes of data compared to the 650 Mbytes of a standard CD. A DVD player can read CDs as well as DVDs with the addition of extra software. They are typically used for distributing multimedia, especially high quality video. A DVD can store about 8 hours of high quality video! DVD recorders are also available. Although they cost hundreds of pounds their price will inevitably drop.
These plug into a computer's USB port and provide a convenient way of transferring large amounts of data. Although they are relatively robust, you can corrupt the contents (so they become unreadable) by pulling the pen drive out of the PC before properly 'disconnecting' it using the tool that comes with your operating system.
These small cards can hold very large amounts of data and are ideal for cameras and mobile phones, to hold pictures, videos and music.
Data compression refers to the process of 'squashing' data so that you can store more of it in the same space on a storage device. This can be done by using a utility program from within your operating system or using applications such as WinZip. A lot of data sent over the Internet is also compressed because this reduces the amount of time (and cost) of data transfer. It is compressed by the sending computer and de-compressed by the receiving computer. This happens automatically, without the knowledge of the users.
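As a sketch of lossless compression before transfer, Python's built-in zlib module can be used (the sample data is made up and deliberately repetitive so it compresses well):

```python
# Sketch: compressing data before storage or transfer, as described
# above, then restoring it exactly on the other side.
import zlib

original = b"AAAA" * 1000           # highly repetitive, so compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original          # lossless: nothing changes on the way back
assert len(compressed) < len(original)
```

In a real transfer, the sending computer would run the compress step and the receiving computer the decompress step, exactly as the paragraph above describes.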
Many companies these days do not back up their work onto a physical medium such as a tape or DVD but use internet storage instead. When they want to back something up, the data is first compressed to make it smaller. It is then sent over the Internet to a company, who stores it on their computers. This has quite a few advantages. You can set up the back-ups to automatically happen so no one will forget to do them. You don't need to buy expensive equipment to back-up work and you can't lose back-ups. Of course, you need an Internet connection and some companies aren't happy about sending their valuable data to another company to look after for security reasons. However, this way of storing data, known as 'cloud storage' is becoming increasingly popular. Many cloud storage companies offer a certain amount of free storage to individuals. You could do a search and start using cloud storage yourself!
When selecting a storage device to use, you should consider a number of issues. These include:
How fast the media can be accessed (their 'read / write access times').
Whether data can be accessed directly or serially, because this affects the time it takes to access data. Magnetic tape, for example, is serial access and very slow whereas flash drives are direct access and very fast.
How much data can be stored on the media.
What the media might typically be used for.
How commonly used the media is and whether other computers are likely to be able to use that media. Most computers don't use floppy disks or Zip drives anymore, for example, so even if you had these on your computer and wanted to share files, it wouldn't be a good idea to use these.
The cost of the media and the cost of the actual device used to read from or write to it.
Whether the media is read-only or read-write. some devices can only be written to once and then read many times (WORM) like a CD-ROM but other devices can be written to many times, like a pen drive.
Whether the storage medium is 'virtual' and requires an Internet connection or whether it is physical.
How portable and convenient the device is.
There are three main types of technology for storage. These are:
solid state
magnetic
optical.
Solid state devices have no moving parts. That means they can't get worn out and are not as easily damaged by bangs and knocks as optical and magnetic devices. They store data in binary patterns using billions of tiny switches called transistors (using NOR or NAND technology). Another point to note about solid state devices is that they need very little power to work and can draw the power that they do need from the device that they are plugged into. Examples include SD cards, micro SD cards and pen drives. Many computers and laptops are now being sold with solid state drives (SSDs). These are typically smaller in capacity, but usually much faster, than magnetic hard drives, as there are no moving parts and latency is cut down to a few milliseconds! A solid state drive can breathe new life into an old laptop.
These store binary data patterns as billions of magnetised fields, using the magnetic properties of materials such as iron. These areas can be read from or written to by a special head that moves over the magnetic area. With hard disk drives and floppy disk drives, the disk spins very quickly (thousands of times a minute, commonly at 7200 RPM) whilst the head moves over the magnetised areas, reading data from and writing data to the disk in concentric tracks. One of the things to look out for if you ever have to buy a new hard drive is how fast the disk or disks spin (hard disks often have more than one disk inside them, each called a platter) - the faster they spin, the quicker reading and writing can take place, so the faster your computer can work. Cache is also important, so data can be buffered for quicker read/write times. This video provides a lot more information than you will need for your exam, but does show what an amazing device the HDD is.
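The spin speed quoted above directly determines the average rotational latency: on average, the sector you want is half a revolution away from the head when the head arrives at the track. A quick sketch:

```python
# Sketch: average rotational latency of a hard disk from its spin speed.
# On average the wanted sector is half a revolution away from the head.
def avg_rotational_latency_ms(rpm):
    seconds_per_rev = 60 / rpm
    return (seconds_per_rev / 2) * 1000

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17 ms
print(round(avg_rotational_latency_ms(5400), 2))  # 5.56 ms
```

This is one reason a 7200 RPM drive outperforms a 5400 RPM one, independent of data density or cache size.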
Modern day hard disk drives use a method of data storage called zone bit recording. Tracks are grouped into zones, and tracks in the outer zones are given more sectors than those in the inner zones. Rather than the data density dropping towards the edge of the platter, which would waste storage, the number of sectors per track is recalculated for each zone.
The video below explains the components of a platter in more detail (tracks, cylinders, etc.)
Optical devices store binary patterns using lasers. The lasers shine onto a disk and change whether an area on it can reflect light or not. The laser can then be used to read back patterns by shining a laser on the disk and looking at which areas reflect light and which don't. Some types of disk can be written to just once, although you can read from them many times. These are known as 'WORM' storage devices (Write Once Read Many). CD-ROMs, DVDs and Blu-ray disks are examples of WORM disks. They are often used by manufacturers to distribute software. Other media, such as CD R/W can be written to many times as well as read many times.
With optical media, the wavelength of the laser determines how much data can be stored on the disk. The shorter the wavelength, the more data can be packed into the same space. Disks can also make use of layering, by changing the intensity of the laser beam, data can be written to different disk layers, allowing double the storage capacity per side.
The change in height between pits and lands results in a difference in intensity in the light reflected. By measuring the intensity change with a photodiode, the data can be read from the disc. The digital information is defined by the length of the pits and the distance between them. The pits and lands themselves do not directly represent the zeros and ones of binary data. Instead, Non-Return-to-Zero Inverted (NRZI) encoding is used: a change from pit to land or land to pit indicates a one, while no change indicates a run of zeros.
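The NRZI rule described above can be sketched in a few lines of Python. Each element below represents the surface state at one clock interval; the sample data is made up for illustration:

```python
# Sketch: decoding an NRZI-encoded surface. A transition between
# pit and land means 1; no transition means 0.
def nrzi_decode(surface):
    bits = []
    for prev, curr in zip(surface, surface[1:]):
        bits.append(1 if prev != curr else 0)
    return bits

# pit->pit (0), pit->land (1), land->land (0), land->pit (1)
print(nrzi_decode(["pit", "pit", "land", "land", "pit"]))  # [0, 1, 0, 1]
```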
Note that dual laser beam systems have mostly been replaced by more sensitive photodiodes that are able to detect the difference in light reflectivity at each transition.
The basic building block of any computer is the switch. Computers, however, have millions and millions and millions of electronic switches in them, held in components such as RAM or the processor. Each switch can have one of two positions, on or off, which in computing we represent as 1 or 0. Each switch can therefore hold one very simple piece of information (1 or 0). We call each switch a 'bit' (from BInary digiT); it is the smallest unit of storage or memory that you can have. However, when you group these switches together in a certain way, you can represent data as binary codes, such as letters of the alphabet or numbers!
A single bit cannot hold a great range of numbers! It can hold either zero or one. You may have read about nibbles. A nibble is a group of 4 bits. The smallest value a nibble can hold is 0000 in binary and the largest is 1111 in binary. (0000 in binary is the same as 0 in denary; 1111 in binary is the same as (1 x 8) + (1 x 4) + (1 x 2) + (1 x 1), or 15 in denary.) It is also very common to group bits together in groups of 8. A group of eight bits is known as a 'byte'. Bytes are extremely convenient and important units of storage and memory to work with, as you will find out in due course.
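The nibble arithmetic above can be checked directly. Python's built-in `int(text, 2)` converts a string of binary digits to denary, which makes it easy to confirm the largest values a nibble and a byte can hold.

```python
# Checking the place-value arithmetic above in code.
nibble_max = int("1111", 2)      # (1*8) + (1*4) + (1*2) + (1*1)
byte_max = int("11111111", 2)    # eight ones

print(nibble_max)        # 15 - the largest value a 4-bit nibble can hold
print(byte_max)          # 255 - the largest value an 8-bit byte can hold
print(2 ** 4, 2 ** 8)    # 16 and 256 distinct patterns respectively
```

In general, n bits give 2 to the power n distinct patterns, so the largest unsigned value is one less than that.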
We have seen that a byte can be used to represent a number. We will see soon that the number can be thought of as a code that represents a character on a keyboard. Before we look at that, however, we should note that if one byte is going to represent one character on the keyboard then we are going to have to collect together lots of bytes to record a memo, for example. For that reason, we frequently talk about Kilobytes, Megabytes and Gigabytes.
The smallest unit is a bit (0 or 1), written with a lowercase b
There are 4 bits in a nibble
1 byte (written B) is 8 bits or 2 nibbles
1 Kilobyte (1 KB) is 1024 bytes exactly,
1 Megabyte (1 MB) is 1024 KB,
1 Gigabyte (1 GB) is 1024 MB.
1 Terabyte (1 TB) is 1024 GB.
Be careful if you see a lowercase b. E.g. 244Kb, which is 244 kilobits, not kilobytes.
So 15 Kbytes is about 15 thousand bytes. 128 Mbytes is about 128 million bytes. 20 Gbytes is about 20 thousand million bytes and 5 Tbytes is about 5 million million bytes. More often than not, you don't need to know the exact number of bytes, just an approximation!
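The exact figures behind those approximations come straight from the 1024-based definitions above:

```python
# The 1024-based units above, expressed as exact byte counts.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB

print(15 * KB)    # 15360 - "about 15 thousand" bytes
print(128 * MB)   # 134217728 - "about 128 million" bytes
print(5 * TB)     # 5497558138880 - "about 5 million million" bytes
```

This also shows why the approximations drift slightly: each step multiplies by 1024 rather than 1000, so the error grows with every prefix.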
The term 'memory' is vague and often used by people to represent the information stored by a computer. In reality, memory comes in different sizes, configurations and speed. There are four main types of memory:
Registers (found on the CPU)
Cache (found on the CPU and in other peripherals, e.g. hard disk)
RAM
ROM
The diagram below shows the hierarchy of different memory types. Typical access speeds and costs are not shown, as these would be out of date before this page is saved.
The diagram shows that the fastest memory is also the most expensive, and also the smallest. The vast majority of fast access memory (such as cache) is comprised of SRAM (static RAM). This is very expensive to produce, which is why it is only used where speed is essential.
RAM is also known as main memory and primary storage (as is cache memory). Primary memory, excluding ROM, is known as volatile memory because its contents are lost as soon as the computer loses power.
The vast majority of the content below is taken from The Teacher website.
Introduction
Computer systems come with two types of primary memory, RAM and ROM. Both types are known as primary memory (or primary storage) because the CPU can access them both directly.
Secondary storage
We often compare primary memory to secondary storage. Secondary storage is the term used for long term storage devices like a hard drive, a CD-ROM or a pen drive. The hard drive in a computer, for example, typically stores your operating system, your applications and all of your files, even if the power is switched off and even if you are not using them at the moment. We often think of secondary storage as a suitcase, a place simply used to store large amounts of data. This data isn’t accessed directly by the CPU whilst it is in the suitcase. If the CPU needs to run an application or access a particular file, then a copy of the application or file is moved into RAM first, and then it accesses it from RAM.
RAM (Random Access Memory)
This is the place where the computer stores programs and files it is using at the moment. Remember, all your programs and files are stored on your hard drive (whether you are using them or not) but the ones that are open, that are being used at the moment, have a copy in RAM. That means that the CPU can access them immediately. For this reason, RAM is often also known as the immediate access store. Computers could be designed so that they accessed instructions and data in a program directly from a hard drive. The problem with this approach is that devices like hard drives and other storage devices are very slow compared to RAM. It wouldn't allow computers to perform as quickly as they do.
You can think of RAM as a table with two columns. The second column is used to store instructions in a program and any pieces of data the program needs. There will be lots of these 'storage units' so each of them needs an address and that is what the first column is used for. The CPU uses these addresses to find and save data and instructions.
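The two-column picture above maps naturally onto a small model: addresses in one column, instructions or data in the other. The addresses, instructions and values below are all invented for illustration; real machine instructions are binary patterns, not strings.

```python
# A toy model of the two-column picture of RAM described above.
# Addresses and contents here are invented for illustration.
ram = {
    0: "LOAD 3",   # instruction held at address 0
    1: "ADD 4",    # instruction held at address 1
    2: "STORE 5",  # instruction held at address 2
    3: 10,         # data the program needs
    4: 32,         # more data
}

# The CPU uses an address to find a value, and to save one back.
print(ram[3])            # read the data held at address 3 -> 10
ram[5] = ram[3] + ram[4] # write a result to address 5
print(ram[5])            # -> 42
```

The key point is that every storage unit is reached by its address, regardless of whether it holds an instruction or a piece of data.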
ROM (Read Only Memory)
ROM is needed when a computer system is powered up.
ROM holds part of a program that starts running when a computer is switched on. This program has two jobs:
It checks that the computer hardware is all present and working correctly when you power up a computer system. It runs what is known as a 'BIOS' check (Basic Input Output System check)
Then it helps the computer copy the operating system from the hard drive to RAM, so that the computer can then be used.
Starting up a computer is also known as 'booting up' the computer.
RAM v ROM
There are some key differences between RAM and ROM that should be noted, apart from their typical uses.
RAM is volatile. That means that when you switch the power off, all the contents of RAM are lost. ROM is non-volatile - even if you switch the power off, the programs and data in ROM are not lost. They are there waiting for you to power up the computer again! If your computer starts misbehaving, one solution is to switch off the power and boot it up again. What you are doing is clearing out the RAM and reloading it all again in a nice, organised fashion.
You can read from RAM and write data to RAM as well. With ROM, you can only read from it. You can't write anything to ROM. (Actually, it is possible if you know what you are doing. This process is known as 'flashing'. If something goes wrong whilst you are flashing ROM, the equipment can stop working altogether!)
EPROM, EEPROM, etc is covered later on this page
Introduction
There are a number of factors which affect how well a computer performs. One of them is the amount of RAM it has.
Running programs
When you install a program, it is stored on the hard drive. When you open or run a program, a copy of that program is put into RAM. The CPU can then access the program to run it (by fetching, decoding and executing each program's instructions). There are two areas that increasing or decreasing the amount of RAM can have an impact on.
Virtual memory
In an ideal situation, the entire program is put into RAM. If there isn't enough RAM available, however, then some of the hard drive is used as 'pretend RAM' (called virtual memory) and some of a program you are running is put there. This is far from ideal because hard drives are much slower for the CPU to access compared to RAM, although it does allow you to run more programs.
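The spill-over idea described above can be sketched in a few lines. This is a deliberately crude model: the capacity, page names and data structures below are all invented, and real operating systems manage virtual memory in fixed-size pages with far more sophisticated replacement policies.

```python
# A very simplified sketch of virtual memory as described above:
# when real RAM is full, further pages spill to the (much slower)
# hard drive. All names and sizes here are invented.
RAM_CAPACITY = 3   # pages of real RAM in this toy model

ram_pages = []
disk_pages = []    # the 'pretend RAM' on the hard drive

def load_page(page):
    if len(ram_pages) < RAM_CAPACITY:
        ram_pages.append(page)    # fast: the page fits in real RAM
    else:
        disk_pages.append(page)   # slow: spills to virtual memory

for page in ["OS", "word processor", "music player", "chat program"]:
    load_page(page)

print(ram_pages)   # the first three pages fit in RAM
print(disk_pages)  # the fourth spills to the hard drive
```

Every access to a spilled page pays the hard drive's much higher access time, which is why heavy use of virtual memory makes a computer feel sluggish.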
Multi-tasking
You can have more than one program at a time in RAM. For example, you will always have an operating system running, but you might also be writing a letter using a word processing application, at the same time have some music playing in the background using a music playing application and perhaps also have a chat program running. Doing lots of different things at (apparently) the same time is known as multi-tasking. It is 'apparently' because the CPU can only actually work on one instruction from one program at any one time. However, because it can do billions of instructions in a second and can switch between applications really quickly, everything seems to work at the same time!
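The 'apparent' simultaneity above comes from rapid switching. One simple scheduling scheme, round-robin, gives each program a turn in strict rotation; the sketch below uses invented program names to show the interleaved order the CPU would work through.

```python
# A sketch of apparent multi-tasking: the CPU runs one instruction
# at a time but switches between programs so quickly that all of
# them appear to run at once. Program names are invented.
from itertools import cycle, islice

programs = ["OS", "word processor", "music player", "chat"]

# Round-robin: one turn for each program, over and over.
schedule = list(islice(cycle(programs), 8))
print(schedule)   # each program appears twice, in strict rotation
```

Real schedulers are far cleverer (priorities, time slices measured in milliseconds, programs blocked waiting for input), but the interleaving principle is the same.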
Increasing the amount of RAM
You can keep opening as many new programs as you like on a computer. Each time you open one, a little bit more RAM (and 'pretend RAM' on the hard drive) is used and eventually, the computer slows right down. The more RAM you have, however, the more programs you can have open and completely copied in RAM without having to use the hard drive at the same time. One of the easiest ways of improving a computer's performance therefore is to increase the amount of RAM it has. You go to a computer shop and buy another stick of RAM and install it onto the computer's motherboard, or replace an existing smaller stick. The limiting factor of how much RAM you can have, however, is usually the computer's motherboard itself. There will be a fixed number of slots on the motherboard and the slots can only take up to a fixed amount of RAM. This information is in each motherboard's documentation. RAM can also be quite expensive so you need to have the funds to buy more RAM.
Introduction
There are two common types of RAM, each with their own characteristics and advantages/disadvantages.
Comparison
DRAM needs its data to be refreshed periodically in order to retain it; SRAM doesn't need to do this. As long as there is power to the SRAM, it will hold its data. DRAM has to have extra circuitry to carry out this refreshing and to get its timing correct, which has implications for the amount of power a DRAM unit needs compared to the much simpler SRAM unit. All of this means that data can be read from and written to SRAM a lot faster than DRAM.
Another implication of the simplicity of SRAM is that it is a lot easier for people to design hardware that uses it compared to the more complicated DRAM. If you are trying to design an application, you want to use hardware that is easy to use and integrate with your other components.
It isn't all rosy for SRAM, however! Each bit that you need to store in DRAM requires 1 transistor and 1 capacitor. SRAM typically requires six transistors for every bit that is stored. If SRAM needs around six times the number of components per bit, then it clearly is going to be a more expensive design, and therefore the cost of buying 8 GB of SRAM is going to be a lot more than the cost of buying 8 GB of DRAM.
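The scale of that cost difference becomes clear with a quick count, assuming the standard cell designs (1 transistor plus 1 capacitor per DRAM bit, 6 transistors per SRAM bit):

```python
# Component counts for 8 GB of memory, assuming the standard cell
# designs: DRAM = 1 transistor + 1 capacitor per bit, SRAM = 6
# transistors per bit.
bits = 8 * 1024**3 * 8           # bits in 8 GB (8 GiB) of memory

dram_components = bits * 2       # 1T + 1C per bit
sram_components = bits * 6       # 6T per bit

print(dram_components)           # tens of billions of components
print(sram_components)           # three times as many again
```

Even before considering that an SRAM cell is physically larger, tripling the component count per bit makes gigabyte-scale SRAM uneconomic.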
The relatively cheaper cost of DRAM is the reason why it is the most commonly found type of RAM in computer systems, despite being slower and needing more power than SRAM. It is used as general purpose memory, to hold the operating system, the applications and files you are actually using at any particular moment in time.
In devices where speed is crucial, SRAM is the memory of choice. The typical use for this is in cache. Programs are made up of instructions. These are fetched from DRAM by the CPU and executed. This is done quickly but still takes time. However, some instructions in DRAM are needed again and again. If you put these instructions in 'super-fast memory' instead (in other words, into SRAM), you can speed up processing and make your computer go much faster. That's what cache is - super-fast memory that holds instructions and data you keep needing. You only get limited amounts of it in a computer system, though, because as we have seen, SRAM is much more expensive than DRAM.
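The cache idea above - keep what the CPU keeps needing in a small, fast store, and fall back to slower main memory on a miss - can be sketched as follows. The names and stored values are invented; real caches work on memory addresses and cache lines, not named variables.

```python
# A sketch of the cache principle described above. Data invented.
main_memory = {"x": 7, "y": 12, "z": 3}   # large, slow store (DRAM)
cache = {}                                 # small, fast store (SRAM)

def read(name):
    if name in cache:
        return cache[name]      # cache hit: fast
    value = main_memory[name]   # cache miss: slow fetch from DRAM
    cache[name] = value         # keep it for next time
    return value

read("x")         # first access: miss, fetched from main memory
print(read("x"))  # second access: hit, served from cache -> 7
```

Because programs tend to reuse the same instructions and data repeatedly (locality of reference), even a small cache catches a large share of accesses.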
One question that should always be asked when buying a new computer is "How much cache has it got?" It is always worth getting as much cache as you can afford because it speeds up processing - but at a price!
This video from Computerphile explains how DRAM works. The contents are not essential for revision, but it goes a long way to explain the text above. The diagram drawn in the first part is called a flip-flop, and these are explored in the A2.
If you want to look at how modern devices and memory have evolved, the video below gives a fascinating insight.
How it works: A PROM chip is manufactured blank, with every bit set to 1. A device called a PROM programmer (or "PROM burner") is used to write data by sending electrical pulses that permanently blow tiny internal fuses, setting selected bits to 0. Once a fuse is blown, it cannot be restored.
Key characteristics
Can be programmed exactly once after manufacture — it is then permanent
Faster and cheaper to produce than having the data hard-wired at the factory (as with mask ROM)
Non-volatile: retains data without power indefinitely
Once written, it is truly read-only — there is no mechanism for erasure
Applications: Firmware in early embedded systems, device configuration data, and any situation where data needs to be fixed permanently after production.
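The one-way nature of PROM programming described above can be modelled in a few lines: every bit starts at 1, and burning blows a fuse to 0, a change that can never be undone. This is purely a toy model of the behaviour, not of any real programmer hardware.

```python
# A toy model of PROM programming: manufactured with all bits set
# to 1; 'burning' blows a fuse, permanently setting a bit to 0.
class PROM:
    def __init__(self, size):
        self.bits = [1] * size     # manufactured blank: all ones

    def burn(self, position):
        self.bits[position] = 0    # blow the fuse: a one-way change

chip = PROM(8)
chip.burn(1)
chip.burn(4)
print(chip.bits)   # [1, 0, 1, 1, 0, 1, 1, 1] - permanent from now on
```

Note that the model has no way to set a bit back to 1 - exactly the constraint that makes PROM write-once.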
How it works: Like PROM, an EPROM is programmed electrically. However, the data is stored using floating-gate transistors rather than blown fuses. Crucially, the stored charge can be discharged — and therefore the data erased — by exposing the chip to strong ultraviolet (UV) light for typically 10–20 minutes. The chip has a small quartz window on its surface specifically for this purpose. Once erased, it can be reprogrammed.
Key characteristics
Can be erased and rewritten multiple times, unlike PROM
Erasure is all-or-nothing — you cannot erase individual bytes; the entire chip is wiped by the UV exposure
The chip must be physically removed from the device to be erased
The quartz window is covered with a sticker during normal use to prevent accidental erasure from ambient light
Non-volatile
Applications: Development and testing of embedded systems (where firmware needs updating during prototyping), older BIOS chips, and microcontroller programming during development.
How it works: EEPROM also uses floating-gate transistors, but instead of UV light, it uses electrical signals to both write and erase data. This means it can be erased and reprogrammed whilst still installed in the circuit — no removal required. Critically, erasure can target individual bytes rather than wiping the whole chip.
Key characteristics
Can be erased and rewritten electrically, in-circuit, without removing the chip
Byte-level erasure is possible, giving far greater flexibility than EPROM
Has a limited write cycle lifespan — typically around 100,000 write cycles before the floating gates degrade
Slower write speeds than RAM
Non-volatile
More expensive per byte than EPROM
Applications: BIOS/UEFI firmware on modern motherboards (allows firmware updates without hardware removal), smart card chips, storing configuration settings in embedded devices, and calibration data in sensors.
How is EEPROM different to Flash Memory
Is EEPROM the same as flash memory? Yes, but with an important distinction worth understanding clearly.
Flash memory evolved from EEPROM technology and operates on the same fundamental principle: floating-gate transistors erased electrically. However, flash is not simply EEPROM; it was deliberately re-engineered to be faster and cheaper at the cost of some flexibility.
The critical differences are:
Erasure granularity is the biggest one. EEPROM can erase individual bytes. Flash memory can only erase data in larger blocks (called pages or sectors). This is a deliberate trade-off: erasing in blocks is far faster and allows much higher storage density, but it means you cannot update a single byte without rewriting the entire block it belongs to.
Cost and density follow from this. Because EEPROM's byte-level erasure requires more complex circuitry per cell, it is expensive per byte and impractical to scale up to gigabytes. Flash strips that complexity out, making large-capacity storage viable and affordable.
Write cycles also differ. EEPROM typically endures around 1,000,000 write cycles per byte. Flash generally manages fewer, around 10,000 to 100,000 per block, though this varies by type (NOR vs NAND flash).
So the relationship is essentially:
Flash memory is a specialised derivative of EEPROM, optimised for bulk storage. EEPROM is the more flexible but smaller-scale technology, whilst flash trades that byte-level flexibility for the density and speed needed in SSDs, USB drives, and SD cards.
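The granularity difference summed up above can be made concrete. In the sketch below, the EEPROM clears one byte directly, while the flash model must erase a whole block and rewrite the surviving bytes; the 4-byte block size is invented for illustration (real blocks are kilobytes or larger).

```python
# Byte-level erase (EEPROM) versus block erase (flash). The block
# size and contents here are invented for illustration.
eeprom = bytearray([7, 8, 9, 10])
eeprom[2] = 0xFF                  # byte-level: only byte 2 changes

BLOCK = 4
flash = bytearray([7, 8, 9, 10])

def flash_update(memory, index, value):
    """To change one byte, erase its whole block, then rewrite it."""
    start = (index // BLOCK) * BLOCK
    block = list(memory[start:start + BLOCK])       # save the survivors
    memory[start:start + BLOCK] = b"\xff" * BLOCK   # block erase
    block[index % BLOCK] = value
    memory[start:start + BLOCK] = bytes(block)      # rewrite the block

flash_update(flash, 2, 0xFF)
print(list(eeprom), list(flash))  # both end up [7, 8, 255, 10]
```

Both reach the same final state, but the flash path rewrote three bytes it never meant to change - extra work, and extra wear on the block's limited write cycles.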
For the 9618 spec, you are not required to know the internals of flash beyond its classification as solid-state memory. But knowing that flash descended from EEPROM is genuinely useful context. If an exam question asks you to compare them, saying that flash is a development of EEPROM technology that sacrifices byte-level erasure for greater density and speed is exactly the kind of answer that picks up top marks.