Once you understand binary numbers, the next step is learning how computers work with them. This chapter focuses on binary arithmetic – how values are added, what happens when results are too large, and how bit shifts can be used as shortcuts for multiplying and dividing.
You’ll also explore how text is stored in binary using character sets like ASCII and Unicode. By comparing standard ASCII with extended versions and Unicode, you’ll learn why different systems use different character encodings and how bit size affects the number of characters a system can represent. These skills are crucial for understanding everything from basic text files to global digital communication.
Just like we add decimal numbers in maths, computers add binary numbers. Since binary only uses 0s and 1s, the rules are simple – but they can still create problems when the result is too large to store.
Learning Objectives:
Perform binary addition with up to 8-bit values.
Identify when a carry is needed.
Understand the link between binary addition and overflow.
Just like decimal numbers can be added together, binary numbers follow their own simple set of addition rules. The key difference is that binary only has two digits – 0 and 1 – which makes the addition logic very consistent.
The rules are:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (write 0, carry 1 to the next column)
1 + 1 + 1 = 11 (write 1, carry 1 to the next column)
Binary addition is performed right to left, just like in decimal. As you move through each bit, you may need to carry a 1 to the next column. For example, when adding 00110110 (54) and 00011011 (27), the result is 01010001 (81).
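If you have access to Python, you can check the worked example above with a short sketch that follows the same right-to-left column rules. The function name here is just illustrative, and the values are assumed to be unsigned 8-bit strings:

```python
def add_binary_8bit(a: str, b: str) -> str:
    """Add two 8-bit binary strings right to left, carrying as needed."""
    result = []
    carry = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))   # the bit written in this column
        carry = total // 2              # the 1 carried to the next column
    return "".join(reversed(result))

print(add_binary_8bit("00110110", "00011011"))  # 54 + 27 -> 01010001 (81)
```

Tracing the loop by hand for each column is a good way to practise spotting where the carries happen.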
Since this topic focuses on 8-bit binary, any value higher than 255 cannot be represented in this system. If an addition result goes above this, it leads to an overflow, which will be covered in the next subtopic.
Binary addition is essential for everything from arithmetic operations in code to how a processor handles logic and control.
Key Terms:
Binary addition: Adding values using binary rules.
Carry: A value passed to the next column when a column's sum exceeds 1.
8-bit: A binary number that uses 8 digits.
Bitwise operation: An operation applied to individual bits.
Column method: Aligning bits vertically for addition, right to left.
Revision Focus:
Binary uses only 0 and 1, with specific addition rules
Binary addition starts from the right, with carries as needed
1 + 1 = 0 (carry 1) and 1 + 1 + 1 = 1 (carry 1)
Maximum value in 8-bit binary is 255 (11111111)
Going over 255 will cause an overflow error
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
Add 10110100 and 01001101. Show all carries.
Why is binary addition useful in computer systems?
Explain what happens if two 8-bit numbers are added and the result exceeds 8 bits.
GCSE Exam-style Questions
Add the binary numbers 01100101 and 00011110. Show any carries. Give your answer in 8-bit binary. (3)
Explain why binary addition is important in computer systems. (2)
Overflow happens when a result needs more bits than the system allows. Imagine a cup overflowing with water – the extra water is lost. The same happens with binary numbers, and it can lead to incorrect results.
Learning Objectives:
Define overflow and when it occurs.
Describe what happens when overflow occurs.
Explain risks of overflow in computing.
In computing, an overflow error happens when the result of a calculation is too large to fit into the number of bits available. In an 8-bit binary system, the maximum value that can be stored is 255, which is 11111111 in binary. If you try to add 1 to 255, the answer should be 256 – but that requires 9 bits: 100000000. Since the system can only hold 8 bits, the leftmost (9th) bit is lost, and the result wraps around to 00000000, which is 0.
This creates a serious problem: the computer thinks the answer is 0 when it should be 256. In some systems, this kind of overflow could cause a calculation to restart from zero or produce a completely incorrect result. In real-world scenarios, overflow errors can lead to incorrect data, bugs in programs, or even system crashes if not properly handled.
Overflow is especially important in programming, games, and low-level data operations, where precise control of values is essential. Many systems include overflow flags or checks to prevent or catch this issue.
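The 255 + 1 wraparound described above can be simulated in Python by keeping only the lowest 8 bits of a result, which is what an 8-bit system effectively does when the 9th bit is lost. This is a minimal sketch, and the function name is just illustrative:

```python
def add_with_overflow(a: int, b: int) -> int:
    """Add two values and keep only the lowest 8 bits, as an 8-bit system would."""
    return (a + b) & 0b11111111  # mask with 255 to discard any 9th bit

print(add_with_overflow(255, 1))    # 256 needs 9 bits, so it wraps to 0
print(add_with_overflow(200, 100))  # 300 wraps to 44
```

Masking with 0b11111111 mimics the hardware discarding the extra bit; real processors would also set an overflow or carry flag at this point.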
Key Terms:
Overflow: When a calculation produces a result too large to store.
Bit limit: The maximum value storable with the available bits.
Truncation: Data loss when excess bits are discarded.
Wraparound: When overflow causes the value to cycle back to zero.
Error flag: A signal in hardware or software that overflow has occurred.
Revision Focus:
8-bit binary max = 255 (11111111)
255 + 1 = 256, which needs 9 bits (100000000)
The extra bit is lost → wraps to 00000000 (0)
Overflow can cause incorrect results, bugs, or crashes
Preventing overflow is important in programming and hardware design
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
What is the binary result of 255 + 1 using 8 bits? Why is this a problem?
Describe a situation where an overflow error could cause incorrect program behaviour.
How do computer systems detect and handle overflow errors?
GCSE Exam-style Questions
Explain what happens when adding 11111111 and 00000001 in an 8-bit system. (2)
State one real-world consequence of an overflow error in a program. (1)
Instead of multiplying or dividing directly, computers often shift binary digits. Shifting bits to the left multiplies the number by two for each place shifted, while shifting right divides it by two. This is faster and more efficient in hardware.
Learning Objectives:
Perform binary left and right shifts.
Explain how shifts affect binary values.
Understand when binary shifts are useful.
Binary shifts are a fast and efficient way for computers to multiply or divide binary numbers by powers of two. In a left shift, all the bits in a binary number are moved one place to the left, and a 0 is added on the right. This has the effect of doubling the number. For example, 00001101 (which is 13 in denary) becomes 00011010 (which is 26) after a left shift.
In contrast, a right shift moves all the bits one place to the right. The bit on the far right is lost, and a 0 is usually added on the left (for unsigned values). This effectively divides the number by 2, rounding down. So, shifting 00001101 (13) to the right becomes 00000110 (6).
Each shift multiplies or divides the original number by 2, and multiple shifts can scale numbers up or down very quickly. Binary shifts are often used in low-level programming, graphics processing, and situations where performance is critical, because they are much faster than using multiply or divide commands.
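The 13 → 26 → 6 examples above can be reproduced with Python's built-in shift operators, `<<` (left) and `>>` (right):

```python
value = 0b00001101  # 13 in denary

left = value << 1    # bits move left, a 0 fills the right: doubles to 26
right = value >> 1   # bits move right, the rightmost bit is lost: halves to 6

print(format(left, "08b"), left)    # 00011010 26
print(format(right, "08b"), right)  # 00000110 6
```

`format(n, "08b")` simply prints the value as 8 binary digits, which makes it easy to see the bits moving.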
Key Terms:
Binary shift: Moving bits to the left or right.
Left shift: Multiplies a binary number by 2 for each shift.
Right shift: Divides a binary number by 2 for each shift.
Arithmetic shift: A shift that preserves the sign bit (used in signed numbers).
Bitwise efficiency: Using shifts to speed up computation.
Revision Focus:
Left shift = ×2 (adds a 0 on the right, bits move left)
Right shift = ÷2 (drops right bit, bits move right)
Example: 13 → left shift = 26; right shift = 6
Binary shifts are used in fast calculations and performance tuning
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
What is the result of left-shifting 00101010 by 2 positions?
Explain how right shifts can be used for fast division.
Why are binary shifts faster than multiplication or division?
GCSE Exam-style Questions
A number is stored as 00110010. What is the result after a 1-bit left shift? (1)
Explain how right shifts can be used to divide binary numbers. (2)
To store words, names or sentences, computers need to turn letters into numbers. Each character is stored as a binary code using a character set. This includes letters, digits, symbols and more.
Learning Objectives:
Describe how characters are stored in binary.
Identify a character’s binary pattern.
Understand the purpose of character sets.
When computers store and process text, they need a way to represent letters, numbers, and symbols using binary. This is done using a character set – a predefined list that maps each character to a specific binary value. For example, the uppercase letter 'A' is represented by 01000001 in the ASCII character set.
Each character – whether it's a letter, number, space, or symbol – has its own unique binary code in the set. When you type a message or open a text file, the computer reads the binary values and translates them into readable characters using the correct set.
These binary codes must be standardised so different systems and programs can communicate properly. If one computer uses a different set of codes, the text may not appear correctly. That’s why using recognised character sets is so important in software, websites, and data transmission.
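The 'A' = 01000001 mapping above can be checked in Python, which exposes each character's code number through the built-in `ord()` and `chr()` functions:

```python
code = ord("A")              # the code number for 'A' is 65
print(format(code, "08b"))   # 01000001 – its 8-bit binary pattern
print(chr(0b01000001))       # A – converting the binary code back to a character
```

Trying other characters, such as a space or a digit, shows that every character in the set has its own unique bit pattern.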
Key Terms:
Character set: A defined list of characters and their binary values.
Encoding: The method of converting characters into binary.
Bit pattern: The unique binary code assigned to each character.
Text representation: The binary storage of letters and symbols.
Symbol encoding: Representing punctuation or special characters.
Revision Focus:
Text is stored using binary values assigned to each character
A character set is a list of characters matched to binary codes
Example: 'A' = 01000001 in ASCII
Standardisation ensures correct data exchange between systems
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
What is a character set and why is it important?
Explain how the letter ‘C’ is stored in binary using ASCII.
Why must all systems use the same character encoding standard?
GCSE Exam-style Questions
The ASCII code for 'A' is 01000001. What is the binary for 'B'? (1)
Explain why a standard character set must be used in all computer systems. (2)
ASCII was the most common way of storing letters and symbols in early computers. It gives every character a number. The extended version adds support for more symbols and characters by using 8 bits.
Learning Objectives:
Explain how ASCII represents characters.
Understand the difference between standard and extended ASCII.
Know how many characters each version can store.
ASCII, which stands for the American Standard Code for Information Interchange, is one of the oldest and most widely used character encoding systems. It assigns each character – such as letters, numbers, punctuation, and control codes – a unique binary value so that text can be stored and transmitted by computers.
Standard ASCII uses 7 bits, which allows for 128 different characters (from 0 to 127). This includes all uppercase and lowercase letters, digits 0–9, punctuation marks, and special control codes like "Enter" or "Backspace." For example, the character 'A' is represented by binary 01000001, which is 65 in denary.
To allow for more symbols, Extended ASCII was developed, which uses 8 bits, allowing for 256 characters (0 to 255). This includes all the standard characters plus foreign letters, accented characters, and graphical symbols. Although modern systems use more advanced encodings like Unicode, ASCII is still used in older devices, simple systems, and some forms of data transmission where only basic characters are needed.
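One useful property of ASCII is that the letters sit at consecutive code points, so 'B' is simply one more than 'A'. A short Python sketch makes this visible, printing each code as 7 bits to match standard ASCII:

```python
for ch in "ABC":
    # print the character, its code point, and its 7-bit standard ASCII pattern
    print(ch, ord(ch), format(ord(ch), "07b"))
# A 65 1000001
# B 66 1000010
# C 67 1000011
```

This consecutive ordering is why exam questions can ask you to work out 'B' from the code for 'A' by adding 1 in binary.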
Key Terms:
ASCII: A 7-bit character encoding system.
Extended ASCII: An 8-bit version of ASCII with more characters.
Code point: The decimal reference number for each character.
Control characters: Non-printable ASCII characters for device control.
Character mapping: Assigning each character to a binary value.
Revision Focus:
Standard ASCII = 7 bits → 128 characters (0–127)
Extended ASCII = 8 bits → 256 characters (0–255)
ASCII includes letters, digits, punctuation, and control codes
Extended ASCII adds foreign symbols and graphics
Still used in legacy systems, sensors, and embedded devices
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
Compare standard ASCII and extended ASCII in terms of bits and characters.
Why is extended ASCII useful in older systems?
How does ASCII represent both letters and control characters?
GCSE Exam-style Questions
How many characters can be represented using standard ASCII? (1)
Give one advantage of extended ASCII over standard ASCII. (1)
Unicode was designed to fix the limitations of ASCII. It allows us to store thousands of characters in many languages – including emojis, Arabic, Chinese, and more. It’s essential for modern apps and websites.
Learning Objectives:
Describe why Unicode is needed.
Compare Unicode to ASCII.
Understand Unicode’s global advantages.
As computing became global, ASCII was no longer enough to represent the world’s languages and symbols. This led to the creation of Unicode – a universal character set designed to represent almost every character used in modern and historic writing systems. Unicode can currently represent over 143,000 characters, including alphabets, numbers, punctuation, non-Latin scripts, mathematical symbols, and even emojis.
Unicode supports a wide range of languages including Arabic, Chinese, Hindi, and Greek, making it essential for international communication, websites, and global software. It also ensures that text files display correctly across different devices and platforms.
One of the most commonly used encoding formats for Unicode is UTF-8 (Unicode Transformation Format – 8 bit). It is designed to be backwards-compatible with ASCII, meaning the first 128 characters are identical to standard ASCII. UTF-8 uses 1 to 4 bytes depending on the character, allowing it to store simple English text efficiently while still supporting more complex characters when needed.
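UTF-8's variable length can be seen directly in Python, where `encode("utf-8")` returns the raw bytes for a string. These example characters are chosen to show each possible length:

```python
for ch in ["A", "é", "€", "😀"]:
    print(ch, "->", len(ch.encode("utf-8")), "byte(s)")
# A  -> 1 byte (identical to its ASCII code)
# é  -> 2 bytes
# €  -> 3 bytes
# 😀 -> 4 bytes
```

Because plain English text encodes to one byte per character, a UTF-8 file of simple text is no larger than the same file in ASCII – which is a big part of why UTF-8 became the default on the web.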
Key Terms:
Unicode: A character encoding system for all global languages.
Multilingual support: Ability to represent characters from different scripts.
UTF-8: A common Unicode format using 1 to 4 bytes per character.
Encoding scheme: A standard used to assign binary values to characters.
Backward compatibility: UTF-8's ability to support ASCII values as-is.
Revision Focus:
Unicode stores 143,000+ characters from all languages and scripts
Includes symbols, emojis, and non-Latin alphabets
UTF-8 is the most common encoding format
UTF-8 is backwards-compatible with ASCII
Unicode allows reliable global communication in digital systems
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
Explain how Unicode improves on ASCII.
Why is UTF-8 the most widely used format?
Give three examples of character types that Unicode supports but ASCII does not.
GCSE Exam-style Questions
State two advantages of using Unicode over ASCII. (2)
Explain why UTF-8 is widely used on the internet. (2)
The more bits we use, the more characters we can store. ASCII uses only 7 or 8 bits, while Unicode encodings such as UTF-16 can use 16 or more. This matters when you need to represent new languages, emojis, or symbols.
Learning Objectives:
Link bit size to the number of characters.
Calculate how many characters can be represented with a given bit size.
Compare limitations of small vs large character sets.
The number of characters a computer can represent depends on how many bits are used for each character. This is because each bit can be either a 0 or 1, and the total number of possible combinations increases as you add more bits. The formula used to calculate the total number of combinations is 2ⁿ, where n is the number of bits.
Using this formula:
7 bits gives 2⁷ = 128 characters (used in standard ASCII)
8 bits gives 2⁸ = 256 characters (used in extended ASCII)
16 bits gives 2¹⁶ = 65,536 characters (used in Unicode formats like UTF-16)
The more bits you use, the more characters can be represented – which is essential for supporting different languages, symbols, and technical characters. However, more bits per character also means that more memory is used, especially in large files. Designers of character sets and systems must therefore balance storage efficiency with the need for flexibility and global communication.
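The 2ⁿ values listed above can be computed directly, since Python's `**` operator performs exponentiation:

```python
# The 2^n formula from the text, evaluated for some common bit lengths.
for bits in [7, 8, 10, 16]:
    print(bits, "bits ->", 2 ** bits, "characters")
# 7 bits -> 128 characters
# 8 bits -> 256 characters
# 10 bits -> 1024 characters
# 16 bits -> 65536 characters
```

This is also an easy way to check your answers to the exam-style calculation questions below.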
Key Terms:
Bit length: Number of bits used to store each character.
Representable characters: Total unique characters a system can encode.
Bit pattern: One possible arrangement of 0s and 1s.
2ⁿ formula: Used to calculate how many characters can be stored.
Encoding capacity: The character range a system can support.
Revision Focus:
Use the formula 2ⁿ to calculate number of possible characters
7 bits = 128, 8 bits = 256, 16 bits = 65,536 characters
More bits = more characters but more memory used
Character set size impacts what languages and symbols a system can support
Add a new section to your WBK with the title of this part of the course.
Copy the questions into your WBK and answer them in full sentences.
How many characters can be stored using 10 bits?
What are the trade-offs of using more bits per character?
Explain the formula 2ⁿ and how it relates to character encoding.
GCSE Exam-style Questions
Calculate how many characters can be represented using 12 bits. (1)
Describe one advantage and one disadvantage of using a 16-bit character set. (2)
In this chapter, you learned how computers handle binary arithmetic, including how to add binary numbers and what happens when those numbers are too big. You explored how bit shifts can act as fast multiplication and division, and how they change values by powers of two.
You also learned how computers represent characters using binary codes, from early systems like ASCII to modern, global standards like Unicode. You saw how the number of bits used affects how many characters can be stored – and why character encoding is essential for digital communication in all languages.