On this web page, you'll find revision activities covering 1.1.1 Systems Architecture to help you review key concepts before the end-of-unit assessment. Use these activities alongside the FLASHCARDS and topic content from each of the lessons.
Check the list of required RED TASKS for this unit, which must be completed in your workbook and submitted BEFORE the EOU begins. It is extremely important that you structure your workbook correctly: clearly indicate the lesson and activity that each question relates to.
Exam-Style Questions
Describe the function of each of the following CPU components:
ALU
Control Unit (CU)
Registers
Explain the role of the Accumulator (ACC) and the Program Counter (PC) during instruction execution.
Compare and contrast the Memory Address Register (MAR) and Memory Data Register (MDR).
Why is cache memory important for CPU performance? Describe the differences between L1, L2, and L3 cache.
Explain how the system bus facilitates data transfer within the CPU. What are the roles of the data bus, address bus, and control bus?
Timed Pair Activity (10 Minutes)
Work with a partner to create a step-by-step process for how the CPU executes a simple instruction, such as adding two numbers.
Use at least five components from the key terminology list in your explanation.
One person will present the steps verbally, while the other will listen and correct any mistakes.
After 10 minutes, switch roles with another instruction (e.g., storing data in memory).
Small Group Activity (20 Minutes)
In groups of 3-4, design a labelled diagram that illustrates the key CPU components and their interactions during data processing.
Include at least six components from the lesson.
Each group will explain their diagram to the class.
The class will ask questions, and the presenting group must answer using the correct terminology.
Exam-Style Questions
Describe the three main stages of the Fetch-Decode-Execute cycle.
Explain the role of the following registers in the FDE cycle:
Program Counter (PC)
Memory Address Register (MAR)
Memory Data Register (MDR)
Current Instruction Register (CIR)
Accumulator (ACC)
What is the role of the Control Unit in the FDE cycle? How does it coordinate CPU activities?
How do the Address Bus, Data Bus, and Control Bus contribute to the FDE cycle? Provide an example.
Modern CPUs use techniques such as pipelining to improve the efficiency of the FDE cycle. Predict how pipelining might impact the execution of instructions.
Timed Pair Activity (10 Minutes)
One partner acts as a CPU, while the other acts as memory.
The "memory" partner writes down a basic instruction (e.g., ADD 5 to ACC).
The "CPU" partner follows the Fetch-Decode-Execute cycle to retrieve, decode, and execute the instruction.
Swap roles after completing one cycle.
Small Group Activity (20 Minutes)
In teams of 3-4, create an FDE cycle role-play:
Assign each team member a role: PC, MAR, MDR, CIR, ACC, or Control Unit.
Act out the cycle, passing an instruction (written on paper) between each role.
Perform the cycle twice, each time using a different instruction.
Present the process to another group and explain how the cycle worked.
Exam-Style Questions
Explain how clock speed affects CPU performance. What are the risks of overclocking?
Compare single-core and multi-core processors. How do multiple cores improve performance?
What is cache memory, and why is it faster than RAM? Describe the differences between L1, L2, and L3 cache.
Describe two methods used to manage CPU heat. Why is heat management important in processor design?
Define multithreading and hyper-threading. How do these technologies impact CPU efficiency?
Timed Pair Activity (10 Minutes)
One partner researches clock speed, while the other researches cores.
After 5 minutes, explain your topic to your partner in simple terms.
Then, switch and teach each other about cache memory and heat management.
Small Group Activity (20 Minutes)
Each group will design a CPU specification sheet for an imaginary processor.
Include details such as:
Clock speed
Number of cores
Cache memory (L1, L2, L3)
Heat management methods
Power efficiency features
Present your CPU design to the class and justify the choices made.
Exam-Style Questions
Explain the key differences between Von Neumann and Harvard architectures.
What are the advantages and disadvantages of the Von Neumann architecture?
Compare RISC and CISC architectures. How does each approach impact CPU performance?
What is a memory bottleneck, and why is it a concern in Von Neumann architecture?
Why are embedded systems more likely to use Harvard architecture? Provide two examples.
Timed Pair Activity (10 Minutes)
One partner describes Von Neumann architecture, and the other describes Harvard architecture.
After 5 minutes, test each other by asking questions about the differences.
Switch roles and explain RISC vs. CISC in a similar manner.
Small Group Activity (20 Minutes)
In groups of 3-4, debate which architecture (Von Neumann or Harvard) is better for modern computing.
Half of the group must argue in favour of Von Neumann, while the other half defends Harvard.
Use key points such as memory bottlenecks, speed, complexity, and efficiency.
Present the strongest arguments to the class.
Exam-Style Questions
Explain the concept of pipelining in CPU processing.
Describe the five main stages of a pipelined processor and their functions.
What are three types of hazards in pipelining? Provide an example for each.
How does branch prediction help reduce pipeline stalls?
Discuss one real-world example where pipelining has improved CPU performance.
Timed Pair Activity (10 Minutes)
Each partner picks one type of pipeline hazard (data, control, or structural).
After 5 minutes of research, explain the hazard to your partner using an example.
Then, switch and explain how branch prediction or hazard reduction techniques can solve the issue.
Small Group Activity (20 Minutes)
Each group creates a real-world analogy for pipelining (e.g., an assembly line in a factory).
Illustrate and explain how different pipeline stages correspond to real-life processes.
Present the analogy to another group and see if they can identify the pipeline stages.
Functions of CPU Components
ALU: Carries out arithmetic (e.g., addition) and logical (e.g., AND, OR) operations.
Control Unit: Manages execution of instructions and directs the flow of data within the CPU.
Registers: Small, fast storage locations used to hold data and instructions temporarily.
Role of ACC and PC
ACC: Stores the result of calculations or data currently being processed.
PC: Holds the address of the next instruction to be fetched from memory.
MAR vs MDR
MAR: Holds the address in memory of the data/instruction to be accessed.
MDR: Holds the actual data or instruction fetched from or to be written to memory.
Cache Memory Importance
Stores frequently accessed data close to the CPU for faster access.
L1: Smallest and fastest, inside the CPU core.
L2: Larger and slightly slower; per core or shared between cores, depending on the design.
L3: Even larger, shared across all cores, slower than L1/L2 but still faster than RAM.
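The idea of the hierarchy above can be sketched in a few lines of Python. This is an illustration only, not real hardware behaviour: the cache contents and latency figures are invented for the example.

```python
# Illustrative sketch of a multi-level cache lookup (not real hardware
# behaviour). Contents and latencies below are made-up example figures.
LEVELS = [
    ("L1", {"x"}, 1),            # smallest and fastest
    ("L2", {"x", "y"}, 4),       # larger, slightly slower
    ("L3", {"x", "y", "z"}, 12), # largest cache level, slower still
]
RAM_LATENCY = 100                # main memory is far slower than any cache

def access(address):
    """Return (where the data was found, cycles it cost)."""
    for name, contents, latency in LEVELS:
        if address in contents:
            return name, latency
    return "RAM", RAM_LATENCY    # cache miss at every level

print(access("x"))  # ('L1', 1)    found in L1 -> cheapest
print(access("z"))  # ('L3', 12)   only in L3 -> slower
print(access("q"))  # ('RAM', 100) miss everywhere -> main memory
```

The key point the sketch shows: the closer to the CPU the data is found, the fewer cycles the access costs, which is why frequently used data is kept in L1.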
System Bus Roles
Data Bus: Transfers data between CPU, memory, and peripherals.
Address Bus: Carries memory addresses from CPU to RAM.
Control Bus: Sends control signals (e.g., read/write) across components.
Three Stages of FDE
Fetch: Instruction is retrieved from memory using the PC and MAR.
Decode: Instruction is interpreted by the CU.
Execute: Instruction is carried out by the ALU or relevant part of CPU.
Register Roles
PC: Stores address of next instruction.
MAR: Stores the address to access.
MDR: Holds the fetched instruction or data.
CIR: Stores the current instruction being executed.
ACC: Stores intermediate or final result.
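The register roles above can be sketched as a tiny Python simulation of the FDE cycle. The two-field instruction format and the values in memory are invented purely for illustration:

```python
# Minimal Fetch-Decode-Execute sketch. The (opcode, operand) instruction
# format and the program in memory are invented for this example.
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("HALT", None)}

pc, acc = 0, 0             # Program Counter, Accumulator
while True:
    mar = pc               # Fetch: PC copied into MAR (address bus)
    mdr = memory[mar]      # instruction arrives in MDR (data bus)
    cir = mdr              # copied into Current Instruction Register
    pc += 1                # PC now points at the next instruction
    opcode, operand = cir  # Decode: Control Unit interprets the CIR
    if opcode == "LOAD":   # Execute: ALU / relevant part acts on it
        acc = operand
    elif opcode == "ADD":
        acc += operand     # ALU adds; result stays in the ACC
    elif opcode == "HALT":
        break
print(acc)  # 8
```

Tracing it by hand is a good revision exercise: LOAD puts 5 in the ACC, ADD makes it 8, and the PC has already moved on before each instruction executes.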
Control Unit Role
Decodes instructions and coordinates movement of data and execution steps.
System Buses in FDE
Address Bus: Carries the address held in the MAR to memory.
Data Bus: Transfers the instruction from memory into the MDR.
Control Bus: Carries timing and control signals (e.g., memory read/write).
Example: During fetch, the PC's address is copied into the MAR and sent to memory along the Address Bus; the instruction returns along the Data Bus into the MDR.
Effect of Pipelining
Allows overlapping of fetch, decode, and execute stages.
Increases CPU efficiency and instruction throughput.
Clock Speed
Higher speed = more instructions per second.
Overclocking increases speed but risks overheating and instability.
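A rough back-of-envelope calculation shows why clock speed matters. The figures below are idealised (a hypothetical program, one cycle per instruction, ignoring caching and pipelining):

```python
# Idealised sense of scale: execution time = instructions x CPI / clock rate.
# The program size and CPI here are invented example figures.
instructions = 3_000_000_000       # hypothetical 3 billion instructions
cycles_per_instruction = 1         # assume each takes one clock cycle
clock_hz = 3_000_000_000           # 3 GHz = 3 billion cycles per second

seconds = instructions * cycles_per_instruction / clock_hz
print(seconds)  # 1.0 second at 3 GHz

# Doubling the clock speed halves the (idealised) run time:
print(instructions * cycles_per_instruction / (2 * clock_hz))  # 0.5
```

In practice the speed-up is smaller, because memory access and other factors also limit performance, which is one reason overclocking gives diminishing returns.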
Single-core vs Multi-core
Single-core: One instruction at a time.
Multi-core: Multiple instructions in parallel = better multitasking and performance.
Cache vs RAM
Cache is closer to the CPU and faster.
L1, L2, L3 (as explained in Lesson 1).
Heat Management Methods
Heatsinks and fans: Dissipate heat physically.
Liquid cooling: Transfers heat away more efficiently.
Importance: Prevents overheating, which can damage the CPU or force it to throttle (reduce its speed).
Multithreading vs Hyper-threading
Multithreading: Single core runs multiple threads.
Hyper-threading: Simulates extra cores by handling two threads per physical core.
Improves efficiency by filling idle CPU time.
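The "filling idle CPU time" idea can be demonstrated with Python's standard `concurrent.futures` module. This is a software-level sketch (Python threads are not hyper-threading), but it shows the same principle: two tasks that each spend 0.2 s waiting finish in roughly 0.2 s total when their waits overlap, instead of 0.4 s one after the other.

```python
# Sketch: threads filling idle (waiting) time. time.sleep stands in for
# waiting on a disk or network, i.e. time the CPU would otherwise waste.
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(name):
    time.sleep(0.2)               # simulated wait (I/O, not computation)
    return f"{name} done"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(io_task, ["A", "B"]))
elapsed = time.perf_counter() - start

print(results)          # ['A done', 'B done']
print(elapsed < 0.4)    # True: the two waits overlapped
```

The efficiency gain comes entirely from overlapping the waiting time; for pure number-crunching, extra threads on one core give little benefit.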
Von Neumann vs Harvard
Von Neumann: Shared memory for instructions and data.
Harvard: Separate memory for instructions and data.
Von Neumann Pros and Cons
Pros: Simpler design, cost-effective.
Cons: Slower due to memory bottlenecks.
RISC vs CISC
RISC: Simple, fixed-length instructions; each executes quickly, but more instructions are needed per task.
CISC: Complex instructions; fewer are needed per task, but each takes longer to decode and execute.
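The trade-off can be illustrated with a made-up example of multiplying two values held in memory. The mnemonics below are invented for illustration, not real instruction sets:

```python
# Invented illustration: multiplying two values stored in memory.
# A CISC-style CPU might offer one complex instruction; a RISC-style CPU
# uses several simple ones. (All mnemonics here are made up.)
cisc_program = [
    "MULT addr1, addr2",   # one instruction loads, multiplies, and stores
]
risc_program = [
    "LOAD R1, addr1",      # simple, fixed-size steps,
    "LOAD R2, addr2",      # each typically taking one clock cycle,
    "MUL  R3, R1, R2",     # which makes pipelining much easier
    "STORE R3, addr1",
]
print(len(cisc_program), len(risc_program))  # 1 4
```

The CISC program is shorter, but its single instruction is harder to decode; the RISC program has four instructions, but their uniform simplicity suits pipelined execution.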
Memory Bottleneck
Occurs in Von Neumann where one bus serves both data and instructions, slowing performance.
Embedded Systems & Harvard
Use Harvard for speed and simplicity.
Examples: Digital watches, microwave controllers.
Pipelining Concept
Splits instruction execution into stages.
Different instructions occupy different stages at the same time.
Five Stages of a Pipeline
Fetch: Get instruction.
Decode: Interpret instruction.
Execute: Perform calculation.
Memory Access: Read/write to memory.
Write Back: Store result.
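The overlap of the five stages can be visualised with a short Python sketch. It assumes an ideal pipeline with no hazards or stalls, which real processors only approximate:

```python
# Sketch: which instruction occupies each of the five stages at each clock
# tick, assuming an ideal pipeline (no hazards, no stalls).
STAGES = ["Fetch", "Decode", "Execute", "Mem", "WriteBack"]
instructions = ["I1", "I2", "I3"]

def stage_at(tick, stage_index):
    i = tick - stage_index            # which instruction is in this stage
    return instructions[i] if 0 <= i < len(instructions) else "-"

for tick in range(len(instructions) + len(STAGES) - 1):
    row = [stage_at(tick, s) for s in range(len(STAGES))]
    print(f"t={tick}: " + " ".join(row))
# Three 5-stage instructions finish in 7 ticks, not the 15 ticks
# they would take one after the other.
```

Reading the table diagonally shows each instruction marching through the stages while the next one follows one tick behind.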
Pipeline Hazards
Data Hazard: Dependency on previous result. Example: ADD followed by SUB using same data.
Control Hazard: Branch instruction disrupts flow. Example: IF condition changes path.
Structural Hazard: Hardware conflict. Example: Two stages needing same unit.
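A data hazard can be made concrete with a small sketch. The register names and values below are invented for the example; the point is that the second instruction reads a register before the first has written its result back:

```python
# Sketch of a data hazard: instruction 2 (SUB) depends on the result of
# instruction 1 (ADD). If the pipeline reads R1 before ADD's write-back,
# SUB uses a stale value. (Registers and values invented for the example.)
regs = {"R1": 0, "R2": 10}

stale_r1 = regs["R1"]               # SUB reads R1 too early
regs["R1"] = 2 + 3                  # ADD R1 <- 2 + 3 writes back late
wrong = regs["R2"] - stale_r1       # 10 - 0 = 10 (hazard: stale operand)
right = regs["R2"] - regs["R1"]     # 10 - 5 = 5  (after a stall/forwarding)
print(wrong, right)  # 10 5
```

Real pipelines avoid the wrong answer by stalling the dependent instruction or forwarding the result directly between stages.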
Branch Prediction
Guesses outcome of branches to preload correct instruction.
Reduces delays from control hazards.
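One common textbook scheme is the 2-bit saturating counter, sketched below. Real CPUs use far more elaborate predictors; the branch history here is an invented example:

```python
# Sketch of a 2-bit saturating-counter branch predictor.
# Counter values 0-1 predict "not taken"; 2-3 predict "taken".
counter = 0

def predict():
    return counter >= 2

def update(taken):
    global counter
    counter = min(3, counter + 1) if taken else max(0, counter - 1)

hits = 0
history = [True, True, True, True, False, True, True, True]  # outcomes
for outcome in history:
    if predict() == outcome:
        hits += 1
    update(outcome)
print(hits, "of", len(history), "predicted correctly")  # 5 of 8
```

Notice that the single not-taken outcome causes only one misprediction: because two bits are used, one surprise nudges the counter without flipping the prediction, so a mostly-taken branch keeps being predicted correctly.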
Real-World Example
Mobile processors (e.g., ARM CPUs) use pipelining to maximise performance and battery life.