Define RISC and give one advantage. (2)
Define CISC and give one disadvantage. (2)
Compare how multiplication is handled in CISC vs RISC. (4)
Explain pipelining and how it benefits RISC processors. (3)
Suggest one real-world use for RISC and one for CISC. (3)
State two features of a RISC instruction set. (2)
Why might CISC require fewer instructions in a program? (2)
What is meant by ‘complex instruction’ in CISC? Give an example. (2)
Why is RISC more suitable for pipelining than CISC? (2)
What is microcode, and how does it relate to CISC? (2)
How does RISC improve battery life in smartphones? (2)
Which processor type is found in Apple M1 chips and why? (2)
What is an instruction set and why does it matter? (2)
Name two devices that use RISC architecture. (2)
Explain why CISC processors dominated earlier desktop computing. (2)
Define parallel processing. (1)
What is a multicore processor? (2)
Give two advantages of multicore processors. (2)
Explain Amdahl’s Law using an example. (4)
Why doesn’t doubling cores always double performance? (2)
What is inter-core communication and why is it a limitation? (2)
What is concurrent processing and how does it differ from parallel? (2)
How do multicore processors improve video editing? (3)
What software design issue can limit multicore benefits? (2)
What is meant by ‘core’? (1)
Give one advantage of multicore systems for mobile devices. (1)
Explain the trade-off between power and performance in multicore chips. (2)
Why are multicore systems used in AI or simulations? (2)
What does Amdahl’s Law say about parallelisation limits? (2)
How does heat affect multicore performance? (1)
Define a GPU and its main purpose. (2)
Explain what parallel processing means in GPUs. (2)
Compare CPUs and GPUs in terms of task design. (3)
What is SIMD? (2)
Define GPGPU and give one example. (2)
Give two examples of non-graphics GPU use. (2)
Why are GPUs better than CPUs for training neural networks? (2)
What is memory bandwidth and why is it important for GPUs? (2)
What are CUDA cores? (2)
Name one GPU vendor and one API used in GPGPU. (2)
Give two limitations of GPUs. (2)
Explain why GPUs are used in self-driving cars. (3)
How is a GPU different from a TPU? (2)
What is one common misconception about GPUs? (1)
Why do GPUs require cooling systems? (1)
RISC (Reduced Instruction Set Computer) uses a small set of simple instructions, each typically executing in a single clock cycle. Advantage: energy efficient. (1+1)
CISC (Complex Instruction Set Computer) uses complex instructions that take multiple clock cycles. Disadvantage: higher power consumption and heat. (1+1)
CISC can use a single instruction such as MULT that loads the operands, multiplies them, and stores the result. RISC uses a sequence of 3–4 simple instructions (e.g. LOAD, LOAD, MULT, STORE), each doing one step. (2+2)
Pipelining overlaps the stages of different instructions (fetch, decode, execute) so a new instruction can start each cycle. RISC benefits most because its uniform, single-cycle instructions keep the pipeline full, increasing throughput. (3)
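A rough worked sketch of why the overlap helps, assuming a hypothetical three-stage pipeline with one cycle per stage and no stalls:

```python
# Cycles needed with and without a simple 3-stage pipeline
# (fetch, decode, execute), assuming one cycle per stage and no stalls.
STAGES = 3

def cycles_without_pipeline(n_instructions: int) -> int:
    # Each instruction finishes all stages before the next one starts.
    return n_instructions * STAGES

def cycles_with_pipeline(n_instructions: int) -> int:
    # The first instruction fills the pipeline, then one completes per cycle.
    return STAGES + (n_instructions - 1)

for n in (1, 5, 100):
    print(n, "instructions:", cycles_without_pipeline(n), "vs",
          cycles_with_pipeline(n), "cycles")
```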
RISC: smartphones (energy saving). CISC: desktops (legacy software compatibility). (1+1+1)
Fixed-length instructions and one cycle per instruction. (1+1)
Each instruction does more work, combining multiple operations. (2)
An instruction that performs multiple steps, e.g. MULT loads, multiplies, and stores. (1+1)
Uniform, fixed-length RISC instructions that each take one cycle keep every pipeline stage the same length, so the pipeline avoids stalls. (2)
Microcode is a layer of low-level control instructions inside the CPU that translates each complex CISC instruction into simpler internal steps. (2)
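A toy model of that translation step; the instruction name and micro-ops below are invented for illustration, not real microcode:

```python
# Hypothetical mapping from one complex CISC-style instruction to the
# simpler internal steps the control unit actually carries out.
MICROCODE = {
    "MULT A, B": [
        "LOAD operand A into a register",
        "LOAD operand B into a register",
        "MULTIPLY the two registers",
        "STORE the result back to memory",
    ],
}

def execute(instruction: str) -> None:
    # Replay the simple steps hidden behind the single complex instruction.
    for micro_op in MICROCODE[instruction]:
        print("  micro-op:", micro_op)

execute("MULT A, B")
```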
Simpler instructions need less circuitry and fewer clock cycles, so the processor draws less power, extending battery life. (2)
RISC-based ARM processors for performance and energy efficiency. (1+1)
It’s the list of all commands a CPU can perform. Defines software compatibility. (1+1)
Smartphones and tablets. (1+1)
Programs needed fewer instructions, which saved scarce and expensive memory, and complex instructions reduced the work demanded of early compilers. (2)
Running multiple tasks or instructions at the same time. (1)
A CPU with multiple processing units (cores) that work simultaneously. (1+1)
Improved multitasking and increased speed for parallel tasks. (1+1)
Speed gain is limited by the part of the task that must run sequentially. Example: if 20% of a task is sequential, the maximum speedup is 5×, no matter how many cores are added. (2+2)
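A minimal sketch of the arithmetic behind that example; the 80%/20% split and core counts are illustrative values only:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
# where p is the parallelisable fraction and n is the number of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# If 20% of the task is sequential (p = 0.8), speedup approaches 5x at most.
for cores in (2, 4, 8, 64, 1_000_000):
    print(cores, "cores:", round(amdahl_speedup(0.8, cores), 2), "x speedup")
```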
Overheads from inter-core communication and sequential task parts. (1+1)
Time spent sharing and syncing data between cores. Reduces efficiency. (1+1)
Concurrent = multiple tasks in progress, possibly interleaved on one core; parallel = multiple tasks executing at the same instant on separate cores. (1+1)
Frames are split between cores, reducing render time. More cores = faster export. (3)
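A minimal sketch of the idea in Python; render_frame is a made-up stand-in for the real per-frame work:

```python
from multiprocessing import Pool, cpu_count

def render_frame(frame_number: int) -> str:
    # Stand-in for the real per-frame work (effects, colour grading, encoding).
    return f"frame {frame_number} rendered"

if __name__ == "__main__":
    frames = range(240)  # e.g. 10 seconds of 24 fps video
    # Each core in the pool takes a share of the frames and works on it
    # at the same time as the others, cutting total render time.
    with Pool(cpu_count()) as pool:
        results = pool.map(render_frame, frames)
    print(len(results), "frames done on", cpu_count(), "cores")
```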
Single-threaded software can’t divide tasks among cores. (2)
An independent processor unit within a CPU. (1)
Energy efficient multitasking. (1)
More cores = more power use and heat, which must be managed. (1+1)
They handle large amounts of data in parallel, improving speed. (1+1)
Even small non-parallel parts limit max speedup. (2)
Too much heat can cause thermal throttling. (1)
GPU = Graphics Processing Unit: a many-core processor designed to render graphics and carry out highly parallel data processing. (1+1)
Running many operations at once across thousands of cores. (1+1)
CPUs: better for complex, varied tasks. GPUs: better for simple, repeated tasks in parallel. (1+1+1)
Single Instruction, Multiple Data – same command on multiple data points. (2)
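A conceptual illustration using NumPy (assuming it is installed): its element-wise operations apply one operation across many data points at once, and are typically backed by SIMD instructions on the CPU:

```python
import numpy as np

prices = np.array([10.0, 12.5, 8.0, 99.0])
# One operation (multiply by 1.2) applied to every element at once,
# rather than looping over the data one value at a time.
with_tax = prices * 1.2
print(with_tax)
```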
Using GPUs for general-purpose tasks like AI training. (1+1)
Weather forecasting, deep learning. (1+1)
Training is dominated by matrix operations, which a GPU's thousands of cores execute in parallel, greatly speeding it up. (2)
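A minimal sketch of offloading a matrix multiplication to a GPU, assuming PyTorch is installed; the code falls back to the CPU if no CUDA device is present:

```python
import torch

# Pick the GPU if one is available; otherwise run the same code on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices, the kind of operands neural-network
# training multiplies constantly.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

# On a GPU this single call is spread across thousands of cores in parallel.
c = a @ b
print(device, c.shape)
```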
Speed of data transfer between GPU memory and cores. Affects performance. (1+1)
NVIDIA GPU units designed for parallel processing. (2)
Vendor: NVIDIA. API: OpenCL (or CUDA). (1+1)
High power use and limited for sequential tasks. (1+1)
They process camera and sensor data (e.g. running object-detection neural networks) in parallel, fast enough for real-time driving decisions. (3)
A GPU is a general-purpose parallel processor; a TPU is specialised for AI tensor operations, making it more efficient for that workload. (1+1)
That they’re only for gaming. (1)
High core count generates heat during parallel processing. (1)