Certain central processing units (CPUs) use parallel processing and caching to improve performance.
Parallel processing is a form of computation in which many calculations are carried out simultaneously.
Parallel processing uses multiple cores/processors to process a single task.
It operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently.
Parallel programs are more complex to design and to write than sequential programs.
Communication and synchronisation between the different subtasks are typically some of the greatest obstacles to getting efficient parallel program performance.
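As an illustrative sketch (added to these notes, not part of the syllabus wording), the Python example below divides a large summation into smaller sub-problems and solves them concurrently with a pool of worker processes; the function names, chunk size and worker count are all invented for the example.

```python
# Sketch: divide a large problem into smaller sub-problems and solve
# them concurrently across multiple cores using a process pool.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Solve one small sub-problem: sum a slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4, chunk_size=250_000):
    # Split the large problem into smaller chunks...
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # ...solve the chunks concurrently on separate cores...
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_results = pool.map(partial_sum, chunks)
    # ...then combine the sub-results into the final answer.
    return sum(partial_results)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(parallel_sum(numbers))   # same result as sum(numbers), work done in parallel
```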
Amdahl's law: accepted but not expected at AS Level (this is A2 content).
The maximum possible speed-up of a single program as a result of parallelisation is given by Amdahl's law.
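In its standard form, if P is the proportion of a program that can be parallelised and N is the number of processors, the maximum speed-up is:

S(N) = 1 / ((1 - P) + P / N)

So even with an unlimited number of processors, the speed-up can never exceed 1 / (1 - P); for example, if 90% of a program can be parallelised (P = 0.9), the speed-up is limited to at most 10 times.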
Disadvantages of Parallel Processing
Much more difficult to write programs that take advantage of a multi-core system
Data must be up-to-date, and processing units will need to change their calculations based on the actions of other processing units
Inherently sequential tasks cannot be split into parallel subtasks
Concurrency means more software bugs to deal with, such as race conditions (see the sketch below)
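As an illustrative sketch (added to these notes rather than taken from the syllabus), the Python example below shows one such bug: several threads update a shared counter without synchronisation, so updates can be lost; the locked version shows the fix. The counter, thread count and iteration count are arbitrary.

```python
# Sketch of a classic concurrency bug (a race condition) and its fix.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1            # read-modify-write is not atomic: updates may be lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:              # synchronisation: one thread updates at a time
            counter += 1

def run(worker, threads=4, n=100_000):
    global counter
    counter = 0
    pool = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return counter

print(run(unsafe_increment))    # may be less than 400000 because of lost updates
print(run(safe_increment))      # always 400000
```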
Describe different types of memory and caching
RAM (Random Access Memory) characteristics
it has fast read and write access and is volatile
used to store data and currently running programs
RAM is needed because most data on computers is stored in much slower "storage media" such as hard disks, solid state drives or flash memory
For the processor to be able to work on data or run programs at any reasonable speed, the programs or data need to be copied into RAM first.
An example of data in RAM would be a running program, such as an application or the operating system.
ROM is non-volatile: its data is fixed during manufacture (or is otherwise permanent) and cannot be deleted or amended, whereas data in RAM can be added, amended or deleted.
Examples of data in ROM would be the bootstrap loader, other systems software, or hardware (system) settings / the BIOS.
(Short for Basic Input/Output System, the BIOS (pronounced "bye-oss") is a ROM chip found on motherboards that lets you access and set up your computer system at the most basic level.)
Certain central processing units (CPUs) use caching to improve performance
Cache memory characteristics
is similar to RAM, except it resides on or close to the CPU (typically integrated directly within the CPU)
can also be placed on a separate chip located between the CPU and RAM, with a separate bus to connect to the CPU
is faster than RAM and is also volatile
used to store frequently used data from main memory and program instructions that are frequently re-referenced by software during operation
As a CPU processes data, it looks in the cache memory first to see if the instructions are there from a previous reading of data.
used by the processor to avoid having to slow down to the speed of the RAM all the time
used to store intermediate results of calculations
usually quite low-capacity (a few megabytes), so RAM is still needed in order to avoid constantly accessing things from slow storage media.
Different levels of cache memory denote its speed and characteristics (see the cache levels section below).
Cache memory is expensive compared to conventional RAM.
Cache memory is smaller than RAM.
Least used data/instructions are overwritten when the cache is full (a software analogy is sketched below).
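As a software analogy (an illustrative sketch only; hardware cache is not literally implemented this way), the Python example below models a small cache sitting in front of slower main memory: each read checks the cache first, a miss falls back to the slow store, and the least recently used entry is evicted when the cache is full. The class name, capacity and "main memory" dictionary are invented for the example.

```python
# Software analogy of cache behaviour: check the small, fast cache first,
# fall back to slower main memory on a miss, and overwrite the least
# recently used entry when the cache is full.
from collections import OrderedDict

class SimpleCache:
    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store   # stands in for slower RAM/storage
        self.capacity = capacity             # cache is deliberately small
        self.lines = OrderedDict()           # keeps entries in recency order

    def read(self, address):
        if address in self.lines:            # cache hit: fast path
            self.lines.move_to_end(address)  # mark as most recently used
            return self.lines[address]
        value = self.backing_store[address]  # cache miss: slow path
        self.lines[address] = value
        if len(self.lines) > self.capacity:  # full: evict least recently used entry
            self.lines.popitem(last=False)
        return value

main_memory = {address: address * 2 for address in range(100)}  # pretend "slow" memory
cache = SimpleCache(main_memory)
for address in [1, 2, 3, 1, 4, 5, 6, 1]:    # repeated reads of address 1 mostly hit the cache
    cache.read(address)
```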
Von Neumann bottleneck solution - the von Neumann bottleneck is the idea that computer system throughput is limited by how quickly data can be transferred between the processor and memory, rather than by the speed of the processor itself. According to this description of computer architecture, the processor is idle for a certain amount of time while memory is accessed; cache memory helps to reduce this idle time.
There are a number of cache levels. Cache memory is categorised as "levels" that describe its closeness and accessibility to the CPU.
•Cache memory is memory that a CPU can access more quickly than it can access regular RAM.
•Cache memory ensures fast access to these frequently used instructions, which increases the overall speed of the software program.
•Computers with slower processors but larger caches can, for some workloads, be faster than computers with faster processors but more limited cache space.
Accepted but not expected at AS Level
•Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the CPU.
•Level 2 (L2) cache often has a higher capacity than L1 and may be located on the CPU or on a separate chip so that it is not slowed down by traffic on the main system bus.
•Level 3 (L3) cache is typically specialised memory that works to improve the performance of L1 and L2.
•Level 3 (L3) cache can be significantly slower than L1 or L2, but faster than RAM.
•In multicore CPUs, each core may have its own dedicated L1 and L2 cache, but share a common L3 cache.
•RAM cache is memory between the CPU and main memory (sometimes referred to as L2 or L3 cache) where sections of (recently or frequently used) data and/or programs are stored
•Disc cache is a section of main memory between the CPU and disc where data recently read from disc, or about to be written to disc, is temporarily stored (before being transferred to RAM).
•Internet or web cache has the advantages of:
•viewing previously viewed pages more quickly, as they are read from disc, which is usually quicker than downloading them again (a simple sketch follows this list)
•storing ‘pre-fetched’ pages (using information provided with the page being viewed, other pages are downloaded and cached in anticipation that the user might view them)
•storing pages in anticipation of not having internet access in the future, so that pages can still be viewed offline
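As an illustrative sketch (not part of the original notes), the Python example below keeps downloaded pages in a dictionary keyed by URL, so a previously viewed page is served from the local cache instead of being downloaded again; the function name and cache structure are invented, and a real browser cache also handles expiry, disk storage and pre-fetching.

```python
# Sketch of a web cache: serve previously viewed pages from a local cache
# rather than downloading them again over the network.
from urllib.request import urlopen

page_cache = {}   # url -> page content

def get_page(url):
    if url in page_cache:              # cache hit: no download needed
        return page_cache[url]
    with urlopen(url) as response:     # cache miss: download the page
        content = response.read()
    page_cache[url] = content          # keep it for future views
    return content

# The first call downloads the page; the second is served from the cache.
# get_page("https://example.com")
# get_page("https://example.com")
```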