Extract
We are fortunate to be living through one of the most remarkable eras in computer architecture. Computers run at speeds that could once scarcely be imagined, and their performance keeps exceeding expectations. It all started with a prediction now known as Moore's Law: the number of transistors in an integrated circuit doubles roughly every two years. The prediction proved remarkably accurate, and the industry has followed it for the last 40 years. But it is now approaching a saturation point, something leading computer architects had already warned about in the 1990s.
Now, in 2013, we have reached that stage: further integration is no longer straightforward, because transistor dimensions are approaching the atomic scale, and shrinking beyond that is not possible, at least for now.
That covers developments in fabrication; another significant part of the overall system is the processor. Processors, too, have kept pace with the modern world, reaching performance levels that are hard to overstate. From ENIAC, the first general-purpose electronic computer, with its 100 kHz clock, we have reached clock speeds crossing 3.50 GHz, roughly 35,000 times faster.
Today's processors have reached a level where virtually all available instruction-level parallelism (ILP) is exploited, which in turn delivers very high performance. Superscalar and superpipelined cores are the norm. Multiple cores per system are so common that they can be seen in every device around us, and many GPUs have thousands of cores. All of this reflects great progress in the field of computer architecture.
Hold up! Are we really growing that fast, that efficiently? The answer is ambivalent. There is another very significant part of our overall smart system: memory. In spite of the great advances in fabrication and processor performance, the aspect still holding us back is our "SLOW" and "VERY SLOW" memory system.
All the growth described above is of little use if our memory systems do not grow with it. In short, our slow memory systems are restricting the growth of our smart world: the latency incurred on a cache miss can render all the ILP, and every other high-end feature of a processor, useless.
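How badly misses hurt can be quantified with the standard average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty. The sketch below illustrates this; the cycle counts and miss rates are illustrative assumptions, not measurements from this project.

```python
# Average Memory Access Time: AMAT = hit_time + miss_rate * miss_penalty
# All numbers below are illustrative assumptions, not project measurements.

def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    """Average cycles per memory access for a single cache level."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

# A 1-cycle cache backed by a 200-cycle main memory:
print(amat(1, 0.02, 200))  # 2% miss rate  -> 5.0 cycles per access
print(amat(1, 0.10, 200))  # 10% miss rate -> 21.0 cycles per access
```

Even a modest rise in miss rate multiplies the average access cost, which is why keeping hot data on-chip matters so much.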
Project idea:
Much new and ongoing research aims to solve this basic yet very important problem. This project discusses some of it and evaluates a very effective and interesting concept: keeping a portion of heavily used memory on-chip, a technique termed scratchpad memory.
This project takes the DineroIV cache simulator and extends it to simulate a scratchpad of varying size. It then uses this new memory model to measure the technique's effectiveness and to find the scratchpad size that helps the most.
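The core idea can be sketched independently of DineroIV's actual interface: references whose addresses fall in a designated scratchpad range are served on-chip at a fixed low cost and never reach the cache, while all other references follow the normal cache path. The base address, size, and trace below are assumptions for illustration only.

```python
# Sketch of a scratchpad-aware reference filter (illustrative only; the real
# DineroIV extension hooks into its trace-processing loop). References whose
# addresses fall in the scratchpad range are serviced on-chip.

SPM_BASE = 0x0000      # assumed scratchpad base address
SPM_SIZE = 4 * 1024    # assumed scratchpad size: 4 KB

def classify(addr):
    """Return 'scratchpad' for on-chip accesses, 'cache' for everything else."""
    if SPM_BASE <= addr < SPM_BASE + SPM_SIZE:
        return "scratchpad"
    return "cache"

def run_trace(trace):
    """Count how many references each path serves for a list of addresses."""
    counts = {"scratchpad": 0, "cache": 0}
    for addr in trace:
        counts[classify(addr)] += 1
    return counts

# Hot data placed inside the scratchpad range, cold data outside it:
print(run_trace([0x0010, 0x0FF0, 0x8000, 0x0020, 0x9000]))
# -> {'scratchpad': 3, 'cache': 2}
```

Sweeping SPM_SIZE over a benchmark trace and watching how many references the cache no longer sees is, in essence, the experiment the extended simulator performs.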
This project aims to understand and build a simulation environment for a scratchpad implementation. The project's aims are listed below:
Aim 1 - To understand the on-chip scratchpad technique and other techniques for improving memory systems.
Aim 2 - To extend the DineroIV cache simulator to simulate an on-chip scratchpad.
Aim 3 - To evaluate the effectiveness of the on-chip scratchpad technique on a few benchmarks.
Project Report
To view the full project report, click here
Project Presentation
For an illustrative view of the project, the presentation can be seen by clicking here
Guidance
The project was done as part of the curriculum of the Computer Architecture course and was accomplished under the guidance of Prof. David Kaeli.