🖥️ Comprehensive Cache Sweep: Evaluated cache sizes from 64 KB to 2 MB and associativity from 1-way to 64-way.
⚙️ Three Core Metrics: Analyzed area of the data array, total dynamic read energy per access, and cache access time.
🔄 Controlled Parameter Studies: Studied cache size at fixed associativity and associativity at fixed cache size to isolate architectural trends.
📊 Trend-Based Evaluation: Produced graph-based observations to explain scaling behavior and design trade-offs in cache memory.
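The sweep described above can be sketched as a simple enumeration of design points. The endpoints (64 KB to 2 MB, 1-way to 64-way) come from the study itself; the intermediate power-of-two steps are an assumption for illustration.

```python
# Sketch of the design-space sweep (endpoints from the study;
# intermediate power-of-two steps are assumed).
SIZES_KB = [64, 128, 256, 512, 1024, 2048]   # 64 KB to 2 MB
ASSOCS = [1, 2, 4, 8, 16, 32, 64]            # 1-way to 64-way

def design_points(sizes_kb=SIZES_KB, assocs=ASSOCS):
    """Yield every (size_bytes, associativity) pair in the sweep."""
    for size_kb in sizes_kb:
        for assoc in assocs:
            yield size_kb * 1024, assoc

points = list(design_points())
print(len(points))  # 6 sizes x 7 associativities = 42 design points
```

Each pair would then be written into a CACTI configuration and evaluated for area, energy, and access time.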
Showed that cache data-array area grows steeply with cache size at fixed 4-way associativity, rising from 1.27551 mm² at 64 KB to 31.8043 mm² at 2 MB, roughly a 25× area increase for a 32× capacity increase. This highlighted the direct area cost of increasing on-chip storage capacity.
Demonstrated that increasing associativity at a fixed 128 KB cache size also increases area, especially at high associativities, with area rising from 1.779 mm² at 1-way to 32.822 mm² at 64-way. This shows the hardware overhead of more complex set-associative organizations.
Observed that dynamic read energy per access increases with both cache size and associativity. For example, at 4-way associativity, read energy increases from 0.4054 nJ at 64 KB to 1.71865 nJ at 2 MB, while for a fixed 128 KB cache, read energy rises from 0.279 nJ at 1-way to 8.732 nJ at 64-way.
Found that access time also increases with both larger cache sizes and higher associativity. At 4-way associativity, access time rises from 1.51102 ns at 64 KB to 5.0595 ns at 2 MB, and for a fixed 512 KB cache, it increases from 2.615 ns at 1-way to 15.3 ns at 64-way.
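As a quick sanity check on the trends above, the end-to-end growth factors can be computed directly from the figures quoted in these bullets:

```python
# Growth factors computed from the figures reported above.
# Fixed 4-way associativity, scaling size 64 KB -> 2 MB (32x capacity):
area_growth = 31.8043 / 1.27551    # mm^2 ratio
energy_growth = 1.71865 / 0.4054   # nJ ratio
time_growth = 5.0595 / 1.51102     # ns ratio

# Fixed cache size, scaling associativity 1-way -> 64-way:
assoc_area_growth = 32.822 / 1.779    # 128 KB cache
assoc_energy_growth = 8.732 / 0.279   # 128 KB cache
assoc_time_growth = 15.3 / 2.615      # 512 KB cache

for name, g in [("area vs size", area_growth),
                ("energy vs size", energy_growth),
                ("time vs size", time_growth),
                ("area vs assoc", assoc_area_growth),
                ("energy vs assoc", assoc_energy_growth),
                ("time vs assoc", assoc_time_growth)]:
    print(f"{name}: {g:.1f}x")
```

The comparison makes the trade-off concrete: over these sweeps, read energy grows far faster with associativity (about 31×) than with capacity (about 4×), even though the capacity range is a 32× increase.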
Used CACTI 7.0 to perform a systematic cache design analysis across the area, energy, and timing dimensions; the assignment centered on quantifying the impact of cache size and associativity.
Modified cache configuration files to sweep design parameters and extract architectural metrics for multiple design points.
Built a report-driven evaluation flow that translated raw CACTI outputs into engineering observations about efficiency and design trade-offs.
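A minimal sketch of the configuration-editing and metric-extraction steps might look like the following. The `-size (bytes)` and `-associativity` parameter names follow the stock cache.cfg shipped with CACTI, and the report label format is assumed from typical CACTI text output (exact labels may differ between versions).

```python
import re

def set_params(cfg_text, size_bytes, assoc):
    """Rewrite the size and associativity lines of a CACTI cache.cfg.
    Parameter names assumed from the stock cache.cfg template."""
    cfg_text = re.sub(r'-size \(bytes\) \d+',
                      f'-size (bytes) {size_bytes}', cfg_text)
    cfg_text = re.sub(r'-associativity \d+',
                      f'-associativity {assoc}', cfg_text)
    return cfg_text

def parse_metric(report, label):
    """Pull a numeric value such as 'Access time (ns): 1.51102' out of
    a CACTI text report (label format assumed, not guaranteed)."""
    m = re.search(re.escape(label) + r'\s*:\s*([0-9.]+)', report)
    return float(m.group(1)) if m else None
```

In the actual flow, each rewritten cache.cfg would be fed to the CACTI binary and the resulting report parsed for area, energy, and access time at every design point.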
Identified that cache efficiency is not simply improved by making caches larger or more associative; instead, both changes introduce nontrivial penalties in chip area, energy, and latency.
CACTI 7.0 for cache modeling and extracting area, energy, and access-time metrics.
Cache configuration files (cache.cfg) for varying cache size and associativity.
Git for cloning the CACTI repository.
Make / build tools for compiling CACTI.
Scripting / plotting workflow to automate configuration edits, collect outputs, and generate plots, as encouraged by the assignment.
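One way the scripting/plotting workflow could collect results is to serialize each parsed design point to CSV for later plotting. This is an illustrative sketch, not the assignment's actual script; the two sample rows reuse figures quoted in this report.

```python
import csv, io

# Each entry: (size_KB, assoc, area_mm2, read_energy_nJ, access_time_ns).
# The two rows below reuse figures quoted above; in the real flow every
# row would come from a parsed CACTI report.
results = [
    (64,   4, 1.27551, 0.4054,  1.51102),
    (2048, 4, 31.8043, 1.71865, 5.0595),
]

def to_csv(rows):
    """Serialize sweep results to CSV for plotting and inspection."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["size_KB", "assoc", "area_mm2",
                "read_energy_nJ", "access_time_ns"])
    w.writerows(rows)
    return buf.getvalue()

print(to_csv(results))
```

From the CSV, trend plots (e.g. area or access time versus cache size on a log-scaled x-axis) can be generated with any plotting library such as matplotlib.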
Cache and Memory Hierarchy Design: Understanding practical trade-offs in on-chip cache design.
Computer Architecture Education and Research: Using analytical tools to study area, energy, and latency together instead of in isolation.
Hardware Cost Optimization: Identifying cache design points that avoid unnecessary increases in energy, delay, or silicon area.
This project provided a structured cache design-space exploration workflow and demonstrated that cache optimization requires balancing multiple competing metrics. The results showed that larger caches improve storage capacity but significantly increase area and access time, while higher associativity introduces additional logic overhead that raises both energy and latency. Overall, the work strengthened intuition for architecture-level trade-offs and showed how CACTI can be used as a practical tool for early-stage hardware design analysis.