TAZ-TFM-2012-919


Efficient instruction and data caching for high-performance low-power embedded systems

Ferrerón Labari, Alexandra
Suárez Gracia, Darío (director)
Alastruey Benedé, Jesús (ponente / faculty sponsor)

Universidad de Zaragoza, EINA, 2012
Informática e Ingeniería de Sistemas department, Arquitectura y Tecnología de Computadores area

Máster Universitario en Ingeniería de Sistemas e Informática

Abstract: Although multi-threaded processors can increase the performance of embedded systems with minimal overhead, fetching instructions from multiple threads each cycle also increases the pressure on the instruction cache, potentially harming the ratio between performance and energy consumption. Instruction caches are responsible for a high percentage of the total energy consumption of the chip, which becomes a critical issue for battery-powered embedded devices. A direct way to reduce the energy consumption of the first-level instruction cache is to decrease its size and associativity. However, demanding applications, and especially applications with several threads running together, might suffer a dramatic performance slowdown, or even increase the total energy consumption of the cache hierarchy, due to the extra misses incurred. In this work we introduce iLP-NUCA (Instruction Light Power NUCA), a new instruction cache that replaces the conventional second-level cache (L2) and improves the Energy–Delay product of the system. We provide iLP-NUCA with a new tree-based transport network-in-cache that reduces both the cache line service latency and the energy consumption relative to the former LP-NUCA implementation. We modeled both conventional instruction cache hierarchies and iLP-NUCAs in our cycle-accurate simulation environment. Our experiments show that, running SPEC CPU2006, iLP-NUCA performs better and consumes less energy than a state-of-the-art high-performance conventional cache hierarchy (three cache levels, dedicated L1 and L2, shared L3). Furthermore, iLP-NUCA reaches, on average, the performance of a conventional instruction cache hierarchy implementing a double-sized L1, independently of the number of threads. This translates into a reduction of the Energy–Delay product of 21%, 18%, and 11%, reaching 90%, 95%, and 99% of the ideal performance for 1, 2, and 4 threads, respectively. These results are consistent across the considered application distribution, with the biggest gains in the most demanding applications (those with high instruction cache requirements). Moreover, we increase the performance of applications with several threads without penalizing any of them. The new transport topology reduces the average service latency of cache lines by 8%, and the energy consumption of its components by 20%.
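Note on the metric: the Energy–Delay product (EDP) figures quoted above follow the standard definition of that metric; the short LaTeX sketch below is an illustrative reading of the reported single-thread number, not an excerpt from the thesis.

% Standard definition of the Energy-Delay product (EDP); E is the total
% energy consumed and D the execution time (delay). Lower is better.
\[ \mathrm{EDP} = E \times D \]
% Reading the single-thread result quoted above: a 21% EDP reduction means
\[ \mathrm{EDP}_{\mathrm{iLP\text{-}NUCA}} = (1 - 0.21)\,\mathrm{EDP}_{\mathrm{baseline}} = 0.79\,\mathrm{EDP}_{\mathrm{baseline}} \]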


Free keyword(s): computer architecture; cache memory; multi-threading; embedded systems.
Academic work type: Trabajo Fin de Máster (master's thesis)
Notes: Abstract also available in Spanish.

License: Creative Commons


