Abstract
Wide-issue processors issuing tens of instructions per cycle put heavy stress on the memory system, including the data caches. For a wide-issue architecture, the data cache needs to be heavily multi-ported with extremely wide data paths. This paper studies a scalable solution that achieves multi-porting with short data paths and lower hardware complexity at higher clock rates. Our approach divides memory streams into multiple independent sub-streams with the help of a prediction mechanism before they enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache called an access region cache. In the ideal case, this separation of independent memory references allows the use of multiple caches with a smaller number of ports each, and thus increases data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by this prediction mechanism. The potential performance of the proposed design is measured by comparing it with an existing multi-porting solution as well as an ideal multi-ported data cache.
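To make the steering idea concrete, here is a minimal sketch in C, not the authors' implementation, of a PC-indexed predictor that assigns each memory-reference instruction to one of several memory pipelines, each backed by its own access region cache. The table size, region granularity, hash, and update policy are illustrative assumptions.

```c
/*
 * Illustrative sketch of access-region steering: a PC-indexed table predicts
 * which memory pipeline (and hence which small access region cache) a
 * load/store should be dispatched to, before its effective address is known.
 * All parameters below are assumptions, not values from the paper.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_PIPELINES   4      /* independent memory pipelines / region caches */
#define PREDICTOR_SIZE  1024   /* entries in the PC-indexed prediction table   */
#define REGION_SHIFT    12     /* 4 KiB access regions (assumed granularity)   */

static uint8_t predictor[PREDICTOR_SIZE];  /* last pipeline seen per static load/store */

/* Map a resolved data address to a memory pipeline by its access region. */
static unsigned region_of(uint64_t addr) {
    return (unsigned)((addr >> REGION_SHIFT) % NUM_PIPELINES);
}

/* Predict, before address computation, which pipeline the instruction at
 * this PC should be steered to. */
static unsigned predict_pipeline(uint64_t pc) {
    return predictor[(pc >> 2) % PREDICTOR_SIZE];
}

/* Once the address resolves, record the actual region so later dynamic
 * instances of the same static instruction are steered correctly. */
static void train_predictor(uint64_t pc, uint64_t addr) {
    predictor[(pc >> 2) % PREDICTOR_SIZE] = (uint8_t)region_of(addr);
}

int main(void) {
    /* Toy trace of (pc, address) pairs: two loads touching disjoint regions. */
    struct { uint64_t pc, addr; } trace[] = {
        { 0x400100, 0x10000000 },
        { 0x400104, 0x20000000 },
        { 0x400100, 0x10000040 },
        { 0x400104, 0x20000040 },
    };

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        unsigned predicted = predict_pipeline(trace[i].pc);
        unsigned actual    = region_of(trace[i].addr);
        printf("pc=%#llx predicted pipeline %u, actual %u (%s)\n",
               (unsigned long long)trace[i].pc, predicted, actual,
               predicted == actual ? "correct" : "mispredict");
        train_predictor(trace[i].pc, trace[i].addr);
    }
    return 0;
}
```

In the abstract's terms, the predicted pipeline number would be attached to the instruction before it reaches the reservation stations, so each pipeline and its small cache can be built with only one or a few ports instead of one heavily multi-ported structure.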
Original language | English
---|---
Pages | 293-300
Number of pages | 8
Publication status | Published - 2001
Externally published | Yes
Event | IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD 2001), Austin, TX, United States. Duration: 2001 Sept 23 → 2001 Sept 26
Other
Other | IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD 2001)
---|---
Country/Territory | United States
City | Austin, TX
Period | 01/9/23 → 01/9/26
ASJC Scopus subject areas
- Hardware and Architecture
- Electrical and Electronic Engineering