PMOP: Efficient Per-Page Most-Offset Prefetcher

Kanghee KIM  Wooseok LEE  Sangbang CHOI  

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E102-D, No.7, pp.1271-1279
Publication Date: 2019/07/01
Publicized: 2019/04/12
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2018EDP7328
Type of Manuscript: PAPER
Category: Computer System
Keywords: memory hierarchy, cache, hardware prefetching

Summary: 
Hardware prefetching requires a careful balance among accuracy, coverage, and timeliness while keeping hardware cost low. Recent prefetchers achieve high performance, but they still require complex hardware and a significant amount of storage. In this paper, we propose an efficient Per-page Most-Offset Prefetcher (PMOP) that minimizes hardware cost and improves accuracy while maintaining coverage and timeliness. We achieve these objectives with an enhanced offset prefetcher that performs well at a reasonable hardware cost. Our approach first addresses coverage and timeliness by allowing multiple Most-Offset predictions. To minimize offset interference between pages, PMOP employs a fine-grain per-page offset filter. This filter records the access history with page-IDs, enabling efficient mapping and tracking of multiple offset streams from different pages. Analysis results show that PMOP outperforms the state-of-the-art Signature Path Prefetcher while reducing storage overhead by a factor of 3.4.
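
To make the general idea concrete, the sketch below illustrates per-page offset filtering with multiple most-offset predictions in C++. It is a minimal conceptual model, not the PMOP microarchitecture from the paper: the filter size, the direct-mapped page-ID indexing, the shared delta-score table, and the top-2 prediction policy are all illustrative assumptions. The key point it demonstrates is that offset (delta) training uses only consecutive accesses to the same page, so interleaved streams from different pages do not corrupt each other's offsets.

#include <algorithm>
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// Conceptual sketch of a per-page most-offset prefetcher (illustrative only;
// the paper's actual tables, sizes, and training policy may differ).
// Each tracked page keeps its own last intra-page offset, so offset scores
// are learned without interference from other pages, and the top-K scoring
// offsets are used to issue multiple prefetches.

constexpr int kBlockBits      = 6;    // 64 B cache blocks
constexpr int kPageBits       = 12;   // 4 KiB pages
constexpr int kOffsetsPerPage = 1 << (kPageBits - kBlockBits);  // 64 blocks/page
constexpr int kFilterEntries  = 256;  // per-page filter size (assumption)
constexpr int kPredictions    = 2;    // number of most-offsets issued (assumption)

struct FilterEntry {
    uint64_t page_id     = ~0ULL;
    int      last_offset = -1;
};

class PerPageMostOffsetPrefetcher {
public:
    // Feed one demand block address; returns block addresses to prefetch.
    std::vector<uint64_t> access(uint64_t block_addr) {
        const uint64_t page_id = block_addr >> (kPageBits - kBlockBits);
        const int      offset  = static_cast<int>(block_addr % kOffsetsPerPage);

        FilterEntry &e = filter_[page_id % kFilterEntries];
        if (e.page_id == page_id && e.last_offset >= 0) {
            // Train only on same-page pairs: credit the offset (delta) that
            // links this access to the previous access in the same page.
            const int delta = offset - e.last_offset;
            if (delta != 0)
                ++score_[delta + kOffsetsPerPage];   // shift delta to a non-negative index
        }
        e.page_id     = page_id;
        e.last_offset = offset;

        // Predict: take the kPredictions best-scoring offsets and prefetch
        // them, staying inside the current page.
        std::vector<uint64_t> prefetches;
        std::array<int, 2 * kOffsetsPerPage> ranked;
        for (int i = 0; i < 2 * kOffsetsPerPage; ++i) ranked[i] = i;
        std::partial_sort(ranked.begin(), ranked.begin() + kPredictions, ranked.end(),
                          [&](int a, int b) { return score_[a] > score_[b]; });
        for (int i = 0; i < kPredictions; ++i) {
            const int delta  = ranked[i] - kOffsetsPerPage;
            const int target = offset + delta;
            if (score_[ranked[i]] > 0 && target >= 0 && target < kOffsetsPerPage)
                prefetches.push_back(page_id * kOffsetsPerPage + target);
        }
        return prefetches;
    }

private:
    std::array<FilterEntry, kFilterEntries> filter_{};
    std::array<int, 2 * kOffsetsPerPage>    score_{};   // scores for deltas in (-64, 64)
};

int main() {
    PerPageMostOffsetPrefetcher pf;
    // Two interleaved streams: page 7 strides by +1 block, page 9 by +3 blocks.
    for (int i = 0; i < 16; ++i) {
        pf.access(7 * kOffsetsPerPage + i);
        pf.access(9 * kOffsetsPerPage + 3 * i);
    }
    for (uint64_t addr : pf.access(7 * kOffsetsPerPage + 16))
        std::printf("prefetch block 0x%llx\n", static_cast<unsigned long long>(addr));
}

In this toy run, the two interleaved strided streams train the +1 and +3 offsets independently because deltas are only formed between accesses to the same page, which is the interference-avoidance property the per-page filter is meant to provide; everything else (shared score table, top-2 selection) is a simplification for illustration.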