An Overflow-free Quantized Memory Hierarchy in General-purpose Processors

Marzieh Lenjani, Patricia Gonzalez, Elaheh Sadredini, M. Arif Rahman, Mircea R. Stan

Research output: Contribution to book or proceeding › Conference article › peer-review


Abstract

Data movement accounts for a significant portion of the energy consumption and execution time of modern applications. Accelerator designers exploit quantization to reduce the bitwidth of values and thereby the cost of data movement. However, any value that does not fit in the reduced bitwidth causes an overflow (we refer to such values as outliers), so accelerators apply quantization only to applications that tolerate overflows. We observe that in most applications the rate of outliers is low and values often fall within a narrow range, providing an opportunity to exploit quantization in general-purpose processors. However, a software implementation of quantization in general-purpose processors has three problems. First, the programmer has to manually implement conversions and the additional instructions that quantize and dequantize values, imposing programmer effort and performance overhead. Second, to cover outliers, the bitwidth of the quantized values often becomes greater than or equal to that of the original values. Third, the programmer has to use standard bitwidths; otherwise, extracting non-standard bitwidths (i.e., 1-7, 9-15, and 17-31 bits) to represent narrow integers exacerbates the overhead of software-based quantization. The key idea of this paper is hardware support for quantization in the memory hierarchy of general-purpose processors, which represents values with a small and flexible number of bits and stores outliers in their original format in a separate space, preventing any overflow. We minimize metadata and the overhead of locating quantized values through a software-hardware interaction that communicates quantization parameters and data layout to hardware. As a result, our approach has three advantages over cache compression techniques: (i) less metadata, (ii) a higher compression ratio for floating-point values and for cache blocks with multiple data types, and (iii) lower overhead for locating compressed blocks. It delivers on average 1.40×/1.45×/1.56× speedup and 24%/26%/30% energy reduction compared to a baseline that uses full-length variables in a 4-/8-/16-core system. Our approach also provides a 1.23× speedup in a 4-core system over state-of-the-art cache compression techniques and adds only 0.25% area overhead to the baseline processor.
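
The paper's mechanism lives in the memory hierarchy; as a rough software analogue only, the sketch below (all names such as BITS, out_idx, and out_val are hypothetical) quantizes 32-bit integers to a narrow, non-standard bitwidth and diverts any value outside the representable range to a side table in its original format, so no value ever overflows.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative software sketch of the overflow-free quantization idea
     * (the paper implements this in hardware): values that fit in BITS
     * bits are kept as narrow codes; outliers stay in their original
     * 32-bit format in a separate table, so no overflow can occur. */
    #define BITS 5   /* narrow, non-standard bitwidth */
    #define N    8

    int main(void) {
        int32_t  data[N] = {3, -7, 12, 200, -4, 9, -100, 15};
        uint32_t code[N];       /* BITS-wide quantized codes           */
        size_t   out_idx[N];    /* indices of outlier values           */
        int32_t  out_val[N];    /* outliers kept in original format    */
        size_t   n_out = 0;

        int32_t lo = -(1 << (BITS - 1));      /* representable range   */
        int32_t hi =  (1 << (BITS - 1)) - 1;

        for (size_t i = 0; i < N; i++) {
            if (data[i] < lo || data[i] > hi) {  /* would overflow     */
                out_idx[n_out] = i;
                out_val[n_out] = data[i];
                n_out++;
                code[i] = 0;    /* placeholder; real value is in table */
            } else {
                code[i] = (uint32_t)data[i] & ((1u << BITS) - 1);
            }
        }
        printf("%zu of %d values stored as outliers\n", n_out, N);
        return 0;
    }

In the proposed design, the range check and bit extraction would be performed by hardware that receives the quantization parameters and data layout once from software, rather than per access as in this sketch; the side table plays the role of the separate outlier space described in the abstract.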

Original language: English
Title of host publication: Proceedings of the 2019 IEEE International Symposium on Workload Characterization, IISWC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 203-215
Number of pages: 13
ISBN (Electronic): 9781728140452
State: Published - Nov 2019
Externally published: Yes
Event: 15th IEEE International Symposium on Workload Characterization, IISWC 2019 - Orlando, United States
Duration: Nov 3, 2019 - Nov 5, 2019

Publication series

Name: Proceedings of the 2019 IEEE International Symposium on Workload Characterization, IISWC 2019

Conference

Name: 15th IEEE International Symposium on Workload Characterization, IISWC 2019
Country/Territory: United States
City: Orlando
Period: 11/3/19 - 11/5/19

Scopus Subject Areas

  • Computer Graphics and Computer-Aided Design
  • Hardware and Architecture
