Lightning Talk: Memory-Centric Computing
O Mutlu - 2023 60th ACM/IEEE Design Automation Conference …, 2023 - ieeexplore.ieee.org
Modern computing systems are processor-centric. Data processing (i.e., computation) happens only in the processor (e.g., a CPU, GPU, FPGA, or ASIC). As such, data needs to be moved from where it is generated/captured (e.g., sensors) and stored (e.g., storage and memory devices) to the processor before it can be processed. The processor-centric design paradigm greatly limits the performance & energy efficiency, as well as the scalability & sustainability, of modern computing systems. Many studies show that even the most powerful processors and accelerators waste a large fraction (e.g., >60%) of their execution time simply waiting for data, and a large fraction of their energy moving data between storage/memory units and the processor. This is so even though most of the hardware real estate of such systems is dedicated to data storage and communication (e.g., many levels of caches, DRAM chips, storage systems, and interconnects).

Memory-centric computing aims to enable computation capability in and near all places where data is generated and stored. As such, it can greatly reduce the large negative performance and energy impact of data access and data movement, by fundamentally avoiding data movement and reducing data access latency & energy. Many recent studies show that memory-centric computing can greatly improve system performance and energy efficiency. Major industrial vendors and startup companies have also recently introduced memory chips with sophisticated computation capabilities.

This talk describes promising ongoing research and development efforts in memory-centric computing.
We classify such efforts into two major fundamental categories: 1) processing using memory, which exploits the analog operational properties of memory structures to perform massively parallel operations in memory, and 2) processing near memory, which integrates processing capability into memory controllers, the logic layer of 3D-stacked memory technologies, or memory chips to enable high-bandwidth, low-latency memory access for near-memory logic. We show that both types of architectures (and their combination) can enable orders-of-magnitude improvements in the performance and energy consumption of many important workloads, such as graph analytics, databases, machine learning, video processing, climate modeling, and genome analysis. We discuss adoption challenges for the memory-centric computing paradigm and conclude with some research & development opportunities.
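To make the data-movement argument concrete, the following is a toy back-of-the-envelope cost model, not a measurement: it uses purely hypothetical per-bit energy numbers (`E_MOVE`, `E_CPU_OP`, `E_PUM_OP` are assumptions, not figures from the talk) to compare a processor-centric execution of a bulk bitwise AND, where both operands travel to the CPU and the result travels back, against a processing-using-memory execution, where the operation happens inside the memory array and the bulk movement is avoided.

```python
# Toy cost model (hypothetical parameters, illustration only) for a bulk
# bitwise AND over two large bit vectors -- a workload that
# processing-using-memory proposals execute inside DRAM arrays.

# Hypothetical per-bit energy costs (arbitrary units, NOT measured data):
E_MOVE = 100.0   # moving one bit between DRAM and the CPU
E_CPU_OP = 1.0   # one bit-operation in the CPU's ALU
E_PUM_OP = 2.0   # one bit-operation performed inside the memory array

def processor_centric_energy(n_bits: int) -> float:
    # Two operand bits move to the processor, one result bit moves back,
    # plus the ALU operation itself.
    return n_bits * (3 * E_MOVE + E_CPU_OP)

def memory_centric_energy(n_bits: int) -> float:
    # Computation happens where the data resides: no bulk data movement.
    return n_bits * E_PUM_OP

n = 1_000_000
pc = processor_centric_energy(n)
mc = memory_centric_energy(n)
print(f"processor-centric: {pc:.0f}, memory-centric: {mc:.0f}, "
      f"ratio: {pc / mc:.1f}x")
```

Under these assumed costs the energy gap is dominated entirely by data movement, which is the qualitative point of the abstract; the actual magnitudes reported in the memory-centric computing literature depend on the workload and the memory technology.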