A Revolution in Memory
See how the Hybrid Memory Cube (HMC) enables radically higher performance and amazingly low power consumption.
Multi-core processing is bringing incredible advances to supercomputing and advanced networking systems, and those advances demand a new level of memory efficiency and performance. In October 2011, we joined a group of industry leaders to develop an entirely new memory architecture, the Hybrid Memory Cube (HMC), which unlocks the full potential of these high-performance systems. HMC represents a fundamental change in how memory is used in the system: by tightly coupling intelligent memory with CPUs, GPUs, and ASICs, it enables dramatic improvements in performance and power efficiency.
The HMC Consortium (HMCC) is led by eight developers, Altera, ARM, IBM, SK Hynix, Micron, Open-Silicon, Samsung, and Xilinx, and drives broad agreement on HMC standards with the help of more than 100 consortium adopters. The HMC 1.0 specification was finalized and released to the public on April 2, 2013. The result is a high-bandwidth, low-energy, high-density memory system unlike anything on the market today.
How HMC Works
At the core of the HMC is a small, high-speed logic layer that sits below vertical stacks of DRAM die, connected by through-silicon via (TSV) interconnects. The DRAM is designed solely to store and move data, while the logic layer handles all DRAM control within the HMC. System designers can deploy the HMC either as "near memory," mounted directly adjacent to the processor for maximum performance, or in a scalable module form factor as "far memory," optimized for power efficiency.
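To make the division of labor concrete, here is a toy model of that split: a logic-layer object decodes request packets and dispatches them to independent vertical DRAM partitions (vaults), each with its own simple controller. The vault count, packet format, and address-mapping scheme below are illustrative assumptions for this sketch, not details of the HMC 1.0 specification.

```python
class Vault:
    """One vertical slice of stacked DRAM with its own local controller."""
    def __init__(self, vault_id):
        self.vault_id = vault_id
        self.storage = {}  # address -> data; the DRAM only stores and moves data

    def write(self, addr, data):
        self.storage[addr] = data

    def read(self, addr):
        return self.storage.get(addr)


class LogicLayer:
    """Base logic die: decodes request packets and routes them to vaults.

    All control logic lives here, so the host talks to the cube with
    abstract read/write packets rather than raw DRAM commands.
    """
    def __init__(self, num_vaults=16):  # vault count is an assumption
        self.vaults = [Vault(i) for i in range(num_vaults)]

    def _route(self, addr):
        # Low-order address bits select the vault, spreading traffic so
        # independent vaults can service requests in parallel.
        return self.vaults[addr % len(self.vaults)]

    def handle(self, packet):
        # packet: (command, address, optional payload) -- a simplified format
        cmd, addr, *payload = packet
        vault = self._route(addr)
        if cmd == "WR":
            vault.write(addr, payload[0])
            return ("WR_RESP", addr)
        if cmd == "RD":
            return ("RD_RESP", addr, vault.read(addr))
        raise ValueError(f"unknown command: {cmd}")


cube = LogicLayer()
cube.handle(("WR", 0x1040, b"\xde\xad"))
print(cube.handle(("RD", 0x1040)))  # -> ('RD_RESP', 4160, b'\xde\xad')
```

The point of the sketch is the separation of concerns: the host never issues DRAM-level commands, so the same external interface can front different stack configurations, which is what lets the same device serve as either near or far memory.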
HMC has been recognized by industry leaders and influencers as the long-awaited answer to the growing gap between the rate at which DRAM performance improves and the rate at which processors consume data, a dilemma known as the "memory wall."