FPGAs and HMC Devices: What You Need to Know

By Mark Bingeman | Oct 25, 2017

You might have heard that FPGAs are well-suited to interface with HMC devices, but do you understand why? In this article we explain what an HMC is, what advantages it offers, and how FPGAs and HMC devices work together.

A Hybrid Memory Cube (HMC) is a memory device that contains multiple memory dies stacked together using Through-Silicon Via (TSV) technology. Each memory die is divided into partitions, and a vertical stack of partitions, one from each die, is called a vault. Each vault has a vault controller on the logic base die that manages all of the memory in that vault.

Figure 1 shows the block diagram of an example HMC implementation (Figure 2 of the HMC Specification 2.1, available from the Hybrid Memory Cube Consortium [i]). The logic base consists of a separate vault controller for each memory vault, a number of link interfaces, and a cross-point switch connecting them. The HMC uses a packet-based protocol over high-speed serial links to access memory; a sketch of the kind of fields carried in a request packet follows Figure 1. An I2C or SPI interface provides configuration and health-monitoring access.

 

Figure 1: HMC Block Diagram Example Implementation
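
To make the packet-based protocol a little more concrete, the sketch below lists the kind of fields an HMC request packet header carries. The field names follow the terminology of the HMC specification, but the widths and layout shown here are illustrative only; the authoritative bit positions and encodings are defined in the specification itself.

```c
/* Illustrative model of an HMC request packet header. Field names follow
 * the HMC specification terminology; the C types below are NOT the real
 * bit widths or positions, which must be taken from the specification.
 */
#include <stdint.h>

typedef struct {
    uint8_t  cub;   /* Cube ID: selects the target HMC device in a chain    */
    uint64_t adrs;  /* Request address: vault, bank, and DRAM address bits  */
    uint16_t tag;   /* Tag: matches a response packet back to its request   */
    uint8_t  lng;   /* Length of the packet in 128-bit FLITs                */
    uint8_t  cmd;   /* Command: e.g. a 64-byte read or posted write         */
} hmc_request_header_t;
```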


HMC Advantages over DDR3/4 Memories:

1.    Higher Memory Bandwidth

Each link interface consists of 16 high-speed serial transmit lanes and 16 receive lanes. HMC v1.x supports lane rates of 10, 12.5, or 15 Gb/s; HMC v2.x supports 12.5, 15, 25, 28, or 30 Gb/s. This provides an aggregate bandwidth per link (transmit plus receive) of 40 GB/s at the lowest lane rate, up to 120 GB/s at the highest. The short calculation after Table 1 shows where these figures come from.

Table 1: HMC and DDR4 Bandwidth Comparison
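
As a quick sanity check on those figures, the aggregate number is simply the lane count times the lane rate times two directions, converted from gigabits to gigabytes. A minimal sketch (raw bit rate only; packet overhead reduces the usable payload bandwidth):

```c
#include <stdio.h>

/* Aggregate link bandwidth = 16 lanes x lane rate x 2 directions (TX + RX),
 * converted from Gb/s to GB/s by dividing by 8 bits per byte.
 */
static double hmc_link_bandwidth_gbytes(double lane_rate_gbps)
{
    const int lanes = 16;
    return lanes * lane_rate_gbps * 2 / 8.0;
}

int main(void)
{
    printf("10 Gb/s lanes: %.0f GB/s per link\n", hmc_link_bandwidth_gbytes(10.0)); /* 40  */
    printf("30 Gb/s lanes: %.0f GB/s per link\n", hmc_link_bandwidth_gbytes(30.0)); /* 120 */
    return 0;
}
```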


2.    Internal Memory Controller

The vault controllers handle the memory-specific interfacing requirements, off-loading this functionality from the external processor. The current HMC devices are SDRAM-based, but other memory technologies could be supported in the future without changing the external interface or protocol.

3.    Multiple Links

HMC v1.x supports 4 or 8 link interfaces and HMC v2.x supports 2 or 4. These link interfaces can be used to connect multiple HMC devices in a 2D array to expand the memory depth, or to allow more than one processor to access the memory at the same time. In the past, computer architectures tied a memory resource to a single processor; with multiple HMC link interfaces, this is no longer the case.


HMC Applications

The higher bandwidth is very useful in image processing and data processing applications that require significant data-access bandwidth, such as temporal image filtering, which needs access to multiple past image frames in addition to the current one.
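
To get a feel for how quickly that adds up, here is a rough traffic estimate for a temporal filter. The resolution, pixel depth, frame rate, and tap count below are assumptions chosen purely for illustration; several parallel video streams, or access patterns that waste burst bandwidth, would push the requirement considerably higher.

```c
#include <stdio.h>

/* Rough memory-traffic estimate for a temporal image filter.
 * Assumed parameters (illustrative only): 3840x2160 frames, 2 bytes/pixel,
 * 60 frames/s, and a filter that reads the current frame plus 4 past frames
 * and writes 1 output frame for every input frame.
 */
int main(void)
{
    const double frame_bytes    = 3840.0 * 2160.0 * 2.0;  /* ~16.6 MB per frame */
    const double fps            = 60.0;
    const double frames_read    = 5.0;                     /* current + 4 past  */
    const double frames_written = 1.0;

    double bytes_per_sec = frame_bytes * fps * (frames_read + frames_written);
    printf("Approximate memory traffic: %.1f GB/s\n", bytes_per_sec / 1e9); /* ~6 GB/s */
    return 0;
}
```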

Multiple HMC link interfaces can be leveraged in applications where processing is divided across multiple processors. Each processor can still access the full HMC memory map, or the memory map can be divided between processors.


FPGAs and HMC

FPGAs are well-suited to interface with HMC devices because high-speed serial transceivers are common elements in FPGAs. FPGA vendors have matured their hard-core transceivers and continue to increase their transceiver rates. FPGAs are often used for prototyping and early adoption of new interface standards.

Both Intel (Altera) and Xilinx offer HMC controller IP for interfacing with an HMC device. The Xilinx HMC controller IP [ii] supports a native FLIT-based interface that exposes many of the fields of the packet-based protocol. The Xilinx HMC controller also supports an AXI4 interface for seamless system integration using the Xilinx Vivado design tools. The Intel HMC controller IP [iii] currently supports only a native FLIT-based interface, which requires the user to design and implement a block that translates memory reads and writes into the FLIT interface format. Hopefully, Intel will add an Avalon interface option soon to enable seamless integration in its Qsys system integration tool.
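
With a native FLIT-based interface, the translation block mentioned above essentially packs each memory transaction into request-packet fields before handing it to the controller IP. The sketch below illustrates the idea for a 64-byte write; the structure, helper name, and command encoding are hypothetical and do not correspond to the actual port names or encodings of either vendor's IP.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor for a request presented to a native FLIT-style
 * interface. Names, widths, and encodings are illustrative only; the real
 * controller IP defines its own signal names, FLIT layout, and commands.
 */
typedef struct {
    uint8_t  cmd;       /* command code, e.g. a 64-byte write (placeholder)  */
    uint8_t  lng;       /* packet length in 128-bit FLITs                    */
    uint16_t tag;       /* tag used to match the eventual response           */
    uint64_t adrs;      /* target memory address                             */
    uint8_t  data[64];  /* write payload, if any                             */
} flit_request_t;

/* Translate a simple 64-byte memory write into a FLIT-style request.
 * buf must point to at least 64 bytes of payload.
 */
static flit_request_t make_write_request(uint64_t addr, const uint8_t *buf,
                                         uint16_t tag)
{
    flit_request_t req;
    req.cmd  = 0x0F;   /* placeholder write command encoding                 */
    req.lng  = 5;      /* header + 64 bytes of data + tail spans 5 FLITs     */
    req.tag  = tag;
    req.adrs = addr;
    memcpy(req.data, buf, sizeof(req.data));
    return req;
}

int main(void)
{
    uint8_t payload[64] = {0};
    flit_request_t req = make_write_request(0x1000, payload, 1);
    (void)req;  /* in a real design this would be driven onto the FLIT interface */
    return 0;
}
```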

In addition to the HMC controller, the example designs include other elements, such as PLLs and an I2C master, that are needed to interface with the HMC device. The HMC controller IP uses a significant amount of FPGA resources, and this needs to be accounted for in the overall FPGA resource budget.
 

The Future of FPGAs and HMC

Currently, only a few HMC devices are available. In the near term, FPGAs will play the vital role of interfacing with HMC devices until processor manufacturers begin to support HMC interfaces directly. In the longer term, if HMC adoption becomes more mainstream, then perhaps FPGA vendors will consider adding hard-core HMC controllers to their devices.

If you think HMC devices may be a fit for your application, Nuvation Engineering can provide FPGA, hardware, and embedded software design services to move your product from conception through to manufacturing.