June 7, 2023


Compute Express Link (CXL) is dramatically changing the way memory is used in computer systems. Tutorials at the IEEE Hot Chips conference and the recent SNIA Storage Developer Conference explored how CXL works and how it will change the way we do computing. In addition, a recent announcement by Colorado-based startup IntelliProp of its Omega Memory Fabric chip paves the way for using CXL to enable memory pools and composable infrastructure.

The original application of CXL was memory expansion for a single CPU, but CXL will have its greatest impact in sharing different types of memory (DRAM as well as persistent memory) among CPUs. The diagram below (from the CXL Hot Chips tutorial) shows the different ways of sharing memory with CXL.

As Samsung Electronics VP Yang Seok Ki said at SNIA SDC, CXL is an industry-supported cache-coherent interconnect for processors, memory expansion and accelerators. CXL versions 1.0 and 2.0 have been released (for use with PCIe 5.0), and at the Flash Memory Summit in early August, CXL version 3.0 was announced for use with the faster PCIe 6.0 interconnect. CXL 3.0 also adds multi-level switching, memory fabrics, and peer-to-peer direct memory access.
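
For a rough sense of the bandwidth involved: PCIe 5.0 signals at 32 GT/s per lane, so a x16 link offers on the order of 64 GB/s of raw bandwidth in each direction, while PCIe 6.0 doubles the per-lane rate to 64 GT/s, or roughly 128 GB/s per direction on a x16 link before protocol overhead. These are approximate figures for the underlying PCIe link, not CXL-specific measurements.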

The presentation also gave an overview of how CXL delivers far memory to the CPU, either through a native CXL 2.0 connection or through a network of CXL 3.0 switches, as shown below.

Near memory is directly connected to the CPU, while far memory sits behind the CXL link. Some of the first CXL products available were memory expander devices that provide additional capacity to the CPU. CXL opens the door to memory tiering, with performance and cost tradeoffs similar to those of storage tiering.
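
On Linux systems, CXL-attached expander memory is commonly exposed to software as a CPU-less NUMA node, which is one simple way this kind of tiering can be exercised today. The minimal C sketch below, using libnuma (link with -lnuma), assumes that presentation; the node numbers are illustrative and will differ per system.

    /*
     * Minimal memory-tiering sketch using libnuma (compile with -lnuma).
     * It assumes the CXL memory expander shows up as the highest-numbered,
     * CPU-less NUMA node; the node choices are illustrative only.
     */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA support not available\n");
            return 1;
        }

        size_t size = 1UL << 20;            /* 1 MiB per buffer */
        int near_node = 0;                  /* local DRAM node (illustrative) */
        int far_node  = numa_max_node();    /* assume CXL memory is the last node */

        char *hot  = numa_alloc_onnode(size, near_node);  /* hot data near the CPU */
        char *cold = numa_alloc_onnode(size, far_node);   /* cold data on CXL memory */
        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        memset(hot, 0xAA, size);   /* frequently accessed working set */
        memset(cold, 0x55, size);  /* rarely touched data parked on the far tier */

        numa_free(hot, size);
        numa_free(cold, size);
        return 0;
    }

In a real deployment, placement would be driven by access frequency (by the application or by the kernel's tiering policies) rather than by hard-coded node numbers.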

IntelliProp just announced its Omega Memory Fabric chip, which combines the CXL standard with the company’s fabric management software and network attached memory (NAM) system. IntelliProp also announced three Field Programmable Gate Array (FPGA) products that integrate the Omega Memory Fabric. The company says these memory-agnostic innovations will facilitate the adoption of composable memory, which can significantly improve data center energy consumption and efficiency. According to IntelliProp, the Omega Memory Fabric, combined with the CXL standard, offers the following features:

  • Dynamic multipathing and memory allocation
  • E2E security using AES-XTS 256 for added integrity
  • Supports peer-to-peer non-tree topologies
  • Management scaling for large deployments using multiple fabrics/subnets and distributed managers
  • Direct Memory Access (DMA) for efficient movement of data between memory tiers without tying up CPU cores (see the sketch after this list)
  • Memory agnostic, and up to 10 times faster than RDMA
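
To make the DMA item above concrete, here is a purely illustrative C sketch of what a descriptor for such a fabric DMA engine might look like; the structure and field names are hypothetical and are not taken from IntelliProp's actual interface.

    #include <stdint.h>

    /*
     * Hypothetical descriptor for a memory-fabric DMA engine. The host CPU
     * fills in a descriptor and hands it to the engine, which copies the
     * data between memory tiers on its own; the CPU only checks the
     * completion flag. Layout and field names are illustrative only.
     */
    struct fabric_dma_desc {
        uint64_t src_addr;      /* source address in far (e.g. shared SCM) memory */
        uint64_t dst_addr;      /* destination address in near DRAM */
        uint32_t length;        /* number of bytes to move */
        uint32_t flags;         /* e.g. direction, interrupt-on-completion */
        volatile uint32_t done; /* set by the engine when the copy completes */
    };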

The three FPGA solutions, an adapter, a switch, and a fabric manager, connect CXL devices to CXL hosts; IntelliProp said ASIC versions will be available in 2023. The company says these solutions let data centers increase performance, scale from tens to thousands of host nodes, consume less energy because data travels over fewer hops, and mix shared DRAM (fast memory) with shared SCM (slow memory).

Based on the 2022 Hot Chips tutorial and presentations at SNIA SDC, CXL is poised to change the way memory is used in computer architectures. IntelliProp introduced its Omega Memory Fabric technology and three FPGA solutions for CXL-enabled memory fabrics.


