CXL in the news

Expect many CXL 3 features, such as DCD, to be available before PCIe 6

by Pankaj Mehra (Elephance Memory)

Answering questions at Rambus Design Summit Day 1 on July 18, 2023, Mark Orthodoxou said CXL 3 introduces 64 GT/s and PAM4 signalling when run atop PCIe 6, but that it can also run as a protocol atop PCIe 5 electricals. He expects CXL 3 capabilities, such as DCD (Dynamic Capacity Device), to be pulled in on largely CXL 2-era PCIe 5 platforms.

Parsing the marketing speak, this may indicate both CXL 2 adoption challenges and PCIe 6 delays beyond the rosy outlooks the CXL Consortium projected for both in its early days.

UnifabriX pushes NVMe-over-Memory

https://www.unifabrix.com/technology


Clipped from: https://blocksandfiles.com/2022/11/07/startup-unifabrix-exists-stealth-with-cxl-3-0-smart-memory-device-demo/

Israeli startup UnifabriX is demonstrating a CXL 3.0-connected Smart Memory device for data center and HPC memory pooling.

The demo is taking place at SC22 and UnifabriX said it will involve both memory pooling and sharing, as well as performance measures. The aim is to provide multi-core CPUs with the memory and memory bandwidth needed to run compute/memory-intensive AI and machine learning workloads in data centers. Existing CPUs have socket-connected DRAM and this limits the memory capacity and bandwidth. Such limits can be bypassed with CXL memory expansion and pooling.

Ronen Hyatt, CEO and co-founder of UnifabriX, said: “We are setting out to showcase significant improvements and the immediate potential that CXL solutions have to upend HPC performance and close the gap between the evolution of processors and memory.”

The company reckons its CXL-based products achieve exceptional performance and elasticity in bare-metal and virtualized environments over a wide range of applications, including the most demanding tasks.

UnifabriX was started in January 2020 by Hyatt and CTO Danny Volkind. Seed funding was provided by VCs and angel investors. Hyatt is an ex-platform architect in Intel’s Data Center Group who joined Huawei as a Smart Platforms CTO in 2018. He left to launch UnifabriX. Volkind was a system architect at Intel and was employed by Huawei as a Chief Architect. Both were at PMC-Sierra before Intel and attended Israel’s Technion Institute of Technology.

CXL background

A UnifabriX white paper looks at CXL 1.1 and 2.0 differences, with CXL 1.1 supporting direct-attached memory devices to facilitate memory expansion, and CXL 2.0 adding remote memory device support via CXL switching. The switch allows multiple hosts to connect to a memory device and see their own virtual CXL memory resources. The memory device can be sub-divided into logical domains, each of which, or groups of which, can be assigned to a separate host. An external Fabric Manager controls the CXL switch or switches. This all supports dynamic memory expansion for servers by allocating them cache-coherent remote memory from the CXL pool.
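
To make the switch-and-pool bookkeeping concrete, here is a minimal sketch in Python of what a fabric manager does at this level of abstraction: a pooled device is carved into logical devices (LDs), and each LD is bound to at most one host. All names and sizes here are hypothetical; real fabric managers implement the CXL Fabric Manager API rather than anything like this.

    # Toy model of CXL 2.0 memory pooling -- illustrative only, not a real
    # Fabric Manager API. A pooled device is carved into logical devices
    # (LDs); the fabric manager binds each LD to at most one host.

    class PooledMemoryDevice:
        def __init__(self, name, ld_count, ld_size_gb):
            self.name = name
            self.ld_size_gb = ld_size_gb
            self.bindings = {ld: None for ld in range(ld_count)}  # None = free

    class FabricManager:
        def __init__(self, devices):
            self.devices = devices

        def allocate(self, host, size_gb):
            """Bind enough free LDs to `host` to cover `size_gb`."""
            for dev in self.devices:
                free = [ld for ld, owner in dev.bindings.items() if owner is None]
                needed = -(-size_gb // dev.ld_size_gb)  # ceiling division
                if len(free) >= needed:
                    for ld in free[:needed]:
                        dev.bindings[ld] = host
                    return [(dev.name, ld) for ld in free[:needed]]
            raise MemoryError("pool exhausted")

    fm = FabricManager([PooledMemoryDevice("mld0", ld_count=16, ld_size_gb=64)])
    print(fm.allocate("host-a", 128))  # [('mld0', 0), ('mld0', 1)]
    print(fm.allocate("host-b", 256))  # the next four LDs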

CXL table of standards and facilities.

The memory devices may be pure memory or intelligent processors with their own memory, such as GPUs or other accelerator hardware. The CXL 3.0 standard uses PCIe gen 6.0 and doubles per-lane bandwidth to 64 gigatransfers/sec (GT/sec). It enables more memory access modes than CXL 2.0 – greater sharing flexibility and more complex memory sharing topologies.
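
As a sanity check on the headline numbers, the raw per-direction arithmetic works out as below (a back-of-the-envelope sketch; real links give up a little of this to FLIT and protocol overhead):

    # Raw per-direction link bandwidth, ignoring FLIT/protocol overhead.
    # PCIe 5.0 signals at 32 GT/s per lane; PCIe 6.0 doubles that to 64 GT/s.
    def link_gb_per_s(gt_per_s, lanes):
        return gt_per_s * lanes / 8  # one bit of payload per transfer, bits -> bytes

    for gen, gt in [("PCIe 5.0 / CXL 1.1-2.0", 32), ("PCIe 6.0 / CXL 3.0", 64)]:
        print(f"{gen}: x16 = {link_gb_per_s(gt, 16):.0f} GB/s per direction")
    # PCIe 5.0 / CXL 1.1-2.0: x16 = 64 GB/s per direction
    # PCIe 6.0 / CXL 3.0: x16 = 128 GB/s per direction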

In effect, CXL-connected memory becomes a composable resource. According to Wells Fargo analyst Aaron Rakers, Micron has calculated that CXL-based memory could grow into a $2 billion total addressable market (TAM) by 2025 and more than $20 billion by 2030.

Demo product

UnifabriX CXL 3.0 fabric memory pooling device.

The SC22 demo is of a Smart Memory device powered by UnifabriX’s RPU (Resource Processing Unit), built from the silicon up with UnifabriX hardware and software. The RPU is described as an evolution of the DPU (Data Processing Unit) and is used to improve host CPU utilization, memory capacity and system-wide bandwidth. UnifabriX will demo enhanced performance related to all three items using a “recognized HPC framework.”

This Smart Memory Node is a 2 or 4 rack unit box with EDSFF E3 media bays and up to 128TB of memory capacity. The memory can be DDR5 or DDR4 DRAM, or NVMe-connected media.

Access is via NVMe-oM (NVMe over Memory) and there can be up to 20 CXL front-end/back-end (FE/BE) ports, with node stacking via a BE fabric. The box is compliant with CXL 1.1, CXL 2.0 and PCIe Gen 5, and is CXL 3.0-ready. It has enterprise RAS (Reliability, Availability and Serviceability) and multi-layer security.

Two such units will be connected across a CXL 3.0 fabric to several servers. This is said to be the first ever CXL 3.0 fabric. Hyatt said: “Setting out to achieve and document CXL-enabled performance in a real environment according to industry standard HPC benchmarking is a difficult task that we are excited to demonstrate live at SC22.”



Just How Bad Is CXL Memory Latency?

Clipped from: https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-memory-latency/

Conventional wisdom says that trying to attach system memory to the PCI-Express bus is a bad idea if you care at all about latency. The further the memory is from the CPU, the higher the latency gets, which is why memory DIMMs are usually crammed as close to the socket as possible.

Logically speaking, PCI-Express is miles away. And as PCI-Express bandwidth doubles with each subsequent generation, the distance a signal can travel diminishes as well, unless retimers – which add their own latency – are used. This isn’t a big deal for most kinds of memory that we are used to attaching to PCI-Express. It is not uncommon for flash storage to have latencies measured in the tens of microseconds, making a few hundred extra nanoseconds incurred by the interconnect a moot point. However, DDR and other forms of volatile memory aren’t so forgiving.

Previous attempts at memory expansion have been mired in compromise, especially with respect to latency. For instance, GigaIO says its FabreX architecture can already do memory pooling across PCI-Express using DMA, but doing so requires applications that can tolerate latencies of 500 nanoseconds to 1.5 microseconds.

Similarly, before Intel unceremoniously axed its Optane persistent memory business this summer, deploying the tech meant incurring roughly 350 nanoseconds of latency, according to our sister site Blocks and Files. While usable, especially in a tiered-memory configuration, that’s considerably more than the sub-100 nanosecond roundtrip latency you’d expect from DDR memory that’s attached directly to the CPU.
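
Pulling the figures quoted in this piece into one place gives a rough latency ladder (ballpark numbers from the article, not measurements):

    # Rough round-trip latency ladder, using the ballpark figures quoted
    # in this article.
    ladder = [
        ("Direct-attached DDR",           100),    # "sub-100 ns", rounded up
        ("CXL expander / one NUMA hop",   170),    # low end of 170-250 ns
        ("CXL expander, far end",         250),
        ("Optane persistent memory",      350),
        ("PCIe DMA pooling (FabreX)",    1500),    # 500 ns to 1.5 us
        ("NVMe flash storage",          20000),    # "tens of microseconds"
    ]
    base = ladder[0][1]
    for tier, ns in ladder:
        print(f"{tier:30s} ~{ns:>6} ns ({ns / base:5.1f}x DDR)")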

Enter The CXL Memory Ecosystem

This brings us to the first generation of memory expansion modules using the Compute Express Link protocol, or CXL. Systems based on AMD’s Epyc 9004 “Genoa” processors are among the first, boasting 64 lanes of CXL connectivity – distinct from the 128 to 160 overall PCI-Express lanes – that can be divided up among anywhere from four to sixteen devices. As for how Intel will implement CXL on its “Sapphire Rapids” Xeon SP processors, we’ll have to wait until they arrive early next year.

Complementing these servers are the first of what we are sure will be many CXL memory expansion modules. While it is true that CXL will eventually allow for fully disaggregated systems where resources can be shared throughout the rack over a high-speed fabric, those days are still a few years off.

For its first foray into the datacenter, CXL is squarely focused on memory expansion, tiered memory, and some early memory pooling applications. For the moment, we are just looking at memory expansion because at this early stage it’s arguably the simplest and most practical, especially when it comes to attaching memory at usable latencies.

Samsung and Astera Labs have already shown off CXL memory modules they say can add terabytes of memory to a system simply by slotting them into a compatible PCI-Express 5.0 slot. From a system perspective, they look and behave just like regular DDR DRAM memory that is attached to an adjacent socket over the memory bus.

For the longest time, once you reached the limits of the CPU’s memory controller, the only way to add more memory was to add more sockets. If the workload could take advantage of the extra threads, all the better, but if not, it becomes an awfully expensive way to add memory. In effect, the extra socket is just a memory controller with a bunch of expensive, unwanted cores attached to it.

Memory expansion modules behave in much the same way, but rather than using a proprietary socket-to-socket interconnect, like Intel’s UPI or AMD’s xGMI link, it’s CXL. And this means you can have a whole ecosystem of these devices; in fact, we’re already seeing a rather vibrant, if at times aspirational, ecosystem take hold around CXL.

CXL actually encompasses three protocols, and not all of them are silver bullets for latency, CXL Consortium president Siamak Tavallaei told The Next Platform at SC22. “CXL.io still has the same kind of latency as you expect (from PCI-Express), but the other two protocols – CXL.cache and CXL.mem – take a faster path through the protocol, and they reduce the latency.”

How Bad Is The CXL Memory Latency Really?

If the folks at Astera are to be believed, the latency isn’t as bad as you might think. The company’s Leo CXL memory controllers are designed to accept standard DDR5 memory DIMMs up to 5600 MT/sec. They claim customers can expect latencies roughly on par with accessing memory on a second CPU, one NUMA hop away. This puts it in the neighborhood of 170 nanoseconds to 250 nanoseconds. In fact, as far as the system is concerned, that’s exactly how these memory modules show up to the operating system.
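
Because the expander is typically presented as a CPU-less NUMA node, you can spot one from userspace with nothing but sysfs. A small sketch (Linux-only; assumes the standard /sys/devices/system/node layout):

    # List NUMA nodes and flag the CPU-less ones -- on a CXL-equipped box
    # these are typically the memory expanders. Linux sysfs only.
    import glob, os

    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "cpulist")) as f:
            cpus = f.read().strip()
        label = f"CPUs {cpus}" if cpus else "CPU-less (candidate CXL expander)"
        print(f"{os.path.basename(node)}: {label}")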

Most CXL memory controllers add about 200 nanoseconds of latency, give or take a few tens of nanoseconds for additional retimers depending on how far away the device is from the CPU, Tavallaei explains. This is right in line with what other early CXL adopters are seeing as well. GigaIO chief executive officer Alan Benjamin tells The Next Platform that most of the CXL memory expansion modules he has seen are closer to 250 nanoseconds of latency than 170 nanoseconds.

However, as Tavallaei points out, this is still an improvement over four-socket or eight-socket systems where applications may have to contend with multiple NUMA hops just because they need the memory. (Although, to be fair, IBM and Intel have added more and faster links between CPUs to reduce the hops and the latencies per hop.)

With that said, many chipmakers are quick to point out that the CXL ecosystem is only now getting off the ground. AMD’s Kurtis Bowman, who serves on the CXL board of directors, tells The Next Platform many of the early CXL proofs of concept and products are using FPGAs or first-gen ASICs that haven’t yet been optimized for latency. With time, he expects latencies to improve considerably.

If CXL vendors can, as they claim, achieve latencies on par with multi-socket systems outside of show-floor demos, it should largely eliminate the need for application or operating system-specific customizations necessary to take advantage of them. Well, at least as far as memory expansion is concerned. As we’ve seen with Optane, CXL memory tiering will almost certainly require some kind of operating system or application support.

This couldn’t come at a better time as sockets grow larger and fitting more DIMMs on a board is getting harder and harder. There are just fewer places to put them. There are dual-socket systems with room for 32 DIMMs, but as chipmakers add more channels to satiate the bandwidth demands of ever higher core counts, this isn’t scalable.

We are already seeing this to some degree with AMD’s Genoa chips, which, despite boosting the number of memory channels to twelve, only support one DIMM per channel at launch, limiting the number of DIMMs in a dual-socket configuration to 24. And even if you could attach two DIMMs per channel, we are told fitting 48 DIMMs into a standard chassis would be impractical.

As we look to attaching memory at longer distances, across racks for instance, things get more complicated as latency accrued from electrical or optical interconnects must be factored into the equation. But for in-chassis CXL memory expansion, it appears that latency may not be as big a headache as many had feared.

MemVerge Unveils First Software-Defined CXL Memory Applications to Support 4th Gen AMD EPYC™ Processors

MILPITAS, Calif., Nov. 10, 2022

Memory Viewer and Memory Machine software run on 4th Gen AMD EPYC processors supporting CXL 1.1+ to deliver memory that can be dynamically pooled, tiered, and shared

MemVerge®, pioneers of Big Memory software, today announced the company has developed software-defined Compute Express Link (CXL) memory management products that run on 4th Gen AMD EPYC processors featuring support for CXL 1.1+ specifications. AMD has a long history of x86 firsts, and the innovation continues in the 4th Gen AMD EPYC processors with CXL 1.1+ memory expansion to help support the demand for ever larger in-memory workload capacity.

"Building on the record-breaking performance of 3(rd) Gen AMD EPYC processors, the latest 4(th) Gen AMD EPYC processors help our customers achieve better business outcomes faster and address their most ambitious energy efficiency goals. Our new 'Zen 4' architecture is optimized for modern workloads and delivers the core density, memory bandwidth and sophisticated security features customers demand," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD.

Memory Viewer software from MemVerge represents a new class of application-aware memory tools, which add the capability to see how existing workloads are using their memory. Organizations can then determine the most cost-effective and performant way to expand DDR and CXL memory on new servers with 4th Gen AMD EPYC processors. Memory Viewer is a free download.

Memory Machine™ Cloud Edition software from MemVerge adds the ability to provide transparent access to a pool of DDR and CXL memory, dynamically placing the hottest data in the fastest tier, and guaranteeing quality of service to the most business-critical workloads running on 4th Gen AMD EPYC processors.
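
MemVerge hasn't published its placement algorithm; purely to give a flavor of what "hottest data in the fastest tier" means mechanically, here is a minimal two-tier placement sketch (all names and numbers invented):

    # Minimal two-tier placement sketch: the hottest pages stay in direct
    # DDR, the rest spill to CXL-attached memory. Illustrative only -- not
    # MemVerge's actual algorithm.
    def place(pages, ddr_slots):
        """pages: dict of page_id -> access count. Returns (ddr, cxl) sets."""
        ranked = sorted(pages, key=pages.get, reverse=True)
        return set(ranked[:ddr_slots]), set(ranked[ddr_slots:])

    heat = {"p0": 900, "p1": 12, "p2": 450, "p3": 3, "p4": 700}
    ddr, cxl = place(heat, ddr_slots=2)
    print("DDR tier:", sorted(ddr))  # ['p0', 'p4'] -- the two hottest
    print("CXL tier:", sorted(cxl))  # ['p1', 'p2', 'p3']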

According to Charles Fan, CEO and co-founder of MemVerge, "4th Gen AMD EPYC processors and MemVerge software form a solid CXL platform capable of providing memory-intensive applications with transparent access to tiered memory fabrics and advanced memory services."

About MemVerge

MemVerge is pioneering Big Memory Computing in the cloud and on CXL for big data that needs to be processed quickly. The company's Memory Machine™ product is the industry's first commercial memory virtualization software, and introduced the world to memory tiering, pooling, and snapshot-based, in-memory data management. MemVerge is the only company to win both Editor's Choice and People's Choice awards at Bio-IT World. Bioinformaticians at leading organizations such as Analytical Biosciences, Penn State University, SeekGene, and TGen are using Memory Machine software to accelerate time-to-discovery and increase application availability to unlock important new scientific breakthroughs. Learn more about MemVerge, Memory Machine, and CXL by visiting www.memverge.com.

https://www.prnewswire.com/news-releases/memverge-unveils-first-software-defined-cxl-memory-applications-to-support-4th-gen-amd-epyc-processors-301673539.html

SOURCE MemVerge


Purpose-built data and memory connectivity solutions from Astera Labs with 4th Gen AMD EPYC™ Processors help realize the vision of artificial intelligence and machine learning in the cloud

SANTA CLARA, Calif., November 10, 2022 – Astera Labs, a pioneer in purpose-built connectivity solutions for intelligent and accelerated systems, today announced its collaboration with AMD on 4th Gen AMD EPYC processors to realize the promise of Compute Express Link™ (CXL). Astera Labs is helping OEM and hyperscale customers deploy CXL at scale and realize the benefits of memory expansion, increased memory utilization and decreased Total Cost of Ownership (TCO).

Astera Labs’ Leo Memory Connectivity Platform is the industry’s first memory controller to support memory expansion, pooling and sharing for CXL 1.1 and 2.0 capable CPUs. The Leo Smart Memory Controllers and Aries Smart CXL Retimers are designed to seamlessly interoperate with AMD EPYC 9004 Series processors to enable plug-and-play connectivity in new composable and heterogeneous architectures powered by CXL technology.

AMD has a long history of x86 firsts, and the innovation continues in the 4th Gen AMD EPYC processors. The AMD EPYC 9004 Series processors introduce support for highly performant DDR5 DIMMs and fast PCIe 5.0 I/O, which meet the demands of today’s AI and ML applications and the increasing use of accelerators, GPUs, FPGAs, and more. Additionally, the processors include support for CXL 1.1+ memory expansion to help meet the demand for ever larger in-memory workload capacity. With the combination of 4th Gen AMD EPYC processors and Astera Labs’ Leo Smart Memory Controllers, memory pooling can also be supported to reduce memory stranding.

Sanjay Gajendra, chief business officer, Astera Labs, said, “Our Leo Memory Connectivity Platform and Aries Smart Retimer for CXL is purpose-built with a low-latency, high-bandwidth architecture targeting AI/ML workloads and in-memory database applications for cloud-scale deployment. We value our strategic collaboration with AMD as we work together to deliver performant and reliable CXL solutions for our mutual customers today and continue to innovate and meet the needs of future data centers.”

Ram Peddibhotla, corporate vice president, EPYC product management, AMD, said, “4th Gen AMD EPYC processors continue to raise the bar for workload performance in the modern data center. 4th Gen AMD EPYC processors with CXL 1.1+ support and technologies like Astera Labs’ Leo Smart Memory Controllers will transform our customers’ data center operations by reducing memory bandwidth and capacity bottlenecks, driving lower total cost of ownership, and helping enterprises to address their sustainability goals.”

See CXL memory expansion in action at SC’22

Astera Labs and AMD will demonstrate CXL memory expansion at SC’22 in the CXL Consortium Booth #2838, taking place November 13-18 in Dallas, Texas. Attendees will learn how Leo overcomes processor memory bottlenecks and capacity limitations to increase performance and reduce TCO for applications ranging from Artificial Intelligence and Machine Learning to in-memory databases. To meet with Astera Labs’ memory connectivity experts at SC’22, email info@asteralabs.com.


Proceedings of the 2022 IEEE International Conference on Networking, Architecture and Storage (NAS)

Yang, Qirui; Jin, Runyu; Davis, Bridget; Inupakutika, Devasena; Zhao, Ming

This is a paper by Samsung authors that looks into the possibility of building a CXL-attached DRAM+SSD far-memory hierarchy. It is a simulation-based study that examines how applications deal with the latency increase and variability. Compute-bound applications, such as transcoding, show less impact than memory-bound applications.


Disruptive Platform Combines Marvell’s CXL Technology with New 4th Gen AMD EPYC Processors to Accelerate Cloud Data Center Architecture Revolution

SANTA CLARA, Calif. - November 10, 2022 - Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced a Compute Express Link™ Development Platform for cloud data center operators and server OEMs. The platform pairs the company’s advanced CXL technology with the latest CXL CPUs, including the new 4th Gen AMD EPYC™ processors, demonstrating multi-host memory pooling on these processors. Today, memory is a common bottleneck in cloud data center performance, as memory performance does not scale at the same rate as CPU performance. CXL technology eliminates this bottleneck by allowing flexible expansion and pooling of memory resources. Addressing the memory-scaling constraint is critical for compute- and memory-intensive applications such as artificial intelligence, machine learning, analytics, and large-scale search. With the new Marvell CXL Development Platform, cloud operators can begin to optimize their infrastructure and enable their applications to take advantage of this cutting-edge technology.

The platform provides two CXL functions: memory expansion and memory pooling. Expansion enables the addition of memory resources at will, without the bandwidth degradation associated with traditional memory expansion using dual-inline memory module (DIMM) slots. Pooling allows memory to be shared and dynamically allocated across CPUs rather than allocated to a specific CPU. Both functions result in higher system-wide memory resource utilization, including the ability to make use of previously stranded memory.
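
The utilization argument can be made concrete with a little arithmetic: statically partitioned memory must be sized for each host's individual peak, while a pool only has to cover the peak of the combined demand. A sketch with invented demand traces:

    # Why pooling beats static partitioning: per-host provisioning pays for
    # the sum of individual peaks; a shared pool pays for the peak of the
    # sum. Demand traces (GB per hour, per host) are invented.
    demand = {
        "host-a": [100, 400, 100, 100],
        "host-b": [100, 100, 400, 100],
        "host-c": [100, 100, 100, 400],
    }
    static_gb = sum(max(trace) for trace in demand.values())
    pooled_gb = max(sum(hour) for hour in zip(*demand.values()))
    print(f"static provisioning: {static_gb} GB")  # 1200 GB
    print(f"pooled provisioning: {pooled_gb} GB")  # 600 GB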

The announcement of the development platform is Marvell’s first public step towards CXL productization following the company’s recent acquisition of CXL-specialist Tanzanite. Marvell’s vision for the next-generation cloud data center is one in which the architecture is disaggregated and fully composable. The integration of CXL technology across the company’s comprehensive, cloud-optimized portfolio of compute, electro-optics, networking, security and storage silicon will facilitate new data center architectures with significant efficiency and performance benefits.

“Data center memory directly being tied to processors is limiting cloud infrastructure scaling and overall efficiencies. CXL is going to change that,” said Thad Omura, vice president of marketing, Flash Business Unit, Marvell. “We’re committed to giving our customers and partners the tools they need to integrate CXL technology into their designs as quickly as possible. Together with AMD and their 4th Gen EPYC processors, we’re enabling them to do just that. With our new development platform, cloud operators and OEMs are on the path to better system memory utilization and lower DRAM/memory costs.”

“4th Gen AMD EPYC processors continue to raise the bar for workload performance in the modern data center while simultaneously delivering exceptional energy efficiency,” said Ram Peddibhotla, corporate vice president, EPYC product management, AMD. “4th Gen AMD EPYC processors will transform our customers’ data center operations by accelerating time to value, driving lower total cost of ownership, and helping enterprises to address their sustainability goals.”

Look out for CXL coverage at upcoming Linux Plumbers conference [Jon Masters, Kernel Watch in LXF291, Aug 2022]

Work continues to prepare for the upcoming Linux Plumbers conference, including many different microconferences. Among these are RISC-V and CXL (Compute eXpress Link), a new standard built in some respects on PCIe and used to attach disaggregated memory and coherent accelerators. CXL patches have been landing in Linux, most recently including support for CXL hotplug.


Elastics.cloud First to Demonstrate CXL-Enabled Symmetric Multi-Host Memory Pooling and Expansion

SANTA CLARA, Calif., Sept. 26, 2022 /PRNewswire/ -- Elastics.cloud, a Smart Interconnect technology company focused on enabling efficient and performant composable architectures, today announced it is the first in the industry to demonstrate symmetric host-to-host memory pooling with Compute Express Link™ (CXL™).

The demonstration, which will be exhibited at Intel Innovation this week, showcases two-node symmetric memory pooling and expansion. Two CXL-enabled servers are equipped with FPGA cards running Elastics.cloud IP. The servers are connected via a CXL interface over a cable. This configuration allows the first server to access not only its own direct-attached memory, but also expanded CXL-attached memory within the same server, as well as CXL-attached memory in the second server. Concurrently, the second server in the pair can access its own direct-attached memory, its own expanded CXL-attached memory, and the first server's CXL-attached memory.

Elastics.cloud's Tiered Memory Dashboard shows latency and memory allocation statistics for the live traffic running on the demonstration servers at all memory tiers: direct-attached memory, CXL-attached local memory, and CXL-attached remote memory.
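
In the same spirit as that dashboard, a weighted average over the tiers shows how allocation drives effective latency (a toy calculation; tier latencies are the ballpark figures discussed elsewhere in this digest, and the allocation split is invented):

    # Toy tiered-memory dashboard metric: estimate average access latency
    # from how pages are spread across tiers. Latencies are ballpark
    # figures; the allocation shares are invented.
    tiers = {  # tier -> (approx latency in ns, share of pages)
        "direct-attached":     (100, 0.60),
        "CXL-attached local":  (200, 0.30),
        "CXL-attached remote": (400, 0.10),
    }
    avg = sum(lat * share for lat, share in tiers.values())
    print(f"estimated average access latency: {avg:.0f} ns")  # 160 ns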

As a contributing member of the CXL Consortium, Elastics.cloud is among 200+ member companies improving system architectures.

"Elastics.cloud's symmetric memory pooling solution demonstrates the power of CXL to drive system efficiency through the pooling of memory resources among multiple hosts," said Siamak Tavallaei, president, CXL Consortium. "As the CXL ecosystem continues to grow, we're excited to see Elastics.cloud provide innovative solutions that will increase resource utilization in servers and across data centers to enhance overall system performance."

The demonstrated two-node symmetric memory pooling use case is just one of the multi-host configurations supported by Elastics.cloud's currently available FPGA-based solution. In a rack-scale solution, up to eight servers can perform multi-host memory expansion and pooling using Elastics.cloud IP. Elastics.cloud is also developing an ASIC solution for CXL-enabled systems.

"CXL has ignited a revolution in system architecture, and CXL-enabled symmetric memory pooling is a key step on the road to full composability," said Elastics.cloud CEO George Apostol. "Our solutions are leading-edge and will provide customers greater flexibility and utilization, better performance, and reduced TCO at scale."

Elastics.cloud will exhibit this demonstration at Intel Innovation this week at the San Jose Convention Center on Tuesday, September 27 from 3pm to 7pm and on Wednesday, September 28 from 10:30am to 3pm. Please stop by booth #114 in the PCIe and CXL Technology Zone to learn more about Elastics.cloud and to see the live demonstration of this solution.

Elastics.cloud, Inc. is a Smart Interconnect technology company focused on enabling efficient and performant architectures to create flexible, scalable, low latency composable systems. The company provides silicon, hardware, and software which leverage the Compute Express Link™ (CXL™) interconnect standard to provide high-performance connectivity to a broad ecosystem of components, and is first to market with CXL™-enabled symmetric memory pooling.

www.elastics.cloud

Compute Express Link™ and CXL™ are trademarks of the CXL Consortium. All other trademarks are the property of their respective owners.

Contact:

Kishore Moturi, Sr. Director Corporate Strategy

Kishore.Moturi@elastics.cloud

408-396-5962

https://www.prnewswire.com/news-releases/elasticscloud-first-to-demonstrate-cxl-enabled-symmetric-multi-host-memory-pooling-and-expansion-301633093.html

SOURCE Elastics.cloud



IntelliProp First to Market with Memory Fabric Based on CXL; Driving Most Disruptive Technology to Hit Data Centers in Decades

Unveils IntelliProp Omega Memory Fabric Chips, Which Allow for Dynamic Allocation and Sharing of Memory Across Compute Domains – Both In and Out of the Server

September 21, 2022 08:00 AM Eastern Daylight Time

LONGMONT, Colo.--(BUSINESS WIRE)--IntelliProp, a leading innovator of composable data center transformation technology, today announced its intent to deliver its disruptive Omega Memory Fabric chips. The chips incorporate the Compute Express Link™ (CXL) Standard, along with IntelliProp’s innovative Fabric Management Software and Network Attached Memory (NAM) system. In addition, the company announced the availability of three field-programmable gate array (FPGA) solutions built with its Omega Memory Fabric.

The Omega Memory Fabric eliminates the memory bottleneck and allows for dynamic allocation and sharing of memory across compute domains both in and out of the server, delivering on the promise of Composable Disaggregated Infrastructure (CDI) and rack-scale architecture, an industry first. IntelliProp’s memory-agnostic innovation will lead to the adoption of composable memory and transform data center energy, performance, efficiency and cost.

As data continues to grow, database and AI applications are being constrained on memory bandwidth and capacity. At the same time, billions of dollars are being wasted on stranded and unutilized memory. According to a recent Carnegie Mellon / Microsoft report [1], Google stated that average DRAM utilization in its datacenters is 40%, and Microsoft Azure said that 25% of its server DRAM is stranded.
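
Taking those figures at face value, the waste is easy to put a rough dollar figure on (a back-of-the-envelope sketch; only the 25% stranded fraction comes from the cited report, while the fleet size and DRAM price are invented round numbers):

    # Back-of-the-envelope cost of stranded DRAM at fleet scale. Only the
    # 25% stranded figure comes from the cited report; fleet size and $/GB
    # are invented round numbers.
    servers = 100_000
    gb_per_server = 1_024
    usd_per_gb = 3.0
    stranded_fraction = 0.25

    stranded_gb = servers * gb_per_server * stranded_fraction
    print(f"stranded DRAM: {stranded_gb / 1e6:.1f} PB, "
          f"~${stranded_gb * usd_per_gb / 1e6:.0f}M of idle capacity")
    # stranded DRAM: 25.6 PB, ~$77M of idle capacity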

“IntelliProp’s efforts in extending CXL connection beyond simple memory expansion demonstrate what is achievable in scaled-out, composable data center resources,” said Jim Pappas, Chairman of the CXL Consortium. “Their advancements on both CXL and Gen-Z hardware and management software components have strengthened the CXL ecosystem.”

Experts agree that memory disaggregation increases memory utilization and reduces stranded or underutilized memory. Today’s remote direct memory access (RDMA)-based disaggregation has too much overhead for most workloads and virtualization solutions are unable to provide transparent latency management. The CXL standard offers low-overhead memory disaggregation and provides a platform to manage latency.

“History tends to repeat itself. NAS and SAN evolved to solve the problems of over/under storage utilization, performance bottlenecks and stranded storage. The same issues are occurring with memory,” stated John Spiers, CEO, IntelliProp. “Our trailblazing approach to CXL technology unlocks memory bottlenecks and enables next-generation performance, scale and efficiency for database and AI applications. For the first time, high-bandwidth, petabyte-level memory can be deployed for vast in-memory datasets, minimizing data movement, speeding computation and greatly improving utilization. We firmly believe IntelliProp’s technology will drive disruption and transformation in the data center, and we intend to lead the adoption of composable memory.”

Omega Memory Fabric / NAM System, Powered by IntelliProp’s ASIC

IntelliProp’s Omega Memory Fabric and Management Software enables enterprise composability of memory and CXL devices, including storage. Powered by IntelliProp’s ASIC, the Omega Memory Fabric-based NAM System and software expands the connection and sharing of memory in and outside the server, placing memory pools where needed. The Omega NAM is well suited for AI, ML, big data, HPC, cloud and hyperscale/enterprise data center environments, specifically targeting applications requiring large amounts of memory.

“In a survey IDC completed in early 2022, almost half of enterprise respondents indicated that they anticipate memory-bound limitations for key enterprise applications over time,” said Eric Burgener, research vice president, Infrastructure Systems, Platforms and Technologies Group, IDC. “New memory pooling technologies like what IntelliProp is offering with their NAM system will help to address this concern, enabling dynamic allocation and sharing of memory across servers with high performance and without hardware slot limitations. The composable disaggregated infrastructure market that IntelliProp is playing in is an exciting new market that is expected to grow at a 28.2 percent five-year compound annual growth rate to crest at $4.8 billion by 2025.”

With IntelliProp’s Omega Memory Fabric and Management Software, hyperscale and enterprise customers will be able to take advantage of multiple tiers of memory with predetermined latency. The system will enable large memory pools to be placed where needed, allowing multiple servers to access the same dataset. It also allows new resources to be added with a simple hot plug, eliminating server downtime and rebooting for upgrades.

“IntelliProp is on to something big. CXL disaggregation is key, as half of the cost of a server is memory. With CXL disaggregation, they are taking memory sharing to a whole new level,” said Marc Staimer, Dragon Slayer analyst. “IntelliProp’s technology makes large pools of memory shareable between external systems. That has immense potential to boost data center performance and efficiency while reducing overall system costs.”

Omega Memory Fabric Features, incorporating the CXL Standard

“AI is one of the world’s most demanding applications, in terms of compute and storage. The prospects of using ML in genomics, for example, requires exascale compute and low latency access to petabytes of storage. The ability to dynamically allocate shareable pools of memory over the network and across compute domains is a feature we are very excited about,” says Nate Hayes, Co-Founder and Board Member at RISC AI. “We think the fabric from IntelliProp provides the latency, scale and composable disaggregated infrastructure for the next generation AI training platform we are developing at RISC AI, and this is why we are planning to integrate IntelliProp’s technology into the high performance RISC-V processors that we will be manufacturing.”

Omega Memory Fabric Solutions Bring Future CXL Advantages to Data Centers

IntelliProp unveiled three FPGA solutions as part of its Omega Fabric product suite. The solutions connect CXL devices to CXL hosts, allowing data centers to increase performance, scale across dozens to thousands of host nodes, consume less energy since data travels with fewer hops and enable mixed use of shared DRAM (fast memory) and shared SCM (slow memory), allowing for lower total cost of ownership (TCO).

Omega Memory Fabric Solutions

Availability

The IntelliProp Omega Memory Fabric solutions are available as FPGA versions and will have the full features of the Omega Fabric architecture. The IntelliProp Omega ASIC based on CXL technology will be available in 2023.


[1] Source: Carnegie Mellon University, Microsoft Research and Microsoft Azure report, “First-generation Memory Disaggregation for Cloud Platforms,” March 2022.

About IntelliProp

IntelliProp is a Colorado-based company founded in 1999 to provide ASIC design and verification services for the data storage and memory industry. Today, IntelliProp is leading the composable data center transformation, fundamentally changing the performance, efficiency and cost of data centers. IntelliProp continues to gain recognition as a leading expert in the data storage industry and actively participates in standards groups developing next-generation memory infrastructure. www.intelliprop.com

IntelliProp and the IntelliProp logo are trademarks of IntelliProp, Inc.

©2022 IntelliProp, Inc. All rights reserved.

Contacts

IGNITE Consulting, on behalf of IntelliProp

Kathleen Sullivan, 720-480-5501

Linda Dellett, 303-439-9398

IntelliPropPR@igniteconsultinginc.com