Research

Supporting Persistent Memory in Future Computer Systems

Sponsor: ONR, NSF, and Intel

The continual performance growth of data center servers is critical to the nation's economic competitiveness and a catalyst for progress in scientific endeavors. Two important data center components are main memory, which offers fast access but stores data only temporarily, and storage, which keeps data permanently but suffers from slow access. Recent technology advances have produced a new class of non-volatile memory, now commercially available, that can both host permanent data and be accessed quickly. To reach its potential, however, this new memory technology requires rethinking how data should be stored persistently and efficiently.

This proposal describes a new abstraction for storing persistent data in non-volatile memory: hyperfiles, which are long lived, provide fast access, and can be quickly attached to and detached from a process address space. Hyperfiles offer naming and permission semantics similar to those of files, but with access speeds closer to memory: they are accessed directly through loads and stores, avoiding system call overhead. This project also investigates new sharing semantics for hyperfiles that allow non-cooperating processes to share them simultaneously and safely while preserving crash recoverability. Architectural support to accelerate hyperfile sharing will also be designed and evaluated.

Link to Project Page

Efficient Memory Persistency on GPUs

Sponsor: NSF

Persistent memory is expected to increasingly augment or replace DRAM as main memory, and this change will likely affect Graphics Processing Unit (GPU) based computing systems. To fully realize its potential, research on memory persistency models for GPUs is needed. This project investigates integrated software and hardware techniques that enable GPUs to make efficient use of non-volatile memory. Successful outcomes will lead to faster access to data by reducing the overheads of file access. The research answers the question: on a GPU system with persistent memory (PM) as its device memory, what architectural support is needed for efficient persistency programming? The research contributions include: (1) an open-source GPU PM benchmark suite representative of various application domains; (2) an exploration of persistency models in GPUs and the Instruction Set Architecture support they require; (3) optimizations to those persistency models that remove the need for logging; and (4) a compiler pass and performance tuner that automatically determine the best-performing memory persistency and recovery model and transform the code accordingly.

Link to Project Page