Research

Persistent Memory Objects in Non-Volatile Main Memories

Sponsor: NSF

The continual performance growth of data center servers is critical to the nation's economic competitiveness and serves as a catalyst for progress in scientific endeavors. Two important data center components are main memory, which offers fast access but stores data only temporarily, and storage, which keeps data permanently but suffers from slow access. Recent technology advances have produced a new class of non-volatile memory, now commercially available, that can both host permanent data and be accessed quickly. To reach their potential, however, these new memory technologies require rethinking how data should be stored persistently and efficiently.

This project proposes a new abstraction for storing persistent data in non-volatile memory: hyperfiles, which are long-lived, provide fast access, and can be quickly attached to and detached from a process's address space. Hyperfiles offer naming and permission characteristics similar to those of files, but at speeds closer to memory, and they are accessed directly through loads and stores to avoid system call overhead. The project also investigates new sharing semantics for hyperfiles that allow non-cooperating processes to share them simultaneously and safely while preserving crash recoverability. Architecture support to accelerate hyperfile sharing will also be designed and evaluated.
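
As a rough illustration of the intended programming model, the sketch below shows how a process might attach a hyperfile and then update it with ordinary loads and stores. The names hyperfile_attach and hyperfile_detach are illustrative assumptions, not the project's actual interface, and the stand-in implementation simply memory-maps an ordinary file in place of true non-volatile memory.

```c
/* Hypothetical sketch of the hyperfile programming model.
 * hyperfile_attach/hyperfile_detach are illustrative names only;
 * here they are emulated with a memory-mapped file as a stand-in
 * for true non-volatile memory. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HYPERFILE_SIZE 4096

/* Stand-in: map a named backing file into the address space. */
static void *hyperfile_attach(const char *name)
{
    int fd = open(name, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, HYPERFILE_SIZE) != 0) {
        close(fd);
        return NULL;
    }
    void *base = mmap(NULL, HYPERFILE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);
    return base == MAP_FAILED ? NULL : base;
}

static void hyperfile_detach(void *base)
{
    munmap(base, HYPERFILE_SIZE);
}

int main(void)
{
    /* Attach once; afterwards the data is updated with ordinary
     * loads and stores, with no per-access system call. */
    uint64_t *counter = hyperfile_attach("counter.hyper");
    if (counter == NULL)
        return 1;

    *counter += 1;    /* plain store to the mapped persistent data */
    printf("count = %llu\n", (unsigned long long)*counter);

    hyperfile_detach(counter);
    return 0;
}
```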

Link to Project Page.

Efficient Memory Persistency for GPUs

Sponsor: NSF

Scientific progress often depends on computer technology delivering ever faster computers capable of processing ever increasing amounts of data. The growth in memory capacity and density of current computer systems, however, is in peril as Dynamic Random Access Memory (DRAM), the dominant main memory technology, faces serious scaling roadblocks. Non-volatile memory, or persistent memory, is an emerging alternative that offers high integration density, byte addressability, speed similar to DRAM, and lower standby power. Persistent memory is therefore expected to increasingly augment or replace DRAM as main memory, and the same shift is expected in Graphics Processing Unit (GPU) based computing systems, which are the dominant accelerators for high performance computing. To fully realize its potential, however, research on memory persistency models for GPUs is needed. This project investigates integrated software and hardware techniques that enable GPUs to make efficient use of non-volatile memory. Successful outcomes will lead to faster access to data by reducing the overheads of file access. The software produced (persistent GPU benchmarks, compiler, and tuner) and the prototyping platform will be made available to other researchers. Education and outreach activities in this project seek to train the next generation of programmers in this discipline.

The research in this project answers the question: on a GPU system with persistent memory (PM) as its device memory, what architectural support is needed to achieve efficient persistency programming? The research contributions include: (1) an open-source GPU PM benchmark suite representative of a variety of application domains; (2) an exploration of memory persistency models for GPUs and the Instruction Set Architecture support they require; (3) optimizations to these persistency models that remove the need for logging; and (4) a compiler pass and performance tuner that automatically determine the best-performing memory persistency and recovery model and transform the code accordingly.
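
To give a concrete sense of what persistency programming involves, the sketch below shows the familiar CPU-side ordering pattern: write the data, flush it toward persistent memory, fence, and only then publish a commit flag. The x86 intrinsics used here are stand-ins for illustration only; they are not the GPU ISA and persistency-model support this project investigates.

```c
/* CPU-side illustration of a persistency ordering pattern
 * (write data -> flush -> fence -> write commit flag).
 * The x86 intrinsics are stand-ins, not the project's GPU mechanism. */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <stdint.h>

struct record {
    uint64_t payload;
    uint64_t valid;      /* commit flag, persisted after the payload */
};

static void persist_record(struct record *r, uint64_t value)
{
    r->payload = value;
    _mm_clflush(&r->payload);   /* push the payload out of the cache */
    _mm_sfence();               /* order the flush before the commit */

    r->valid = 1;
    _mm_clflush(&r->valid);     /* persist the commit flag last */
    _mm_sfence();
}

int main(void)
{
    static struct record r;     /* ordinary memory here, PM in practice */
    persist_record(&r, 42);
    return r.valid == 1 ? 0 : 1;
}
```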

Link to Project Page.

Analysis of Configurable Software

Sponsor: NSF

Highly configurable software, e.g., the Linux kernel, forms our most critical infrastructure, underpinning everything from high-performance computing clusters to Internet-of-Things devices. Keeping these systems secure and reliable with automated tools is essential. However, their high degree of configurability leaves most critical software without comprehensive tool support: most software tools do not scale to the colossal number of configurations of large systems. With millions of configurations in complex systems like Linux, there are simply too many to analyze individually. Instead, my goal is to make tools that work on all configurations simultaneously. Previous work includes parsing C proper and the C preprocessor together [PLDI 2012] and analyzing all configurations of the Kbuild build system [ESEC/FSE 2017].
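
The small, made-up snippet below illustrates why one-configuration-at-a-time tools do not scale: each independent preprocessor option doubles the number of distinct programs that a conventional tool would have to build and analyze separately, while a variability-aware tool analyzes all of them at once.

```c
/* Illustrative (made-up) example: three independent options already
 * yield 2^3 = 8 distinct programs; real systems such as the Linux
 * kernel have thousands of such options. */
#include <stdio.h>

int transmit(int len)
{
#ifdef CONFIG_CHECKSUM
    len += 4;                     /* room for a checksum */
#endif
#ifdef CONFIG_ENCRYPT
    len = (len + 15) & ~15;       /* pad to a cipher block */
#endif
#ifdef CONFIG_DEBUG
    printf("transmit %d bytes\n", len);
#endif
    return len;
}

int main(void)
{
    return transmit(100) > 0 ? 0 : 1;
}
```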

Continuing work includes finding new programming language constructs to replace the preprocessor [ICSE-NIER 2019] and simulating variability-aware analysis [ESEC/FSE 2019] using configuration sampling tools developed with collaborators [TR 2018, TR 2019]. Ongoing work includes variability-aware analyses and bug finding, configuration sampling strategies, and preprocessor usage analysis and translation.

This work is supported by a grant from the NSF.

Link to Project Page.