My research interests are centered around Computer Systems. I am currently focusing on storage and persistent memory technologies, with an emphasis on their impact on the way we manage large-scale data for emerging workloads in Data Science and the Internet of Things.
I completed my PhD in Computer Science at the University of Sydney in 2020, advised by Prof. Willy Zwaenepoel. My dissertation research was on the design and implementation of efficient key-value stores for future hardware and performance requirements. My PhD was generously supported by the University of Sydney Faculty of Engineering and IT Dean's Postgraduate Research Scholarship and by the EPFL Fellowship for Doctoral Studies. I earned my Bachelor's and Master's degrees in Computer Science from EPFL. During my graduate studies, I was grateful to collaborate with ABB Research Switzerland, HP Vertica (Cambridge, MA), Brown University, and Nutanix (San Jose, CA and Bangalore offices).
I have open positions for PhD students! These positions come with Research Assistant (RA) Scholarships. I also welcome McGill Master's and Undergraduate students for research projects (MS Thesis, MS Research Project, COMP-396, COMP-400, ...).
Interested in ground-breaking computer systems research? Have a look at the research focus areas and email me your CV, transcript, and availability if you are motivated, have a strong academic record, and would like to join the team!
Current Research Focus
Data powers everything we do and we are collecting it at unprecedented rates. The driver for my current research is to create a storage infrastructure that enables us to gain insights from this data in a fast and energy-conscious manner. I am particularly interested in the following areas:
Storage Systems for Data Science
Data Science and AI workloads are ubiquitous. From taking care of our health, to running businesses, to managing our energy systems and planning transport, we leverage data science and machine learning to make more informed decisions. These insights come from the combination of smart algorithms and access to vast amounts of data. Data management, however, now poses a bottleneck that can slow down the entire pipeline: the way data is stored and accessed strongly influences how fast the algorithms can provide useful insights.
Research Direction: The needs of Data Science workloads are poorly met by current general-purpose storage systems. Consequently, to obtain fast results systems rely on heuristics, approximations, or use of stale information. I am interested in designing and building a new storage system that (1) scales with the vast datasets used by Data Science workloads, (2) ingests and cleans new incoming data at high throughput, (3) serves data with low latency, and (4) is energy-efficient. This challenging goal entails many research directions, such as identifying opportunities to reduce data movement, designing adaptable data structures that harmonize with Data Science workloads, and the creative use of new storage resources (e.g., NVRAM, fast SSDs, etc.).
Contact me for more details.
Data Management for the Internet of Things
The Internet of Things (IoT) is a fast-growing field that produces vast amounts of data. In fact, it is estimated that the data produced by IoT workloads alone in 2025 will exceed all of the data produced in 2020. Naturally, this is an excellent opportunity for storage research.
Research Direction: From a storage systems perspective, IoT workloads pose serious challenges in terms of resource management. Numerous IoT settings make use of battery-powered devices with limited energy, low storage and data processing capacities, and unreliable connectivity. An interesting direction is determining at what granularity such systems should store data at the sensor-, edge-, and cloud-levels, while developing energy-efficient schemes for data-filtering and data-movement between these layers. In addition, the nature of the collected IoT data raises compelling questions as well. One possible avenue is designing data layouts that are suitable for storing vast amounts of noisy data, which may also contain high levels of redundancy (e.g., in video surveillance systems).
Contact me for more details.
Reimagining Storage Building Blocks for Fast Devices
Emerging storage technologies challenge fundamental assumptions in computer systems design. One major assumption being challenged is the significant performance gap between memory and persistent storage access. This gap is now bridged by byte-addressable persistent memory (e.g., Intel 3D XPoint). Another assumption is that I/O bandwidth is the main bottleneck in storage systems. This too has changed with the development of new fast drives (e.g., Intel Optane NVMe SSDs), which shift the bottleneck to the CPU. In addition, the storage stack is getting deeper and more heterogeneous: in a typical server, developers and system administrators will likely have to manage RAM, persistent memory, and several types of SSDs and hard disks.
Research Direction: These hardware advances provide an opportunity to redesign the basic storage building blocks, such as file systems, caching policies, key-value stores, and relational databases, as well as to revisit the appropriate level of support that the Operating System should provide. Ultimately, given that the hardware and the workloads keep evolving, my long-term vision is to create a framework that automatically generates storage systems that meet the desired performance requirements, given the workload profile and a set of generic hardware characteristics (e.g., sequential access speed relative to random access speed, bandwidth, storage capacity, etc.) as inputs.
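As a toy illustration of the kind of mapping such a framework would have to learn, the sketch below picks a storage-engine family from two coarse hardware traits. The function name and thresholds are purely illustrative assumptions, not rules from any actual system:

```python
def choose_engine(seq_over_rand_speed: float, io_bound: bool) -> str:
    """Pick a storage-engine family from coarse hardware traits.

    seq_over_rand_speed: how much faster sequential access is than random.
    io_bound: whether the device, rather than the CPU, is the bottleneck.
    Thresholds are illustrative, not derived from real measurements.
    """
    if seq_over_rand_speed > 10:
        # Disk-like device: batch writes sequentially.
        return "log-structured merge tree"
    if io_bound:
        # Fast SSD, but I/O still dominates: keep data sorted on disk.
        return "B-tree"
    # NVMe-class device: the CPU is the bottleneck, so minimize CPU work.
    return "unsorted, share-nothing log"
```

A real framework would of course consider far more inputs (workload mix, capacity, energy budget), but even this toy rule shows how a handful of hardware parameters can flip the preferred design.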
Contact me for more details.
Key-value stores (KVs) are a crucial component in cloud computing because they can efficiently handle large-scale, diverse data (e.g., deployed in the infrastructure of Google, Apple, Facebook, and Amazon). In my dissertation, Redesigning Persistent Key-Value Stores for Future Workloads, Hardware, and Performance Requirements, I proposed new techniques to improve persistent KVs. I designed and built four novel open-source systems: TRIAD, FloDB, SILK, and KVell. For an overview, have a look at my job talk.
TRIAD focuses on the disk utilization of KVs. Through its three complementary techniques acting at the memory, disk, and commit-log levels, TRIAD drastically reduces write amplification in persistent storage and the effect of KV maintenance operations. The reduced write amplification leads to a commensurate throughput improvement for the client-facing workload. Industry Impact: This work is currently used in production at Nutanix and was featured on Mark Callaghan's Small Datum blog. [pdf] [code] [slides]
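To see why write amplification matters, a common back-of-the-envelope estimate for a leveled LSM tree is sketched below. This simplified model (not TRIAD's actual analysis) assumes each byte is written once to the commit log, once on flush, and roughly `fanout` times per remaining level during compaction:

```python
def lsm_write_amplification(num_levels: int, fanout: int) -> int:
    """Approximate bytes written to disk per byte of user data.

    Simplified leveled-compaction model: 1 write for the commit log,
    1 for the memtable flush to the first level, and ~fanout rewrites
    at each of the remaining (num_levels - 1) levels.
    """
    return 2 + fanout * (num_levels - 1)
```

With a typical fanout of 10 and 4 levels, this estimate gives 32 bytes of device writes per user byte, which is why techniques that cut write amplification translate directly into client-visible throughput.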
FloDB addresses the issue of scalability with the memory size and with the number of threads in persistent KVs, again resulting in important gains in throughput for client workloads. The main contribution is a new two-layer data-structure design which is highly concurrent and improves the data flow from clients, to memory, to disk. [pdf] [code] [slides]
SILK addresses the issue of tail latency in log-structured merge KVs, stemming from significant interference between client work and KV maintenance operations. The interference creates a bottleneck at the I/O bandwidth level. SILK prevents tail latency spikes through a novel opportunistic I/O bandwidth scheduling mechanism. Academic Impact: This work received one of three Best Paper Awards in USENIX ATC '19 (top 3 out of 356 submissions). [pdf] [code] [slides] [talk-MSR]
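The core of opportunistic bandwidth scheduling can be sketched in a few lines: client requests get priority, and internal maintenance work soaks up whatever bandwidth is left, subject to a minimum floor so compactions never starve. This is a simplified illustration of the idea, not SILK's actual scheduler:

```python
def allocate_bandwidth(total_mb_s: float,
                       client_demand_mb_s: float,
                       min_internal_mb_s: float = 0.0):
    """Split device bandwidth between client and internal (maintenance) I/O.

    Clients get priority; internal operations opportunistically use the
    remainder, but are guaranteed at least min_internal_mb_s so that
    maintenance work (e.g., flushes and compactions) never starves.
    """
    client = min(client_demand_mb_s, total_mb_s)
    internal = max(total_mb_s - client, min_internal_mb_s)
    # If the internal floor pushes us over capacity, throttle clients.
    client = min(client, total_mb_s - internal)
    return client, internal
```

For example, when clients demand only a quarter of the device bandwidth, internal operations can run at full speed in the background; when clients saturate the device, they are throttled just enough to preserve the maintenance floor, which is what keeps tail latencies stable.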
KVell provides surprising insights into new storage technologies and their impact on current persistent KV designs. The emergence of fast drives shifts the bottleneck from I/O bandwidth to the CPU, making it necessary to revisit previous fundamental design assumptions, such as maintaining the sorted order of data and making use of complex synchronization primitives. [pdf] [code] [slides] [talk-SOSP by Baptiste Lepers]
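One way to avoid complex synchronization primitives entirely is share-nothing partitioning: each worker thread owns a disjoint slice of the key space, so its in-memory index needs no locks. The sketch below is a hypothetical simplification of that design principle (not KVell's implementation); a stable hash is used so that routing is deterministic across runs:

```python
import zlib

NUM_WORKERS = 4  # illustrative; in practice, typically one worker per core


def worker_for(key: bytes) -> int:
    """Route a key to the single worker that owns it.

    Because exactly one worker ever touches a given key, per-worker
    indexes can be plain unsynchronized data structures.
    """
    # crc32 is a stable hash (unlike Python's salted built-in hash()).
    return zlib.crc32(key) % NUM_WORKERS
```

The trade-off is that cross-partition operations (such as sorted range scans) become more expensive, which is exactly the kind of assumption worth revisiting when the CPU, not the device, is the bottleneck.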
[Thesis] Redesigning Persistent Key-Value Stores for Future Workloads, Hardware, and Performance Requirements. Oana Balmau. Doctoral Dissertation, The University of Sydney, 2020. Advised by Prof. Willy Zwaenepoel. PhD Committee: Dr. Ricardo Bianchini, Prof. Vijay Chidambaram, Prof. Frans Kaashoek. [pdf]
[TOCS '20] SILK+: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores Running Heterogeneous Workloads. ACM Transactions on Computer Systems Special Issue. O. Balmau, F. Dinu, W. Zwaenepoel, K. Gupta, R. Chandhiramoorthi, D. Didona. Invited paper.
[SOSP '19] KVell: the Design and Implementation of a Fast Persistent Key-Value Store. Symposium on Operating Systems Principles 2019 (14% acceptance ratio). B. Lepers, O. Balmau, K. Gupta, W. Zwaenepoel. [pdf] [code] [slides] [talk]
[USENIX ATC '19] SILK: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores. USENIX Annual Technical Conference 2019 (19% acceptance ratio). O. Balmau, F. Dinu, W. Zwaenepoel, K. Gupta, R. Chandhiramoorthi, D. Didona. Best Paper Award! Invited to publish in ACM Transactions on Computer Systems (TOCS) special issue. [pdf] [code] [slides] [talk-MSR]
[NETYS '19] The Fake News Vaccine. The International Conference on Networked Systems 2019. O. Balmau, R. Guerraoui, A-M. Kermarrec, A. Maurer, M. Pavlovic, W. Zwaenepoel. [pdf-arXiv]
[USENIX ATC '17] TRIAD: Creating Synergies Between Memory, Disk and Log in LSM Key-Value Stores. USENIX Annual Technical Conference 2017 (21% acceptance ratio). O. Balmau, D. Didona, R. Guerraoui, W. Zwaenepoel, H. Yuan, A. Arora, K. Gupta, P. Konka. [pdf] [code] [slides]
[EuroSys '17] FloDB: Unlocking Memory in Persistent Key-Value Stores. The European Conference on Computer Systems 2017 (20% acceptance ratio). O. Balmau, R. Guerraoui, V. Trigonakis, I. Zablotchi. [pdf] [code] [slides]
[SPAA '16] Fast and Robust Memory Reclamation for Concurrent Data Structures. ACM Symposium on Parallelism in Algorithms and Architectures (24% acceptance ratio). O. Balmau, R. Guerraoui, M. Herlihy, I. Zablotchi. [pdf] [code]
[SmartGridComm '14] Evaluation of RPL for medium voltage power line communication. IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids. O. Balmau, D. Dzung, A. Karaağaç, V. Nesovic, A. Paunovic, Y-A. Pignolet, N. Tehrani.
[SmartGridComm '14] Recipes for faster failure recovery in Smart Grid communication networks. IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids. O. Balmau, D. Dzung, Y-A. Pignolet.
Awards and Honors
USENIX ATC 2019 Best Paper Award.
University of Sydney Faculty of Engineering and IT Dean’s Postgraduate Research Scholarship.
EPFL Fellowship for Doctoral Studies.
EPFL Teaching Assistant Award for Teaching Excellence.
Brown University Presidential Fellowship for Incoming Graduate Students.
EPFL Excellence Fellowship for the Master Studies.
EuroSys '21 Program Committee member.
SIGMOD '21 research track Program Committee member.
Outside of research, I enjoy:
Yoga. I am a certified Hatha and Yin Yoga teacher (400h), maintaining a daily asana, pranayama and meditation practice.
Dancing. I practice a variety of styles including salsa, modern, and bellydancing.
Scuba Diving and Hiking. I love exploring the underwater world and the mountains during my travels.