Rabab Alomairy
Researcher in Julia Lab @ MIT
I am a Postdoctoral Fellow at MIT's Julia Lab and a recipient of the prestigious KAUST Ibn Rushd Fellowship. I earned my PhD in Computer Science from King Abdullah University of Science and Technology (KAUST), where I was part of the Extreme Computing Research Center. My research spans high-performance computing, task-based numerical libraries, GPU programming, and AI-accelerated scientific applications, with an emphasis on performance optimization for multicore and manycore architectures. I have collaborated with leading institutions, including Oak Ridge National Laboratory, the Innovative Computing Laboratory at the University of Tennessee, and MINES ParisTech, contributing to the DOE-funded SLATE project during my internship at UTK. I also led the first Julia tutorial on productive high-performance computing at the Supercomputing Conference, reflecting my commitment to community building and education. My work has scaled across the world's top supercomputers and earned honors including two ACM Gordon Bell Prize Finalist awards, the IEEE CS TCHPC Early Career Researchers Award for Excellence in High Performance Computing, the Gauss Award, and the KAUST Research Excellence Award. In recognition of my impactful work, I was named a Rising Star in Computational and Data Sciences by the U.S. Department of Energy in 2022.
My research focuses on algorithm–hardware co-design for emerging AI-era supercomputers. I work at the intersection of high-performance numerical algorithms, task-based parallelism, and mixed-precision computing, with the goal of building scalable, hardware-aware methods that run efficiently on heterogeneous architectures. I develop GPU-accelerated and tensor-core-optimized kernels, recursive and taskified linear algebra algorithms, and symbolic task-graph execution models that reduce data movement and improve concurrency. My work has been deployed and scaled on leading systems, including Fugaku, Frontier, Shaheen II, HAWK, and Alps, giving me a deep understanding of performance portability across ARM-, IBM-, Intel-, AMD-, and NVIDIA-based architectures. I am also interested in bridging HPC and AI by enabling fast, energy-efficient computation for large-scale scientific applications such as Gaussian process regression, materials modeling, genomics, and climate analytics. Ultimately, my work aims to create performance-portable, self-optimizing numerical software that can adapt to future CPUs, GPUs, and accelerator technologies.