Hi! I'm currently a visualization scientist at the Visualization Core Lab at King Abdullah University of Science and Technology.
My main research interests are large-scale data visualization and analysis, immersive analytics, and computer graphics. In particular, I develop new algorithms for representing, processing, and visualizing large-scale data such as gigapixel images and high-resolution volumes and meshes. I also explore new techniques and develop open-source tools that leverage augmented, mixed, and virtual reality technology to enable novel ways of experiencing and analyzing data.
Below, you'll find some of my project highlights. You can also check out my complete projects and publications, and find out more about me.
DXR is a Unity package that makes it easy to create interactive data-driven graphics in augmented reality (AR), mixed reality (MR), and virtual reality (VR), collectively XR, i.e., immersive data visualizations. Inspired by the Vega ecosystem, DXR uses a concise declarative JSON specification to rapidly generate immersive visualizations. Users can switch between three authoring modes: 1) an interactive in-situ graphical user interface (think Polestar), 2) high-level declarative programming (think Vega-Lite), and 3) low-level programming (think D3).
This project is currently being prepared for a paper submission. Learn more about DXR.
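To give a flavor of the declarative approach, here is a minimal sketch of what such a Vega-Lite-style specification could look like, built and serialized in Python. The schema shown (a "mark" plus "encoding" channels with "field"/"type" entries, and the data file name) is illustrative, modeled on the Vega grammar the blurb cites; the actual DXR schema may differ in its details.

```python
import json

# Hypothetical declarative spec: place one 3D mark per data record in the
# scene, with position and color driven by data fields (Vega-Lite style).
spec = {
    "data": {"url": "cars.json"},  # illustrative data source name
    "mark": "sphere",              # a 3D mark suited to an immersive scene
    "encoding": {
        "x": {"field": "horsepower", "type": "quantitative"},
        "y": {"field": "mpg", "type": "quantitative"},
        "z": {"field": "weight", "type": "quantitative"},
        "color": {"field": "origin", "type": "nominal"},
    },
}

print(json.dumps(spec, indent=2))
```

The appeal of this style is that the same short JSON document can be produced by a GUI, written by hand, or generated programmatically, which is what makes the three authoring modes interchangeable.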
Fiducial marker tracking is a popular low-cost approach for enabling tangible user interfaces (TUIs). However, in previous marker-tracking-based TUIs, the paper on which markers are printed has rarely been utilized beyond serving as a printing medium. Paper Tags is a class of tangible desktop input devices, based on fiducial marker tracking, that exploits the tangibility of the markers' paper component, i.e., tangible paper markers. Paper Tags offers a versatile yet low-cost do-it-yourself (DIY) approach for augmenting the standard desktop interface without requiring additional complex hardware.
This project is currently being prepared for a paper submission.
A sparse PDF volume is a multi-resolution representation of very high-resolution volume data that enables interactive and consistent multi-resolution volume rendering. Each voxel at any resolution level sparsely encodes the probability density function (pdf) of the intensity values in its footprint in the original data. This pdf information is leveraged to make multi-resolution algorithms, e.g., volume rendering, more accurate and consistent.
This work was published and presented at the IEEE VIS conference in 2014.
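The consistency problem that per-voxel pdfs address can be seen in a toy example (this is an illustration of the underlying idea, not the paper's algorithm): a nonlinear transfer function applied to an averaged voxel value discards information, whereas keeping the distribution of values in the footprint lets us take the expectation of the transfer function instead.

```python
import numpy as np

# Nonlinear step transfer function: classify intensities above 0.5 as "bright".
def tf(v):
    return np.where(v > 0.5, 1.0, 0.0)

# A tiny 1D "volume" of 8 voxels, downsampled 4:1 into 2 coarse voxels.
fine = np.array([0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.2, 0.8])
footprints = fine.reshape(2, 4)

# Conventional multi-resolution approach: average first, then classify.
avg_then_tf = tf(footprints.mean(axis=1))

# pdf-based approach: keep the distribution of footprint values and take
# E[tf(X)]. Here the "pdf" is simply the empirical distribution of samples.
tf_then_avg = tf(footprints).mean(axis=1)

print(avg_then_tf)  # [0. 0.]   -> the bright material vanishes entirely
print(tf_then_avg)  # [0.5 0.5] -> consistent with the full-resolution result
```

Each footprint averages to exactly 0.5, so averaging before classification makes the bright material disappear at the coarse level, while the pdf-based expectation preserves it consistently across resolutions.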
A sparse PDF map, or sPDF-map for short, is a multi-resolution representation of very high-resolution images, i.e., gigapixel images with potentially billions of pixels. In this representation, each pixel at any resolution level sparsely encodes the probability density function (pdf) of the intensity values in its footprint in the original data. This pdf information enables interactive yet more accurate computation of non-linear multi-resolution operations, e.g., bilateral filters, local Laplacian filters, and color mapping.
This work was published and presented at the SIGGRAPH Asia conference in 2012. Learn more about sparse PDF maps.
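The same principle can be sketched for images. In this toy illustration (not the paper's sparse encoding scheme), a coarse pixel stores a small histogram of the intensities in its footprint as an empirical pdf; a point-wise nonlinear operator, here a gamma curve standing in for a color map, is then evaluated as an expectation over that histogram rather than applied to a single pre-averaged intensity.

```python
import numpy as np

rng = np.random.default_rng(0)
footprint = rng.random(16)  # intensities under one coarse pixel

# An 8-bin histogram serves as the coarse pixel's empirical "pdf".
bins = np.linspace(0.0, 1.0, 9)
counts, _ = np.histogram(footprint, bins=bins)
pdf = counts / counts.sum()
centers = 0.5 * (bins[:-1] + bins[1:])

def gamma(x):
    return x ** 2.2  # nonlinear point operator standing in for a color map

from_pdf = float(np.sum(pdf * gamma(centers)))  # E[gamma(X)] via the histogram
exact = float(gamma(footprint).mean())          # ground truth from all samples
naive = float(gamma(footprint.mean()))          # gamma of the averaged pixel

print(from_pdf, exact, naive)
```

Because the operator is nonlinear, the naive result diverges from the ground truth (Jensen's inequality), while the histogram-based expectation stays close to it; this is what lets sPDF-maps compute such operators at any resolution level without touching the full-resolution data.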