Hi! I'm currently a postdoctoral researcher at the Visual Computing Group at Harvard University.

My main research interests are immersive data visualization, large-scale data analysis, and computer graphics. In particular, I explore new techniques and develop open-source tools that leverage augmented, mixed, and virtual reality technology to enable novel ways of experiencing and analyzing data. I also have several years of research experience in representing, processing, and visualizing large-scale data such as gigapixel images and high-resolution volumes.

Below, you'll find some of my project highlights. You can also check out my complete projects and publications, and find out more about me.

DXR: Immersive Data Visualizations in Unity

DXR is a Unity package that makes it easy to create interactive, data-driven graphics in augmented reality (AR), mixed reality (MR), and virtual reality (VR), collectively known as XR: in other words, immersive data visualizations. Inspired by the Vega ecosystem, DXR uses a concise declarative JSON specification to rapidly generate immersive visualizations. Users can switch between three authoring modes: 1) an interactive in-situ graphical user interface (think Polestar), 2) high-level declarative specification (think Vega-Lite), and 3) low-level programming (think D3).
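As a rough illustration of the declarative style (the field and mark names below are illustrative, following Vega-Lite conventions, and are not DXR's exact schema), a specification for an immersive 3D scatterplot might be assembled like this, sketched in Python for clarity:

```python
import json

# Hypothetical Vega-Lite-style specification for an immersive 3D
# scatterplot; names are illustrative, not DXR's actual schema.
spec = {
    "data": {"url": "cars.json"},
    "mark": "sphere",  # a 3D mark rather than a flat 2D point
    "encoding": {
        "x": {"field": "Horsepower", "type": "quantitative"},
        "y": {"field": "MilesPerGallon", "type": "quantitative"},
        # In XR, depth becomes a third usable spatial channel.
        "z": {"field": "Weight", "type": "quantitative"},
        "color": {"field": "Origin", "type": "nominal"},
    },
}

print(json.dumps(spec, indent=2))
```

The appeal of this style is that the same short JSON document can be produced by a GUI, written by hand, or generated programmatically, which is what enables the three authoring modes.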

This project is currently being prepared for a paper submission. Learn more about DXR.

(Demo video: papertags.mp4)

Paper Tags: DIY Desktop Input Devices

Fiducial marker tracking is a popular low-cost approach for building tangible user interfaces (TUIs). However, in previous marker-tracking-based TUIs, the paper on which markers are printed has rarely been used for anything beyond serving as a printing medium. Paper Tags is a class of tangible desktop input devices, based on fiducial marker tracking, that exploits the tangibility of the markers' paper component, i.e., tangible paper markers. Paper Tags offers a versatile yet low-cost do-it-yourself (DIY) approach for augmenting the standard desktop interface without requiring additional complex hardware.
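As a hypothetical sketch (not the project's actual implementation), the pose a tracker reports for a paper marker can be mapped to desktop input events; here, rotating a marker acts like a dial and flipping it over toggles a mode:

```python
from dataclasses import dataclass

@dataclass
class MarkerObservation:
    """One frame's tracking result for a fiducial marker (illustrative)."""
    marker_id: int
    angle_deg: float  # in-plane rotation reported by the tracker
    face_up: bool     # False when the paper marker has been flipped over

def dial_event(prev: MarkerObservation, curr: MarkerObservation):
    """Map the change between two observations to a simple input event."""
    if prev.face_up != curr.face_up:
        return ("toggle", curr.marker_id)
    # Signed smallest rotation delta, handling wrap-around at 360 degrees.
    delta = (curr.angle_deg - prev.angle_deg + 180.0) % 360.0 - 180.0
    if abs(delta) > 2.0:  # small dead zone to suppress tracking jitter
        return ("rotate", curr.marker_id, round(delta, 1))
    return None

prev = MarkerObservation(7, 350.0, True)
curr = MarkerObservation(7, 10.0, True)
print(dial_event(prev, curr))  # ('rotate', 7, 20.0)
```

The dead zone and wrap-around handling matter in practice: optical tracking is noisy, and a dial that jitters or jumps by 340 degrees when it crosses zero would be unusable.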

This project is currently being prepared for a paper submission.

Sparse PDF Volumes

A sparse PDF volume is a multi-resolution representation of very high-resolution volume data that enables interactive and consistent multi-resolution volume rendering. Each voxel in every resolution level sparsely encodes the probability density function (pdf) of the intensity values in its footprint in the original data. Leveraging these pdfs makes multi-resolution algorithms, e.g., volume rendering, more accurate and consistent across resolution levels.
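Conceptually (a minimal sketch, not the paper's actual encoding or data structures), a coarse voxel's pdf can be stored as a sparse mapping from intensity value to probability over its 2×2×2 footprint in the finer level, and any function of intensity can then be integrated against it:

```python
from collections import Counter

def footprint_pdf(footprint):
    """Sparse pdf of the intensity values in a coarse voxel's footprint.

    `footprint` is the flat list of the 8 child-voxel intensities
    (a 2x2x2 block in the next finer resolution level).
    """
    counts = Counter(footprint)
    n = len(footprint)
    return {value: count / n for value, count in counts.items()}

def expected_value(pdf, f=lambda v: v):
    """Integrate a (possibly non-linear) function f against the pdf."""
    return sum(p * f(v) for v, p in pdf.items())

block = [0, 0, 0, 0, 100, 100, 100, 100]  # e.g., a sharp material boundary
pdf = footprint_pdf(block)
print(pdf)                  # {0: 0.5, 100: 0.5}
print(expected_value(pdf))  # 50.0 -- the plain average, recovered exactly
```

Unlike a single averaged value, the pdf still knows that this voxel covers two distinct materials, which is exactly the information a transfer function in volume rendering needs.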

This work was published and presented at the IEEE VIS conference in 2014.

Sparse PDF Maps

A sparse PDF map, or sPDF-map for short, is a multi-resolution representation of very high-resolution images, i.e., gigapixel images with potentially billions of pixels. In this representation, each pixel in every resolution level sparsely encodes the probability density function (pdf) of the intensity values in its footprint in the original data. This pdf enables interactive yet accurate evaluation of non-linear multi-resolution operations, e.g., bilateral filters, local Laplacian filters, and color mapping.
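Why pdfs help with non-linear operations can be shown with a toy example (a sketch under simplified assumptions, not the sPDF-map algorithm itself): applying a non-linear mapping to a coarse pixel's single averaged value generally differs from the pdf-weighted expectation of the mapped values, and only the latter matches mapping at full resolution and then downsampling:

```python
def threshold_map(v, t=50):
    """A toy non-linear color mapping: binary thresholding."""
    return 255 if v >= t else 0

# A coarse pixel whose 2x2 footprint straddles an edge in the original image.
footprint = [10, 10, 90, 90]

# Conventional mipmap: average first, then apply the non-linear mapping.
naive = threshold_map(sum(footprint) / len(footprint))  # map(mean) -> 255

# pdf-based: the coarse pixel stores {10: 0.5, 90: 0.5}; integrating the
# mapping against this pdf reproduces map-then-downsample.
pdf = {10: 0.5, 90: 0.5}
accurate = sum(p * threshold_map(v) for v, p in pdf.items())  # 127.5

print(naive, accurate)  # 255 127.5
```

Mapping the full-resolution footprint gives [0, 0, 255, 255], whose average is 127.5: the pdf-based result is exact here, while the average-first result saturates to 255.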

This work was published and presented at the SIGGRAPH Asia conference in 2012. Learn more about sparse PDF maps.