Kishore Rathinavel

I am a Ph.D. student in the Computer Science department at UNC-Chapel Hill, where I work with Prof. Henry Fuchs on novel Near-Eye Display technologies. I received my B.Tech. in Electrical Engineering (minor in Computer Science) from IIT Gandhinagar, India, in 2012 and my M.S. in Computer Science from UNC-Chapel Hill in 2016. In 2012-13, I was an Associate Engineer at Ricoh Innovations, Bangalore, India, where I worked on computational imaging and on image recognition via compressed sensing. I have interned at Microsoft Research and NVIDIA Research.


Email: kishore AT cs.unc.edu

Resume | Google Scholar | LinkedIn

Research Projects

An Extended Depth-of-Field Volumetric Near-Eye Augmented Reality Display

We introduce an optical design and a rendering pipeline for a full-color volumetric near-eye display that simultaneously presents imagery with near-accurate per-pixel focus across an extended volume ranging from 15 cm (6.7 diopters) to 4 m (0.25 diopters), allowing the viewer to accommodate freely across this entire depth range. This is achieved using a focus-tunable lens that continuously sweeps a sequence of 280 synchronized binary images from a high-speed Digital Micromirror Device (DMD) projector, together with a high-speed, high-dynamic-range (HDR) light source that illuminates the DMD images with a distinct color and brightness at each binary frame. Our rendering pipeline converts 3-D scene information into a 2-D surface of color voxels, which are decomposed into 280 binary images in a voxel-oriented manner, such that full-color voxels can be displayed at 280 distinct depth positions.
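
For illustration, here is a minimal sketch (my own, not the paper's pipeline code) of the per-pixel depth-plane assignment, assuming the 280 planes are spaced uniformly in diopters across the swept range; the function and variable names are made up:

    import numpy as np

    NEAR_D = 1.0 / 0.15   # 15 cm  ~ 6.7 diopters
    FAR_D  = 1.0 / 4.0    # 4 m    = 0.25 diopters
    NUM_PLANES = 280      # binary frames per sweep of the focus-tunable lens

    def depth_to_plane(depth_m):
        """Map metric depth to the index of the nearest depth plane,
        assuming planes spaced uniformly in diopters (an assumption)."""
        d = np.clip(1.0 / np.asarray(depth_m, dtype=float), FAR_D, NEAR_D)
        t = (NEAR_D - d) / (NEAR_D - FAR_D)   # 0 at 15 cm, 1 at 4 m
        return np.round(t * (NUM_PLANES - 1)).astype(int)

    # Each pixel of a rendered depth map picks the binary frame in which it is lit.
    print(depth_to_plane([0.15, 0.5, 4.0]))   # plane 0, an intermediate plane, plane 279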

paper | video

Steerable Application-Adaptive Near-Eye Displays

The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, leading to arbitrarily shaped static projection screen surfaces that are adapted to the targeted applications. We identify a computational optical design methodology to generate the corresponding optical components, leading to small compute and power demands. To this end, we introduce our augmented reality prototype with a moderate form factor and a large field of view. We also show that our prototype promises high resolution through a foveation technique that uses a moving lens in front of the display.

Award: Best in Show at SIGGRAPH 2018 Emerging Technologies

abstract | video | website

The RealityMashers: Augmented Reality Wide Field-of-View Optical See-Through Head Mounted Displays

Optical see-through (OST) displays can overlay computer-generated graphics on top of the physical world, effectively fusing the two worlds together. However, current OST displays have a limited field of view (FOV) compared to human vision and are powered by laptops, which hinders their mobility. Furthermore, these systems are designed for single-user experiences and therefore cannot be used for collocated multi-user applications. In this paper we contribute the design of the RealityMashers, two wide-FOV OST displays that can be manufactured using rapid-prototyping techniques. We also contribute preliminary user feedback providing insights into enhancing future RealityMasher experiences. By providing the RealityMashers' schematics, we hope to make augmented reality more accessible and, as a result, accelerate research in the field.

paper | video

Pinlight Displays: Wide Field of View Augmented Reality Eyeglasses using Defocused Point Light Sources

We present a novel design for an optical see-through augmented reality display that offers a wide field of view and supports a compact form factor approaching ordinary eyeglasses. Instead of conventional optics, our design uses only two simple hardware components: an LCD panel and an array of point light sources (implemented as an edge-lit, etched acrylic sheet) placed directly in front of the eye, out of focus. We code the point light sources through the LCD to form miniature see-through projectors. A virtual aperture encoded on the LCD allows the projectors to be tiled, creating an arbitrarily wide field of view. Software rearranges the target augmented image into tiled sub-images sent to the display, which appear as the correct image when observed out of the viewer’s accommodation range. We evaluate the design space of tiled point light projectors with an emphasis on increasing spatial resolution through the use of eye tracking, and demonstrate a preliminary human viewable display.
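
As a rough illustration of the tiling step (not the actual rendering code, and ignoring the eye-position-dependent virtual aperture described above), here is a 1-D sketch that assigns each LCD pixel to its nearest pinlight and samples the target image along the corresponding projector ray; all names and parameter values are illustrative:

    import numpy as np

    def tile_target_1d(target, lcd_x, pin_x, gap_mm, fov_deg):
        """Illustrative 1-D resampling of a target image into pinlight tiles.
        target : 1-D target image spanning fov_deg of visual angle
        lcd_x  : LCD pixel positions (mm), pin_x : pinlight positions (mm)
        gap_mm : separation between the pinlight plane and the LCD plane
        """
        lcd = np.zeros(len(lcd_x))
        half_fov = np.deg2rad(fov_deg) / 2.0
        for i, px in enumerate(lcd_x):
            lx = pin_x[np.argmin(np.abs(pin_x - px))]   # nearest pinlight acts as this pixel's projector
            theta = np.arctan2(px - lx, gap_mm)         # direction of the ray pinlight -> pixel -> eye
            u = (theta + half_fov) / (2.0 * half_fov)   # viewing angle -> [0, 1] image coordinate
            if 0.0 <= u <= 1.0:
                lcd[i] = target[int(round(u * (len(target) - 1)))]
        return lcd

    # Example (made-up geometry): a 1080-pixel LCD strip, pinlights every 3 mm, 5 mm gap.
    lcd_x = np.linspace(-25, 25, 1080)
    pin_x = np.arange(-24, 25, 3.0)
    tiles = tile_target_1d(np.linspace(0, 1, 512), lcd_x, pin_x, 5.0, 100.0)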

paper | video | website

Other Projects

Single-Layer Occlusion Masks for Near-Eye AR Displays

Today's optical see-through AR displays do not offer the ability to make virtual content appear opaque, that is, to show correct occlusion when a real object is behind a virtual object. We demonstrate, in a wide field-of-view and compact form factor, that even a simple, imperfect occlusion mask, such as an LCD panel showing silhouettes of the virtual objects, can dramatically improve the virtual imagery, albeit with some unintended transparency near the transitions between virtual objects and the real-world background. While this is not a perfect occlusion mask, it is far better than none and yields a system that adds only a single optical layer to a standard AR headset. Opaque virtual objects can be generated by coupling the occlusion masks described here with a variety of virtual image generation mechanisms, including wide field-of-view AR displays. For the image generation mechanism in our prototypes, we use Lumus DK-32 optical see-through displays.
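
A minimal sketch of how such a silhouette mask could be produced from the rendered virtual layer (illustrative only; the threshold and dilation amount are assumptions, not values from our prototypes):

    import numpy as np

    def silhouette_mask(virtual_alpha, threshold=0.05, dilate_px=1):
        """Illustrative: turn the rendered virtual layer's alpha channel into a
        binary LCD occlusion mask (opaque where virtual content should block the world)."""
        mask = virtual_alpha > threshold
        # Optionally dilate a little so the (defocused) mask fully covers the imagery.
        for _ in range(dilate_px):
            p = np.pad(mask, 1)
            mask = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                    | p[1:-1, :-2] | p[1:-1, 2:])
        return mask  # True = LCD pixel opaque, False = transparent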

video

Telepresence using VR headgear

This was my Ph.D. qualifiers project.

We built a telepresence system that allows multiple, distant individuals to share the experience of a special event such as a rock concert, a lecture, or even a surgical procedure - any event that is observed from assigned places such as seats. This simplification enables immersion with only 360° panoramic video rather than more complicated 3D reconstruction. Live imagery of distant friends is merged into the panorama at appropriate places. The overall impression each user gets is of being immersed at the special event, sitting next to the distant friend. At any time, users can look around to see nearby and distant people at the event itself.
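
A simplified sketch of the merging step, assuming an equirectangular panorama and a known seat direction for each remote participant (the names and the billboard-style paste are illustrative, not the system's actual compositing code):

    import numpy as np

    def paste_friend(panorama, friend_rgb, seat_azimuth_deg, seat_elevation_deg):
        """Illustrative: paste a remote participant's live video into an
        equirectangular 360-degree panorama at their assigned seat direction."""
        H, W, _ = panorama.shape
        fh, fw, _ = friend_rgb.shape
        # Equirectangular mapping: azimuth -> column, elevation -> row.
        col = int((seat_azimuth_deg % 360.0) / 360.0 * W)
        row = int((90.0 - seat_elevation_deg) / 180.0 * H)
        r0, c0 = max(row - fh // 2, 0), max(col - fw // 2, 0)
        panorama[r0:r0 + fh, c0:c0 + fw] = friend_rgb[:H - r0, :W - c0]
        return panorama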

summary | report | video

Wearable AR Display

This was my first-semester project at UNC-Chapel Hill. I made a wearable AR display using a Lumus DK-32 and the HiBall tracking system. The project involved writing software for binocular rendering, integrating tracking with the display, and calibrating a user wearing the display to the world coordinates of the tracking system. Recent AR displays (e.g., HoloLens, Meta) handle this in a built-in manner.
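
The calibration boils down to a transform chain; here is a minimal sketch (illustrative, with made-up names) of how the tracked head pose and the per-user head-to-eye calibration combine into the view matrix used for rendering:

    import numpy as np

    def view_matrix(world_T_head, head_T_eye):
        """Illustrative transform chain: the HiBall tracker supplies the head pose
        in world coordinates, a one-time user calibration supplies the eye (display)
        pose relative to the tracked head, and the renderer needs world -> eye."""
        world_T_eye = world_T_head @ head_T_eye    # compose 4x4 homogeneous transforms
        return np.linalg.inv(world_T_eye)          # world -> eye (OpenGL-style view matrix)

    # One view matrix per eye (left and right) gives the binocular rendering described above.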

This project was done in parallel with Pinlight Displays to demonstrate that narrow field-of-view augmented reality displays are ineffective at providing an immersive experience to users.

video

A multispectral palmprint recognition system

This report describes a recognition algorithm for multispectral (MSI) palmprint images. The algorithm is primarily based on the concept of compressive sensing: the sparse signal to be reconstructed is the vector of likelihood scores matching the test image against the set of training images. We also propose a simple, physiology-driven registration step to extract the region of interest (ROI), as opposed to other approaches that use fiducial markers or complicated geometric models of the hand's shape. We consistently report recognition rates over 98% with an Equal Error Rate (EER) under 0.03. We also report the performance of our algorithm on single-channel grayscale images to demonstrate that recognition is greatly improved with MSI data.
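
One common way to realize the sparse-coding step is an l1-regularized fit of the test image against the stacked training images, followed by per-class reconstruction residuals; this sketch uses scikit-learn's Lasso and is not necessarily the solver or the exact formulation used in the report:

    import numpy as np
    from sklearn.linear_model import Lasso

    def classify_palmprint(train_vectors, train_labels, test_vector, alpha=0.01):
        """Sparse-representation classification sketch: express the test ROI as a
        sparse combination of all training ROIs, then pick the class whose
        coefficients reconstruct it best."""
        A = np.asarray(train_vectors, dtype=float).T   # columns = vectorized training ROIs
        y = np.asarray(test_vector, dtype=float)
        x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, y).coef_
        labels = np.asarray(train_labels)
        residuals = {c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                     for c in np.unique(labels)}
        return min(residuals, key=residuals.get)       # class with the smallest residual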

report

Recursive Meta-Clustering for Social Networks

These papers propose a recursive fuzzy meta-clustering algorithm, applied to primary and secondary data from social networks, to create more meaningful profiles of social networks.

paper 1 | paper 2

Miscellaneous

I enjoy reading books (fantasy, non-fiction). My wife is a researcher in Information Theory: website.