I am a Computational Photography Engineer at Apple. I enjoy working at the intersection of Computer Science, Physics, and Applied Math. I received my Master's (2016) and Ph.D. (2020) in Computer Science from UNC-Chapel Hill, where I developed several novel Virtual and Augmented Reality displays. I received my B.Tech. in Electrical Engineering with a minor in Computer Science from IIT Gandhinagar, India, in 2012. I have previously worked at Samsung Research and Ricoh Innovations, interned at Microsoft Research and NVIDIA Research, and was a visiting student at Stanford University.
Email: kishore.r.318 AT gmail.com
Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics
Optical see-through augmented reality (AR) systems are next-generation computing platforms that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly mitigated over the last few years, but the AR experiences offered by today's systems remain far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported; when mutual occlusion is supported, it is only supported at a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.
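At its core, depth-corrected occlusion requires re-imaging the occlusion mask (on the spatial light modulator) at the depth of the real object being occluded. The sketch below shows only the underlying thin-lens relation; the paper derives full optimization methods and closed-form solutions, so the single-lens simplification and function name here are illustrative assumptions.

```python
def lens_power_for_occlusion(slm_distance_m, occluder_distance_m):
    """Thin-lens power (in diopters) that images the occlusion SLM,
    sitting slm_distance_m from the lens, out at the depth of the
    real object to be occluded: 1/f = 1/d_object + 1/d_image.
    Illustrative single-lens simplification with positive distances."""
    return 1.0 / slm_distance_m + 1.0 / occluder_distance_m

# e.g., an SLM 5 cm from the lens, occluding a real object 2 m away:
power = lens_power_for_occlusion(0.05, 2.0)  # 20.0 D + 0.5 D = 20.5 D
```

Sweeping `occluder_distance_m` is what makes the occlusion "varifocal": the tunable lens power is updated per target depth rather than fixed at manufacture.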
An Extended Depth-of-Field Volumetric Near-Eye Augmented Reality Display
We introduce an optical design and a rendering pipeline for a full-color volumetric near-eye display that simultaneously presents imagery with near-accurate per-pixel focus across an extended volume ranging from 15 cm (6.7 diopters) to 4 m (0.25 diopters), allowing the viewer to accommodate freely across this entire depth range. This is achieved using a focus-tunable lens that continuously sweeps a sequence of 280 synchronized binary images from a high-speed Digital Micromirror Device (DMD) projector and a high-speed, high-dynamic-range (HDR) light source that illuminates the DMD images with a distinct color and brightness at each binary frame. Our rendering pipeline converts 3D scene information into a 2D surface of color voxels, which is decomposed into 280 binary images in a voxel-oriented manner, such that 280 distinct depth positions for full-color voxels can be displayed.
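A simplified, single-channel sketch of the voxel-oriented decomposition: each pixel is lit only in the binary frame matching its depth, so the sweeping focus-tunable lens places it at the corresponding focal distance. This ignores the per-frame color/brightness assignment handled by the HDR light source, and the function name is an illustrative assumption rather than the paper's pipeline.

```python
import numpy as np

NUM_FRAMES = 280  # one binary frame per displayable depth position

def decompose(depth_index):
    """Turn a per-pixel depth-index map (values in 0..NUM_FRAMES-1)
    into NUM_FRAMES binary frames. Frame k is True exactly where a
    voxel should appear at depth position k during the lens sweep."""
    h, w = depth_index.shape
    frames = np.zeros((NUM_FRAMES, h, w), dtype=bool)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)
    frames[depth_index, rows, cols] = True
    return frames

# a tiny 2x2 "scene": each entry is the depth slot of that pixel
depth = np.array([[0, 5], [279, 100]])
frames = decompose(depth)  # shape (280, 2, 2); exactly one True per pixel
```

Because every pixel is on in exactly one frame, the depth map is recoverable as `np.argmax(frames, axis=0)`; the real pipeline additionally dithers color and brightness across frames.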
Steerable Application-Adaptive Near-Eye Displays (SIGGRAPH ETech) &
Manufacturing Application-Driven Foveated Near-Eye Displays (IEEE VR)
The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, leading to arbitrarily shaped static projection screen surfaces adapted to the targeted applications. We identify a computational optical design methodology to generate the corresponding optical components, leading to small compute and power demands. To this end, we introduce an augmented reality prototype with a moderate form factor and a large field of view. We also show that our prototype promises high resolution via a foveation technique that uses a moving lens in front of the display.
Award: Best in Show at SIGGRAPH 2018 Emerging Technologies
Best paper award nominee at IEEE VR 2019
The RealityMashers: Augmented Reality Wide Field-of-View Optical See-Through Head Mounted Displays
Optical see-through (OST) displays can overlay computer-generated graphics on top of the physical world, effectively fusing the two worlds together. However, current OST displays have a limited field of view (FOV) compared to that of the human eye and are tethered to laptops, which hinders their mobility. Furthermore, these systems are designed for single-user experiences and therefore cannot be used for collocated multi-user applications. In this paper, we contribute the design of the RealityMashers, two wide-FOV OST displays that can be manufactured using rapid-prototyping techniques. We also contribute preliminary user feedback providing insights into enhancing future RealityMasher experiences. By providing the RealityMasher's schematics, we hope to make Augmented Reality more accessible and, as a result, accelerate research in the field.
Pinlight Displays: Wide Field of View Augmented Reality Eyeglasses using Defocused Point Light Sources
We present a novel design for an optical see-through augmented reality display that offers a wide field of view and supports a compact form factor approaching ordinary eyeglasses. Instead of conventional optics, our design uses only two simple hardware components: an LCD panel and an array of point light sources (implemented as an edge-lit, etched acrylic sheet) placed directly in front of the eye, out of focus. We code the point light sources through the LCD to form miniature see-through projectors. A virtual aperture encoded on the LCD allows the projectors to be tiled, creating an arbitrarily wide field of view. Software rearranges the target augmented image into tiled sub-images sent to the display, which appear as the correct image when observed out of the viewer’s accommodation range. We evaluate the design space of tiled point light projectors with an emphasis on increasing spatial resolution through the use of eye tracking, and demonstrate a preliminary human viewable display.
Volumetric Display System
DLi Innovations: DLP® Technology Powers Unique Volumetric Mixed Reality Display Created at UNC Chapel Hill
3D printed optics
ACM SIGGRAPH Blog: 2018 Emerging Technologies Best in Show: Steerable Application-Adaptive Near-Eye Displays
3D Printing News Briefs: Siggraph Paper on Optical Design for Augmented Reality Near Eye Displays
Next Reality: Nvidia Researchers Have Developed a 3D-Printed Prototype Near-Eye Display for AR Headsets
MIT Technology Review: Microsoft Researchers Are Working on Multi-Person Virtual Reality
mspoweruser: Microsoft Research Is Working On Multi-Person Virtual Reality
Tech Times: Microsoft Lab Working On 'Comradre' Project For Shared Multi-User Augmented Reality Experience
The Bottom Line: Multi-Person Virtual Reality Is Becoming a Reality
cnet: Microsoft lab working on multiperson augmented reality
onmsft: Microsoft researchers working on multi-person mixed reality experiences
MIT Technology Review: A Headset Meant to Make Augmented Reality Less of a Gimmick
PC Mag: UNC, Nvidia Serve Up the AR Goggles of the Future
ExtremeTech: UNC and Nvidia collaborate on 'pinlight display' augmented reality breakthrough
SlashGear: NVIDIA and UNC cook up truly immersive AR wearable
techradar: Nvidia has cooked up its own AR headset, should Google Glass be worried?
Gizmodo: These New Glasses Could Make Augmented Reality Practical
gearburn: Pinlight Display technology acts like Star Wars holo-projectors
Other Publications & Projects
Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support
In this paper, we present a novel design for switchable AR/VR near-eye displays that helps resolve the vergence-accommodation conflict. The principal idea is to time-multiplex virtual imagery and real-world imagery and use a tunable lens to adjust focus for the virtual display and the see-through scene separately. With this design, prescription eyeglasses for near- and far-sighted users become unnecessary: the wearer's corrective optical prescription is integrated into the tunable lens for both the virtual display and the see-through environment. We built a prototype based on the design, comprising a micro-display, optical systems, a tunable lens, and active shutters. The experimental results confirm that the proposed near-eye display design can switch between AR and VR and provides correct accommodation for both.
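The prescription integration rests on the fact that optical powers of thin lenses in contact add, so the corrective power can simply be folded into whatever power the tunable lens is already commanding. A minimal sketch of that idea, with sign conventions and names as illustrative assumptions rather than the paper's exact calibration:

```python
def tunable_lens_power(target_diopters, prescription_diopters):
    """Power (diopters) to command on the tunable lens so that it
    provides both the focus drive (e.g., placing the virtual image,
    or passing the see-through scene) and the wearer's spherical
    correction. Thin-lens powers in contact simply add."""
    return target_diopters + prescription_diopters

# e.g., a 2.0 D focus demand (virtual image at 0.5 m) for a wearer
# with a -1.5 D (myopic) prescription:
commanded = tunable_lens_power(2.0, -1.5)  # 0.5 D
```

In the time-multiplexed design, this function would be evaluated twice per cycle: once with the virtual display's focus demand and once with the see-through scene's (often 0 D for distant real objects), with the prescription term common to both.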
A Granular Recursive Fuzzy Meta-clustering Algorithm for Social Networks
This paper builds on Zadeh's concepts of fuzzy membership and granularity to propose a fuzzy meta-clustering algorithm for creating associated profiles of networked granules. The proposed algorithm uses repeated applications of the fuzzy c-means algorithm to create a soft clustering. The representation of a granule is recursively updated using the fuzzy cluster memberships of other connected granules, obtained from the previous application of clustering. These fuzzy memberships enhance the traditional representation of a granule derived from the primary source of data, which records events such as transactions, phone calls, user sessions, security breaches, and car trips. The proposed approach extends a previous recursive meta-clustering algorithm based on crisp k-means clustering. The use of fuzzy memberships is shown to create more meaningful recursive profiles of a social network of phone users.
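A compact sketch of the recursion, assuming standard fuzzy c-means and a binary adjacency matrix over granules. This is an illustrative reimplementation (function names, the neighbour-averaging step, and the fixed round count are assumptions), not the paper's code.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: returns the membership matrix U (n x c),
    each row summing to 1, using the standard alternating updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

def recursive_meta_clustering(X, adjacency, c=2, rounds=3):
    """Recursively augment each granule's primary features with the
    averaged fuzzy memberships of its connected granules, then
    re-cluster on the enhanced representation."""
    feats = X
    for _ in range(rounds):
        U = fuzzy_cmeans(feats, c)
        deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
        neighbour_U = (adjacency @ U) / deg   # mean membership of neighbours
        feats = np.hstack([X, neighbour_U])   # primary + secondary knowledge
    return U
```

Each round plays the role of one "application of clustering" in the abstract: the memberships from round t become secondary features for round t+1.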
Recursive Meta-clustering in a Granular Network
Granular computing represents an object as an information granule. Traditionally, the information is derived from the primary source of data by recording events such as transactions, phone calls, user sessions, security breaches, and car trips. Many early data mining techniques used information granules generated from primary data sources. Recent data mining techniques such as ensemble classifiers and stacked regression use secondary sources of data obtained from initial data mining activities. Typically, these techniques use preliminary applications of data mining techniques for initial knowledge discovery. The knowledge acquired from the preliminary data mining is then used for more refined analysis. Granular computing can enable us to develop a formal framework for incorporating information from both primary and secondary sources of data. This enhanced granular representation can help us develop integrated data mining techniques. This paper proposes a novel recursive meta-clustering algorithm to demonstrate the versatility of granular computing for developing integrated data mining techniques that exploit primary and secondary knowledge sources.
Telepresence using VR headgear
This was my Ph.D. qualifiers project.
We built a telepresence system that allows multiple distant individuals to share the experience of a special event such as a rock concert, a lecture, or even a surgical procedure - any event that is observed from assigned places such as seats. This simplification enables immersion with only 360° panoramic video rather than more complicated 3D reconstruction. Live imagery of distant friends is merged into the panorama at appropriate places. The overall impression each user gets is of being immersed at the special event, sitting next to the distant friend. At any time, users can look around to see nearby and distant people at the event itself.
Wearable AR Display
This was my first-semester project at UNC-Chapel Hill. I made a wearable AR display using the Lumus DK-32 and the HiBall tracking system. The project involved writing software for binocular rendering, integrating tracking with the display, and calibrating a user wearing the display to the world coordinates of the tracking system. Recent AR displays (e.g., HoloLens, Meta) handle this in a built-in manner.
This was done in parallel with Pinlight Displays to demonstrate that narrow field-of-view augmented reality displays are ineffective at providing an immersive experience to users.
A multispectral palmprint recognition system
This report describes a recognition algorithm for multispectral (MSI) palmprint images. The recognition algorithm proposed in this work is based primarily on the concept of compressive sensing. The sparse signal to be reconstructed is assumed to be the vector of likelihood scores of the test image matched against the set of training images. We also propose a simple, physiology-driven registration step to extract the region of interest (ROI), as opposed to other approaches that use fiducial markers or complicated geometric models describing the shape of the hand. We consistently report recognition rates over 98% with an Equal Error Rate (EER) under 0.03. We also report the performance of our algorithm on single-channel grayscale images to conclusively show that recognition is greatly improved with MSI data.
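The classification step can be sketched as a sparse-representation-style classifier: the test image is represented in the span of each class's training images and assigned to the class with the smallest reconstruction residual. In this sketch, per-class least squares stands in for the single sparse solve used in compressive-sensing approaches, and all names are illustrative.

```python
import numpy as np

def classify(test_vec, train_mats):
    """Residual-based classification sketch. train_mats[c] is a
    (d, n_c) matrix whose columns are class c's training images
    (flattened palmprint ROIs); test_vec is the flattened test image.
    Returns the index of the class with minimal reconstruction error."""
    residuals = []
    for A in train_mats:
        # least-squares stand-in for the sparse coefficient solve
        x, *_ = np.linalg.lstsq(A, test_vec, rcond=None)
        residuals.append(np.linalg.norm(test_vec - A @ x))
    return int(np.argmin(residuals))
```

For MSI data, each spectral channel would contribute additional rows to `A` and `test_vec`, which is one way the extra channels can sharpen the residual gap between the correct class and the rest.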