I build at the edge of what’s possible with multimodal AI, computer vision, and generative models. My career has been about taking ideas from zero to one: turning ambitious concepts into real, scalable products that redefine how people and businesses interact with technology.
Currently, I am a Director of Applied Science at Amazon, where I lead multimodal understanding efforts and have helped shape and launch the Nova model suite: Nova Lite, Pro, and Premier. These models were designed to meet very different needs:
Nova Lite brings ultra-low-cost, lightning-fast multimodal inference across image, video, and text.
Nova Pro strikes a balance between accuracy, speed, and cost, making it well suited for enterprise applications.
Nova Premier is our most advanced model, built for complex, high-stakes tasks.
Alongside the core models, I’ve driven Amazon’s generative creativity initiatives. I helped bring to life:
Nova Canvas, an image generation and editing model that now powers Virtual Try-On, and
Nova Reel, a text-to-video system that expanded into two-minute videos and advanced storyboarding with its 1.1 release.
I also led the launch of the Titan image generator, which pushed state-of-the-art performance in image generation.
My focus has always been not just on building powerful models, but on positioning them responsibly—ensuring Nova became a competitive multimodal platform that businesses and creators can adopt with confidence and scale.
Before Amazon, I served as Senior Director of Perception at Magic Leap, leading the World Sensing team that developed the foundation for spatial computing on ML2. My team worked on SLAM, world reconstruction, object recognition, and scene understanding—everything required for devices to understand and interact with the physical world. I also helped architect and launch Magic Leap’s AR Cloud, enabling persistence, multi-user sharing, large-scale mapping, and world understanding across devices. Some of our milestones at Magic Leap included:
Persistence and sharing in the MagicVerse SDK (2019)
XRKit for iOS and Android (2020)
3D object recognition with bounding boxes (2020)
Even earlier, I was a Senior Staff Engineer/Manager at Qualcomm Research, where I worked on computer vision and machine learning algorithms for Snapdragon chipsets, powering use cases like augmented reality, virtual reality, and drones. At Qualcomm, I also led several academic collaborations and served on the Qualcomm patent review board, helping shape innovation strategy.
My journey began in academia. I earned my Bachelor's in Electrical Engineering from the Indian Institute of Technology, Madras, and my M.S. and Ph.D. in Electrical and Computer Engineering from the University of Maryland, College Park. My doctoral research focused on multimedia forensics, exploring intrinsic and extrinsic fingerprints for information security. During my Ph.D., I interned at HP Labs (2006) on digital rights management and at Microsoft Research (2007) on machine learning and applied statistics.
Across Amazon, Magic Leap, Qualcomm, and academia, a common thread runs through my work: building systems that give machines new ways to perceive, understand, and create. Looking ahead, I’m excited to continue pushing the boundaries of multimodal AI and to collaborate with others who share that vision.
If you’re passionate about foundational models, generative AI, deep learning, or computer vision, let’s connect—we’re hiring, and I’d love to hear from you.
April 2021: Our tutorial on "Building Digital Twins for Large Scale Augmented Reality" has been accepted at ICCV 2021.
April 2021: Elevated to the grade of IEEE Senior Member for my contributions to the profession.
November 2020: Check out my talk at EdgeAI on "Spatial Computing: A Collision of Edge and Cloud Computing".
November 2020: Presented a tech talk at IIIT Hyderabad on "Perception needs for Spatial Computing headsets: Edge and Cloud".
October 2020: Presented a tech talk at Rochester Institute of Technology on "Perception needs for Spatial Computing headsets".
March 2020: Generic object recognition with 3D bounding boxes released in the latest Lumin update.
March 2020: MagicVerse SDK for XRKit on iOS and Android released.
October 2019: Promoted to Senior Director, Perception.
June 2019: Check out our tutorial at CVPR on Perception at Magic Leap.
May 2019: Released persistence and sharing features in the MagicVerse SDK for ML1 Lumin OS.
January 2019: Moved to start working on World Sensing.
August 2018: Released Magic Leap One!
April 2017: Promoted to Distinguished Fellow/Manager at Magic Leap, Inc.
December 2015: Joined Magic Leap as Principal Engineer, Perception, and moved to the Bay Area.
May 2015: Four of my patents with Qualcomm co-authors, covering computer vision, augmented reality, and peer-to-peer networks, were granted by the USPTO.