Conventional visual-computing systems such as cameras, displays, robots, and GPUs are often built in isolation, with hardware and software designed separately. This separation limits both performance and capability.
We pursue learning-based, end-to-end design, reconstruction, and control of visual-computing systems that jointly model the multi-dimensional nature of light, much as the eye and the brain co-evolve and work together.
Our work sits at the intersection of computer vision, computer graphics, optics, AI, and robotics.