Weekly Gathering for

Vision Researchers and Enthusiasts

12:00 pm - 1:00 pm, 32-D507

Click here to join the mailing list and receive the latest talk information.

Hybrid Bayesian Eigenobjects - Toward Unified 3D Robot Perception

Speaker: Ben Burchfiel

Date: Tuesday, March 24, 2019

Time: 12:00 pm - 1:00 pm

Location: 32-D507


Abstract:

Hybrid Bayesian Eigenobjects (HBEOs) are a novel representation for 3D objects that leverages both convolutional (deep) inference and linear subspace methods to enable robust reasoning about novel 3D objects. HBEOs allow joint estimation of the pose, class, and full 3D geometry of a novel object observed from a single (depth-image) viewpoint in a unified, practical framework. By combining linear subspace methods with deep convolutional prediction, HBEOs offer improved runtime, data efficiency, and performance compared to preceding purely deep or purely linear methods. In this talk, I discuss the current state of 3D object perception, HBEOs (and their predecessor, BEOs), and the path forward toward reliable perception in cluttered and fully unstructured environments.
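To give a flavor of the linear-subspace half of the approach, the sketch below learns a low-dimensional "eigenobject" basis from flattened voxel grids via PCA and projects a novel object into it. This is a minimal illustration of the general subspace idea, not the talk's actual method; the function names, toy data, and dimensions are hypothetical.

```python
import numpy as np

def learn_eigenobject_basis(voxel_grids, k):
    """Learn a k-dimensional linear subspace (eigenobject basis) from
    flattened voxel grids via PCA. Illustrative only."""
    X = voxel_grids.reshape(len(voxel_grids), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]  # basis has shape (k, D)

def project(voxels, mean, basis):
    """Project an object's voxel grid into the low-dimensional subspace."""
    return basis @ (voxels.ravel().astype(float) - mean)

def reconstruct(coeffs, mean, basis, shape):
    """Map subspace coefficients back to a full voxel grid."""
    return (mean + coeffs @ basis).reshape(shape)

# Toy example: 20 random 8x8x8 "objects", compressed to 5 coefficients.
rng = np.random.default_rng(0)
train = (rng.random((20, 8, 8, 8)) > 0.5).astype(float)
mean, basis = learn_eigenobject_basis(train, k=5)
coeffs = project(train[0], mean, basis)           # compact 5-vector
approx = reconstruct(coeffs, mean, basis, (8, 8, 8))
```

The appeal of the subspace representation is that a full 3D shape collapses to a handful of coefficients, so downstream estimation (and, in HBEOs, the deep network's prediction target) lives in a small, well-behaved space rather than in raw voxels.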


Bio:

Benjamin Burchfiel is a 6th-year PhD candidate at Duke University in the Intelligent Robot Lab (IRL), supervised by Professor George Konidaris. Benjamin's primary area of research lies at the intersection of computer vision and robotics; his thesis work develops a unified framework for more robust object-centric robot perception and reasoning. In addition to robot perception, Benjamin has research interests and experience in reinforcement learning, learning from demonstration, skill transfer, and natural language understanding. Benjamin received his BSc in Computer Science from the University of Wisconsin-Madison and his MSc in Computer Science from Duke University. Benjamin's webpage can be found at benburchfiel.com.

More resources: MIT CSAIL