Research:
While manipulating objects is relatively easy for humans, reliably grasping arbitrary objects remains an open challenge for robots. Manipulation is an essential skill for both warehouse robots and assistive robots to effectively interact with the physical world. In this research thrust, we focus mainly on vision-based manipulation, applying advanced computer vision and machine learning algorithms to improve robotic manipulation. Specifically, our research covers robotic grasping, grasping with primitive shapes, affordance understanding, and domain adaptation.
For more details, please see our selected project pages below:
Projects:
Keypoint-Based Category-Level Object Pose Tracking from an RGB Sequence with Uncertainty Estimation
Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan Birchfield
ICRA 2022
[pdf] [project]
Multi-View Fusion for Multi-Level Robotic Scene Understanding
Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan Birchfield
IROS 2021
[pdf] [project]
GKNet: Grasp Keypoint Network for Grasp Detection
Ruinian Xu, Fu-Jen Chu, and Patricio A. Vela
IJRR
[pdf] [project]
IVALab VisMan members:
fujenchu@gatech.edu — Ph.D., 2020
rnx94@gatech.edu — Ph.D. student
njtangchao96@gatech.edu — Ph.D. student
yunzhi.lin@gatech.edu — Ph.D. student
pvela@gatech.edu — Associate Professor