Collaborative Robotics
Group Project
Using the Trossen Robotics LoCoBot, a mobile base equipped with a 6-degree-of-freedom robotic arm and a head-mounted camera, we designed and implemented natural communication methods for interacting with the robot.
The objective is to integrate robot perception, audio processing, and motion planning so that the robot can understand human requests and execute tasks accordingly. We developed this project using ROS2 on Linux-based systems, working inside Docker environments.
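As a rough illustration of this kind of development setup (the image name, ROS 2 distribution, and flags here are assumptions for the sketch, not the project's actual configuration), a ROS2 workspace can be run inside Docker along these lines:

```shell
# Illustrative only: pull an official ROS 2 Docker image and open a shell in it.
docker pull osrf/ros:humble-desktop
docker run -it --rm --net=host osrf/ros:humble-desktop bash
# Inside the container, source the ROS 2 environment before using ros2 commands:
#   source /opt/ros/humble/setup.bash
```

Running in a container keeps the ROS2 dependencies isolated from the host system, which is convenient when several team members share lab machines.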
Our team completed three core tasks:
Object Retrieval – The robot recognizes an object from verbal instructions, navigates to it in a cluttered environment, and returns it to a specified location.
Sequential Tasks – The robot follows a multi-step instruction, such as finding and placing a specific item in a designated location.
Color Picking – The robot detects the colors of objects in its surroundings and retrieves the one matching the user's instruction.
“Retrieve the Banana”
“Retrieve the strawberry and place it in the basket”
“Retrieve the yellow object”
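The three example commands above can each be mapped to one of the core tasks. A minimal sketch of how such a mapping might work is shown below; the keyword vocabulary and function names are illustrative assumptions, not the team's actual implementation:

```python
import re

# Hypothetical vocabularies for the demo scene (assumed, not from the project).
KNOWN_OBJECTS = {"banana", "strawberry", "basket"}
KNOWN_COLORS = {"red", "yellow", "green", "blue"}

def parse_command(text):
    """Classify a transcribed voice command into one of the three task types."""
    words = re.findall(r"[a-z]+", text.lower())
    objects = [w for w in words if w in KNOWN_OBJECTS]
    colors = [w for w in words if w in KNOWN_COLORS]
    if colors:
        # "Retrieve the yellow object" -> pick by color
        return ("color_picking", colors[0])
    if "place" in words and len(objects) >= 2:
        # "Retrieve the strawberry and place it in the basket" -> multi-step
        return ("sequential", (objects[0], objects[1]))
    if objects:
        # "Retrieve the Banana" -> single-object retrieval
        return ("retrieval", objects[0])
    return ("unknown", None)
```

In practice the transcript would come from the audio-processing pipeline, and the parsed task would be dispatched to the corresponding perception and motion-planning routines.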
This project integrates perception, planning, and control to enable a mobile robot to understand and execute human instructions. By leveraging natural language processing, computer vision, and motion planning, the robot interprets user commands, perceives its environment, and performs tasks accordingly. Here are the detailed strategies and system architecture used to achieve this:
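To give one concrete flavor of the perception side, the color-picking task ultimately reduces to naming the dominant color of a detected object. A minimal sketch of that step, assuming the camera pipeline yields a mean RGB value per object (the hue thresholds below are illustrative, not the project's tuned values):

```python
import colorsys

def classify_color(r, g, b):
    """Map an 8-bit RGB triple to a coarse color name via HSV thresholds."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.2:
        return "white" if v > 0.8 else "gray"
    deg = h * 360  # hue in degrees
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 150:
        return "green"
    if deg < 270:
        return "blue"
    return "purple"
```

Classifying in HSV rather than RGB separates hue from brightness, which makes the thresholds more robust to the lighting changes a head-mounted camera sees as the base moves.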
01. Strategy
01. GitHub link to the code
02. Report on this project
Hanvit Cho
M.S. in Mechanical Engineering
Stanford University
Louis Conreux
M.S. in Mechanical Engineering
Stanford University
Shalika Neelaveni
Ph.D. in Mechanical Engineering
Stanford University
Tom Soulaire
M.S. in Mechanical Engineering
Stanford University