May 2018
In this project, a mobile platform can be controlled in a simulation environment via keyboard or visual servoing to carry out tasks including SLAM, face detection, and face recognition.
Tasks:
1. Build a 2D grid map from laser scan data and display it in RViz
2. Control the mobile robot in the simulation environment with the keyboard
3. Image recognition and localization
4. Visual servoing by following the yellow ball
5. Current room number recognition
6. A top-level launch file
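Task 4 above (following the yellow ball) can be sketched as a proportional control law: the ball's centroid offset from the image centre drives the turn rate, and its apparent size drives the forward speed. In the actual project these commands would be published as a geometry_msgs/Twist from a ROS node; the function name, gains, and camera resolution below are assumptions, kept free of ROS dependencies so the sketch is self-contained.

```python
def servo_command(cx, area, img_width=640,
                  target_area=8000.0, k_ang=0.005, k_lin=0.00005):
    """Return (linear, angular) velocity from the detected yellow blob.

    cx          -- centroid column of the blob (pixels)
    area        -- blob area in pixels^2, a rough proxy for distance
    img_width   -- assumed camera image width in pixels
    target_area -- blob area at the desired standoff distance (assumed)
    k_ang/k_lin -- hand-tuned proportional gains (assumed)
    """
    # Steer so the ball moves toward the image centre.
    angular = -k_ang * (cx - img_width / 2.0)
    # Drive forward if the ball looks small (far away), back up if too close.
    linear = k_lin * (target_area - area)
    return linear, angular
```

With the ball centred and at the target size the command is zero; a ball seen to the right and far away yields a turn plus forward motion (the sign convention for `angular` depends on the robot's frame and is an assumption here).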
Environment:
The simulation environment used for this project is V-REP, a robot simulator based on a distributed control architecture in which each object can be controlled individually through various methods, including a plugin, an embedded script, a remote API client, or a ROS node.
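Task 1 (building a 2D grid map from laser scans) boils down to projecting each range reading from the robot's pose into the map frame and snapping the beam endpoint to a grid cell. The real project would consume sensor_msgs/LaserScan messages and render the map in RViz; the function below is a minimal, ROS-free sketch, and the 0.1 m default resolution is an assumption.

```python
import math

def scan_to_cells(ranges, angle_min, angle_increment,
                  pose=(0.0, 0.0, 0.0), resolution=0.1):
    """Return the set of grid cells hit by the scan's beam endpoints.

    ranges          -- list of range readings in metres
    angle_min       -- bearing of the first beam (rad)
    angle_increment -- angular step between beams (rad)
    pose            -- robot (x, y, heading) in the map frame
    resolution      -- side length of a grid cell in metres (assumed)
    """
    x0, y0, th = pose
    cells = set()
    for i, r in enumerate(ranges):
        if not math.isfinite(r) or r <= 0.0:
            continue  # skip invalid readings (inf/NaN/zero)
        a = th + angle_min + i * angle_increment
        # Beam endpoint in the map frame, then snapped to a cell index.
        x = x0 + r * math.cos(a)
        y = y0 + r * math.sin(a)
        cells.add((int(math.floor(x / resolution)),
                   int(math.floor(y / resolution))))
    return cells
```

Marking endpoint cells as occupied is only the first half of occupancy-grid mapping; a full implementation would also trace each beam (e.g. with Bresenham's line algorithm) to mark the traversed cells as free.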