Plan, with justification, for how we will handle navigation capabilities in our project:
Our project uses a kitchen environment. When a kitchen item is moved away from a location designated by the user (who is blind), the robot returns the item to that location. We therefore use a map that covers the kitchen environment and the table where the user has designated the item locations.

We perform global mapping of our own kitchen environment with the slam_toolbox package. The robot needs to explore the entire kitchen to search for items that have been moved from their user-designated locations.

For localization, the Stretch nav2 stack uses AMCL (adaptive Monte Carlo localization), which estimates where the robot is on the map from LiDAR data and publishes the robot's current estimated pose as a pose message (the robot's x, y, and orientation on the map) on the /amcl_pose topic. Through this, we can localize the robot.

Finally, we will attach ArUco markers to the user-designated locations so the robot can recognize those positions accurately, and we will use local servoing to approach each marker precisely. The robot detects the marker with its camera and, using logic such as compute_difference and align_to_marker, finely adjusts its position to place the item exactly at the marker location.
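The pose published on /amcl_pose carries orientation as a quaternion, while our planning logic only needs the heading on the map plane. A minimal sketch of extracting that yaw angle; the function name `quaternion_to_yaw` is ours, and the math is the standard yaw extraction for a z-up map frame:

```python
import math

def quaternion_to_yaw(qx: float, qy: float, qz: float, qw: float) -> float:
    """Extract the yaw (heading on the map plane) from the quaternion
    found in the orientation field of an /amcl_pose message."""
    # Standard yaw extraction for a z-up frame.
    siny_cosp = 2.0 * (qw * qz + qx * qy)
    cosy_cosp = 1.0 - 2.0 * (qy * qy + qz * qz)
    return math.atan2(siny_cosp, cosy_cosp)

# Example: a pure rotation of 90 degrees about the z axis.
yaw = quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```

In the running system this would be fed from a subscriber on /amcl_pose (message type PoseWithCovarianceStamped), with the x and y fields read directly from the pose position.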
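The plan names compute_difference and align_to_marker without defining them, so the following is only a sketch of what such servoing logic might look like, assuming the marker pose is reported in the camera frame (x lateral, z forward, in meters) and a simple proportional control law. The gains, the 0.3 m standoff distance, and the tolerance are illustrative values, not measured ones:

```python
import math

def compute_difference(marker_x: float, marker_z: float,
                       target_offset: float = 0.3):
    """Offset between the robot and a desired standoff pose in front of
    the marker. target_offset is a hypothetical standoff distance (m)."""
    return marker_x, marker_z - target_offset

def align_to_marker(marker_x: float, marker_z: float,
                    k_lin: float = 0.5, k_ang: float = 1.0,
                    pos_tol: float = 0.01):
    """One step of a proportional servoing law.
    Returns (linear_velocity, angular_velocity, done)."""
    dx, dz = compute_difference(marker_x, marker_z)
    if math.hypot(dx, dz) < pos_tol:
        return 0.0, 0.0, True              # aligned: stop the base
    linear = k_lin * dz                    # drive toward the standoff distance
    angular = -k_ang * math.atan2(dx, dz)  # steer to center the marker
    return linear, angular, False
```

In use, each camera frame would yield a fresh marker pose, and the returned velocities would be published as a Twist on the base velocity topic until `done` is True, at which point the item can be placed at the marker location.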