A. Design
The robot must be safe and intuitive for blind users to interact with. It should also be capable of functioning effectively in cluttered kitchen environments. Given its role in handling utensils, the gripper needs to be precise enough to manage thin or irregularly shaped objects like forks and spoons, potentially with the help of our custom-designed hardware.
User feedback emphasized the critical role of spatial consistency. Blind users often rely on objects being placed in fixed and familiar positions. As a result, a key request was for the robot to help preserve this consistency by returning utensils to precise and predefined spots.
B. Robot Platform
We use the Stretch RE2 mobile manipulator, developed by Hello Robot. Stretch has a differential-drive base and a 3-degree-of-freedom manipulator: a telescoping arm that extends 50 cm horizontally and a prismatic lift that reaches 110 cm vertically. A 1-degree-of-freedom gripper is attached to the arm through a rotational joint. For sensing, Stretch has a lidar sensor on its base, a RealSense camera on a pan-tilt head, and two fixed fish-eye cameras: one with an overhead view of the base and arm, the other with a view of the gripper.
C. Custom Hardware
We aim to develop a custom adaptive gripper extension specifically for handling thin utensils like spoons and forks. To better manage these challenging shapes, the design may incorporate features such as compliant rubber tips and a sliding pressure mechanism for more secure gripping.
We chose a ROS 2 architecture for flexibility and future extensibility. Saved poses are stored in a separate file so the robot can reuse location data across sessions, and task planning (task_manager_node) is separated from motion execution (goal_sender_node, execute_pose_node) to keep the high-level control structure clear.
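For concreteness, the separate pose file could be a small JSON document keyed by location name; the schema, field names, and values below are our illustration, not a fixed format:

```json
{
  "utensil_drawer": { "frame": "map", "x": 1.20, "y": 0.45, "theta": 1.57 },
  "dish_rack":      { "frame": "map", "x": 0.30, "y": 2.10, "theta": 0.00 }
}
```

Keeping this data outside the nodes lets the task_manager_node reread it on startup, so saved locations survive restarts.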
D. Software
The ROS system is structured into four primary subsystems: Perception, Navigation, Manipulation, and Interface.
The Interface layer includes the ui_command node, which processes user inputs and forwards commands to the task_manager node. This node manages high-level task coordination and delegates specific actions to the appropriate motion nodes. An error-handling layer ensures robustness when a command is unrecognized or an action fails.
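The dispatch-and-error-handling pattern we have in mind for task_manager can be sketched in plain Python; the class and method names here are hypothetical, not the node's actual API:

```python
class TaskManager:
    """Hypothetical stand-in for the dispatch logic inside task_manager."""

    def __init__(self):
        self.handlers = {}   # command name -> callable
        self.errors = []     # human-readable error log

    def register(self, command, handler):
        self.handlers[command] = handler

    def handle(self, command, *args):
        """Run a command; log rather than crash on unknown or failing commands."""
        handler = self.handlers.get(command)
        if handler is None:
            self.errors.append(f"unknown command: {command}")
            return False
        try:
            handler(*args)
            return True
        except Exception as exc:
            self.errors.append(f"{command} failed: {exc}")
            return False
```

Logging failures instead of raising keeps the interface responsive for the user even when a downstream motion node rejects a request.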
The Perception subsystem consists of the camera_processor and tf2_listener. When a save_pose command is triggered, these components collaborate to obtain the latest pose information from the broadcaster. This pose data is temporarily stored in pose_buffer and then written to a separate file. When replaying a saved pose, the system retrieves the transform using the listener and commands the execution node to move the robot accordingly.
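Stripped of its ROS dependencies, the role of pose_buffer can be sketched as follows; the record schema and method names are assumptions for illustration:

```python
import json
import time

class PoseBuffer:
    """Hypothetical sketch of pose_buffer: holds the most recent pose
    reported by the tf2 listener until a save_pose command commits it."""

    def __init__(self):
        self._latest = None

    def update(self, frame, translation, rotation):
        # Called whenever the listener observes a fresh transform.
        self._latest = {
            "frame": frame,
            "translation": list(translation),
            "rotation": list(rotation),
            "stamp": time.time(),
        }

    def commit(self, name, path):
        # Called on save_pose: append the buffered pose to the pose file.
        if self._latest is None:
            raise RuntimeError("no pose has been observed yet")
        try:
            with open(path) as f:
                store = json.load(f)
        except FileNotFoundError:
            store = {}
        store[name] = self._latest
        with open(path, "w") as f:
            json.dump(store, f, indent=2)
        return store[name]
```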
The Navigation subsystem is responsible for robot movement. It receives tasks from the task_manager node and uses the goal_sender node to interact with the nav2_stack. This stack includes components such as the map server for environment mapping and localization tools to assist movement.
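One detail the goal_sender node must handle is converting a saved utensil pose into a base goal the nav2_stack can reach; the sketch below shows one way to do it, where the 0.5 m standoff and the function name are our assumptions:

```python
import math

def base_goal_for(target_x, target_y, approach_theta, standoff=0.5):
    """Hypothetical helper: compute a base goal `standoff` metres back
    from a saved utensil pose, facing along the approach direction, so
    the telescoping arm can reach the target after navigation."""
    return (
        target_x - standoff * math.cos(approach_theta),
        target_y - standoff * math.sin(approach_theta),
        approach_theta,
    )
```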
The Manipulation subsystem features the execute_pose_node, which carries out tasks delegated by the task_manager node and directs the arm_controller to move the arm to the desired position. It also includes a grasp_planner that plans the gripper's motion.
This diagram represents the high-level task logic organized as a finite state machine (FSM) that governs the robot’s operational workflow. It outlines distinct states—such as locating the object, picking it up, navigating, placing it, and confirming placement—while transitions between these states are determined by success or failure conditions.
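The FSM's transition logic can be expressed compactly; the failure policy below (retry from locating the object, except a failed confirmation, which re-attempts placement) is one plausible reading of the diagram, not a specification:

```python
from enum import Enum, auto

class State(Enum):
    LOCATE = auto()     # find the utensil
    PICK = auto()       # grasp it
    NAVIGATE = auto()   # drive to the target location
    PLACE = auto()      # put the utensil down
    CONFIRM = auto()    # verify placement
    DONE = auto()

# Transitions taken when the current state reports success.
ON_SUCCESS = {
    State.LOCATE: State.PICK,
    State.PICK: State.NAVIGATE,
    State.NAVIGATE: State.PLACE,
    State.PLACE: State.CONFIRM,
    State.CONFIRM: State.DONE,
}

def step(state, succeeded):
    """Advance the FSM one transition based on a success/failure signal."""
    if state is State.DONE:
        return state
    if succeeded:
        return ON_SUCCESS[state]
    # Assumed failure policy: a failed confirmation re-attempts placement;
    # any other failure restarts from locating the object.
    return State.PLACE if state is State.CONFIRM else State.LOCATE
```

A retry cap leading to a terminal failure state would be a natural extension, but is omitted here for brevity.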