The system centers on a web-based UI (roslibjs) that Eric implemented, which teleoperates the Stretch robot by sending JointTrajectory goals to the existing Stretch Driver (the hardware node) and publishing base velocities on /stretch/cmd_vel. When the user clicks “Save Pose,” the UI publishes a String message to /save_pose, to which Sohrab’s FrameListener node (already implemented) subscribes. FrameListener captures the robot’s current transform (TF) and joint states, then writes them to a single-pose JSON file. Sohrab’s PosePlayer node (already implemented) offers a Trigger service on /pose_player; when the UI calls it, PosePlayer reads that JSON file and sends trajectory goals back to the Stretch Driver. These four nodes—WebUI, Stretch Driver, FrameListener, and PosePlayer—constitute the MVP.
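As a rough sketch of the save path, assuming ROS 2 and rclpy (the frame names, topic depths, and output filename below are illustrative guesses, not the actual implementation), FrameListener might look like:

```python
import json

import rclpy
from rclpy.node import Node
from rclpy.time import Time
from sensor_msgs.msg import JointState
from std_msgs.msg import String
from tf2_ros import Buffer, TransformListener


class FrameListener(Node):
    def __init__(self):
        super().__init__('frame_listener')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.latest_joints = None
        self.create_subscription(JointState, '/joint_states', self.on_joints, 10)
        self.create_subscription(String, '/save_pose', self.on_save, 10)

    def on_joints(self, msg):
        self.latest_joints = msg  # cache the most recent joint states

    def on_save(self, msg):
        # Assumed frames: 'map' -> 'base_link'; the real node may use others.
        tf = self.tf_buffer.lookup_transform('map', 'base_link', Time())
        t, q = tf.transform.translation, tf.transform.rotation
        record = {
            'name': msg.data,
            'translation': [t.x, t.y, t.z],
            'rotation': [q.x, q.y, q.z, q.w],
            'joints': dict(zip(self.latest_joints.name,
                               self.latest_joints.position))
                      if self.latest_joints else {},
        }
        with open('saved_pose.json', 'w') as f:  # single-pose file, per the design
            json.dump(record, f, indent=2)


def main():
    rclpy.init()
    rclpy.spin(FrameListener())
```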
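On the replay side, PosePlayer’s Trigger wiring might look like the following; the FollowJointTrajectory action name and the JSON layout mirror the sketch above and are likewise assumptions:

```python
import json

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from std_srvs.srv import Trigger
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint


class PosePlayer(Node):
    def __init__(self):
        super().__init__('pose_player')
        # Action name is an assumption about the Stretch Driver's interface.
        self.traj_client = ActionClient(
            self, FollowJointTrajectory,
            '/stretch_controller/follow_joint_trajectory')
        self.create_service(Trigger, '/pose_player', self.on_trigger)

    def on_trigger(self, request, response):
        with open('saved_pose.json') as f:  # same file FrameListener writes
            record = json.load(f)
        goal = FollowJointTrajectory.Goal()
        goal.trajectory.joint_names = list(record['joints'].keys())
        point = JointTrajectoryPoint()
        point.positions = list(record['joints'].values())
        point.time_from_start.sec = 3  # reach the pose in ~3 s (arbitrary)
        goal.trajectory.points = [point]
        self.traj_client.wait_for_server()
        self.traj_client.send_goal_async(goal)  # fire-and-forget for the sketch
        response.success = True
        response.message = f"Replaying pose '{record['name']}'"
        return response


def main():
    rclpy.init()
    rclpy.spin(PosePlayer())
```

The only contract the two nodes need beyond the ROS interfaces themselves is a shared JSON schema for the saved pose.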
Beyond the MVP, Iman’s AlignToSavedPose node (already implemented) computes the robot’s offset from an ArUco-based TF frame, then sends rotate/translate/rotate action goals to the Stretch Driver for autonomous alignment. Iman’s PoseNavigator CLI (already implemented) uses Nav2’s BasicNavigator to drive to saved map-frame poses via a service call. The missing piece is a real ArUco Detection node that publishes the marker’s TF, which Iman will develop. Once it exists, AlignToSavedPose can reliably align to patients, and the WebUI can gain a “Navigate to Saved Location” button. These features—AlignToSavedPose, PoseNavigator, and ArUco Detection—are stretch goals that build on the MVP’s foundation of teleop, pose saving, and replay.
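The rotate/translate/rotate decomposition reduces planar alignment to three one-degree-of-freedom motions. A minimal sketch of the geometry (plain Python; poses are assumed to be (x, y, yaw) tuples in a shared frame, which may differ from the real node’s representation):

```python
import math


def wrap(angle):
    """Wrap an angle to [-pi, pi) so each turn takes the short way around."""
    return (angle + math.pi) % (2 * math.pi) - math.pi


def rotate_translate_rotate(current, target):
    """Decompose a planar pose change into (turn1, distance, turn2).

    current, target: (x, y, yaw) in a shared frame, yaw in radians.
    The robot first turns to face the target position, drives straight,
    then turns to the target heading.
    """
    cx, cy, cyaw = current
    tx, ty, tyaw = target
    dx, dy = tx - cx, ty - cy
    heading = math.atan2(dy, dx)   # direction of the straight-line leg
    turn1 = wrap(heading - cyaw)   # rotate to face the target position
    distance = math.hypot(dx, dy)  # translate straight ahead
    turn2 = wrap(tyaw - heading)   # rotate to the final heading
    return turn1, distance, turn2
```

AlignToSavedPose can then issue each leg as a separate rotate or translate goal to the Stretch Driver.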
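For the navigation side, Nav2’s nav2_simple_commander package exposes the BasicNavigator API that PoseNavigator builds on; a minimal usage sketch (the pose values are placeholders, not real saved data):

```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # block until the Nav2 lifecycle nodes are up

goal = PoseStamped()
goal.header.frame_id = 'map'     # saved poses are map-frame
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 1.5       # placeholder saved pose
goal.pose.position.y = -0.5
goal.pose.orientation.w = 1.0    # identity orientation

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # distance remaining, ETA, etc.

if navigator.getResult() == TaskResult.SUCCEEDED:
    print('Arrived at saved pose')
```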