We've identified several key perception needs and developed plans to address them.
Our robot needs to detect:
The user's chair position for safe navigation and positioning
The application area on the user's leg to properly apply ointment
Obstacles in the navigation path for safe autonomous movement
Pressure/contact with the skin to ensure comfortable application
For our initial prototype, we will implement a mix of general and minimum viable solutions that balances reliability, user control, and implementation feasibility while laying the foundation for more advanced perception capabilities in future iterations. We have highlighted our top choice of solution for each perception need.
The human-in-the-loop components are particularly important for our target users, who value autonomy and control over their care. By keeping users involved in key decisions like application pressure, we maintain their agency while still providing robotic assistance for the physically challenging aspects of ointment application.
User's Chair Position
Minimum Viable Solution: Environment Modification
ArUco Marker Approach: Attach ArUco markers to the user's chair at a consistent height and position
Implementation: Use ROS 2's ArUco detection packages to locate the marker and establish a precise transformation between the robot and the chair (a detection sketch follows this list)
Feasibility: Highly feasible as it requires minimal modification (a small printed marker) and is robust in varying lighting conditions
Fallback: If detection fails, the GUI can provide manual adjustment controls for the user
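As a rough sketch of the detection step, calling OpenCV's aruco module directly rather than a particular ROS 2 wrapper; the dictionary, 5 cm marker size, and calibration inputs are assumptions:

```python
# Hedged sketch: detect the chair's ArUco marker and recover its pose
# relative to the camera. The resulting (rvec, tvec) would feed the
# robot-to-chair transform.
import cv2
import numpy as np

MARKER_LENGTH = 0.05  # marker side length in meters (placeholder)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def chair_marker_pose(frame, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the first detected marker, or None on failure."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None  # detection failed -> GUI offers manual adjustment
    half = MARKER_LENGTH / 2.0
    # Marker corners in the marker's own frame, matching OpenCV's corner order
    obj_points = np.array([[-half, half, 0.0], [half, half, 0.0],
                           [half, -half, 0.0], [-half, -half, 0.0]],
                          dtype=np.float32)
    img_points = corners[0].reshape(-1, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points, img_points,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```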
Human-in-the-Loop Alternative
GUI-Based Selection: Implement a camera feed in the user interface where the user can tap/click to indicate their chair position
Implementation: Stream Stretch's camera feed to the GUI and use the click coordinates to calculate positioning adjustments (see the back-projection sketch below)
Feasibility: Works for users with enough upper-body mobility to operate a GUI
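The click-to-position math is standard pinhole back-projection. A minimal sketch, assuming intrinsics from the camera's camera_info topic and an aligned depth image:

```python
# Hedged sketch: back-project a GUI click into a 3D point in the camera frame.
# fx, fy, cx, cy come from camera calibration; depth_m is sampled from the
# aligned depth image at the clicked pixel.
def click_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole model: pixel (u, v) at depth_m meters -> camera-frame (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

The resulting camera-frame point would then be transformed into the robot's base frame to compute the positioning adjustment.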
General Solution
Furniture Detection Model: Train a computer vision model to recognize standard chairs without markers
Implementation: Use a pre-trained object detection model fine-tuned on chair images (a zero-fine-tuning starting point is sketched below)
Data Requirements: Would need a dataset of chairs in various home environments
Limitations: Less precise than ArUco markers and vulnerable to lighting/occlusion issues
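Because "chair" is already a COCO category, a pretrained detector could be evaluated before any fine-tuning. A hedged sketch with torchvision; the model choice and score threshold are assumptions:

```python
# Hedged sketch: zero-fine-tuning baseline for chair detection with a
# COCO-pretrained detector; fine-tuning on home-environment chairs would follow.
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                          FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
CHAIR = weights.meta["categories"].index("chair")

@torch.no_grad()
def detect_chairs(image_tensor, score_thresh=0.7):
    """image_tensor: float CHW in [0, 1]. Returns chair bounding boxes."""
    out = model([image_tensor])[0]
    keep = (out["labels"] == CHAIR) & (out["scores"] > score_thresh)
    return out["boxes"][keep]
```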
Application Area Detection
Minimum Viable Solution: Human-in-the-Loop
GUI Selection Interface: Develop a user interface showing the camera view of the leg, allowing the user to tap/click to select the application area and adjust movement as needed
Implementation: Use a web-based interface with the robot's camera feed where users can draw or select regions (a minimal endpoint is sketched below)
Feasibility: Highly feasible and gives users precise control over application areas
Advantages: Respects user autonomy and accommodates personalized application needs
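A minimal sketch of the web side, assuming a Flask backend; the route name and payload shape are our own placeholders, not a finished API:

```python
# Hedged sketch: record the region the user drew over the streamed camera
# frame; a separate node would map this pixel region to a target on the leg.
from flask import Flask, request, jsonify

app = Flask(__name__)
selected_region = None  # most recent selection, in image pixel coordinates

@app.post("/select_region")
def select_region():
    global selected_region
    data = request.get_json()
    # Expect a rectangle drawn on the camera feed: {x, y, width, height}
    selected_region = {k: int(data[k]) for k in ("x", "y", "width", "height")}
    return jsonify(ok=True, region=selected_region)
```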
General Solution
Segmentation: Implement a body part segmentation model to identify different areas of the leg
Implementation: Use a pre-built segmentation model or a custom model trained for medical applications (a stand-in is sketched below)
Data Requirements: Would need annotated leg images across different skin tones and lighting conditions
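Until such a dataset exists, a general-purpose segmentation model could serve as a coarse stand-in. The sketch below uses torchvision's DeepLabV3, which only yields a whole-person mask, so leg-level regions would still require the fine-tuning described above:

```python
# Hedged sketch: whole-"person" mask from an off-the-shelf segmentation model,
# as a placeholder for a dedicated body-part model.
import torch
from torchvision.models.segmentation import (deeplabv3_resnet50,
                                             DeepLabV3_ResNet50_Weights)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()
PERSON = weights.meta["categories"].index("person")

@torch.no_grad()
def person_mask(image):
    """image: CHW RGB tensor. Returns a boolean mask at the model's resolution."""
    logits = model(preprocess(image).unsqueeze(0))["out"][0]
    return logits.argmax(0) == PERSON
```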
Obstacle Detection for Navigation
Minimum Viable Solution: Human-in-the-Loop
Remote Monitoring: Provide a real-time view of the robot's path in the GUI with controls
Implementation: Stream camera and LIDAR visualizations to the GUI (a camera-side sketch follows this list)
Feasibility: Straightforward but requires active monitoring by the user
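A sketch of the camera side, assuming Stretch's RealSense publishes on the default /camera/color/image_raw topic; the encoded frames would be served to the GUI over HTTP alongside a LIDAR view:

```python
# Hedged sketch: convert the robot's camera frames to JPEG for the GUI.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class GuiFrameSource(Node):
    def __init__(self):
        super().__init__('gui_frame_source')
        self.bridge = CvBridge()
        self.latest_jpeg = None  # polled by the web server
        self.create_subscription(Image, '/camera/color/image_raw',
                                 self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        ok, buf = cv2.imencode('.jpg', frame)
        if ok:
            self.latest_jpeg = buf.tobytes()

def main():
    rclpy.init()
    rclpy.spin(GuiFrameSource())
```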
General Solution
Existing Sensors: Utilize Stretch's built-in LIDAR system
Implementation: Configure the ROS 2 Nav2 stack so its LIDAR-fed costmaps detect and avoid obstacles (a goal-sending sketch follows this list)
Feasibility: Somewhat feasible as it leverages existing hardware and standard ROS capabilities
Limitations: May miss small or low-profile obstacles
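A sketch of commanding a goal through Nav2 from Python, assuming the stack is already launched with the LIDAR feeding its costmaps (nav2_simple_commander API as of ROS 2 Humble; the waypoint is a placeholder):

```python
# Hedged sketch: drive the base toward the chair; obstacle avoidance happens
# in Nav2's LIDAR-fed costmaps, not in this script.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # blocks until Nav2's servers are up

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 1.0  # hypothetical chair-side waypoint
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    pass  # feedback could be surfaced in the GUI for remote monitoring
```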
Pressure/Contact Detection
Minimum Viable Solution: Human-in-the-Loop
User Feedback Interface: Add pressure adjustment controls and real-time feedback in the GUI
Implementation: A simple slider or buttons in the GUI to adjust pressure in real time (sketched below)
Feasibility: Very feasible and gives users direct control over comfort
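A sketch of the mapping from slider value to a force setpoint; the topic name and force ceiling are placeholders to be tuned with users:

```python
# Hedged sketch: publish the GUI slider position as a target contact force.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32

MAX_FORCE_N = 5.0  # assumed comfort ceiling in newtons, to be tuned

class PressureCommand(Node):
    def __init__(self):
        super().__init__('pressure_command')
        self.pub = self.create_publisher(Float32, '/ointment/target_force', 10)

    def on_slider(self, slider_value: int):
        """Called by the GUI with a 0-100 slider position."""
        msg = Float32()
        msg.data = max(0, min(100, slider_value)) / 100.0 * MAX_FORCE_N
        self.pub.publish(msg)
```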
General Solution
Force Estimation: Use Stretch's built-in force/effort sensing to estimate contact pressure and adjust autonomously
Implementation: Train a model to recognize cues of appropriate vs. excessive pressure (a simpler threshold-based baseline is sketched after this list)
Data Requirements: Would need varied user data (visual, auditory, etc.) to learn what each user considers excessive pressure
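Before any learned model, a simple effort threshold could serve as a safety baseline. A sketch assuming Stretch's driver reports per-joint effort on /joint_states (the joint name and limit are placeholders):

```python
# Hedged sketch: back off when the arm's reported effort suggests excessive
# contact force; a learned comfort model would later replace the threshold.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

EFFORT_LIMIT = 20.0  # placeholder threshold, calibrated per user

class ContactMonitor(Node):
    def __init__(self):
        super().__init__('contact_monitor')
        self.create_subscription(JointState, '/joint_states', self.on_joints, 10)

    def on_joints(self, msg):
        if 'joint_lift' in msg.name:  # assumed Stretch lift joint name
            effort = msg.effort[msg.name.index('joint_lift')]
            if abs(effort) > EFFORT_LIMIT:
                self.get_logger().warn('Contact force too high; backing off')
                # trigger the retract / pressure-reduction behavior here
```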
This video shows our command-line controls for running a variety of saved poses.
Please turn on audio for project context! In this video, we demonstrate a variety of robot poses, including ones where:
the user or caregiver can apply ointment onto a roller
the roller navigates to the application site
the ointment is gently applied to the user's leg