Due: Wednesday, Jan 31, 11:59pm | Canvas link (rubric & submission) | Points: 4
For this assignment you will think more about the perception capabilities your project needs, and you will get started implementing them on the Stretch robot.
Post #7: Make a speculative video prototype for your project by teleoperating the Stretch. If you have not made any major pivots since last week, the video could largely be based on your storyboard from Post #4. Show the video to one person from your target user population and try to elicit their reactions. As with the earlier lo-fi prototypes, try to get their feedback and ideas for other tasks the robot could support them with and how the solution should be designed. Your post should include your video and any reactions/feedback you got from the user (in the form of direct quotes or paraphrases).
You can use any of the teleop interfaces available for the Stretch, including the joystick, the keyboard, and the web teleop interface, as well as the interface that you developed in the Week 4 labs.
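If you want a starting point for driving the robot from a script of your own, below is a minimal keyboard teleop sketch using Hello Robot's stretch_body Python API. This is only an illustration, not the Week 4 lab interface; it assumes you run it directly on the robot, and the key bindings and step sizes are arbitrary choices.

```python
# Minimal keyboard teleop sketch using the stretch_body API (run on the robot).
# Key bindings and step sizes below are arbitrary illustrative choices.
import stretch_body.robot

robot = stretch_body.robot.Robot()
if not robot.startup():
    raise SystemExit("Could not connect to the Stretch hardware")

try:
    while True:
        key = input("w/s: drive forward/back, a/d: turn left/right, q: quit > ").strip().lower()
        if key == "q":
            break
        elif key == "w":
            robot.base.translate_by(0.1)    # forward 10 cm
        elif key == "s":
            robot.base.translate_by(-0.1)   # backward 10 cm
        elif key == "a":
            robot.base.rotate_by(0.2)       # counterclockwise ~11 degrees
        elif key == "d":
            robot.base.rotate_by(-0.2)      # clockwise ~11 degrees
        robot.push_command()                # send the queued motion to the hardware
finally:
    robot.stop()
```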
If you would like to get feedback on your video before you show it to your user, email it to the teaching staff and we will get back to you within a day.
You do not need to do fancy editing on your videos, but make sure you cut out unnecessary parts and speed up parts that are too slow. This is a good time to start learning video editing tools, as you will be making many videos throughout the quarter.
Post #8: Make a list of technical capabilities and environmental modifications that your system will need to complete the tasks you are focusing on in your project (likely what you depicted in your video for Post #7). This should include:
Perception: What objects, people, or landmarks does the robot need to be able to detect? At what level of precision and temporal continuity do they need to be detected?
Manipulation: What objects or landmarks will the robot need to physically interact with?
Navigation: What navigational capabilities does the robot need? Does it need to be able to move to arbitrary points on a map? Does it need to precisely position itself relative to landmarks in the environment? (A sketch of what a map-based navigation goal can look like in code follows this list.)
Interaction: What state information about the user does the system need to know? What explicit input or commands does the user need to communicate to the robot? What information should the robot communicate back to the user? Which of the above autonomous capabilities might need human monitoring (e.g., to stop the robot immediately if something goes wrong) or human help/control (e.g., take over driving the robot if it gets lost)?
Environment: How do you need to modify the robot's environment to make the above autonomous capabilities possible?
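Relating to the Navigation item above, the sketch below shows roughly what "move to an arbitrary point on a map" amounts to in code. It is only an illustration, assuming a ROS 1 setup with a move_base-style navigation stack (such as the one in stretch_navigation) already running with a map; the goal coordinates are hypothetical.

```python
# Minimal sketch: sending a map-frame navigation goal via move_base (ROS 1).
# Assumes a move_base-based navigation stack and a map are already running;
# the goal coordinates below are hypothetical.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # hypothetical x (meters) in the map frame
goal.target_pose.pose.position.y = 0.5     # hypothetical y (meters) in the map frame
goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
```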
Post #9: After completing the Week 5 labs, write a post demonstrating the perception capabilities you implemented on the Stretch within the context of a task that is relevant to your project (e.g., attach an ArUco marker to a relevant object and place it in a natural pose relative to the robot).
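If you are working with ArUco markers, the sketch below shows the basic detection step using OpenCV's cv2.aruco module on a saved camera frame. It assumes opencv-contrib-python 4.7 or newer; the marker dictionary and file names are placeholders, and it does not replace whatever detection pipeline the Week 5 labs set up on the Stretch.

```python
# Minimal sketch: detecting ArUco markers in a saved camera frame with OpenCV >= 4.7.
# The dictionary and file names are placeholders; use the dictionary your markers came from.
import cv2

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

corners, ids, _rejected = detector.detectMarkers(gray)
if ids is not None:
    print("Detected marker IDs:", ids.flatten())
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imwrite("frame_with_markers.png", frame)  # visual check of the detections
else:
    print("No markers detected")
```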
Submit your response on Canvas as a link to the latest post on your team's website.