Assignments
WEEK #10 (3 points)
Due: Mar 18, 5:00pm
It is crunch time! Only one week left to complete your projects :-) This week you'll continue to execute the project implementation plan proposed earlier, finish your final presentation video, and complete course and peer evaluation.
1. (1.5 points) Post #1 -- Team evaluations and reflection: It is time to fill out your peer evaluations. Everyone should fill those out individually. In addition, we would like you to reflect as a team and write a brief blog post about your reflection. In particular, discuss which strategies from Week #3 Post #2 you actually adopted as a team, what other strategies you adopted along the way, which ones worked well, and which did not.
2. (1.5 points) Post #2 -- Course evaluations and feedback: Reflect individually and as a team about the Robotics Capstone Winter 2019 offering and write a blog post touching on (a) what was most fun, (b) what was most useful, (c) what was not so useful, (d) what would have been useful but was missing. As this is a continually changing course, we would like to hear from you any information that can help make future offerings better. Please also have everyone in your team fill out the official course evaluation by Mar 17, 2019, 11:59pm.
3. (Optional) As in Week #9, post an update on your latest progress on the project. As before, you may have multiple progress updates from different subgroups focusing on different aspects of the project.
Submit links (one for each blog post) as text entry on Canvas. See Canvas for more details on grading.
WEEK #9 (6 points)
Due: Mar 11, 5:00pm
This week we continue executing our plan from last week and start thinking about the video presentation.
1. (3 points) Post #1: Write a short progress update on what you accomplished in Week 9 with respect to the plan you made in Post #1 last week. At this point in the project you are likely working in parallel on several aspects of the project in subgroups. We recommend that each subgroup post a separate progress update. For example, if you have one person working on the UI, that person can be responsible for posting a progress update on that. Progress updates should include the different deliverables mentioned in your plan, e.g. pictures, videos, screenshots, or any other evidence of progress. Make sure to indicate who has been working on what.
2. (3 points) Post #2: Write a draft script for your final presentation video. See the final video assignment below for more details and example videos. If you would like feedback on your script earlier than the assignment deadline, please submit only this post on Canvas and send us a note.
Submit links for each blog post as text entry on Canvas. See Canvas for more details on grading.
FINAL VIDEO & DEMONSTRATION (20 points)
Due: Mar 18, 12:00 noon
1. (16 points) VIDEO PRESENTATION: Your final assignment involves creating a video that presents your project, based on the script you prepared in your Week #9 blog. Your video should be three minutes or shorter, and it should both introduce and "sell" your design. The following example videos might inspire you:
- Videos on Kickstarter (e.g. apps, software, robots, web, wearables, or any other) or Indiegogo (e.g. technology, search results for "apps")
- Videos created for projects in previous robotics capstones: 2016, 2017, 2018
- Videos created in HCI courses, e.g. iDrinkSafe, TasteBud, Aqueous, Neighborly, Broccoli for All
Your video script (Week #9 blog) should include a plan for narration and accompanying visuals (footage or images). Your script should be in the form of a three-column table (a hypothetical example row follows the list below).
- The first column should contain a section number and a short section title (e.g. "introduction", "demonstration of task 1" or "testimonial from test user").
- The second column should describe the visual elements that will be shown in the video (e.g. "talking head of team member A", "video of robot manipulating object", "screen recording of someone using the UI"). Include sufficient detail such that someone who reads your description could go and shoot that video for you.
- The last column should contain the narration text that will be read as voice-over, if the video does not already include relevant audio. This includes what will be said in "talking head" shots.
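For example, a single (hypothetical) row of such a script might read -- Section: "1. Introduction"; Visuals: "talking head of team member A standing next to the robot in the lab"; Narration: "Meet HelperBot, a robot that tidies shared kitchens." The product name and shot here are placeholders for your own.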
To allow for creative freedom and avoid obtaining many cookie-cutter videos, we are not constraining the sections in your video. But please make sure that your video follows these guidelines:
- Your video should clearly describe and motivate the problem. Get the viewers to care about the problem.
- Your video should clearly describe your solution and convince the viewer that it addresses the problem.
- To someone hearing about the project for the first time, it should be clear what your design is useful for.
- Your video should demonstrate the use of your robot. This can be done by walking through different tasks that your robot supports. Be sure to set context for the use and demonstrate the outcome (just like in your storyboards).
- Your video should feature positive feedback from potential users regarding your design (quotes from your interviews or new recordings of people saying what they like about your design or describing contexts in which they would use your design if it were real).
- Your video should introduce you, the team.
You will receive feedback about your scripts shortly after submitting them. Afterwards you will have a week to shoot and edit your videos. Try to incorporate any feedback you received about your script.
The evaluation of this assignment will be done by a panel of judges and it has two components:
- Evaluation of the video: The judges will assess the quality of the video itself. Besides covering the important content, your video should have appeal.
- Evaluation of the project: The judges will assess the importance and quality of your project. Note that this assessment depends on the judge's ability to understand your project based on the video.
The rating sheet that the judges will use is available here. To avoid file format related challenges and allow judges to view videos remotely, we request that you upload your video to YouTube (you can make it Unlisted if desired) and submit a link to the video on Canvas.
2. (4 points) DEMONSTRATION: In addition to the video, you will demonstrate your project live on the robot during the final presentation event. The judges will expect the core functionality shown in the video to work similarly on the robot, and they will try variations of the demonstration. You will have 10 minutes to prepare and demonstrate the capability. We encourage you to create checklists and launch scripts that simplify getting your demo running, and to rehearse the setup. One person from the team should describe the demonstration while the others set up the demo environment, run the necessary scripts, keep an eye on the debug screens, or pretend to interact with the robot for the demonstration. If a team's demo does not work during the event, a make-up time can be scheduled later during finals week with no penalty.
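As a (hypothetical) illustration of such a launch script, a single roslaunch file can bring up everything your demo needs in one command; every package, node, and file name below is a placeholder for your own:

    <launch>
      <!-- All names below are placeholders; list every node your demo needs. -->
      <node pkg="team_demo" type="perception_node.py" name="perception" output="screen" />
      <node pkg="team_demo" type="demo_executor.py" name="demo_executor" output="screen" />
      <node pkg="team_demo" type="web_ui_backend.py" name="web_ui" output="screen" />
    </launch>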
WEEK #8 (7 points)
Due: Mar 4, 5:00pm
This week we get started on fabrication and start ramping up our project-specific implementation. Here are the blog posts we would like to see for this assignment.
1. (3 points) Post #1: You have only three weeks left to complete your projects! The good news is you now know almost everything you need to implement your projects. From now on, the labs will be optional and will introduce additional tools that might help some teams. So this is a good time to create an action plan for the next three weeks (including this week) so you can complete your projects on time. To that end, your first blog post should include the following:
- A short description of your minimal and stretch goals in terms of capabilities you will demonstrate.
- A graphical representation of your final system architecture showing the different ROS nodes that need to be implemented and the interfaces (messages, services, actions) between them. Annotate this graph to indicate (i) what is already implemented, (ii) what needs to be implemented for your minimal goal, and (iii) what needs to be implemented for your stretch goal.
- A three-week plan of how you will implement and test your system. Clearly indicate what you expect to have implemented by the end of Weeks 8, 9, and 10, and how you will demonstrate that it is working robustly.
- (Optional) A list of additional labs you would like to have to help with your projects.
2. (2 points) Post #2: In the fabrication tutorial (this Thursday) you will learn to design objects that can be laser cut or 3D printed. As part of this assignment, you will create a design for something that you would like to fabricate as part of your projects, e.g. a custom tool for your robot to handle objects; a custom handle to attach to objects that the robot will need to manipulate; attachments to the robot so it can carry certain items; etcetera. Let us know if you are unsure about what to design. Post a picture of your CAD model and briefly explain what it is for.
3. (2 points) Post #3: Write a short progress update on what you accomplished in Week 8 with respect to the plan you made in Post #1. At this point in the project you will likely start working in parallel on several aspects of the project in subgroups. We recommend that each subgroup post a separate progress update. For example, if you have one person working on the UI, that person can be responsible for posting a progress update on that. Progress updates should include the different deliverables mentioned in your plan, e.g. pictures, videos, screenshots, or any other evidence of progress. Make sure to indicate who has been working on what.
Submit three (or more) links (one for each blog post) as text entry on Canvas. See Canvas for more details on grading.
WEEK #7 (7 points)
Due: Feb 25, 5:00pm
This week we jump into robot perception, starting with so-called "AR markers" or "fiducials."
FETCH PROJECTS:
1. (4 points) Post #1: Labs 27-29 will show you how to work on perception problems in simulation and introduce you to a package for tracking fiducials in the pointcloud stream from Fetch’s PrimeSense camera. Using this and other functionality developed in previous assignments, you will develop a new system that allows you to: (a) save end-effector poses relative to a fiducial and (b) move the robot’s arm to the same relative poses after the fiducial has been moved. Using this system you will be able to define different actions for manipulating objects that have fiducials attached to them, simply by saving a sequence of end-effector poses and gripper states (open/closed) relative to the fiducial on the object.
The system should allow you to specify end-effector poses by physically moving the robot's arms to the desired poses. It should detect the pose based on the robot's sensor data (i.e., using TF). You will need some kind of interface (command-line or web-based) to save poses after moving the arm to each pose. For each pose, you will need to use your interface to specify which fiducial the pose is relative to (or if the pose is relative to the robot's base). Once you have finished defining an action, you should use your interface to save the action to a file. You also need to build a program executor that can load an action from a file and execute it.
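For reference, here is a minimal sketch of the TF lookup at the heart of such a system. It assumes your fiducial tracker publishes a frame named fiducial_1 and uses Fetch's wrist_roll_link as the end-effector frame; both names are assumptions to replace with the frames your system actually publishes.

    #!/usr/bin/env python
    # Minimal sketch: record the current end-effector pose in a fiducial's frame.
    # Frame names are assumptions; use whatever your tracker and URDF publish.
    import rospy
    import tf

    rospy.init_node('pose_recorder')
    listener = tf.TransformListener()
    listener.waitForTransform('fiducial_1', 'wrist_roll_link',
                              rospy.Time(0), rospy.Duration(5.0))
    trans, rot = listener.lookupTransform('fiducial_1', 'wrist_roll_link',
                                          rospy.Time(0))
    # (trans, rot) is the gripper pose relative to the fiducial. Appending a list
    # of such poses (plus gripper open/closed states) to a file, e.g. with pickle,
    # gives you a saved action that your executor can load and replay.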
To demonstrate the tools developed in this assignment, assume that you have a box-shaped object with a fiducial (F1) on its front face. Use your tool *on the real robot* to define the following three actions:
- Push: Make the object with F1 move side-to-side by pushing it one way
- Poke: Make the object with F1 tip over by poking it from front-to-back
- Pick-and-place: Pick up the object with F1 from the top. Place the object on a second fiducial, F2.
Make a video to show how the tool is used for specifying three different actions as well as the execution of each action in two different initial configurations of the object with F1. Post your video with a brief description of what is being shown.
Your actions should actually succeed. So if you find that the action fails when you test it, you need to go back and re-define your actions differently (e.g. try having a different number of poses, using different pose configurations, or adjusting the motion speed).
2. (3 points) Post #2: Labs 30-32 will get you started on processing the robot's PointCloud stream to detect surfaces and segment objects that are on a surface. For this assignment we would like you to start thinking about using this functionality for your projects. Choose a representative object that needs to be perceived for your project (e.g. tennis ball) and tune the methods you learn in the labs to work for segmenting that particular object (a sketch of this kind of tuning follows the list below). Your blog post should include the following:
- A list of objects/landmarks that need to be perceived for your project. You might already have this in Assignment #2, Post #2 (or in later revisions of your technical requirements). Discuss with the instructors if you are unsure what the full list is.
- A screen capture video showing a visualization of the outcome of your segmentation algorithm (similar to the screenshot in Lab 32) in a scene containing the object you chose for this assignment. Use a placeholder for objects that you do not have yet. Demonstrate the outcomes for different configurations of the scene by moving the objects around. Describe how you tuned the method to make it work for the specific object.
- For each object/landmark that needs to be perceived for your project, discuss the robustness of the clustering-based perception system you developed, how it might be improved, and whether it is a better option than attaching a fiducial to the object.
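For reference, here is a rough sketch of the kind of tuning Post #2 asks about, written against the python-pcl bindings; the lab code may use C++ PCL instead, but the tunable parameters are the same, and all thresholds below are starting guesses rather than recommended values.

    # Sketch: plane removal + Euclidean clustering with python-pcl.
    import pcl

    cloud = pcl.load('scene.pcd')  # placeholder; in practice convert from PointCloud2

    # Fit the dominant plane (the table) with RANSAC and remove it.
    seg = cloud.make_segmenter()
    seg.set_model_type(pcl.SACMODEL_PLANE)
    seg.set_method_type(pcl.SAC_RANSAC)
    seg.set_distance_threshold(0.01)   # tune: how "thick" the table surface is
    plane_indices, coefficients = seg.segment()
    objects = cloud.extract(plane_indices, negative=True)

    # Cluster what remains into candidate objects.
    tree = objects.make_kdtree()
    ec = objects.make_EuclideanClusterExtraction()
    ec.set_ClusterTolerance(0.02)      # tune: gap size that separates two objects
    ec.set_MinClusterSize(100)         # tune: reject small specks of sensor noise
    ec.set_MaxClusterSize(25000)       # tune: reject wall- or person-sized blobs
    ec.set_SearchMethod(tree)
    cluster_indices = ec.Extract()     # one list of point indices per object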
KURI PROJECTS:
1. (4 points) Post #1: Labs 28b-29b will show you how to work on perception problems in simulation and introduce you to a package for tracking fiducials in images from Kuri's camera. You will combine this new capability with functionality developed in previous weeks to allow specifying a sequence of robot configurations relative to a detected fiducial and navigating to those configurations when the fiducial moves. Using this tool you will be able to define robot actions relative to objects (augmented with a fiducial) in the robot's environment.
The system should first detect the fiducial and record its initial pose. All robot poses saved after this should be in the reference frame of the fiducial. The system should allow you to specify robot poses by manually driving the robot to the desired pose and saving it through a UI of your choice.
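The recording and playback steps come down to two lines of transform arithmetic; here is a minimal sketch, assuming the fiducial and robot poses are available as 4x4 homogeneous matrices in a common frame such as the map (the matrices themselves would come from TF lookups):

    # Sketch: record the robot's pose in the fiducial's frame, then recompute the
    # navigation goal after the fiducial has moved. Inputs are 4x4 matrices.
    import numpy as np
    from tf import transformations as tft

    def matrix_from_pose(trans, quat):
        """Build a 4x4 homogeneous transform from a translation and quaternion."""
        m = tft.quaternion_matrix(quat)
        m[0:3, 3] = trans
        return m

    def record(T_map_fiducial, T_map_robot):
        """Recording: the robot pose expressed in the fiducial's frame."""
        return np.dot(np.linalg.inv(T_map_fiducial), T_map_robot)

    def playback_goal(T_map_fiducial_new, T_fiducial_robot):
        """Playback: the map-frame goal after the fiducial has moved."""
        return np.dot(T_map_fiducial_new, T_fiducial_robot)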
To demonstrate the functionality of this tool you will program Kuri to perform two actions relative to a detected fiducial:
- Circle around the object on which the fiducial is attached
- Push the object on which the fiducial is attached by about 10 inches
Make a video to show how the tool is used for specifying two different actions as well as the execution of each action in two different initial configurations of the object relative to the robot. Post your video with a brief description of what is being shown.
Your actions should actually succeed. So if you find that the action fails when you test it, you need to go back and re-define your actions differently (e.g. try having a different number of poses, using different pose configurations, or adjusting the motion speed).
2. (3 points) Post #2: Labs 30b-32b will guide you through a speech processing tool integrated with ROS, called Sphinx. You will train a new grammar for Sphinx based on representative verbal interactions between the user and the robot that you wish to support in your project. For this, try to come up with five verbal dialogs (supporting core functionality of your robot), each with 4-10 turns (each utterance by the human or the robot counts as a turn). Once you have trained your grammar, make a video of someone from your team verbally interacting with Kuri to demonstrate the five interactions. Post the video with a short description (a sketch of the grammar format follows this list).
3. (Optional) Post any changes or refinements to your business model, design, target use cases, or technical/environmental requirements.
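For reference (Post #2 above): Sphinx grammars are written in the JSGF format. A minimal hypothetical sketch follows, in which every rule and word list is a placeholder for your own vocabulary.

    #JSGF V1.0;
    grammar kuri_commands;
    // One rule per kind of utterance to recognize; [] marks optional words.
    public <command> = [kuri] <action> <object>;
    <action> = find | follow | push;
    <object> = the ball | the cup | me;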
Submit two (or three) links (one for each blog post) as text entry on Canvas. See the rubric on Canvas for details on grading.
WEEK #5-6 (7 points)
Due: Feb 13 (WED), Feb 18 (MON), 5:00pm
This week you will continue to learn about ROS and Fetch/Kuri capabilities that will form the basis of your proposed projects. Your blog posts for the week include two videos based on lab milestones as well as some work on your project ideas.
FETCH PROJECTS:
- (2 points) Post #1: Labs 19-22 will introduce you to Inverse Kinematics (IK) and motion planning with obstacles and constraints. At the end of Lab 22 you will develop an RViz-based interface that involves an InteractiveMarker, shaped like the robot's gripper, which can be moved around to specify a target gripper pose and send the robot to that pose. When moved around, the marker should change color to reflect whether the corresponding gripper pose is reachable by the robot or not (a reachability-check sketch follows this list). After testing it in simulation, you will test this interface *on the real robot* to grasp a soft object. To do this, use your interface to (i) send the robot gripper to a "pre-grasp" pose near the object, (ii) then send it to a "grasp" pose, (iii) close the gripper, and (iv) send the gripper to a "lift" pose. Make a video that shows both the use of your RViz interface and the robot successfully grasping the object. See Lab 26 for additional tips on how to accomplish this part of the assignment. Post this video with a brief description of what it demonstrates.
- (3 points) Post #2: Labs 23-25 will introduce you to coordinate frame transforms using TF and transformation matrices. At the end of Lab 26 you will have an RViz-based interface that involves an InteractiveMarker corresponding to a world object, which can be moved around. You will use transform arithmetic to compute "pre-grasp", "grasp", and "lift" poses for the gripper relative to the current pose of the object (see the sketch after this list). You can use any type of visualization to indicate whether an object pose has reachable gripper poses associated with it (e.g. change the color of the object and attach text, or separately visualize the associated gripper poses in different colors). After testing it in simulation, you will test this interface *on the real robot* to grasp a soft object. To do this, use your interface to (i) place your interactive object marker approximately where the object appears in the robot's pointcloud stream, and (ii) send the robot to the computed pre-grasp, grasp, and lift poses one at a time (close the gripper before lifting). See Lab 26 for additional tips on how to accomplish this part of the assignment. Make a video that shows both the use of your RViz interface and the robot successfully grasping the object. Post this video with a brief description of what it demonstrates.
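A minimal sketch of the two key pieces in these posts: composing a pre-grasp pose from the object's pose by transform arithmetic, and asking MoveIt whether a gripper pose is reachable (e.g. to color your marker). The offset, planning group name, and service name reflect a standard MoveIt setup and are assumptions to check against your own configuration.

    # Sketch: grasp poses via transform arithmetic + reachability via MoveIt's
    # compute_ik service. Assumes rospy.init_node(...) has already been called.
    import numpy as np
    import rospy
    from tf import transformations as tft
    from moveit_msgs.srv import GetPositionIK, GetPositionIKRequest

    # Pre-grasp 10 cm behind the object, expressed in the object's own frame.
    T_obj_pregrasp = tft.translation_matrix([-0.10, 0.0, 0.0])

    def pregrasp_in_base(T_base_obj):
        """Gripper pose in base frame = object pose in base * offset in object."""
        return np.dot(T_base_obj, T_obj_pregrasp)

    def is_reachable(pose_stamped):
        """True if MoveIt finds an IK solution for the given gripper pose."""
        rospy.wait_for_service('compute_ik')
        compute_ik = rospy.ServiceProxy('compute_ik', GetPositionIK)
        req = GetPositionIKRequest()
        req.ik_request.group_name = 'arm'   # Fetch's planning group, an assumption
        req.ik_request.pose_stamped = pose_stamped
        resp = compute_ik(req)
        return resp.error_code.val == resp.error_code.SUCCESS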
KURI PROJECTS:
- (2 points) Post #1: Labs 19b-20b will introduce you to face detection on the Kuri and guide you through an exercise to make Kuri visually servo towards a detected face. The robot will control its wheels and head pose to keep a detected face roughly at the center of its camera image and at a particular size (a control sketch follows Post #2 below). Make a video of the servoing behavior demonstrating how (i) Kuri rotates, moves away, and moves forward to keep the detected person at a certain distance, and (ii) Kuri follows a person as they move continuously (but not too fast). Post this video with a brief description of what it demonstrates.
- (3 points) Post #2: Labs 21b-26b will introduce you to coordinate frame transforms using TF and transformation matrices, and then guide you through making Kuri look at a 3D point using simplified inverse kinematics and navigate to a point relative to the pose of a detected face. It will also ask you to create emotional expressions for Kuri which can be triggered throughout the robot's execution. By the end of these labs you will make Kuri detect a face and localize it in 3D (expressing happiness when the face is present); and navigate in front of the person when commanded (responding to the command by nodding). Make a video of the full execution of this behavior and post it with a short description.
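A minimal sketch of the proportional control behind the servoing in Post #1; the gains, target size, sign conventions, and topic name are all assumptions to tune and check on the robot.

    # Sketch: keep a detected face centered and at a fixed apparent size.
    import rospy
    from geometry_msgs.msg import Twist

    K_TURN, K_DRIVE = 0.002, 0.001  # proportional gains (tune on the robot)
    TARGET_WIDTH = 100.0            # desired face width in pixels (tune)

    rospy.init_node('face_servo')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)  # topic is a placeholder

    def servo(face_cx, face_width, image_width):
        """Call from your face detector callback; arguments are in pixels."""
        twist = Twist()
        twist.angular.z = -K_TURN * (face_cx - image_width / 2.0)  # turn to center
        twist.linear.x = K_DRIVE * (TARGET_WIDTH - face_width)     # keep distance
        pub.publish(twist)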
- (2 points) Post #3: Post a bullet list of objects, components, tools, and materials you would like to purchase or fabricate in order to create the demo scenario for your project. Include a justification of why you need each item, and, if applicable, a link for purchasing (Amazon.com when available). Make sure your list is comprehensive. Your budget is roughly $150 (excluding tablets/smartphones which we already have), but exceptions can be made if justified.
- (Optional) Post any changes or refinements to your business model, design, target use cases, or technical/environmental requirements.
Submit three (or four) links (one for each blog post) as text entry on Canvas. See the rubric on Canvas for details on grading.
WEEK #4 (7 points)
Due: FEB 4 (MON), FEB 6 (WED), 5:00pm
The next group assignment involves the following blog posts based on the tools you will develop in this week’s labs.
- (2 points) Post #1: Labs 11-15 will introduce you to a visualization tool in ROS, called RViz, and teach you how to create custom interactive visualizations of different types of information. By the end of Lab 15 you will have implemented the following functionalities in RViz: (a) custom "Markers" that provide a visualization of the robot's path as you drive it around, and (b) custom "InteractiveMarkers" that allow you to trigger robot actions, such as moving or turning the robot using odometry (a Marker-publishing sketch follows this list). Make a screen capture video of your RViz interface demonstrating both capabilities: drive the robot around using the InteractiveMarkers you created and demonstrate the Marker-based visualization of the path it takes. Post this video with a short description of what it demonstrates.
- (2 points) Post #2: Labs 16-17 will introduce you to the basics of robot navigation and teach you how to make a map, localize in it, and navigate to a target configuration in it, through RViz and through code. By the end of Lab 17 you will make a tool that allows you to annotate a map by driving the robot to a desired location in the map and using command line inputs to trigger the annotation process and name the location. The same tool should allow you to send the robot to a previously annotated location on the map (a navigation-goal sketch follows this list). Make a screen capture video that demonstrates driving the robot around to annotate distinct locations on the map and sending the robot to those locations. Post this video with a short description of what it demonstrates.
- (3 points) Post #3: Lab 18 involves putting together what you learned in the previous exercises to build a tool for annotating a map through a web/RViz interface and sending the robot to annotated locations. Make a screen capture video demonstrating the use of this tool to annotate distinct locations on the map and sending the robot to those locations. Post this video with a short description of what it demonstrates.
- (Optional) Post any changes or refinements to your lean canvas (i.e. your business model), sketch (i.e. your design), storyboard (i.e. your target use cases), or technical/environmental requirements based on the feedback you get from peers or the teaching staff or based on your own elaborations.
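For Post #1, the path visualization boils down to appending odometry positions to a LINE_STRIP Marker; a minimal sketch, with topic and frame names as placeholders:

    # Sketch: visualize the robot's path in RViz as a growing LINE_STRIP Marker.
    import rospy
    from nav_msgs.msg import Odometry
    from visualization_msgs.msg import Marker

    rospy.init_node('path_visualizer')
    pub = rospy.Publisher('path_marker', Marker, queue_size=1)

    marker = Marker()
    marker.header.frame_id = 'odom'         # frame is a placeholder
    marker.type = Marker.LINE_STRIP
    marker.pose.orientation.w = 1.0         # identity orientation
    marker.scale.x = 0.02                   # line width in meters
    marker.color.g = marker.color.a = 1.0   # opaque green

    def odom_callback(msg):
        marker.points.append(msg.pose.pose.position)
        marker.header.stamp = rospy.Time.now()
        pub.publish(marker)

    rospy.Subscriber('odom', Odometry, odom_callback)
    rospy.spin()

For Post #2, sending the robot to a saved location is a single actionlib call to move_base; another minimal sketch under the same caveats:

    # Sketch: send the robot to a previously saved map pose via move_base.
    # Assumes rospy.init_node(...) has already been called.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)

    def go_to(saved_pose):
        """saved_pose: a geometry_msgs/Pose recorded by your annotation tool."""
        client.wait_for_server()
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose = saved_pose
        client.send_goal(goal)
        client.wait_for_result()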
Submit three (or four) links (one for each blog post) as text entry on Canvas. See the rubric on Canvas for details on grading.
WEEK #3 ASSIGNMENT (8 points)
Due: JAN 28, 5:00pm
Your next assignment will demonstrate the successful completion of this week's labs and build on your previous assignments to push your project ideas forward. Here is what we would like to see on your blog next week. Before you begin, (a) make sure you complete the optional Step 7 of last week's assignment and (b) send Ethan the GitHub usernames of all your team members along with your assigned team number (1-6).
1. (3 points) Post #1: The first set of labs will guide you through ways of controlling different actuators of the robots in ROS and making a browser-based user interface integrated with ROS. By the end of these labs, you will be able to develop a tele-operation interface that lets you remotely control different parts of the robots.
FETCH PROJECTS: Complete Labs 1-10 and use the browser-based teleoperation tool to control the simulated Fetch robot to do the following tasks:
- Drive from the starting pose to the table
- Raise the torso and lower the head to look at the table
- Move the arm (and possibly the base and torso) and open/close the gripper to pick up the blue cube on the table, without colliding with the table.
The tele-operator should be able to do these tasks without looking at the simulation (Gazebo) screen and without relying on their memory. In other words, the teleoperation interface should display sensory information to help guide the operator.
Make a screen capture video of the teleoperation process. At the end, open the Gazebo window to show that the object was successfully picked up by the Fetch robot.
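Whichever robot you use (the Kuri variant below follows the same pattern), the backend of such a teleop interface ultimately publishes velocity commands; a minimal sketch, with the topic name a placeholder since Fetch and Kuri use different ones:

    # Sketch: the core of a base-teleoperation backend.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('base_teleop')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)  # topic is a placeholder

    def drive(linear, angular):
        """Called by your web UI (e.g. via rosbridge) with the desired speeds."""
        twist = Twist()
        twist.linear.x = linear    # m/s, forward/backward
        twist.angular.z = angular  # rad/s, turning
        pub.publish(twist)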
KURI PROJECTS: Since our Kuri labs are currently under development, your task for this week is less structured and involves more exploration, but we will work more closely with you to help out, as your efforts will contribute to future Kuri labs. The Fetch Labs 1-10 will serve as guidelines and you can still use the Fetch starter code as your templates. You will be able to skip some of the labs but be sure to read through them nonetheless for potentially useful general advice. Your goal is to develop a browser-based teleoperation interface that allows you to:
- Drive Kuri from the starting pose to another landmark
- Move Kuri's head to look around
- Open and close Kuri's eyelids
The teleoperation interface should display Kuri's eye camera image to help guide the operator. Make a screen capture video of the teleoperation process demonstrating the three capabilities above.
To complete this step, upload your video to YouTube and post a link on your blog with a description of all elements of your tele-operation interface.
2. (1 point) Post #2: Assign team roles to everyone in your project team, considering common potential roles in this class. Write a blog post that describes everyone's roles and concrete responsibilities. Consider weekly responsibilities as well as quarter-long project responsibilities. In addition, write a paragraph about strategies you will follow to ensure everyone in the team acquires the knowledge and skills they hope to get out of the class (e.g. everyone in the team learns ROS).
3. (2 points) Post #3: Next, you will critique two project proposals by your peers. Review the sketch, storyboard, technical requirements (Post #2 in Assignment #2) and the associated Lean canvas (Post #1 in Assignment #2) for each project that was assigned to your team. Provide a critique in the following format:
- What do you like about the project?
- What are potential issues with the project? Try to identify potential flaws in the business model as well as implementation challenges that the team might not have considered or might have underestimated.
- Any suggestions you have for resolving the identified issues or pivoting the project.
We will separately post the assignment of projects that each team will critique with links to the relevant posts on Canvas.
4. (Optional) You will likely pivot or refine your projects throughout the course of the quarter. We would like you to keep documenting these changes or refinements on your blog. Post any changes or refinements to your business model (i.e. lean canvas), your design (i.e. sketch), your target use cases (i.e. storyboard), or technical/environmental requirements based on the feedback you get from peers and the teaching staff, or based on your elaborations as a team.
Submit three links (one for each blog post) as text entry on Canvas. See the rubric on Canvas for details on grading.
WEEK #2 ASSIGNMENT (8 points)
Due: JAN 22 (TUE), 5:00pm
This week we will finalize teams and each team will get started on selecting and detailing a project, as well as learning how to program robots.
- (0 points) Go to the Canvas groups page, claim one of the groups (1-6), and add all members of your team to the group.
- (0 points) Have one team member create a team blog and add all others to the blog so they can post on it.
- (0 points) Work as a team to choose your project topic from the shortlist selected by the teaching staff.
- (3 points) Post #1: Create a one-page business model ("lean canvas") for your selected project and post it on your team blog with a short statement of why you chose this project. (example)
- (4 points) Post #2: Create a sketch and a storyboard for your chosen project and post them on your blog with a list of (i) all the environmental modifications needed to make your project feasible and (ii) all the technical capabilities you will need to implement (e.g. object recognition and localization, people recognition, navigation, pick-and-place, etcetera) with details of their scope (e.g. which objects need to be recognized/manipulated, etcetera). (example)
- (1 point) Choose a company name and a product name (they could be the same) and create a logo. Update your blog with this information. You can change these later, but pick something to start with.
- (0 points) Make sure everyone in your team watches the following videos: Video-1, Video-2, Video-2.1, Video-2.2. Then complete Lab 0, which involves going through the basic official ROS tutorials as a team. You can also leave this to next week but this is an opportunity to get a head start on learning ROS and completing prerequisites for next week's assignments.
Submit two links (one for each blog post) as text entry on Canvas. Make sure your blog has the company/product name and the logo embedded (for Step 6).
WEEK #1 ASSIGNMENT (9 points)
Due: JAN 14, 5:00pm
Since you do not yet have teams, this assignment will be done individually. Here is what you need to do.
- (0 points) If you do not yet have one, create a GitHub and a Tumblr account.
- (0 points) Start forming a project team. The number of people in a team will be four or five. If you are looking for the rest of your team (i.e. you are one person or two/three people), send an email to Maya by the above deadline and we will work to pair you up.
- (3 points) Post #1: As a first step, please read the project scope description on the Home page. Then search for examples of prior work on assistive robots and post a short project profile of your selected example, outlining the addressed problem and the developed solution (see previously posted example). Include "Prior work:" in the title of your post.
- (6 points) Post #2: Post a short description for one of your own assistive robot project ideas. Your idea should be based on a real problem (need or pain point) experienced by a particular user group. The problem should be supported with evidence from one of the following:
- Personal experience with the target user group as member or caregiver
- A contextual inquiry or interview of one or more people from the target user group (ask Maya if you would like contact information)
- Publicly available documentation (research paper, report, discussion forums, video blogs) about challenges faced by the user group
- Your post should include "Project idea:" in the title and have the following parts:
- Specify the user group and state the problem with the supporting evidence. Provide pictures and quotes if available.
- Describe the proposed robot-based solution, using one of the two robot platforms that will be available this quarter. Answer the question of why a robot is a good solution for the problem, as opposed to solutions using other technologies.
- Present an argument for the tractability and feasibility of the solution (e.g. does not rely on the ability to pick up arbitrary objects).
Even if you already have a team you need to do this assignment independently. Team projects will be chosen later from a short list selected by the teaching staff. Submit two links (one for each blog post) as text entry on Canvas.