Assignments

WEEK #10

TODOs

  • Complete the peer evaluation for all your teammates by June 7th. Everyone should fill those out individually, once for each teammate (i.e., three times).

  • Complete the official course evaluation by June 5th.

Assignment 10

Due: Jun 7 **Tuesday**, 11:59pm | Canvas link (rubric & submission) | Points: 6

PART 1: Project update (2 points)

  • Write a short progress update on what you accomplished in Week 10 with respect to the plan you made in Assignment 7. At this point in the project you are likely working in parallel on several aspects of the project in subgroups. We recommend that each subgroup posts a separate progress update. For example, if you have one person working on the UI, that person can be responsible for posting a progress update on that. Progress updates should include the different deliverables mentioned in your plan, demonstrated with pictures, videos, screenshots, or any other evidence of progress. Make sure to indicate who has been working on what.

PART 2: Team reflection (2 points)

  • Write a short reflection on how teamwork went in your project. In particular, discuss which strategies from Week 2 (Part 2) you actually adopted as a team, which other strategies you adopted along the way, which ones worked well, and which ones did not.

PART 3: Course reflection (2 points)

  • Reflect individually and as a team about the Robotics Capstone Spring 2022 offering and write a few sentences touching on (a) what was most fun, (b) what was most useful, (c) what was not so useful, and (d) what would have been useful but was missing. As this is a continually changing course, we would like to hear anything that can help make future offerings better.

WEEK #9

Projects

This is project crunch time. This week you will continue to work on your project implementation and testing, trying to hit your milestones.

Assignment 9

Due: May 30 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 6

PART 1: Project update (2 points)

  • Write a short progress update on what you accomplished in Week 9 with respect to the plan you made in Assignment 7. At this point in the project you are likely working in parallel on several aspects of the project in subgroups. We recommend that each subgroup posts a separate progress update. For example, if you have one person working on the UI, that person can be responsible for posting a progress update on that. Progress updates should include the different deliverables mentioned in your plan, demonstrated with pictures, videos, screenshots, or any other evidence of progress. Make sure to indicate who has been working on what.

PART 2: Video script (2 points)

  • Write a draft script for your final presentation video. See the final video assignment below for more details about the format of the script and example videos. In particular, your script should be a Google doc (with comment/suggestion access for the instructors), with a three-column table as described. If you would like feedback on your script earlier than the assignment deadline, please send the instructors an email with a link to your script.

PART 3: Ethics reflection (2 points)

  • One of the learning objectives of all capstones is to give students "an ability to recognize ethical and professional responsibilities in engineering situations and make informed judgments, which must consider the impact of engineering solutions in global, economic, environmental, and societal contexts." To that end, this week you will write a reflection on the potential positive and negative impacts of robotic picking on society and on the environment.

FINAL VIDEO & DEMONSTRATION

Final Assignment

Due: Jun 6 Monday, 12:00pm **noon** | Canvas link (rubric & submission) | Points: 20

1. (16 points) VIDEO PRESENTATION: Your final assignment involves creating a video that presents your project. The video should be three minutes or shorter, and it should both describe and "sell" your project. The following example videos might inspire you:

---

Your video script should include a plan for narration and accompanying visuals (footage or images). Your script should be in the form of a three-column table. Here are some example scripts: Luci, Auxie.

  1. The first column should include a section number and a short section title (e.g. "introduction", "demonstration of task 1" or "testimonial from test user").

  2. The second column should be a description of the visual elements that will be shown in the video (e.g. "talking head of team member A", "video of robot manipulating object", "screen recording of someone using the UI"). Include sufficient detail so that someone who reads your description could go and shoot that video for you.

  3. The last column should be the narration text that will be read as a voice-over, if the video does not already include relevant audio. This includes what will be said in "talking head" shots.

---

To allow for creative freedom and avoid receiving many cookie-cutter videos, we are not constraining the sections in your video. But please make sure that your video follows these guidelines:

  • Your video should clearly describe and motivate the problem. Get the viewers to care about the problem.

  • Your video should clearly describe your solution and convince the viewer that it addresses the problem.

    • Someone hearing about the project for the first time should be able to tell what your design is useful for.

    • Your video should demonstrate the use of your robot. This can be done by walking through different tasks that your robot supports. Be sure to set context for the use and demonstrate the outcome (just like in your storyboards).

  • Your video should feature positive feedback from potential users regarding your design (quotes from your interviews or new recordings of people saying what they like about your design or describing contexts in which they would use your design if it were real).

  • Your video should introduce you, the team.

You will have a chance to receive feedback on your scripts as part of Assignment 9, which should be submitted on time. Afterwards you will have a week to shoot and edit your videos. Try to incorporate any feedback you received about your script.

The evaluation of this assignment will be done by a panel of judges and it has two components:

  • Evaluation of the video: The judges will assess the quality of your video. Besides covering the important content, it is important for your video to have appeal.

  • Evaluation of the project: The judges will assess the importance and quality of your project. Note that this assessment depends on the judges' ability to understand your project based on the video.

The rating sheet that the judges will use is available here. To avoid file format related challenges and allow judges to view videos remotely, we request that you upload your video to YouTube (you can make it Unlisted if desired) and submit a link to the video on Canvas (link to be added).

2. (4 points) DEMONSTRATION: In addition to the video, you will demonstrate your project live on the robot during Week 10 sections and during our demo event (Jun 7, Tuesday 11:30am-12:50pm). The instructors and judges will expect the core functionality shown in the video to work similarly on the robot and will try variations of the demonstration. You will have 10 minutes to prepare and demonstrate the capability. We encourage creating checklists and launch scripts to simplify getting your demo running, as well as rehearsing getting it to run; one way to automate part of such a checklist is sketched below. One person from the team should describe the demonstration while others set up the demo environment, run the necessary scripts, keep an eye on the debug screens, or set up the shelf for the demonstration. If a team's demo does not work during the section or the demo event, a make-up time can be scheduled later during finals week with no penalty.
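For example, a tiny "preflight" script can verify that critical topics are alive before you start. A minimal sketch, where the topic names are placeholders to replace with the ones your system actually uses:

```python
#!/usr/bin/env python
# Sketch of a pre-demo sanity check. The topics below are placeholders --
# substitute the ones your demo actually depends on.
import rospy
from sensor_msgs.msg import PointCloud2, JointState

REQUIRED_TOPICS = [
    ('/head_camera/depth_registered/points', PointCloud2),
    ('/joint_states', JointState),
]

def main():
    rospy.init_node('demo_preflight_check')
    for topic, msg_type in REQUIRED_TOPICS:
        try:
            rospy.wait_for_message(topic, msg_type, timeout=5.0)
            rospy.loginfo('OK: %s is publishing', topic)
        except rospy.ROSException:
            rospy.logwarn('MISSING: no message on %s within 5s', topic)

if __name__ == '__main__':
    main()
```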

WEEK #8

Labs

This week you can continue to catch up on any labs from previous weeks and make progress on your project milestones (Assignment 7, Part 3).

Assignment 8

Due: May 23 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 2

In this week's update write a short progress update on what you accomplished in Week 8 with respect to the plan you made in Assignment 7 last week. At this point in the project you are likely working in parallel on several aspects of the project in subgroups. We recommend that each subgroup posts a separate progress update. For example, if you have one person working on the UI, that person can be responsible for posting a progress update on that. Progress updates should include the different deliverables mentioned in your plan, demonstrated with pictures, videos, screenshots, or any other evidence of progress. Make sure to indicate who has been working on what.

WEEK #7

Labs

This week you can work to catch up on any labs from previous weeks.

Assignment 7

Due: May 16 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 9

In this week's update you will present a proposal and plan for your projects.

PART 1: Project proposal (3 points)

  • Building on last week's ideation about project directions, your increasing experience with the robot and task domain, and discussions with the teaching staff, decide on your specific project topic and write a post answering the following questions.

    • General motivation: Write your own version of the motivation for building picking robots. Explain why it is challenging. You can reference the problem description on the course home page.

    • Specific motivation: Describe the particular failure of the baseline approach to robotic picking that you would like to address in your project, referencing the common failure modes. Describe which item attributes or item configurations lead to this type of failure (e.g. "bagged items", "non-rigid items", "items with indistinct packaging", "bins that are more than 75% full", etc.). Provide evidence for the problem based on a quantitative analysis of the baseline approach and/or a specific video or image illustrating the problem.

    • Technical approach: Describe your approach for addressing the problem described in your specific motivation, referencing the topics of interest from last week (i.e., incremental container modeling, data-driven perception, environment co-design, extraction strategies, remote human assistance, or other). Provide details on how you will implement your approach.

PART 2: System (4 points)

  • Draw a ROS-level system figure for your proposed project. Be sure to indicate:

    • Which modules are already implemented, which need to be developed, and who will work on developing which module;

    • How the modules will communicate with one another;

    • Which parts constitute the minimal viable product, and what parts are stretch goals.

  • Draw a Finite State Machine or storyboard that illustrates how your system will work; a code-level sketch of what such a state machine might look like follows below.
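If it helps to think through the FSM before drawing it, here is a bare-bones pick-pipeline state machine in plain Python. The states and the stubbed-out helpers are placeholders to adapt to your own design (ROS also offers the smach package for building richer state machines):

```python
# Bare-bones pick-pipeline state machine. States, transitions, and the
# stubbed helpers are placeholders for your own system design.
from enum import Enum

class State(Enum):
    PERCEIVE = 1
    PLAN_GRASP = 2
    EXECUTE = 3
    DONE = 4
    FAILED = 5

def detect_target():
    """Placeholder: segment the bin and choose a target cluster."""
    return True

def plan_grasp():
    """Placeholder: compute a grasp pose for the chosen target."""
    return True

def execute_grasp():
    """Placeholder: move the arm through pre-grasp, grasp, and lift."""
    return True

def run_pipeline(max_retries=3):
    state, retries = State.PERCEIVE, 0
    while state not in (State.DONE, State.FAILED):
        if state == State.PERCEIVE:
            state = State.PLAN_GRASP if detect_target() else State.FAILED
        elif state == State.PLAN_GRASP:
            state = State.EXECUTE if plan_grasp() else State.PERCEIVE
        elif state == State.EXECUTE:
            if execute_grasp():
                state = State.DONE
            else:
                retries += 1
                state = State.PERCEIVE if retries < max_retries else State.FAILED
    return state == State.DONE

print('pick succeeded:', run_pipeline())
```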

PART 3: Plan (2 points)

  • Write detailed deliverables for three upcoming milestones:

    • Week 8: Demonstrate feasibility of proposed approach

    • Week 9: End-to-end pick with proposed approach

    • Week 10: Improved end-to-end pick with proposed approach and formal evaluation of approach

Write a self-contained update that would make sense to someone who reads it independently of this assignment description. Submit a link to your update on Canvas.

WEEK #6

Labs

We are halfway through the quarter, and this week we will have our last set of labs to achieve our first "real" picks. To that end, we will continue implementing basic perception skills to detect an item on the shelf so as to grasp and extract it. Lab 32 will explore different clustering methods to segment the point cloud within a shelf bin, and Lab 33 will involve simple feature extraction from the segments to try to recognize them. Lab 34 will provide high-level guidance for closing the loop to pick up a detected object.
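To preview the kind of comparison Lab 32 asks for, here is a minimal sketch that runs two standard clustering methods on an N x 3 point array; it uses scikit-learn as a stand-in, whereas the labs may use PCL's implementations instead:

```python
# Sketch: compare two clustering methods on a (cropped) bin point cloud,
# represented here as an N x 3 NumPy array of synthetic "items".
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(c, 0.01, (200, 3)) for c in
                    ([0.10, 0.10, 0.10], [0.20, 0.10, 0.10], [0.15, 0.20, 0.10])])

# K-means needs the number of items in the bin up front.
kmeans_labels = KMeans(n_clusters=3, n_init=10).fit_predict(points)

# DBSCAN instead groups points by density; eps is in the data's units (meters).
dbscan_labels = DBSCAN(eps=0.02, min_samples=20).fit_predict(points)

print('k-means clusters:', len(set(kmeans_labels)))
print('DBSCAN clusters (excluding noise):',
      len(set(dbscan_labels)) - (1 if -1 in dbscan_labels else 0))
```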

Assignment 6

Due: May 9 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 7

In this week's update you will demonstrate new perception capabilities and the first perception-enabled robotic pick of an object from the shelf. You will also discuss what specific problem you will focus on in your project.

PART 1 (2 points)

  • After completing Lab 32 provide an analysis of the three different clustering algorithms that you tried for segmentation. Show the segmentation results for each method in at least five different bin configurations involving different items, at different packing densities, in different arrangements. Then quantify the segmentation success for each method by counting correctly segmented items and errors (false negatives: over-segmented or undetected items; false positives: segments that do not correspond to an item). Write up these results in your update (under subsection "Segmentation Results"), including images of the segmentation results. Discuss which method you might use in your project. A small helper for turning your counts into summary numbers is sketched below.
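A minimal sketch of such a helper, with hypothetical per-method tallies:

```python
# Sketch: summarize segmentation counts per method. "fp" are segments that
# do not correspond to an item; "fn" are over-segmented or undetected items.
def segmentation_summary(correct, fp, fn):
    precision = correct / float(correct + fp) if correct + fp else 0.0
    recall = correct / float(correct + fn) if correct + fn else 0.0
    return precision, recall

# Hypothetical tallies over five bin configurations:
for method, (c, fp, fn) in {'euclidean': (12, 2, 3),
                            'region_growing': (10, 1, 5)}.items():
    p, r = segmentation_summary(c, fp, fn)
    print('%s: precision=%.2f recall=%.2f' % (method, p, r))
```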

PART 2 (2 points)

  • After completing Lab 33 provide a similar analysis for item recognition. Be sure to report accuracy (% of correctly identified objects) as well as a confusion matrix that characterizes the errors your recognizer makes. You can do this analysis on the same scenes as in Part 1. Write up these results in your update (under subsection "Recognition Results") and discuss which method you might use in your project; a sketch of computing these numbers follows.
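Once you record the true and predicted item name for every segment, scikit-learn's metrics make this analysis short. A minimal sketch with hypothetical labels:

```python
# Sketch: accuracy and confusion matrix from recorded labels. The item
# names and label lists below are hypothetical stand-ins for your data.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = ['soap', 'duck', 'soap', 'crayons', 'duck']
y_pred = ['soap', 'soap', 'soap', 'crayons', 'duck']

print('accuracy: %.2f' % accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=['soap', 'duck', 'crayons']))
```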

PART 3 (2 points)

  • After completing Lab 34, make a video of the robot attempting to pick up at least two different items that it is able to detect, in at least two different scenarios (i.e., different shelf compositions and configurations). Post the video together with a description of your system.

PART 4 (1 point)

Next week you will start working as a team to focus your projects on a particular sub-problem. To that end, this week you should start team discussions to identify topics of interest for specialization. A few possible directions are presented below, but you might come up with others. Write a paragraph describing your interests (pick at least two) and the particular approaches you might take for each. Give motivation for why you would like to pursue these directions, preferably based on the failures of the baseline picking approach you have implemented so far. Your specialized solution can focus on a subset of the objects (based on attributes) or a subset of failure modes. For inspiration, reference the Project Scope documentation, which consolidates your efforts to identify performance metrics, failure modes, and object attributes. You will write a more detailed proposal next week, so this is the initial ideation and exploration phase. Possible topics of specialization:

  • Incremental container modeling: Assume the robot can watch a person stow things on the shelf over time and use this temporal information to build better models of how objects are configured on the shelf.

  • Data-driven perception: Collect larger object datasets with ground truth to develop better item perception algorithms.

  • Environment co-design: Fabricate end-effector, shelf, or workcell enhancements that facilitate picking.

  • Extraction strategies: Explore alternative strategies for extracting objects from the shelf, different from the classical approach of rigid grasping based pick-and-place. For example, you can pull the object closer to the shelf edge or tip it over to create an overhang before attempting to pick, or you can try to push other objects away from the target object to create space around it.

  • Remote human assistance: Build interfaces to enable a remote operator to efficiently assist the robot in challenging scenarios (e.g., when perception fails).

Write a self-contained update that would make sense to someone who reads it independently of this assignment description. Submit a link to your update on Canvas.

WEEK #5

Labs

This week we will jump into robot perception, one of the key challenges of making robots do useful tasks autonomously. As perception algorithms tend to be computationally expensive, due to the high dimensionality of sensor data, many perception algorithms are implemented in C++. In Lab 27 we will create our first C++ package and then learn to deal with point clouds obtained from Fetch’s PrimeSense camera in Labs 28-29. Next we will get a very useful perception library in ROS, namely AR marker detection, working with our point cloud data in Lab 30. Finally, we will integrate perception and arm motion by developing a Programming by Demonstration (PbD) system in Lab 31. This PbD system will allow you to: (a) save end-effector poses relative to an AR marker and (b) move the robot’s arm to the same relative poses after the marker has been moved. Using this system you will be able to define different actions relative to the shelf (which will have markers attached to it) for manipulating objects that are on the shelf. You will do this simply by saving a sequence of end-effector poses and gripper states (open/closed) relative to the marker on the shelf.
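At its core, the save/replay behavior in Lab 31 is a bit of transform arithmetic. A minimal sketch with poses represented as 4x4 homogeneous matrices in NumPy (the labs themselves may structure this differently):

```python
# Sketch: save a gripper pose relative to an AR marker, then replay it after
# the marker moves. All poses are 4x4 homogeneous transforms in the base frame.
import numpy as np

def save_relative(T_base_marker, T_base_gripper):
    """Pose of the gripper expressed in the marker's frame."""
    return np.linalg.inv(T_base_marker) @ T_base_gripper

def replay(T_base_marker_new, T_marker_gripper):
    """Same relative pose, re-anchored to the marker's new location."""
    return T_base_marker_new @ T_marker_gripper

# Usage: T_marker_gripper = save_relative(T_base_marker, T_base_gripper)
#        T_base_gripper_new = replay(T_base_marker_new, T_marker_gripper)
```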

Assignment 5

Due: May 2 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 6

In this week's update you will demonstrate the perception capabilities and the PbD system you developed in the labs.

PART 1 (2 points)

  • At the end of Lab 30 you will have a smart point cloud cropper that cuts out the part of the robot's point cloud that corresponds to a particular bin on the shelf, based on its pose relative to an AR marker attached to the shelf. Make a video that demonstrates the smart cropper by visualizing the cropped point cloud in RViz and showing how the cropped point cloud updates over time when you move the shelf around, successfully capturing the contents of a particular bin in different shelf poses. Post this video together with a brief description of what it demonstrates; the geometry behind such a cropper is sketched below.
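A minimal NumPy sketch of the cropping step, assuming the bin's pose has already been derived from the AR marker; the bin extents are placeholders:

```python
# Sketch: crop a cloud to one bin by transforming points into the bin frame
# and keeping those inside an axis-aligned box. Extents are placeholders,
# and the bin frame's origin is assumed to sit at one corner of the bin.
import numpy as np

def crop_to_bin(points_base, T_base_bin, extents=(0.3, 0.3, 0.25)):
    """points_base: N x 3 array in the base frame.
    T_base_bin: 4 x 4 pose of the bin in the base frame."""
    homog = np.hstack([points_base, np.ones((len(points_base), 1))])
    points_bin = (np.linalg.inv(T_base_bin) @ homog.T).T[:, :3]
    lo, hi = np.zeros(3), np.array(extents)
    mask = np.all((points_bin >= lo) & (points_bin <= hi), axis=1)
    return points_base[mask]
```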

PART 2 (4 points)

  • At the end of Lab 31 you will program (by demonstration) three robot actions applied to objects placed on the shelf (see images below) to demonstrate the capabilities of the PbD system you developed. Make a video that demonstrates (a) the process of programming at least one of those actions, and (b) the successful execution of all three actions on the real robot. Post this video with a description of what it demonstrates.

Write a self-contained update that would make sense to someone who reads it independently of this assignment description. Submit a link to your update on Canvas.

WEEK #4

Labs

This week we will continue to work with RViz and get started on robot manipulation. Labs 19-22 will introduce you to Inverse Kinematics (IK) and motion planning with obstacles and constraints. Labs 23-25 will introduce you to coordinate frame transforms using TF and transformation matrices. Lab 26 will give guidance for implementing two RViz tools to command robot manipulation with interactive markers, as part of Assignment 4 described below.
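As a taste of what Labs 23-25 cover, here is a minimal sketch of querying a transform at runtime with tf2. The frame names follow Fetch conventions but should be verified on your setup:

```python
#!/usr/bin/env python
# Sketch: look up the current gripper pose in the base frame with TF.
import rospy
import tf2_ros

rospy.init_node('tf_lookup_demo')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(1.0)  # give the buffer time to fill

# lookup_transform(target_frame, source_frame, time, timeout)
t = buf.lookup_transform('base_link', 'gripper_link',
                         rospy.Time(0), rospy.Duration(1.0))
rospy.loginfo('gripper at (%.3f, %.3f, %.3f) in base_link',
              t.transform.translation.x,
              t.transform.translation.y,
              t.transform.translation.z)
```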

Assignment 4

Due: Apr 25 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 6

In this week's update you will demonstrate the teleoperation interface that you developed in the labs.

PART 1 (3 points)

  • In Lab 26 you will first develop an RViz-based interface that involves an InteractiveMarker, shaped like the robot's gripper, which can be moved around to specify a target gripper pose and send the robot to that pose. When moved around, the marker should change color to reflect whether the corresponding gripper pose is reachable by the robot or not. After testing it in simulation, you will test this interface on the real robot to grasp a soft object from the shelf. To do this, use your interface to (i) send the robot gripper to a "pre-grasp" pose near the object, (ii) then send it to a "grasp" pose, (iii) close the gripper, and (iv) send the gripper to a "lift" pose. Make a video that shows (a) the use of your RViz interface and (b) the robot successfully grasping the object. See the example videos below (in simulation and on the real robot) of such an interface. Post your video with a brief description of what it demonstrates.

PART 2 (3 points)

  • Next you will develop an RViz-based interface that involves an InteractiveMarker corresponding to a world object which can be moved around. You will use transform arithmetic to compute "pre-grasp", "grasp", and "lift" poses for the gripper relative to the current pose of the object. You can use any type of visualization to indicate whether an object pose has reachable gripper poses associated with it (e.g., change the color of the object and attach text, or separately visualize the associated gripper poses in different colors). After testing it in simulation, you will test this interface on the real robot to grasp a soft object from the shelf. To do this, use your interface to (i) place your interactive object marker approximately where the object appears in the robot's point cloud stream, and (ii) send the robot to the computed pre-grasp, grasp, and lift poses one at a time (closing the gripper before lifting). Make a video that shows (a) the use of your RViz interface and (b) the robot successfully grasping the object. See the example videos below (in simulation and on the real robot) of such an interface. Post your video with a brief description of what it demonstrates; the transform arithmetic involved is sketched below.
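The transform arithmetic boils down to composing the object pose with fixed offsets. A minimal NumPy sketch, with placeholder offset values:

```python
# Sketch: derive gripper poses from the object pose. All poses are 4x4
# homogeneous matrices; the offset distances are placeholders.
import numpy as np

def _offset(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def gripper_poses(T_base_object):
    # Post-multiplying applies an offset in the object's own frame;
    # pre-multiplying applies it in the base frame.
    grasp = T_base_object @ _offset(0.0, 0.0, 0.0)
    pre_grasp = T_base_object @ _offset(-0.10, 0.0, 0.0)  # back off 10 cm along object x
    lift = _offset(0.0, 0.0, 0.15) @ grasp                # raise 15 cm in the base frame
    return {'pre_grasp': pre_grasp, 'grasp': grasp, 'lift': lift}
```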

Write a self-contained update that would make sense to someone who reads it independently of this assignment description. Submit a link to your update on Canvas.

WEEK #3

Labs

This week we will work on ROS visualizations and robot navigation through Labs 11 through 18 (more details in the assignment description).

Assignment 3

Due: Apr 18 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 7

In this week's update you will demonstrate the teleoperation interface that you developed in the labs.

PART 1 (2 points)

  • Labs 11-15 will introduce you to a visualization tool in ROS, called RViz, and teach you how to create custom interactive visualizations of different types of information. By the end of Lab 15 you will have implemented the following functionalities in RViz: (a) custom “Markers” that provide a visualization of the robot’s path as you drive it around, and (b) custom “InteractiveMarkers” that allow you to trigger robot actions, such as moving or turning the robot using odometry. Make a screen capture video of your RViz interface demonstrating both capabilities: drive the robot around using the InteractiveMarkers you created and demonstrate the Marker-based visualization of the path it takes. Post this video with a short description of what it demonstrates; a minimal sketch of publishing such a path marker follows.
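Publishing a path can be as simple as appending odometry positions to a LINE_STRIP marker. A minimal sketch, where the topic and frame names are common defaults to adjust to your setup:

```python
#!/usr/bin/env python
# Sketch: publish the robot's path as a LINE_STRIP Marker from odometry.
import rospy
from nav_msgs.msg import Odometry
from visualization_msgs.msg import Marker
from geometry_msgs.msg import Point

rospy.init_node('path_marker_demo')
pub = rospy.Publisher('visualization_marker', Marker, queue_size=1)

marker = Marker()
marker.header.frame_id = 'odom'
marker.type = Marker.LINE_STRIP
marker.action = Marker.ADD
marker.pose.orientation.w = 1.0  # identity pose; points are in frame_id
marker.scale.x = 0.02            # line width in meters
marker.color.g = 1.0
marker.color.a = 1.0

def odom_cb(msg):
    p = msg.pose.pose.position
    marker.points.append(Point(p.x, p.y, p.z))
    marker.header.stamp = rospy.Time.now()
    pub.publish(marker)

rospy.Subscriber('odom', Odometry, odom_cb)
rospy.spin()
```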

PART 2 (2 points)

  • Labs 16-17 will introduce you to the basics of robot navigation and teach you how to make a map, localize in it, and navigate to a target configuration in it, through RViz and through code. By the end of Lab 17 you will make a tool that allows you to annotate a map by driving the robot to a desired location and using command line inputs to trigger the annotation process and name the location. The same tool should allow you to send the robot to a previously annotated location on the map. Make a screen capture video that demonstrates driving the robot around to annotate distinct locations on the map and sending the robot to those locations. Post this video with a short description of what it demonstrates; a minimal sketch of the "send to location" step follows.
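Sending the robot to a saved location typically goes through the move_base action interface. A minimal sketch, with a placeholder saved-pose dictionary standing in for your annotation tool:

```python
#!/usr/bin/env python
# Sketch: send the robot to a previously annotated map pose via move_base.
# The saved-pose dictionary is a placeholder for your annotation tool's data.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

saved_poses = {'shelf': (1.2, 0.5)}  # name -> (x, y) in the map frame (hypothetical)

rospy.init_node('go_to_annotation')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

x, y = saved_poses['shelf']
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = x
goal.target_pose.pose.position.y = y
goal.target_pose.pose.orientation.w = 1.0  # face along the map x axis
client.send_goal(goal)
client.wait_for_result()
```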

PART 3 (3 points)

  • Lab 18 involves putting together what you learned in the previous exercises to build a tool for annotating a map through a web/RViz interface and sending the robot to annotated locations. Make a screen capture video demonstrating the use of this tool to annotate distinct locations on the map and sending the robot to those locations. Post this video with a short description of what it demonstrates.

Write a self-contained update that would make sense to someone who reads it independently of this assignment description.

Submit a link to your update on Canvas.

WEEK #2

TODOs

Come up with a team name and a name for your version of the robot, to replace our current placeholders "Team N" and "Fetch." Think ahead to when you will present your project at the end of the quarter; you'll want to say something like "Hello, we are Team Warebots and this is our first robot, Pickie!" Update your team page with this information. It is okay to change these in later weeks, but try to pick something for now.

Labs

This week we will start learning about the Fetch mobile manipulator and how to control it through ROS. In Labs 1 through 8 you will learn to control the robot's base, gripper, torso, head, and arm, both in simulation and on the real hardware, as well as to inspect the current state of the robot. In Labs 9 and 10 you will learn how to make a browser-based user interface integrated with ROS. By the end of these labs, you will be able to develop a teleoperation interface that lets you remotely control different parts of the robot. You are welcome to complete these tutorials as a team, in pairs, or individually, but make sure everyone in the team builds a strong foundation for working with the Fetch robot.

Assignment 2

Due: Apr 11 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 6

In this week's update you will demonstrate the teleoperation interface that you developed in the labs and discuss teamwork strategies.

PART 1 (4 points)

  • After completing Lab 10, use the browser-based teleoperation tool you developed to make the simulated Fetch robot perform the following tasks.

    • Drive from the starting pose to the table

    • Raise the torso and lower the head to look at the table

    • Move the arm (and possibly the base and torso) and open/close the gripper to pick up the blue cube on the table, without colliding with the table.

  • The teleoperator should be able to do these tasks without looking at the simulation (Gazebo) screen and without relying on their memory. In other words, the teleoperation interface should display sensory information to help guide the operator.

  • Make a screen capture video of the teleoperation process. At the end, open the Gazebo window to show that the object was successfully picked up by the Fetch robot. Upload your video to YouTube and include a link in your update that describes all elements of your teleoperation interface. A minimal sketch of the ROS plumbing behind such an interface follows.
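Behind a browser button like "drive forward" there is usually a small piece of ROS plumbing that publishes velocity commands. A minimal Python sketch (the topic name is a common Fetch default; verify yours):

```python
#!/usr/bin/env python
# Sketch of the ROS side of a "drive forward" button: publish a Twist for a
# fixed duration. The topic name is a common Fetch default; verify yours.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('drive_forward_demo')
pub = rospy.Publisher('/base_controller/command', Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2   # m/s forward

rate = rospy.Rate(10)
end = rospy.Time.now() + rospy.Duration(2.0)
while not rospy.is_shutdown() and rospy.Time.now() < end:
    pub.publish(cmd)
    rate.sleep()
```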

PART 2 (2 points)

  • Assign team roles to everyone in your project team, considering common potential roles in this class. Write a blog post that describes everyone's roles and concrete responsibilities. Consider weekly responsibilities as well as quarter-long project responsibilities. In addition, write a paragraph about strategies you will follow to ensure everyone in the team acquires the knowledge and skills they hope to get out of the class (e.g., everyone in the team learns ROS).

As before, try to write a self-contained update that would make sense to someone who reads it independently of this assignment description.

Submit a link to your update on Canvas.

WEEK #1

TODOs

  • Form a team. Teams will consist of four people. If you are still looking for the rest of your team (i.e., you are one person or two/three people), send an email to Maya and we will work to pair you up.

  • Everyone on the team should have a GitHub account.

  • Create a Google Sites page for your team. The main page should be structured like a blog where you will post your weekly updates. Add another page titled "Team" and add a photo and short bio of each team member there.

  • Carefully read the project scope description on the Home page.

Labs

This week you will complete the Getting set up tutorial and Lab 0, which involves going through the basic official ROS tutorials. Do these tutorials individually or in pairs. You are welcome to use the lab machines or your own computer. The following video tutorials might be helpful: Video-1, Video-2, Video-2.1, Video-2.2.

Assignment 1

Due: Apr 4 Monday, 11:59pm | Canvas link (rubric & submission) | Points: 5

For this assignment you will get more familiar with the Amazon Picking Challenge (APC), which is closely related to the projects you are doing in this class. As a starting point read the official APC rules from the 2016 challenge (you can focus on the section "PHASE 4").

  • Part 1 (2 points): Based on your understanding of the goals of the project, create a list of performance metrics with which to evaluate a robotic picker. Identify as many metrics as you can think of and give a precise definition of each metric. As an example, here is one performance metric you should include: "success percentage/ratio is the fraction of pick attempts that result in the requested item being picked and placed into a tote without any error". As part of this process, make a list of all types of errors you can anticipate (the rule book already identifies a few "penalties").

  • Part 2 (1 point): Find a video from a picking challenge team demonstrating their robot performing the task on at least five different objects. Choose two metrics from your list and measure them (to the best of your ability) based on the video to obtain a rough baseline number for those performance metrics.

  • Part 3 (2 points): Go over the full list of objects that were used in APC 2015. Create a list of object attributes that may influence the best strategy for, or the difficulty of, the picking task. Identify as many attributes as you can think of, give a precise definition of each, and provide examples of objects that differ in terms of each attribute. As an example, here is one attribute you should include: "rigid/non-rigid: whether the object maintains its shape while being manipulated; e.g. the soap bar is rigid, the dog toy is non-rigid."

Write your first weekly update as a team with the information from these three exercises. Be sure to include links to (or embed) any external information, such as the video you use for Part 2. Try to write a self-contained update that would make sense to someone who reads it independently of this assignment description. To that end, you are welcome to copy or rephrase parts of this assignment description.

Submit a link to your update on Canvas.