Post #10

Perception - Early Stages


This week in lab, we got a start on the kinds of perception our robot will need to perform some of its tasks autonomously. We started with an early cardboard prototype of the graspable "wings" on our walker and placed an ArUco tag on one of the wings. ArUco tags have several nice properties. First, each tag has a standard, distinct pattern corresponding to a number (its id), which lets us label the different parts of the walker that we want the robot to pay attention to. Second, there are well-defined algorithms for estimating a tag's pose, so we can easily compute the distance between the tag and the camera. This should let us move the robot precisely and autonomously into position to hold on to and reposition the walker. Third, we can rely on existing, off-the-shelf detectors for ArUco tags, so we will not need to train models ourselves, which takes time and a lot of compute power. Instead, we should be able to plug existing tools into our pipeline and get a working solution quickly.
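
To make the distance computation concrete, here is a minimal sketch of ArUco detection and pose estimation using OpenCV's classic cv2.aruco API (OpenCV 4.6 and earlier; newer releases use cv2.aruco.ArucoDetector instead). The camera intrinsics, tag size, and image path below are placeholder assumptions, not values from our robot.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics -- on the real robot these would come from
# the head camera's camera_info topic.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

MARKER_SIZE_M = 0.05  # assumed side length of the printed tag, in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("wing_prototype.jpg")  # placeholder image path
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)

if ids is not None:
    # Estimate each tag's pose relative to the camera, then report distance.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    for tag_id, tvec in zip(ids.flatten(), tvecs):
        distance = np.linalg.norm(tvec)  # straight-line distance to the tag
        print(f"tag {tag_id}: {distance:.2f} m from the camera")
```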

Below is a video of our detection algorithm and prototype in action. If you look carefully at the beginning of the video, you'll see the ArUco tag slightly change color when we check/uncheck the RViz topic. This is an RViz marker being placed on the tag (id 47) after the detection algorithm runs. In addition, our group was only able to get the Hello Robot detection algorithm running after changing the source Python code a bit. We hope the tips we posted on Ed are helpful to the rest of the class.
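
For anyone curious what that overlay looks like in code, here is a minimal rospy sketch of publishing an RViz marker at a detected tag's pose. This is not the Hello Robot node itself; the topic name, TF frame, and pose values are assumptions for illustration.

```python
import rospy
from visualization_msgs.msg import Marker

# Publish a translucent cube over the detected tag so it shows up in RViz.
rospy.init_node('aruco_marker_viz')
pub = rospy.Publisher('/aruco/marker', Marker, queue_size=1)  # assumed topic

marker = Marker()
marker.header.frame_id = 'camera_color_optical_frame'  # assumed TF frame
marker.ns = 'aruco'
marker.id = 47                    # the tag id we detected
marker.type = Marker.CUBE
marker.action = Marker.ADD
marker.pose.position.x = 0.0      # fill in from the tag's estimated pose
marker.pose.position.y = 0.0
marker.pose.position.z = 0.5
marker.pose.orientation.w = 1.0
marker.scale.x = marker.scale.y = 0.05
marker.scale.z = 0.005
marker.color.g = 1.0              # translucent green overlay on the tag
marker.color.a = 0.6

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    marker.header.stamp = rospy.Time.now()
    pub.publish(marker)
    rate.sleep()
```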

We were also able to get object detection and facial recognition running during our labs. While ArUco tags can be placed on the walker, placing tags on the Parkinson's patient themselves seems a bit invasive. And since placing the walker in front of the patient does not need to be as precise as grasping it, we think using one of these two algorithms to find the patient, together with the depth camera to measure how far away they are, will be enough for the robot to drive toward them and get our project working.
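
As a rough illustration of that approach, here is a sketch that finds a face with OpenCV's built-in Haar cascade and reads the person's distance from an aligned depth frame. The file names and the assumption that depth arrives as millimeters are placeholders; on the robot, the color and depth frames would come from the RealSense camera topics instead.

```python
import cv2
import numpy as np

# Placeholder inputs: a color frame and its aligned depth frame.
color = cv2.imread("frame_color.png")
depth = np.load("frame_depth.npy")  # same resolution, uint16 millimeters (assumed)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Median depth over the face box is more robust than a single pixel.
    patch = depth[y:y + h, x:x + w]
    valid = patch[patch > 0]
    if valid.size:
        dist_m = np.median(valid) / 1000.0
        print(f"person detected ~{dist_m:.2f} m away")
```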


Here is a better visualization of the detected marker:

IMG_2416.MOV