In conclusion, we were satisfied with our results: the Sawyer bot successfully colored each of the convex shapes, and we were able to transform pixel points into coordinates in the robot's base frame.
Lighting
Under ideal lighting conditions, our algorithm found the board without fail. The difficulty was achieving those conditions, since other groups sometimes needed the room lights set to a specific configuration. We resorted to using random objects to block light that was preventing the camera from detecting the edge of the whiteboard; this could have been avoided by changing some of the camera settings on the robot itself, but we were worried that doing so would negatively impact the other groups working on the same robot. There were also times when a light directly above the board, which we could not block because we still needed some illumination, oversaturated a corner of a shape and kept us from fully detecting it. We worked around this by simply not drawing where the light shone on the board.
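One adjustment we did not get to try is making the board detection itself less sensitive to lighting. The sketch below is a rough illustration (not our actual pipeline, and assuming OpenCV 4) of how adaptive thresholding could be used to find the board contour: local thresholds tolerate uneven illumination better than a single global cutoff. The minimum area and the assumption that the board is the largest four-sided contour are placeholders.

```python
import cv2

def find_board_contour(bgr_image, min_area=10000):
    """Return the largest four-sided contour in the image, or None."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding compares each pixel to its local neighborhood,
    # which handles uneven lighting better than one global threshold.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 51, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, min_area
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        area = cv2.contourArea(approx)
        if len(approx) == 4 and area > best_area:
            best, best_area = approx, area
    return best
```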
AR Tracking
We tried to implement AR tracking to identify the whiteboard more consistently, but we ran into continuous errors that were beyond the scope of our knowledge.
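For reference, what we were aiming for looks roughly like the sketch below, assuming the ar_track_alvar package is running and publishing a TF frame named ar_marker_0 for a tag attached to the whiteboard; the frame names and timing values are assumptions, not something we got working.

```python
import rospy
import tf2_ros

rospy.init_node('board_from_ar_tag')
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # give the listener time to fill its buffer

# Pose of the tag (and hence a known corner of the board) in Sawyer's base frame.
tag_in_base = tf_buffer.lookup_transform('base', 'ar_marker_0',
                                         rospy.Time(0), rospy.Duration(4.0))
print(tag_in_base.transform.translation)
```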
Homogeneous Transformations
When implementing the transformation from image pixel coordinates to real-world coordinates relative to the base frame, we ran into many issues.
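To make the intended computation concrete, here is a minimal sketch of the pinhole-model math, assuming a known camera intrinsic matrix K, a known camera pose in the base frame, and a whiteboard that lies in a constant-z plane of the base frame; the numbers are placeholders rather than our actual calibration.

```python
import numpy as np

# Placeholder values (not our real calibration): camera intrinsics and the pose
# of the camera in Sawyer's base frame, with the camera looking straight down
# at a board lying in the plane z = BOARD_Z of the base frame.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_base_cam = np.array([[1.0,  0.0,  0.0, 0.5],
                       [0.0, -1.0,  0.0, 0.0],
                       [0.0,  0.0, -1.0, 1.0],
                       [0.0,  0.0,  0.0, 1.0]])
BOARD_Z = 0.0

def pixel_to_base(u, v):
    """Back-project pixel (u, v) onto the board plane, expressed in the base frame."""
    # Direction of the pixel's viewing ray in the camera frame (pinhole model).
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    R, t = T_base_cam[:3, :3], T_base_cam[:3, 3]
    ray_base = R @ ray_cam
    # Intersect the ray t + s * ray_base with the plane z = BOARD_Z.
    s = (BOARD_Z - t[2]) / ray_base[2]
    return t + s * ray_base

print(pixel_to_base(320, 240))  # the image center maps to the point directly under the camera
```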
Time was a limiting factor in this project; with more time, we would make the following improvements:
Recognize Board Consistently:
Throughout our testing process, board identification needed to be more consistent. The lighting had to be just right for the system to recognize the contour area of the board, and there were times we had to wait for it to find the board at all. Given more time, we would combat this by implementing AR tracking and digging further into the errors we ran into.
Stiff Movements of the Sawyer Bot:
During the coloring process, the lines the Sawyer drew between the top and bottom points extended beyond the edges of the shape, so the coloring was not as neat as expected. We tried to measure this error with the robot arm, but it ended up taking too much time, and the videos shown on the Results page were the closest we got to quantifying it. With more time, we would have written our own controller to better constrain the robot's plan as well as the speed of the arm, resulting in more accurate drawing.
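Short of writing a full custom controller, a simpler option we would try first is capping MoveIt's velocity and acceleration scaling, as in the sketch below; the planning-group name 'right_arm' and the scaling values are assumptions.

```python
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('slow_coloring_moves')

group = moveit_commander.MoveGroupCommander('right_arm')
group.set_max_velocity_scaling_factor(0.15)      # 15% of max joint velocity
group.set_max_acceleration_scaling_factor(0.15)  # gentler accelerations, less overshoot
group.set_goal_position_tolerance(0.002)         # tighter end-effector tolerance (meters)
```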
With more time, we would also like to add more ambitious features to our product:
Color Identification:
We wanted the Sawyer bot to identify the color of a shape's outline and then pick up a marker of the matching color, so that the inside of the shape and its lines would be consistent. We would attempt this by measuring the hue (or brightness) of the shape's lines and comparing it with the hues of the available markers.
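A minimal sketch of that comparison, assuming OpenCV and a hypothetical table of marker hues, might look like the following; the palette values and the outline mask input are placeholders.

```python
import cv2
import numpy as np

# Hypothetical marker palette: rough hue centers on OpenCV's 0-179 hue scale.
MARKER_HUES = {'red': 0, 'green': 60, 'blue': 120}

def closest_marker(bgr_image, outline_mask):
    """Pick the marker whose hue is closest to the median hue of the shape's outline."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hues = hsv[:, :, 0][outline_mask > 0]
    median_hue = np.median(hues)

    # Hue wraps around at 180 (red sits near both ends), so compare on the circle.
    def hue_dist(h):
        d = abs(median_hue - h)
        return min(d, 180 - d)

    return min(MARKER_HUES, key=lambda name: hue_dist(MARKER_HUES[name]))
```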
Identify and Color Concave Shapes:
This would allow the Filler Bot to color a much wider variety of shapes. We would go about it by cutting a concave shape into multiple convex pieces and then running the existing coloring process on each piece.
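One standard way to do that cutting is ear clipping, which splits a simple polygon into triangles, each of which is convex and could be colored with our existing routine. The sketch below illustrates the idea; it is not code we wrote for the project.

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _point_in_triangle(p, a, b, c):
    d1, d2, d3 = _cross(a, b, p), _cross(b, c, p), _cross(c, a, p)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def ear_clip(polygon):
    """Split a simple polygon (list of (x, y) points) into convex triangles."""
    verts = list(polygon)
    # Ensure counter-clockwise order so a positive cross product means a convex corner.
    area = sum(_cross((0, 0), verts[i], verts[(i + 1) % len(verts)])
               for i in range(len(verts)))
    if area < 0:
        verts.reverse()
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % n]
            if _cross(prev, cur, nxt) <= 0:
                continue  # reflex corner, not an ear
            ear = (prev, cur, nxt)
            if any(_point_in_triangle(p, *ear) for p in verts if p not in ear):
                continue  # another vertex lies inside this triangle, not an ear
            triangles.append(ear)
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles

# Example: a concave (arrow-like) shape split into three convex triangles.
print(ear_clip([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]))
```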