Our robot was able to successfully complete the drawing, intended to be a rough sketch of our classmate's face (the picture of our classmate produced a clearer computer image in our software than the picture of the SMFA Director did, so we decided to use that one). As can be seen in the video above, the robot works by starting at the top left of the image and moving through it one pixel at a time. At each pixel, it determines which of the four colors (white, brown, skin tone, and black) the pixel is closest to, rotates to the appropriate pen, and draws a mark (if the point is white, it draws nothing and moves on to the next point). When it reaches the end of a row, it moves down one unit and then travels all the way back to the first column so it can draw the next row.
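To make that drawing loop concrete, here is a minimal sketch in Python of the scan-and-draw logic described above. The palette values, the image array, and the robot helper functions (`rotate_to_pen`, `draw_mark`, `advance_one_pixel`, `return_to_row_start`, `move_down_one_row`) are hypothetical stand-ins for our actual robot code, not the real implementation.

```python
# Minimal sketch of the scan-and-draw loop described above.
# The palette values and the robot helper methods are hypothetical
# placeholders, not our actual implementation.

PALETTE = {
    "white": (255, 255, 255),
    "brown": (120, 72, 36),
    "skin":  (230, 190, 160),
    "black": (0, 0, 0),
}

def closest_color(pixel):
    """Return the name of the palette color nearest to the pixel (RGB distance)."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist_sq(pixel, PALETTE[name]))

def draw_image(image, robot):
    """Raster-scan the image one row at a time, left to right."""
    for row in image:
        for pixel in row:
            color = closest_color(pixel)
            if color != "white":           # white pixels are left blank
                robot.rotate_to_pen(color)
                robot.draw_mark()
            robot.advance_one_pixel()      # step right to the next column
        robot.return_to_row_start()        # drive back to the first column
        robot.move_down_one_row()          # then drop down one unit
```

The key point the sketch captures is that the robot never plans ahead: each pixel is classified and marked independently as the robot passes over it.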
At first glance, the picture itself may not resemble the original person; however, when we reviewed the drawing, we came to a satisfying realization. If the drawing were compressed in the Y direction and the misalignment introduced by each row's return pass were corrected, it would resemble the shape of our desired picture much more closely.
(Images: Original Image, Pixellated, Colors Converted, Robot Drawing)
Thus, we found that the main sources of error could be largely corrected in a future version of our robot. As discussed previously, we were experiencing slipping with our wheels. This made it difficult to ensure that our X and Y travel distances (especially for the return to the start of each new row) were consistent and accurate. By using stepper motors to keep better track of how far the wheels travel and by drawing on a surface with more consistent friction (to eliminate slipping), we would be able to properly calibrate our robot to produce a much more accurate picture. Additionally, if we could not eliminate the slipping entirely, we could add feedback to our system by using the myRIO's built-in accelerometer: if the wheels slipped, the accelerometer data would show that the robot was not moving as commanded, and the controller could correct for the error. In an improved version, we would also choose a greater variety of colors that are easier to differentiate, and we would configure the pen-dropping devices to drop the pen straight down to draw a point rather than having it come in from an angle and draw a short line. We believe that these changes, combined, would greatly increase our robot's ability to draw the picture and, given more time and resources, we could build a vastly improved version of the Rover Writer!
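As an illustration of the feedback idea (not code we have written), the sketch below compares the distance commanded to the wheels with the distance implied by integrating accelerometer readings; a large mismatch would flag slipping. The sampling period, the slip threshold, the accelerometer samples, and `robot.correct_position()` are all hypothetical placeholders, not actual myRIO API calls.

```python
# Hypothetical sketch of accelerometer-based slip detection.
# SLIP_THRESHOLD, DT, and robot.correct_position() are placeholder
# names, not actual myRIO API calls.

SLIP_THRESHOLD = 0.02   # meters of allowed mismatch per move (assumed)
DT = 0.01               # accelerometer sampling period in seconds (assumed)

def measured_distance(accel_samples, dt=DT):
    """Integrate acceleration samples twice to estimate distance traveled."""
    velocity, distance = 0.0, 0.0
    for accel in accel_samples:     # acceleration in m/s^2 along the drive axis
        velocity += accel * dt
        distance += velocity * dt
    return distance

def check_for_slip(expected_distance, accel_samples, robot):
    """Compare commanded travel with accelerometer-derived travel."""
    actual = measured_distance(accel_samples)
    error = expected_distance - actual
    if abs(error) > SLIP_THRESHOLD:
        # The wheels turned farther than the robot actually moved
        # (or vice versa), so nudge the robot by the missing amount.
        robot.correct_position(error)
```

In practice, double-integrating accelerometer data drifts quickly, so this kind of check would only be trustworthy over the short moves between pixels rather than across the whole drawing.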
Thank you for taking the time to read this report. We hope that you found it interesting.
For those interested, our robot is now on display in Bray Labs at Tufts University as we continue work on a second generation of the Rover Writer.