We didn't need to build any hardware for our project, since all we needed was Sawyer, a Logitech webcam, a brush pen taped to Sawyer's gripper, and some paper.
All of our code can be found in our GitHub repository, linked under the "Materials/Links" page above. Each of us wrote separate Python files, which are all run through a wrapper file, main.py.
Organization & usage of our Python files
(you can find these files in our GitHub repo on the "Links" page)
Our complete system works as follows:
The first part of the project is to capture an image of a face using a webcam. We used OpenCV and gave the user a very simple interface for taking the picture: running python(3) main.py starts the entire project, which begins by opening the interface for the user to take a selfie. Our OpenCV code (written in save_webcam_img.py) continuously reads frames from the attached webcam while waiting for the user to press the 'y' key. The key press starts a 5-second timer (visible to the user on the camera feed), after which cv2 saves the current frame, asks the user for a filename (such as "drawyer.jpg"), and writes it to an "img" folder under that name.
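A condensed sketch of what this capture loop looks like (the function and variable names here are illustrative; our real save_webcam_img.py adds more interface polish):

```python
import os
import time

import cv2

def capture_selfie(save_dir="img"):
    cap = cv2.VideoCapture(0)             # open the default webcam
    countdown_start = None
    frame = None
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        if countdown_start is not None:
            remaining = 5 - (time.time() - countdown_start)
            if remaining <= 0:
                break                      # timer expired: keep this frame
            # Overlay the countdown on the live feed so the user can see it.
            cv2.putText(frame, str(int(remaining) + 1), (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("selfie", frame)
        if cv2.waitKey(1) & 0xFF == ord('y') and countdown_start is None:
            countdown_start = time.time()  # 'y' starts the 5-second timer
    cap.release()
    cv2.destroyAllWindows()
    filename = input("Filename (e.g. drawyer.jpg): ")
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, filename)
    cv2.imwrite(path, frame)
    return path
```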
After taking the image from the user, we convert it into a NumPy array that Sawyer can draw. We load the captured image from the "img" folder (the filename is passed in the function call), convert it to grayscale, and run Canny edge detection to find the relevant edges in the image (written in image_to_canny.py). We give the user control over both the lower and upper Canny thresholds to filter noise out of the image. We then reduce the resolution of the image by a factor of 20 so that Sawyer can draw the resulting points in a noticeable and efficient way. Finally, we convert every pixel from the 0-255 range to a 0 or a 1, since Sawyer takes this binary array as input and draws at the points marked 1. For this conversion, the user again controls the threshold: pixels below it go to 0 and the rest go to 1.
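As a rough sketch of this pipeline (the default threshold values below are illustrative placeholders, not our tuned values):

```python
import cv2
import numpy as np

def image_to_binary(path, low=100, high=200, binarize_at=128, factor=20):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)   # user-tunable lower/upper thresholds
    # Downsample by the given factor so the strokes stay sparse enough
    # for Sawyer to draw efficiently.
    small = cv2.resize(edges, (gray.shape[1] // factor,
                               gray.shape[0] // factor))
    # Binarize: pixels at or above the user-chosen threshold become 1.
    return (small >= binarize_at).astype(np.uint8)
```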
Using the array of 1s and 0s from the previous step, we run DFS on the array to find paths that we can draw sequentially. From this list of lists of 2D points, we convert each 2D point to a 3D point in the real world, expressed in the frame of the robot's base (written in draw.py). To do this, we use the known 3D coordinates of the bottom-right corner of the page together with the page's dimensions; simple geometry then maps every 2D point on the page to a 3D point.
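A minimal sketch of these two steps follows. The axis conventions in pixel_to_world are assumptions for illustration (a flat page with its height along the base frame's x-axis and its width along the y-axis), not necessarily what draw.py does:

```python
import numpy as np

def extract_paths(binary):
    """DFS over the 0/1 array, grouping adjacent 1-pixels into strokes."""
    rows, cols = binary.shape
    visited = np.zeros_like(binary, dtype=bool)
    paths = []
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 1 and not visited[r, c]:
                stack, path = [(r, c)], []
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and binary[i, j] == 1 and not visited[i, j]):
                        visited[i, j] = True
                        path.append((i, j))
                        # Push the 8 neighbours for the next DFS step.
                        stack.extend((i + di, j + dj)
                                     for di in (-1, 0, 1)
                                     for dj in (-1, 0, 1)
                                     if (di, dj) != (0, 0))
                paths.append(path)
    return paths

def pixel_to_world(r, c, shape, corner, page_w, page_h):
    """Map pixel (r, c) to a 3D point in the robot base frame, using the
    known 3D position `corner` of the page's bottom-right corner."""
    rows, cols = shape
    x = corner[0] + (rows - 1 - r) / (rows - 1) * page_h
    y = corner[1] + (cols - 1 - c) / (cols - 1) * page_w
    return np.array([x, y, corner[2]])   # constant z: the page lies flat
```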
For each of the 3D points along our paths, we run an inverse kinematics solver to convert the point into the seven joint angles of the Sawyer, and then command the arm to move to that position. In theory this could cause issues, since the solver might return drastically different joint angles for points right next to one another, but in practice it worked well for our purposes, so we kept this approach.
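A hedged sketch of the per-point motion step, assuming a MoveIt setup for Sawyer (the "right_arm" group name and the downward-pen orientation are assumptions here; our actual draw.py may differ):

```python
import sys

import moveit_commander
import rospy
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("drawyer", anonymous=True)
group = moveit_commander.MoveGroupCommander("right_arm")

def move_to(point):
    target = PoseStamped()
    target.header.frame_id = "base"
    target.pose.position.x, target.pose.position.y, target.pose.position.z = point
    target.pose.orientation.y = 1.0   # quaternion (0, 1, 0, 0): pen pointing down
    group.set_pose_target(target)     # the IK solver picks the joint angles
    group.go(wait=True)               # plan and execute the motion
    group.stop()
```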