The design requirement for this project is simple and coincides with the project objective: the robot should be capable of fully autonomous single-color drawings. We chose Sawyer over Baxter because it offers more precision and control over its motions, which allows for more precise drawings. Although we could have created custom hardware that could draw images more easily, we decided it would be more creative and interesting to have a robotic arm hold a pen, so that the Sawyer arm drawing an image could be seen as more of a performance for the audience. We chose to draw with a pen rather than a paintbrush or another more artistic tool because a pen was easier to acquire and simpler to use.
We split up our design into four main components: the end effector hardware, image processing, canvas localization, and control.
To make the system robust, we included a spring so that the end effector could draw lines over a larger range of heights. This first design holds only one pen at a time.
In order to draw with multiple pens, we designed a rotating "carousel" end effector that holds four single-pen end effectors, along with a displacement system so that one pen can be extended further than the others. We need four pens, one for each of the main subtractive colors: cyan, yellow, magenta, and black. To rotate the carousel, the robot presses it against the table top and twists.
The choice of materials was motivated by the additive manufacturing technique, where speed of manufacture and low cost were the most important priorities. Traditional manufacturing techniques such as milling and turning may be used for future iterations in order to increase longevity and accuracy.
Since we do not have a pen for every possible color, we decompose the input image into cyan, yellow, magenta, and black components. For pointillism, we use dithering with two shades: full color or no color. Dithering iterates over each pixel of each single-color component of the image, starting from the top left. Since each pixel can only receive a pen mark or be left blank, we choose whichever option best matches the intensity of that pixel, and we keep track of the error accumulated so far by adding the difference between the pixel value and our approximation. Proceeding left to right along each row and processing the rows from top to bottom, we convert each color component of the image into the presence or absence of dots, always choosing the shade that best reduces the accumulated error. Single-color pointillism is a simplification of this multicolor procedure that analyzes only one color component.
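As a concrete illustration, the sketch below shows one way the CMYK decomposition and the two-shade error-diffusion dithering could be written with NumPy. The function names, the 0.5 threshold, and the particular error-diffusion weights are illustrative assumptions, not the project's exact implementation; intensities are taken to lie in [0, 1], with 1 meaning full ink.

```python
import numpy as np

def rgb_to_cmyk(rgb):
    """Split an RGB image (H x W x 3, values in [0, 1]) into C, M, Y, K planes."""
    k = 1.0 - rgb.max(axis=2)
    denom = np.where(k < 1.0, 1.0 - k, 1.0)  # avoid division by zero on pure black
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return c, m, y, k

def dither_plane(plane):
    """Binary error-diffusion dither of one color plane.

    Scans rows top to bottom and pixels left to right; returns True wherever a
    dot should be drawn.  The half-right / half-down error split is one simple
    choice of diffusion weights.
    """
    work = plane.astype(float).copy()
    dots = np.zeros(plane.shape, dtype=bool)
    h, w = plane.shape
    for i in range(h):
        for j in range(w):
            old = work[i, j]
            new = 1.0 if old >= 0.5 else 0.0   # full color or no color
            dots[i, j] = bool(new)
            err = old - new                    # error accumulated at this pixel
            if j + 1 < w:                      # push error onto unvisited neighbors
                work[i, j + 1] += err * 0.5
            if i + 1 < h:
                work[i + 1, j] += err * 0.5
    return dots
```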
To generate the lines for multicolor cross hatching, we again first break the image into cyan, yellow, magenta, and black components. Each "layer" of the cross hatching consists of many slopes, where each slope is a dashed line made up of smaller unconnected segments. To generate a slope, we start at one side of the image and walk along the pixels in the direction of the slope. We keep track of the error accumulated so far and decide whether to start a line, end a line, or continue a line based on what would best minimize that error.
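The sketch below shows one possible form of this bookkeeping for a single slope, assuming the target darkness of each pixel along the slope has already been sampled into a sequence; the helper name and the threshold value are illustrative placeholders.

```python
def hatch_segments(intensities, threshold=0.5):
    """Walk along one slope and decide where to start, continue, or end dashes.

    `intensities` holds the desired darkness (0..1) of each sampled pixel for
    one color plane; returns a list of (start, end) index pairs for the dashes.
    """
    segments = []
    error = 0.0
    start = None
    for idx, want in enumerate(intensities):
        error += want                      # ink this pixel "asks for"
        if start is None and error > threshold:
            start = idx                    # starting a dash best reduces the error
        if start is not None:
            error -= 1.0                   # one unit of ink actually drawn here
            if error < -threshold:
                segments.append((start, idx))
                start = None               # ending the dash best reduces the error
    if start is not None:
        segments.append((start, len(intensities) - 1))
    return segments
```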
In order to have a fully autonomous drawing setup, we wanted Sawyer to automatically determine the size and location of the canvas. We chose AR tags for this purpose because tag detection is one of the more reliable ways to determine the pose of the canvas with respect to the camera. We placed four AR tags at the corners of the canvas so that they indicate not only the center of the canvas but also its width and height. We also chose to use the camera on Sawyer's arm because the transform from Sawyer's base to that camera is known and easily obtainable. Composing this with the transform from the camera to an AR tag gives the position of the tag with respect to Sawyer's base, which is the main coordinate frame used in the control portion of this project. Overall, this allowed the canvas to be placed anywhere within Sawyer's reach and still be detected.
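A minimal sketch of this lookup using ROS tf2 is shown below. It assumes a tag tracker such as ar_track_alvar is publishing one tf frame per tag; the frame names, the "base" frame name, and the node name are placeholders for whatever the actual setup uses.

```python
import rospy
import tf2_ros

def lookup_canvas_corners(marker_frames, base_frame="base"):
    """Look up each corner tag's position in Sawyer's base frame via tf."""
    buffer = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buffer)  # keeps the buffer filled

    corners = {}
    for frame in marker_frames:
        # tf composes base->camera (from the robot description) with
        # camera->tag (from the tag detector) to give base->tag directly.
        tf_msg = buffer.lookup_transform(base_frame, frame, rospy.Time(0),
                                         rospy.Duration(4.0))
        t = tf_msg.transform.translation
        corners[frame] = (t.x, t.y, t.z)
    return corners

if __name__ == "__main__":
    rospy.init_node("canvas_localization_sketch")
    tags = ["ar_marker_0", "ar_marker_1", "ar_marker_2", "ar_marker_3"]
    print(lookup_canvas_corners(tags))
```

With the four corner positions expressed in the base frame, the canvas center, width, and height follow directly from the corner coordinates.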
We considered detecting the edges of the paper being drawn on in order to determine the canvas position and size, but we wanted the added flexibility of drawing not just on a piece of paper but also on a portion of a larger surface, such as a whiteboard.
We wanted to design a controller that could move the Sawyer arm to draw at any position. Since the end effector can remain in a fixed orientation with the gripped pen pointing straight down, we only needed to control the position of the end effector, so our controller required just three degrees of freedom. We weighed inverse kinematics against manually computing joint angles, and chose manual computation because it is straightforward using spherical coordinates. The main advantages of manually computing joint angles are the reliability of always being able to move the arm to the desired position and the faster computation for doing so; the downside is the lack of control over the path the arm takes.
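As a rough illustration of the spherical-coordinate idea (a generic two-link sketch, not Sawyer's actual kinematics; the link lengths and joint conventions below are placeholders), the base joint aims a vertical plane at the target and a planar two-link solution gives the remaining angles:

```python
import math

# Placeholder link lengths (meters); Sawyer's real geometry and joint offsets differ.
UPPER_ARM = 0.4
FOREARM = 0.4

def planar_arm_angles(x, y, z):
    """Sketch: convert a target position into base yaw, shoulder, and elbow angles."""
    base_yaw = math.atan2(y, x)                    # azimuth of the vertical arm plane
    horizontal = math.hypot(x, y)
    reach = math.hypot(horizontal, z)              # radial distance to the target
    # Law of cosines for the elbow of a two-link arm.
    cos_elbow = (reach**2 - UPPER_ARM**2 - FOREARM**2) / (2 * UPPER_ARM * FOREARM)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder pitch: elevation to the target minus the offset the elbow bend introduces.
    elevation = math.atan2(z, horizontal)
    shoulder = elevation - math.atan2(FOREARM * math.sin(elbow),
                                      UPPER_ARM + FOREARM * math.cos(elbow))
    return base_yaw, shoulder, elbow
```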
In order to approximate path control, we command the end effector to traverse a series of waypoints. By placing the waypoints close enough together, we could, in theory, draw straight lines or move the Sawyer arm to desired positions without causing collisions or unintentionally drawing lines.
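A minimal sketch of the waypoint generation, assuming straight-line interpolation at a fixed spacing (the 5 mm spacing and the example coordinates are illustrative values):

```python
import numpy as np

def interpolate_waypoints(start, end, spacing=0.005):
    """Generate closely spaced waypoints along the straight segment from
    `start` to `end` (3-vectors in meters); `spacing` is the gap between points."""
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    distance = np.linalg.norm(end - start)
    steps = max(int(np.ceil(distance / spacing)), 1)
    return [tuple(start + (end - start) * t) for t in np.linspace(0.0, 1.0, steps + 1)]

# Example: 5 mm waypoints along a 10 cm stroke at a fixed height above the canvas.
path = interpolate_waypoints((0.6, 0.0, 0.1), (0.6, 0.1, 0.1))
```

Each waypoint is then converted to joint angles and executed in sequence, so the density of waypoints directly controls how closely the drawn stroke follows the intended line.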