N.E.R.D - Line Tracing

Goal:

Create a camera-based line following program that can track the line and react to different scenarios, such as curves, gaps, intersections, and turns.

Detecting the Line:

To start line following, we first needed to detect the line, which meant distinguishing black (the line color) from every other color. We used the inRange() function with bounds (0, 0, 0) to (30, 30, 30) to place a mask over the frame, highlighting black pixels and ignoring all other colors. We then used the findContours() function to find contours in the mask. Although we experimented with blurs and with editing the frame's colors to distinguish the line, we settled on masking, as the other options left strange noise covering a majority of the bottom portion of the screen. We also reduced our frame size from 640x480 to 160x120, which let us read each image faster and gave us less to process while implementing the line following systems.

Brainstorming Ideas:

Idea 1:

The first idea we tried to implement was a line of best fit. After applying the mask that separated black from the rest of the frame, we would go through each point to see whether it was masked or not. Then, using a least-squares calculation, we would compute the line of best fit and form a line for the robot to follow. We abandoned this idea, however, because it took too much time to process and made line following slow.
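A rough sketch of this abandoned idea is below. It is not our implementation: numpy's vectorized `nonzero` and `polyfit` stand in for the slow per-point scan described above, and the helper name is hypothetical.

```python
import numpy as np

# Sketch of the line-of-best-fit idea over the masked pixels.
def fit_line_to_mask(mask):
    # Coordinates of every masked (line) pixel.
    ys, xs = np.nonzero(mask)
    # Least-squares best fit x = m*y + b (x as a function of row,
    # so near-vertical lines stay well-conditioned).
    m, b = np.polyfit(ys, xs, 1)
    return m, b

# Toy mask: a straight vertical line at x = 40.
mask = np.zeros((120, 160), dtype=np.uint8)
mask[:, 40] = 255
m, b = fit_line_to_mask(mask)
print(round(m, 3), round(b, 1))  # slope ~0, intercept ~40
```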

Idea 2:

The next thought on line following was to separate the frame into several sections and use the center points of each section's contours to determine our next reaction. This would allow us to process the whole image and get a view of the line's overall movement. Although it seemed like a good idea, we did not pursue it, because it was difficult to figure out how each section should contribute to the robot's movement. Instead, we adapted it into our next idea, which is the one we used.

Idea 3:

This idea relied on splitting the frame into three sections and was used both to line follow and to detect special cases. To reach the final line following result, which could handle gaps, intersections, turns, and more, we went through a variety of steps.

Our Line Following Solution:

Step 1:

To start off, we split a small portion from the bottom of the frame to begin line following. This section was the closest to the robot and focused our sight on a concentrated area. We found the contours in this region, computed their center, and subtracted it from the frame's horizontal center, giving us an error value. We fed this error into a simple PID algorithm and used the correction to adjust our motor values and react to the line.

Our PID algorithm was a simple one we had used in previous line following competitions. This is essentially what we did:

error = widthCenter - contourCenter;   * finding the error value

integral = integral + error;   * sum of the errors over time

derivative = error - pastError;   * change over time

fix = (kp * error) + (ki * integral) + (kd * derivative);   * fix value, goes into the motors

pastError = error;   * setting up pastError for the next derivative
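As a runnable version of the pseudocode above, the PID step could look like the sketch below. The gain values are placeholders, not our tuned constants.

```python
# Minimal PID controller matching the pseudocode above; kp/ki/kd are
# placeholder gains, not our tuned values.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.past_error = 0.0

    def update(self, width_center, contour_center):
        error = width_center - contour_center   # how far the line is from center
        self.integral += error                  # sum of the errors over time
        derivative = error - self.past_error    # change since the last frame
        self.past_error = error                 # store for the next derivative
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.0, kd=0.1)
fix = pid.update(width_center=80, contour_center=70)  # line is left of center
print(fix)  # 0.5*10 + 0.1*10 = 6.0
```

The fix value is then added to one motor and subtracted from the other, steering the robot back toward the line.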

Step 2:

After we managed to get PID line following working with just the bottom section, we used the remaining parts of the image to deal with cases like gaps and sharp turns. To do this, we cut a small section from the top of the frame and left a larger middle section.

Throughout line following, we constantly checked the top section for contours. If a contour was present, we continued normal line following. If the line was missing, however, we knew there must be a special case, such as a gap or a turn.

Step 3:

Next, we determined whether it was a gap or a sharp turn, using the middle section. We found the section's contours and checked their corner points, which gave us each contour's width and height. If the height was more than two times the width, we knew it must be a gap case, and we would drive forward until we saw the next line.

If this was not the case, we checked for a sharp turn by looking at the contour's height and width again. If the width was three times larger than the height, we went into the turn case. First, the robot found the center of the contour, which told us whether we were supposed to turn left or right. We then drove forward until we saw no more black and turned in the corresponding direction until the line came back into view. Finally, we moved backward slightly to make sure the robot had room to see the line and could deal with any close cases (for example, a gap or obstacle case right after the turn, within the 5 cm limit).

Here you can see a high-level flowchart of the line following process: