N.E.R.D - Green Square

Objective:

We need a way to get past the green squares by looking at their position relative to the robot. We also need a way to calibrate the robot to the green squares beforehand. We may need to remove noise and ignore anything else green that might look like a green square, and lastly, we need to move based on the green square positions.

Detecting Green Squares:

The most important part of handling the green squares is being able to detect them with a robust system, even with extra noise.

To start detecting green squares, we take the original frame from the camera and create a mask using the green values from our calibration. To create this mask, we use the inRange() function in cv2. After inRange(), any pixels within our green range are turned white, and everything else is black.

After that, we use the cv2 findContours() function to draw contours around the spots that could be squares. To eliminate noise, we compare each contour's area to a constant minimum number of pixels required for a square. If more than four candidates are detected, we keep the four biggest.

After that, we call minEnclosingCircle() on each of the biggest contours, drawing a circle around each square. From this circle we get the radius, which is about the side length of the square, and the coordinates of the circle's center, allowing us to pinpoint the location of the square.

Now, with the number of squares, their locations on screen, and their sizes, we are ready to look at possible solutions for determining which way to move.

Brainstorming Possible Solutions:

Solution 1:

In this solution, we would split the frame resulting from the mask into four parts, each containing a square (or the spot where a square would be). Then, based on the closest two squares, we make a turn.

In the example on the left, we would split the frame into four sections (the red lines) based on the positions of the squares. In this solution, we only take the bottom two squares into account. Let's say there is only a square in spot 3. In that case, we would always turn left, no matter what is in positions 1 and 2.

Solution 2:

Create side slices next to each square to determine its position. By checking whether the slices to the left/right and above/below a square contain the black line, we can work out which square is where.

In the example on the right, you can see that every single square would have a slice on each side of it. Although not visible, every slice on the black line is actually two slices. For example, there would be one slice to the right of square 3 and one slice to the left of square 4; these would be two separate slices. By determining the line position around each square, we can map out the positions of the squares relative to the line. For example, square number one has black to its right and below it, and only a square in that position would satisfy both of those conditions.
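A minimal sketch of the slice check described above, assuming we work on a grayscale frame and call a slice "black" when its average brightness falls below a threshold (the helper name and threshold are ours, for illustration):

```python
import numpy as np

def slice_is_black(gray, x, y, w, h, threshold=60):
    """Return True if the average brightness of a slice region is dark
    enough to count as the black line. (x, y) is the top-left corner."""
    region = gray[y:y + h, x:x + w]
    return float(region.mean()) < threshold

# Synthetic grayscale frame: white background with a black vertical line.
gray = np.full((40, 40), 255, dtype=np.uint8)
gray[:, 18:22] = 0  # black line down the middle
```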

Solution 3:

This solution is the most complex, relying on the relative positions of the squares. For example, the image on the far left shows how pixel coordinates in our frame are determined.

As you can see, pixels start at the top left and increase as you move down or right. In this method, we compare the coordinates of the squares with each other to determine where they are relative to the line.

So, for example, if there are two squares and one is much lower and much further to the right than the other, we know the image will look like this and that we have to turn right.
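A sketch of this comparison for the two-square case. The margin constant and the fallback label are our assumptions; in frame coordinates, y grows downward:

```python
MARGIN = 20  # hypothetical: minimum pixel offset to count squares as apart

def two_square_turn(squares):
    """Decide a turn from two squares using their relative positions.
    squares: two (cx, cy) centers; y grows downward."""
    (x1, y1), (x2, y2) = squares
    dx, dy = x2 - x1, y2 - y1
    if abs(dx) > MARGIN and abs(dy) > MARGIN:
        # One square is clearly lower and off to one side:
        # turn toward the side the lower square is on.
        lower = (x1, y1) if y1 > y2 else (x2, y2)
        other = (x2, y2) if y1 > y2 else (x1, y1)
        return "right" if lower[0] > other[0] else "left"
    # Squares are stacked: relative position alone can't decide,
    # so fall back to a slice check as in solution 2.
    return "ambiguous"
```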

Like this, we use the relative positions to find out where the squares are on the line. There is one problem with this method, though: two squares can end up in a position like the one illustrated above.

In the scenario on the rightmost panel, we only know that one square is on top and one is underneath it. Because of this, we have to find out whether the bottom square (or the top square, but we use the bottom one) is on the left or right of the line. To do this, we use the method from solution 2 and check a slice next to it to see if it is black or white. This is represented by the red square in the picture on the left.

In this solution, when presented with a three-square case, we never need to check slices next to any squares; we only use relative positioning. If there are two squares on the bottom, we know we have to turn back. If there is one square on the bottom, we check which square above it is in line with it and use the relative positions of the top two squares to determine which side the bottom square is on.
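The three-square logic above might be sketched like this. The row-gap constant, the function name, and the exact turn labels are our assumptions for illustration:

```python
ROW_GAP = 30  # hypothetical: squares this close vertically share a row

def three_square_turn(squares):
    """Decide an action from three square centers (cx, cy); y grows downward."""
    by_y = sorted(squares, key=lambda s: s[1])  # topmost first
    top_two, bottom = by_y[:2], by_y[2]
    if abs(by_y[1][1] - by_y[2][1]) < ROW_GAP:
        # Two squares share the bottom row: turn around and go back.
        return "back"
    # One bottom square: see which of the top two it lines up with,
    # using only the relative x positions of the top pair.
    left_top, right_top = sorted(top_two, key=lambda s: s[0])
    aligned_left = abs(bottom[0] - left_top[0]) < abs(bottom[0] - right_top[0])
    return "left" if aligned_left else "right"
```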

In a one-square case, since we can't position the square relative to other squares, we create two slices, one on top and one on the right, to determine its position. If the top slice is black, the square is on the bottom half, and we check the slice on the right. If the top slice is white, we just continue forward.
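A sketch of the one-square probe, using the circle radius to place the slices just outside the green. The slice geometry, the threshold, and the mapping from the right slice's color to the turn direction are all assumptions here, not values from our code:

```python
import numpy as np

def one_square_action(gray, cx, cy, r, threshold=60):
    """One-square case: probe a slice above the square, then one to its right.
    gray: grayscale frame; (cx, cy, r) from minEnclosingCircle; y grows down."""
    h, w = gray.shape
    # Slice above the square, offset by the radius to clear the green.
    top = gray[max(cy - 2 * r, 0):max(cy - r, 1), cx - r // 2:cx + r // 2]
    if top.mean() >= threshold:
        return "forward"  # top slice is white: no line above, keep going
    # Top slice is black: the square sits below the line; probe to its right.
    # Assumed mapping: line on the square's right means the square is on the
    # line's left, so turn left; otherwise turn right.
    right = gray[cy - r // 2:cy + r // 2, min(cx + r, w - 1):min(cx + 2 * r, w)]
    return "left" if right.mean() < threshold else "right"

# Synthetic frame: white background, black horizontal line across rows 40-49.
gray = np.full((100, 100), 255, dtype=np.uint8)
gray[40:50, :] = 0
```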

Note that for all solutions, if there are four squares, we simply turn around and go back.

Implementing Solutions:


Implementing Solution 1:

As soon as we started trying to implement solution one, we found out that the robot rarely lines up with the line perfectly straight, like in the example on the left. Instead, it usually looked like the diagram on the right.

This meant that we would need the slices to be dynamic, and we would also need slanted slices, which would be difficult to create based on the line positions. So this solution was dropped almost immediately after we tried to implement it.

Implementing Solution 2:

Because solution one didn't work, we wanted a solution that worked no matter the position of the lines. In contrast to solution one, the side slices in solution two only rarely went off their target, as shown below.

As you can see, although the squares were slanted and the slices didn't slant, the slices were still usually on target, because their positions were relative to the squares themselves. But this method had major problems. It wasn't as robust as we wanted, and calculating the average RGB values of 16 different slices heated up our Raspberry Pi and took a lot of time and processing power, making our video laggy and our Pi slow.

Because of this, we needed a solution that was accurate but didn't require much processing power or too many slices. That is why we created solution 3, which required only two slices and was very robust.

Our Final Solution/Implementing Solution 3:

Although implementing solution three and writing its code was difficult, after a lot of debugging, solution three, using relative positioning and only two slices, worked. It was fast, extremely robust, and didn't heat up our Raspberry Pi.

Improvements:

Originally, our slices were positioned using a fixed constant, as in solution two. In solution three, we instead use the radius of the circle found around the square earlier to compute an offset that clears the green while staying exactly on target, making our system of checking squares extremely robust.
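As a sketch, deriving the slice rectangles from the measured radius might look like this; the exact offsets and thicknesses are our assumptions for illustration, and slice1/slice2 follow the naming used in the flowchart below:

```python
def probe_slices(cx, cy, r):
    """Return two probe rectangles (x, y, w, h) scaled by the square's radius,
    so they land just outside the green no matter how big the square appears.
    slice1 sits above the square, slice2 to its right."""
    cx, cy, r = int(cx), int(cy), int(r)
    thickness = max(r // 2, 1)
    slice1 = (cx - r // 2, cy - 2 * r, r, thickness)  # above the square
    slice2 = (cx + r, cy - r // 2, thickness, r)      # right of the square
    return slice1, slice2
```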

Here is a flowchart explaining the process solution three uses to decide where we have to go. In this flowchart, slice1 is a slice above a square, and slice2 is a slice to the right of a square.