N.E.R.D - Evac Room

Problem:

Once the silver tape at the end of the first room is detected, the task is to locate randomly placed victims, which are indicated by black and silver balls, store them safely in a compartment, and then travel to and drop them off at a randomly placed triangle in one of the 4 corners of the evacuation room.

Detecting the balls:

Identifying the location of the balls relative to the robot and the light source is vital to maintaining accuracy and stability.

One of the functions most vital to the overall success of the evacuation room is cv2.HoughCircles(), which detects circles between a minimum and maximum radius.

The data fed into cv2.HoughCircles() was based on precise calculations that narrowed down the depth of field we wanted the Raspberry Pi to analyze. This was one method used to isolate the ball closest to the robot: the farther the ball, the smaller its apparent radius, and vice versa. By manipulating the minimum and maximum radius values passed to the function, we were able to prioritize which ball to pick up first, avoiding backtracking and wasted time.
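As a rough illustration of this idea, the sketch below is not our exact code; the parameter values (dp, minDist, param1, param2, and the radius bounds) are placeholder assumptions. It detects circles and keeps the one with the largest radius, i.e. the ball closest to the robot:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")            # one captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)             # reduce noise before the Hough transform

circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1,           # accumulator resolution (same as the input image)
    minDist=50,     # minimum distance between detected centers
    param1=100,     # upper Canny edge threshold
    param2=30,      # accumulator threshold (lower = more detections)
    minRadius=15,   # ignore balls that appear too small (too far away)
    maxRadius=60,   # ignore blobs too large to be a ball
)

if circles is not None:
    # The largest radius corresponds to the closest ball
    x, y, r = max(np.round(circles[0]).astype(int), key=lambda c: c[2])
    print(f"Closest ball at ({x}, {y}) with radius {r}px")
```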

To get further information on the victim before approaching, we calculate the distance from the robot to the victim using a method called triangle similarity. This process works by first finding the focal length of the camera, a value unique to each camera. The focal length is solved for using this formula: F = (P * D) / W

P is the apparent width of the victim in pixels, D is the known distance at which the victim is placed from the robot (accurately and precisely measured), and W is the known real-world width of the victim (an average width is used if the victim/target varies in size).

Using the value returned from the focal length formula, we can rearrange it to solve for the distance (D):

D = (W * F) / P
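The two formulas translate directly into code. The calibration numbers below (a 4 cm wide ball measured 80 px wide at a known distance of 30 cm) are hypothetical examples, not our measured values:

```python
def focal_length(pixel_width, known_distance, known_width):
    """F = (P * D) / W, measured once during calibration."""
    return (pixel_width * known_distance) / known_width

def distance_to_victim(known_width, focal, pixel_width):
    """D = (W * F) / P, evaluated at runtime on every frame."""
    return (known_width * focal) / pixel_width

# Hypothetical calibration: a 4.0 cm ball placed exactly 30.0 cm from
# the camera appeared 80 px wide in the calibration frame.
F = focal_length(pixel_width=80, known_distance=30.0, known_width=4.0)

# At runtime the radius r from cv2.HoughCircles() gives the apparent
# width in pixels (diameter = 2 * r).
r = 25
print(distance_to_victim(known_width=4.0, focal=F, pixel_width=2 * r))
```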

Brainstormed Ideas:

Solution 1:

With this method, we thought that using the cv2.inRange() function would be enough to distinguish the white table from the silver ball, but this created unwanted noise. The noise also caused further complications as other image operations were applied down the line.

Although there is a solution to this issue, it requires a tedious and unnecessary process involving manual calibration of the values to find the exact threshold between white and black. Additionally, if any unexpected light sources or reflections appear, recalibration is required all over again.
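For reference, a minimal sketch of this first approach is shown below. The grayscale thresholds are illustrative assumptions; the point is that the white table and the silver ball fall into the same brightness range, so the mask inevitably picks up table pixels as noise:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Illustrative thresholds: treat anything brighter than ~200 as
# "silver/white". The white table and the silver ball overlap in this
# range, so the mask picks up table pixels as noise.
mask = cv2.inRange(gray, 200, 255)

# Morphological opening removes some speckle, but glare and reflections
# still survive, which is why this approach needed constant recalibration.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```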

Solution 2:

With this solution, we use the cv2.findContours() function to find all the contours in an image and then take the difference between the leftmost and rightmost points of each contour, looking for a value equal to the width of the ball.
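A minimal sketch of this idea, assuming a simple binary threshold beforehand and a hypothetical expected ball width of 50 px:

```python
import cv2

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

EXPECTED_WIDTH = 50  # hypothetical expected ball width in pixels
for cnt in contours:
    # Leftmost and rightmost points of the contour
    left = tuple(cnt[cnt[:, :, 0].argmin()][0])
    right = tuple(cnt[cnt[:, :, 0].argmax()][0])
    width = right[0] - left[0]
    if abs(width - EXPECTED_WIDTH) <= 5:  # small tolerance either way
        print(f"Candidate ball of width {width}px starting at x={left[0]}")
```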

This method proved to be very unreliable. Although it has great potential, it is better suited to an area with less light disruption: stray light creates unwanted contours in the image whose widths can, at times, match the value we are looking for.

Solution 3:

In order to detect the black and reflective silver balls, we take the original frame captured by the camera and apply various functions to better analyze the image. Among these, the image is converted to grayscale and then blurred to reduce noise. A major step that narrowed the field of vision was slicing the image. This allowed the Raspberry Pi to focus on one ball at a time while still retaining the resources to look for surrounding objects using the original frame.
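A minimal sketch of this pipeline, with assumed blur and slice parameters (the actual slice boundaries depend on the camera mount):

```python
import cv2

frame = cv2.imread("frame.jpg")

# Convert to grayscale, then blur to reduce sensor noise and glare
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (9, 9), 2)

# Slice the frame: keep only the bottom-center band where the nearest
# ball appears, while the full original frame remains available for
# scanning the rest of the room. Boundaries here are assumptions.
h, w = blurred.shape
roi = blurred[h // 2 : h, w // 4 : 3 * w // 4]

circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=15, maxRadius=60)
```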

This solution worked extremely well and ended up being our final solution, as it was reliable, fast, and didn't overheat the Raspberry Pi.

Navigation:

In order to locate, travel to, and store the victim, a time-effective and reliable method must be used. This method starts with identifying the victim, then correcting the angle of the robot so that it directly faces the victim, and finally travelling to the ball.

We identified the ball using the cv2.HoughCircles() function, as mentioned above, then split the frame into 3 parts, with the center piece's width being the diameter of the victim in pixels plus a few extra pixels on either side as a margin of error. If the victim is already centered within the center piece (also referred to as the gray area), a message is sent to the robot indicating that the next phase, retrieval, is ready for activation. If the victim is not located in the gray area, a message is sent to the Arduino Mega Pi to adjust its angle accordingly until the ball is located in the center piece.
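A minimal sketch of this centering logic, assuming a hypothetical margin of 10 px on either side of the ball's diameter:

```python
def steering_command(frame_width, ball_x, ball_radius, margin=10):
    """Decide which way to turn so the ball sits in the center piece
    (the gray area). The strip is as wide as the ball's diameter plus
    a margin of error on each side; margin=10 px is a placeholder."""
    center = frame_width // 2
    half_strip = ball_radius + margin
    if ball_x < center - half_strip:
        return "TURN_LEFT"     # ball is in the left piece
    if ball_x > center + half_strip:
        return "TURN_RIGHT"    # ball is in the right piece
    return "CENTERED"          # tell the robot retrieval can begin

# Example: a 640 px wide frame with the ball detected at x=150, r=25
print(steering_command(640, 150, 25))  # -> TURN_LEFT
```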

Once the ball is centered, the retrieval process begins by finding the distance from the robot to the ball using triangle similarity, as described above. The inertial measurement unit (IMU) then handles movement to the victim.

Below is a flowchart explaining the high-level process of the robot in the Evacuation Room.