Phase 1

Creating The Car:

I repurposed an RC car that I had built for my voice-controlled robot project.

To build the car, I used two continuous rotation servos as the wheels and a Raspberry Pi to drive them, with GPIO pins 13 and 18 configured as output pins for the servo control signals. I used two power sources so that the servos would not overload the Raspberry Pi: the power and ground wires of both servos run to a breadboard fed by a pack of 4 AA batteries, and one ground pin from the Raspberry Pi is also connected to the breadboard so everything shares a common ground. The other power supply, a power bank, is plugged directly into the Raspberry Pi. Finally, I attached all the pieces of the robot to a base, which can be any solid rectangle.
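To give a concrete idea of how the two servos are driven, here is a minimal sketch using the RPi.GPIO library. The BCM pin numbers come from the wiring above, but the 50 Hz signal and the specific duty-cycle values are assumptions that would need calibrating for a particular pair of servos.

```python
import time
import RPi.GPIO as GPIO

LEFT_PIN, RIGHT_PIN = 13, 18   # BCM numbering, per the wiring above

GPIO.setmode(GPIO.BCM)
GPIO.setup(LEFT_PIN, GPIO.OUT)
GPIO.setup(RIGHT_PIN, GPIO.OUT)

# Hobby servos expect a 50 Hz control signal; ~7.5% duty cycle is roughly
# "stopped" for most continuous-rotation servos, and values above/below
# that spin them in opposite directions. These numbers are assumptions
# and need calibration for a specific pair of servos.
left = GPIO.PWM(LEFT_PIN, 50)
right = GPIO.PWM(RIGHT_PIN, 50)
left.start(7.5)
right.start(7.5)

def stop():
    left.ChangeDutyCycle(7.5)
    right.ChangeDutyCycle(7.5)

def _burst(left_dc, right_dc, duration):
    # Run both servos briefly, then stop so the car stays stationary
    # between commands.
    left.ChangeDutyCycle(left_dc)
    right.ChangeDutyCycle(right_dc)
    time.sleep(duration)
    stop()

def forward(duration=0.5):
    # The servos are mounted mirrored, so they spin in opposite
    # directions to drive the car straight ahead.
    _burst(8.5, 6.5, duration)

def turn_left(duration=0.3):
    _burst(7.5, 6.5, duration)   # only the right wheel drives

def turn_right(duration=0.3):
    _burst(8.5, 7.5, duration)   # only the left wheel drives

# Call left.stop(), right.stop(), and GPIO.cleanup() when done.
```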

When installing the wheels, I tried my best to align them, but slight misalignment still caused me some trouble later on.

The camera needs to be mounted on a sturdy base so it can take pictures without shaking. A 5” metal bookend provided a perfect solution.

(Photo of the assembled car)

The Driving Code:

I wrote the driving code in Python: pressing a key on the keyboard moves the appropriate servo. To optimize picture quality, the car only takes pictures while it is stationary, so pictures and movement happen only on a key press rather than during continuous driving. When I issue a movement command, the car takes a picture and labels it with the command that was issued. This also reduces the chance of human error: because the robot stops very often, over-turning or over-shooting rarely happens.
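A rough sketch of what that capture-on-command loop could look like is below. It assumes the forward/turn_left/turn_right helpers from the servo sketch above and the picamera library; encoding the label into the filename is my own assumption about how each picture gets tagged with its command.

```python
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (320, 240)

# Map each key to a label and the matching movement helper.
COMMANDS = {"w": ("forward", forward),
            "a": ("left", turn_left),
            "d": ("right", turn_right)}

frame = 0
while True:
    key = input("Command (w/a/d, q to quit): ").strip().lower()
    if key == "q":
        break
    if key not in COMMANDS:
        continue
    label, move = COMMANDS[key]
    # Capture while the car is still stationary, and bake the issued
    # command into the filename as the label.
    camera.capture("data/{:05d}_{}.jpg".format(frame, label))
    move()
    frame += 1
```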


Data Collection:

I first created a practice track by laying painter’s tape on my hardwood floor to form a driving course with turns of various angles. I then ran the car around the track several times, collecting around 2,000 images, which turned out to be sufficient for training.

Possible errors in Phase 1:

Even the smallest wheel misalignment created major problems, since I had to feed the car extra commands to correct its heading while driving.

When I drove the car along the track, it ended up taking far more forward pictures than left or right ones. To correct this imbalance, I duplicated the left and right images so that the training data had an equal number of forward, left, and right examples.
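The duplication itself is simple to script. The sketch below is one way it could be done; the data/ directory layout and the label-in-filename naming scheme carry over from the earlier capture sketch and are assumptions rather than the exact setup.

```python
import glob
import os
import shutil

# Match filenames like 00042_forward.jpg from the capture sketch above.
forward_count = len(glob.glob("data/*_forward.jpg"))

for label in ("left", "right"):
    files = sorted(glob.glob("data/*_{}.jpg".format(label)))
    copies = 0
    # Duplicate existing images round-robin until this class roughly
    # matches the number of forward images.
    while files and len(files) + copies < forward_count:
        src = files[copies % len(files)]
        dst = os.path.join("data", "dup{}_{}".format(copies, os.path.basename(src)))
        shutil.copy(src, dst)
        copies += 1
```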

The camera’s mounting height was also a big issue at this step, because the camera needs to see the track directly in front of the car without having much of its view blocked by the front of the car.

(Example camera images: left, right, and forward)

My Experience:

Overall, the first part of the project did not go at all to plan. Because it was the first time I had used a Raspberry Pi, I was extremely unfamiliar with how the platform worked, and I burned out an SD card the first time by giving it too much power. Since I had built an Arduino-powered voice-controlled robot the prior summer, I was able to write the driving code fairly easily. Data collection was the part of the project that took me the most time, and I ran into a ton of problems at this stage. At first, the camera was placed too high, so it didn’t pick up the ground immediately in front of the car. Then the car kept getting distracted by its surroundings, such as windows and walls, so I had to crop away half of each image so that only the ground in front of the car was considered. Finally, because of the misalignment issue, I had a bunch of mislabeled pictures that I had to correct frame by frame, which was painstaking. Still, it was a lot of fun to apply knowledge from a previous project here, and it definitely sped up the process.
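For reference, the cropping step mentioned above only takes a few lines. This sketch assumes Pillow and the same data/ folder of JPEGs as before, and simply keeps the bottom half of every frame.

```python
import glob
from PIL import Image

for path in glob.glob("data/*.jpg"):
    img = Image.open(path)
    w, h = img.size
    # Keep only the bottom half of the frame so walls and windows
    # above the horizon are discarded.
    img.crop((0, h // 2, w, h)).save(path)
```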