Phase III:

The Driving Code:

Now that a trained model is available, the car can be driven by it. To do this, I download the trained neural network model from my model-training computer onto the Raspberry Pi and add my driving code. When the car starts, it opens a video stream and takes a picture of its current surroundings. The car then asks the model to predict which direction to move, takes one small step in that direction, and deletes the picture. By repeating this process, it is able to move autonomously.
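To make the loop concrete, here is a minimal sketch of what that driving code might look like in Python. The original code is not shown, so the model file name, image resolution, label order, preprocessing, and the move() helper are all assumptions; a real version would call the car's actual GPIO motor code.

```python
import os
import numpy as np
from PIL import Image
from picamera import PiCamera
from tensorflow.keras.models import load_model

MODEL_PATH = "model.h5"                 # assumed name for the downloaded model
FRAME_PATH = "frame.jpg"                # temporary image, deleted each loop
LABELS = ["left", "forward", "right"]   # assumed label order from training

def move(direction):
    """Placeholder for the GPIO motor code that nudges the car one small step."""
    print("stepping", direction)

model = load_model(MODEL_PATH)
camera = PiCamera(resolution=(160, 120))  # assumed to match the training images

while True:
    # Take a picture of the car's current surroundings.
    camera.capture(FRAME_PATH)

    # Preprocess the same way as during training (assumed: resize + scale to [0, 1]).
    frame = np.asarray(Image.open(FRAME_PATH).resize((160, 120))) / 255.0

    # Ask the model which direction to move, then take one step that way.
    prediction = model.predict(frame[np.newaxis, ...])
    move(LABELS[int(np.argmax(prediction))])

    # Delete the picture before repeating the process.
    os.remove(FRAME_PATH)
```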

Errors Encountered While Testing Models:

Because I chose to make the car move only a small amount at each step, some training images appear to be incorrectly labeled, which could introduce inconsistencies when training the model. To compensate for this, I had to change my driving style during data collection. To keep the captured images as unambiguous as possible, the car must turn as early as possible, leaving enough distance between the front of the car and the edge of the road for the camera to see a clear view of the terrain ahead. Furthermore, images whose labels only corrected small alignment issues were also relabelled manually so that they would not over-influence the model.
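For illustration, a manual relabelling pass might look something like the sketch below, assuming each label is encoded in the image's file name (e.g. "left_0042.jpg"). The project's actual storage scheme is not shown, so the naming convention and directory are assumptions.

```python
import os
from PIL import Image

DATA_DIR = "training_data"  # assumed directory of collected frames

for name in sorted(os.listdir(DATA_DIR)):
    path = os.path.join(DATA_DIR, name)
    Image.open(path).show()  # inspect the frame in the default image viewer

    old_label = name.split("_")[0]
    new_label = input(f"{name} (label={old_label}, Enter to keep): ").strip()
    if new_label and new_label != old_label:
        # Rewrite the label encoded in the file name.
        os.rename(path, os.path.join(DATA_DIR, name.replace(old_label, new_label, 1)))
```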

My Experience:

This part of the project was probably the easiest. I knew my model worked because of the accuracy it showed during testing, so implementing it was the least stressful part of the procedure. The code was also very similar to the data-collection driving code, but with fewer inputs, since the model was already created and only needed to be plugged in. This part of the project had the fewest problems and was by far the easiest.