Today in class, I got the robot I have been working on for the last month to move. It has two wheels and a cardboard chassis that I designed during my drawing practice on the website Onshape. I also got my second motor working without any teacher input.
This is a section of the motor code I used to operate my robot car. I used comments to explain what each part of the code does.
The link to the full code on GitHub is here.
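The code itself is linked above, but the two-motor driving idea can be sketched out in Python. Everything here is a hypothetical stand-in, not my actual code: the `Motor` class just records speed commands instead of driving real hardware, and the names are made up.

```python
# Hypothetical sketch of two-wheel (differential drive) motor logic.
# A real version would send these speeds to a motor driver board.
class Motor:
    """Stands in for one motor driver channel; remembers the last speed command."""
    def __init__(self, name):
        self.name = name
        self.speed = 0.0  # -1.0 (full reverse) to 1.0 (full forward)

    def set_speed(self, speed):
        # Clamp so a typo can never command more than full power.
        self.speed = max(-1.0, min(1.0, speed))

class RobotCar:
    """Differential drive: steer by running the two wheels at different speeds."""
    def __init__(self):
        self.left = Motor("left")
        self.right = Motor("right")

    def forward(self, speed=1.0):
        self.left.set_speed(speed)
        self.right.set_speed(speed)

    def turn_right(self, speed=0.5):
        # Spin the wheels opposite ways so the car turns in place.
        self.left.set_speed(speed)
        self.right.set_speed(-speed)

    def stop(self):
        self.left.set_speed(0)
        self.right.set_speed(0)

car = RobotCar()
car.forward(0.8)
print(car.left.speed, car.right.speed)  # both wheels at 0.8
```

With two independently controlled motors like this, the car never needs a steering mechanism; turning is just a speed difference between the wheels.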
This is my wiring chart. It shows all of the input and output wiring in my robot.
1) The goal of my model is to recognize faces with 90 percent accuracy.
2) To train the model, I used the datasets train_labels and train_images. These datasets contain images and their corresponding labels that the AI can use to train.
3) My model takes in images of faces and outputs who it thinks each person is, along with the probability that it is correct.
4) My model trains for 40 epochs and has 3 layers with 330 neurons each.
5) My model performs in the mid-90s in terms of percent accuracy on the training dataset and around 80 percent on the test dataset.
You can find my model here.
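The model itself is linked above; here is a rough NumPy sketch of what its forward pass does. The 3 layers of 330 neurons match my description, but everything else (image size, number of people, names, the random weights) is made up for illustration; this is not the trained model.

```python
import numpy as np

# Hypothetical forward pass: 3 hidden layers of 330 neurons each, then a
# softmax output giving a predicted person plus a confidence probability.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

n_pixels, n_people = 48 * 48, 10  # made-up sizes for this sketch
sizes = [n_pixels, 330, 330, 330, n_people]
weights = [rng.normal(0, 0.05, (a, b)) for a, b in zip(sizes, sizes[1:])]

def predict(image, names):
    a = image.flatten()
    for w in weights[:-1]:
        a = relu(a @ w)          # hidden layers
    probs = softmax(a @ weights[-1])  # one probability per person
    i = int(np.argmax(probs))
    return names[i], float(probs[i])

names = [f"person_{i}" for i in range(n_people)]
label, prob = predict(rng.random((48, 48)), names)
print(label, round(prob, 3))
```

The softmax at the end is what makes the model output a probability along with its guess, as described in point 3 above.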
This is my singularizer; its job is to make sure only one skittle enters the sorter's system at a time.
Over this semester, we researched and designed a skittle sorter. We researched existing skittle sorters on the internet, sketched our ideas, and assembled prototypes of our components. We also had to create most of our code with little to no help from our teachers, aside from a small starting template; all of our coding skill had to come from research or past projects.
This is a video describing how my skittle sorter sorts tropical-flavored skittles into 4 bins.
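The core sorting step is a lookup: the detected color picks one of the four bins. A minimal sketch of that idea, where the color names and servo angles are assumptions rather than my actual calibration values:

```python
# Hypothetical color-to-bin lookup for a 4-bin sorter. The angles would
# point a chute servo at the right bin; these values are made up.
BIN_ANGLES = {"red": 0, "green": 60, "yellow": 120, "blue": 180}

def bin_angle(color):
    """Return the chute angle for a detected skittle color."""
    if color not in BIN_ANGLES:
        raise ValueError(f"unrecognized color: {color}")
    return BIN_ANGLES[color]

print(bin_angle("yellow"))  # 120
```

Raising an error on an unknown color means a bad sensor reading stops the sorter instead of silently dropping a skittle into the wrong bin.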
It isn't in the video, but there is now code that shakes the ramp so that no skittles get stuck at the end.
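The ramp-shake idea can be sketched as a short wiggle routine. This is a hypothetical version, not my actual code: the angles, cycle count, and timing are made up, and `set_angle` stands in for whatever function drives the ramp servo.

```python
import time

# Hypothetical ramp-shake routine: wiggle the ramp servo around its resting
# angle a few times so skittles at the end fall loose. All values assumed.
def shake_ramp(set_angle, center=90, amount=10, cycles=3, delay=0.05):
    """Wiggle the ramp servo, then settle it back to its resting angle."""
    for _ in range(cycles):
        set_angle(center + amount)
        time.sleep(delay)
        set_angle(center - amount)
        time.sleep(delay)
    set_angle(center)  # return the ramp to rest

# Record the commanded angles instead of driving real hardware.
angles = []
shake_ramp(angles.append, delay=0)
print(angles)  # [100, 80, 100, 80, 100, 80, 90]
```

Passing the servo function in as an argument makes the routine easy to test off the robot, since a plain list can record the commands.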