Engineering Concepts

IMG_7906.mov

Robot is Moving

Through a step-by-step process, we were able to create a robot. I started by designing the wheels and robot chassis in Onshape and 3D printing them. After learning how to wire the Pico, motor driver, motherboard, and motors, I worked in the Mu editor program to make my robot carry out its code. Below are images taken to document my progress.
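
The full code is in the GitHub repository linked below, but here is a simplified sketch of the kind of MicroPython program involved in making the robot move. The GPIO numbers, the two-pin (IN1/IN2) motor driver style, and the `speed_to_duty` helper are placeholders for illustration, not my actual wiring.

```python
def speed_to_duty(speed_percent):
    """Convert a speed from 0-100% to a 16-bit PWM duty value."""
    return int(speed_percent / 100 * 65535)

try:
    from machine import Pin, PWM  # only available on the Pico itself

    in1 = PWM(Pin(2))  # placeholder GPIO numbers
    in2 = PWM(Pin(3))
    in1.freq(1000)
    in2.freq(1000)

    def forward(speed_percent):
        # One driver input gets the PWM signal, the other stays low.
        in1.duty_u16(speed_to_duty(speed_percent))
        in2.duty_u16(0)

    def stop():
        in1.duty_u16(0)
        in2.duty_u16(0)
except ImportError:
    pass  # not running on the Pico (e.g. testing on a laptop)
```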

Unassembled Robot

This is what my robot looked like with all of the pieces laid out before they were attached.

Assembled Robot

This is what my robot looks like now that the parts have been glued onto the chassis. Next to my robot is a card with my wiring table, and there are labeling stickers on my wires. Both of these are meant to prevent confusion in the future if anything falls apart.

GitHub

Using the "#" symbol, I was able to create comments that describe the purpose of different lines of code.

GitHub is a website where I can store code in different repositories to use later and share with others. Here is the code that I used for this robot. GitHub Link

IMG_8762.mov

Traversing the Path

After programming the robot to move forward, backward, left, and right, we were tasked with making it traverse a path. In class we talked about the physics behind our robots. I knew that mine had trouble moving straight without drifting. The solution was to weigh down my robot and adjust the surface of our wheels. I did this by taping around my wheels and doubling them up. I also changed the weight distribution by rearranging the parts on my robot.

When coding, I played around with the premade movement commands we had written. Our class learned how to turn them into functions, which simplified the code so that I could repeat steps more easily.
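
To illustrate how functions make repeating steps easier, here is a sketch of a path written as a list of (move, seconds) steps that a single loop carries out. The movement functions here are stand-ins that just log their names; the real ones drive the motors.

```python
import time

def run_path(path, moves, delay=True):
    """Carry out a path: each step is a (move_name, seconds) pair."""
    for name, seconds in path:
        moves[name]()            # call the movement function
        if delay:
            time.sleep(seconds)  # let the move run for its duration

# Stand-in movement functions; the real ones drive the motors.
log = []
moves = {
    "forward": lambda: log.append("forward"),
    "left":    lambda: log.append("left"),
}

# Repeating two sides of a square becomes one readable line.
path = [("forward", 2), ("left", 1)] * 2
run_path(path, moves, delay=False)
```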

GitHub Link

Robot Body Design Process

IMG_2921.MOV

Prototype Phase 1a (Servos working)

In this stage of creating our Skittle sorters, my partner and I got the servos we'll be using to function. We used a breadboard and a Pico. Because my partner and I are planning to have two funnels for the Skittles to be sorted, we programmed two servos in Python. Both servos connect to 5V and GND on the breadboard, and each has a third signal wire that goes into the middle section of the breadboard, as specified in the wiring table. To supply power from the computer, the Pico also connects to the breadboard. The specific points where VSYS, the GP pins, and GND are plugged in are also specified in the wiring table.
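
A hobby servo is positioned by a 50 Hz PWM signal whose pulse width sets the angle. Below is a sketch of how the two servos could be set up in MicroPython; the GPIO numbers and the 500-2500 microsecond pulse range are placeholder assumptions, not our exact wiring or calibration.

```python
def angle_to_duty_u16(angle, min_us=500, max_us=2500, period_us=20000):
    """Map a servo angle (0-180) to a 16-bit duty value at 50 Hz."""
    pulse_us = min_us + (angle / 180) * (max_us - min_us)
    return int(pulse_us / period_us * 65535)

try:
    from machine import Pin, PWM  # only available on the Pico

    servo_a = PWM(Pin(0))  # placeholder GPIO numbers
    servo_b = PWM(Pin(1))
    servo_a.freq(50)       # standard servo frame rate
    servo_b.freq(50)

    def set_angle(servo, angle):
        servo.duty_u16(angle_to_duty_u16(angle))
except ImportError:
    pass  # not running on the Pico
```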

Labeled Wires

Wiring Table

GitHub

Here is the code that we used for the servos. GitHub Link

Prototype Phase 1B (RGB Sensor)

IMG_1097.MOV

Sensor Working

My partner and I soldered the SDA and SCL pins from the Pico controller to the corresponding pins on the color sensor. We then went to GitHub to get the necessary libraries to program the color sensor and get it running.

Data Collection

We tested different colored pens, markers, and other objects and recorded the RGB values that printed to the serial console.
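
The recorded RGB readings can later be used to decide which color the sensor is seeing by comparing a new reading to saved reference values. This nearest-color sketch uses made-up reference numbers; the real ones come from our data collection.

```python
def classify_color(rgb, references):
    """Return the name of the reference color closest to an (r, g, b) reading."""
    def dist_sq(a, b):
        # Squared Euclidean distance between two RGB triples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist_sq(rgb, references[name]))

# Made-up reference readings; real ones come from the serial output.
references = {
    "red":    (200, 40, 40),
    "green":  (40, 180, 60),
    "yellow": (210, 190, 50),
}

print(classify_color((190, 50, 45), references))  # prints "red"
```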

Face Recognition Model

In this unit, our class worked with Pedro Pascal from the AI Institute at the Georgia Institute of Technology. He taught us about deep learning, a type of machine learning that uses neural networks with multiple layers to process information and achieve a deeper understanding of data.

Our model uses data from a set of images collected over the span of multiple weeks. Both engineering classes contributed to the data set by taking pictures. The images were used to train and test the accuracy of the model. To put it simply, the model uses a series of imports from the TensorFlow library. It loads the images from a Google Drive and splits the data into training and testing sections. The training set uses the most data; this is where the machine learns how to predict the names that correspond with the images. The testing set is similar, but this is where the machine is supposed to perform with the most accuracy.

The purpose of our individual models was to label the correct names with the pictures as accurately as possible. We did this by experimenting with the number of epochs run and the number of layers in the model. I was able to get my model to 82% accuracy on the training set and 97% accuracy on the testing set.
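
The overall shape of the model can be sketched like this. The layer sizes, image shape, and split fraction are placeholders for illustration, not the exact values in my notebook; the split helper just shows how most of the data goes to training.

```python
def split_counts(n_images, train_frac=0.8):
    """Split a data set so the training set gets most of the images."""
    n_train = int(n_images * train_frac)
    return n_train, n_images - n_train

try:
    import tensorflow as tf  # only if TensorFlow is installed

    def build_model(num_names, image_shape=(64, 64, 3)):
        # Placeholder layer sizes; we tuned these by experimenting
        # with the number of layers and epochs.
        return tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=image_shape),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(num_names, activation="softmax"),
        ])
except ImportError:
    pass  # TensorFlow not available here
```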

Here is the link to my code.

IMG_3616.MOV
IMG_3653.MOV

Skittle Sorter Singularizer 

The next stage of creating our sorter was to get the singularizer working. The singularizer is the mechanism that feeds the Skittles into the sorter one at a time.

To make it, we built a funnel system that moves Skittles onto two sides of the sorting layer. We created a rotating layer with a hole that the Skittles fall into, as well as a layer with two holes on opposite sides that drops Skittles down one at a time. To drive this design, a servo is connected to a servo horn and dowel that rotate the hole layer. Additionally, there are multiple rings of cardboard that provide a space for the singularizer. When we work more on the color sensor, we plan to place it next to the spot where the Skittles get dropped.
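
In software, the singularizer is just the servo stepping the hole layer back and forth between a load position and a drop position. The angles here are placeholders, not our measured positions:

```python
def singularizer_angles(cycles, load_angle=0, drop_angle=90):
    """Return the sequence of servo angles for repeated load/drop cycles."""
    seq = []
    for _ in range(cycles):
        seq.append(load_angle)  # hole lines up under the funnel
        seq.append(drop_angle)  # hole lines up over the drop slot
    return seq

print(singularizer_angles(2))  # prints [0, 90, 0, 90]
```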

Our singularizer prototype functions fairly well; the only issue we have is occasional jamming. We think this will be solved when we move on to working with wood.