For our very first prototype, our team simply wanted to sketch out where certain components would be located. We took a posterboard and drew where the screen would sit in the back of the truck (the drawing is not to scale). We also drew where we would like our sensor to be located, although its placement is still an ongoing debate. After drawing this first rough prototype, our team started thinking seriously about component placement, because the sketch made it clear that connecting everything without mishaps will be difficult. This was a good first prototype for getting us thinking about how to improve our product, specifically the positioning of our components.
The VEX V5 Optical Sensor detects color and relays it to the robot brain.
The VEX V5 Robot Brain receives the color detected by the optical sensor and displays it on its screen.
The VEX V5 Battery powers our prototype system.
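The sensor-to-brain flow described above can be sketched in plain Python: the sensor produces a color reading, and the brain turns it into one of the three traffic-light colors (or none). The function name and the RGB thresholds here are our own illustrative assumptions, not the VEX API.

```python
# Hypothetical sketch of classifying a raw (r, g, b) reading (0-255 each)
# into a traffic-light color, as the Optical Sensor reports to the Robot
# Brain. Threshold values are illustrative assumptions, not calibrated data.

def classify_color(r, g, b):
    """Map an RGB reading to 'red', 'yellow', 'green', or 'none'."""
    if r > 150 and g < 100 and b < 100:
        return "red"
    if r > 150 and g > 150 and b < 100:
        return "yellow"  # red + green light together reads as yellow
    if g > 150 and r < 100 and b < 100:
        return "green"
    return "none"  # nothing recognizable in view

print(classify_color(200, 40, 30))   # a strong red reading
print(classify_color(210, 200, 50))  # a yellow reading
print(classify_color(30, 190, 60))   # a strong green reading
```

A real sensor would feed live readings into this kind of check in a loop; the thresholds would need tuning against the actual traffic light.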
At first, when one of our members (JP) met with our mentor for help connecting our VEX kit to our display screen, we thought we would need to solder a new wire to connect the two devices. However, after reading a few online forums, we learned that a Raspberry Pi can connect the VEX V5 robot brain to a display screen and mirror what the robot brain shows onto the screen for an overall better demonstration of our product, which is what the pictures below show.
To successfully show what the VEX Robot Brain displays on a display screen, our team had to set up and code a Raspberry Pi to act as the medium connecting the robot brain and the screen.
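The Raspberry Pi's role as a medium can be sketched as a small translate-and-forward step: take a message from the robot brain and turn it into text for the external display. The `COLOR:<name>` message format and both function names below are purely our own assumptions for illustration, not a VEX or Raspberry Pi protocol.

```python
# Hypothetical sketch of the Raspberry Pi as a medium: parse a message
# from the robot brain and format it for the external display screen.
# The "COLOR:<name>" wire format is an assumption made for this example.

def parse_brain_message(line):
    """Extract the color name from a hypothetical 'COLOR:<name>' line."""
    if line.startswith("COLOR:"):
        return line.split(":", 1)[1].strip().lower()
    return None  # unrecognized message

def display_text(color):
    """Format the parsed color for the external display."""
    if color is None:
        return "No signal from robot brain"
    return f"Traffic light: {color.upper()}"

print(display_text(parse_brain_message("COLOR:red")))
```

In the real setup, the parsing loop would run continuously on the Pi, reading from the brain's connection and redrawing the screen each time a new message arrives.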
Here is the best model our team has been able to create with the resources we have gathered so far. This is the finished prototype we will be displaying at EXPO.
This was an accuracy test to see whether our Optical Sensor could pick up each of the three traffic-light colors, or correctly report when none of them was present, and it proved successful.
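The bookkeeping for an accuracy test like this is simple to sketch: pair each color actually shown with the color the sensor reported, then count matches. The trial values below are made-up placeholders to illustrate the tally, not our recorded data.

```python
# Hypothetical accuracy tally: (color shown, color detected) per trial.
# These readings are placeholder values illustrating the bookkeeping only.

trials = [
    ("red", "red"), ("red", "red"),
    ("yellow", "yellow"), ("yellow", "yellow"),
    ("green", "green"), ("none", "none"),
]

correct = sum(1 for shown, detected in trials if shown == detected)
accuracy = correct / len(trials)
print(f"{correct}/{len(trials)} trials correct ({accuracy:.0%})")
```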
This was a difficult test to run because the transmission was near instantaneous: as soon as the optical sensor captured a color, the display showed it. These were the rough results we got; all of them were below a second and, as noted, near instantaneous, proving the system effective.
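One way to time a near-instantaneous detect-to-display cycle is to wrap it with a high-resolution clock. The sketch below uses Python's standard `time.perf_counter`; the sensor read and display update are stand-in functions, since the real ones depend on the VEX and Raspberry Pi setup.

```python
import time

def measure_latency(read_sensor, update_display):
    """Time one detect-to-display cycle with a high-resolution clock."""
    start = time.perf_counter()
    color = read_sensor()      # stand-in for the optical sensor read
    update_display(color)      # stand-in for redrawing the display
    return time.perf_counter() - start

# Dummy stand-ins so the sketch runs without hardware attached.
elapsed = measure_latency(lambda: "green", lambda c: None)
print(f"detect-to-display: {elapsed * 1000:.3f} ms")
```

Averaging this over many cycles would give a steadier number than a single stopwatch-style reading, which is what made our manual test difficult.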
This is where our Optical Sensor failed the most: it could not detect color from farther than 7 inches away without misreading it. This was the main struggle with our prototype, and we were unable to acquire a better sensor to improve the demonstration of our product.
Here, our group wanted to test the Optical Sensor's different brightness levels in different lighting environments. The Optical Sensor's light can be set anywhere from 0% up to 100% brightness, and we wanted to see whether different environments would affect it. Only in a dark space with the brightness at 0% did the Optical Sensor struggle to pick up red; however, after some retests, this appeared to have been a fluke. The Optical Sensor worked well across a reasonable range of lighting environments.