One of our goals this season was to use a camera and computer vision to detect our game element for the most autonomous points. The idea was to mount a camera on the front that could see all of the spike marks, move to place the purple pixel, and then, using the AprilTags on the board, navigate to place the yellow pixel and get out of the way for our alliance partner.
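As a rough sketch of what that AprilTag step could look like with the FTC SDK's VisionPortal API: the webcam name "Webcam 1" is an assumption, and the telemetry loop stands in for real navigation code.

```java
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;

@Autonomous(name = "AprilTagSketch")
public class AprilTagSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // Build an AprilTag processor and attach it to the front webcam.
        AprilTagProcessor aprilTag = AprilTagProcessor.easyCreateWithDefaults();
        VisionPortal portal = VisionPortal.easyCreateWithDefaults(
                hardwareMap.get(WebcamName.class, "Webcam 1"), aprilTag);

        waitForStart();
        while (opModeIsActive()) {
            // Each detection carries the tag ID and a pose relative to the camera.
            for (AprilTagDetection det : aprilTag.getDetections()) {
                telemetry.addData("Tag", det.id);
                if (det.metadata != null) {
                    // range and bearing could feed a simple drive-to-tag controller
                    telemetry.addData("Range (in)", det.ftcPose.range);
                    telemetry.addData("Bearing (deg)", det.ftcPose.bearing);
                }
            }
            telemetry.update();
        }
        portal.close();
    }
}
```

In principle, ftcPose.range and ftcPose.bearing from a board tag are enough to line the robot up with a simple proportional controller, which is what made this plan attractive.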
With inexperienced first-year programmers, using the camera effectively would take a lot of work, and no one across our four teams knew how to use a camera to detect an object. To start, we programmed the robot to park with the preloaded pixels, and somehow we managed to put the purple pixel on the correct spike mark twice in one competition.
Rather than jump straight into computer vision, we wanted to use something more straightforward: a color sensor and distance sensor combo. We designed our custom game element to be cubic and concave on four of its faces; the recessed faces gave the color sensor more consistent readings because no light could come in from other sources. The robot would search for the element and place the pixels accordingly, using the distance sensor to zero in on the backboard. This approach scored us 50 points in autonomous. Our next main area for improvement was the speed at which the camera could come into play.
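A minimal sketch of that sensor combo, assuming a red game element, hardware-map names "colorSensor" and "distSensor", and an illustrative 2x color-dominance threshold; none of these are fixed by our actual configuration, and the drive helper is hypothetical.

```java
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.ColorSensor;
import com.qualcomm.robotcore.hardware.DistanceSensor;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

@Autonomous(name = "SensorComboSketch")
public class SensorComboSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        ColorSensor color = hardwareMap.get(ColorSensor.class, "colorSensor");
        DistanceSensor dist = hardwareMap.get(DistanceSensor.class, "distSensor");

        waitForStart();

        // The concave face shields the sensor from ambient light, so a simple
        // red-dominant check can recognize the element. A single reading is
        // shown here; the real routine samples while searching. The 2x
        // threshold is illustrative and would be tuned against real readings.
        boolean elementSeen = color.red() > 2 * color.blue()
                && color.red() > 2 * color.green();
        telemetry.addData("Element detected", elementSeen);
        telemetry.update();

        // Creep toward the backboard until the distance sensor reads ~15 cm.
        while (opModeIsActive() && dist.getDistance(DistanceUnit.CM) > 15) {
            // driveForwardSlowly(); // hypothetical drive helper
            idle();
        }
    }
}
```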
This is where we got to work on our main goal: using a camera for object detection. The robot starts by scanning the two spike marks in front of it; if the element is not found within four seconds of starting the scan, it assumes the element is on the spike mark it cannot see. The robot then moves to place the purple pixel on that spike mark and continues on to place the yellow pixel on the board. The priority is to make all of this happen quickly so that our alliance partner can also score their yellow pixel for more points.
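A hedged sketch of that scan-and-timeout logic using the FTC SDK's TfodProcessor: the webcam name, the 640-pixel frame-width assumption behind the x < 320 split, the choice of which mark is hidden, and the placePurplePixel/placeYellowPixel helpers are all placeholders, and a custom model trained on the team prop would replace the default one.

```java
import java.util.List;
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.util.ElapsedTime;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

@Autonomous(name = "CameraScanSketch")
public class CameraScanSketch extends LinearOpMode {
    // Which spike mark the element sits on, as seen by the camera.
    enum Spike { LEFT, CENTER, RIGHT }

    @Override
    public void runOpMode() {
        // Default model shown; a custom team-prop model would be loaded in practice.
        TfodProcessor tfod = TfodProcessor.easyCreateWithDefaults();
        VisionPortal portal = VisionPortal.easyCreateWithDefaults(
                hardwareMap.get(WebcamName.class, "Webcam 1"), tfod);

        waitForStart();

        // Scan the two visible spike marks; fall back to the hidden one
        // if nothing shows up within four seconds.
        Spike spike = Spike.RIGHT; // assumed to be the mark the camera cannot see
        ElapsedTime timer = new ElapsedTime();
        while (opModeIsActive() && timer.seconds() < 4.0) {
            List<Recognition> found = tfod.getRecognitions();
            if (!found.isEmpty()) {
                // Use the horizontal center of the bounding box to pick
                // left vs. center (320 assumes a 640-pixel-wide frame).
                Recognition r = found.get(0);
                double x = (r.getLeft() + r.getRight()) / 2;
                spike = (x < 320) ? Spike.LEFT : Spike.CENTER;
                break;
            }
            idle();
        }
        portal.close();
        telemetry.addData("Spike mark", spike);
        telemetry.update();
        // placePurplePixel(spike);  // hypothetical movement helpers
        // placeYellowPixel(spike);
    }
}
```

Defaulting spike to the unseen mark before the loop means the four-second timeout needs no special-case code: if the loop exits without a detection, the fallback answer is already in place.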