The video above demonstrates our final prototype tracking a drone. It shows the image detection from the system's perspective (the video with the large blue line down the middle). A webcam records video, and we run image detection on each frame to determine whether a drone is on screen. Once we detect one, we compute its position on screen and command the motor to rotate the system clockwise or counterclockwise toward it, as sketched below.

The main optimization throughout the prototyping process was the effectiveness of our image detection. We improved the model's accuracy by training it on more images. In addition, to make inference fast enough on the Raspberry Pi, we trained a quantized MobileNet model and converted it to TensorFlow Lite. Finally, we calibrated the motor speed: too fast and the system would overshoot the drone and have to correct back; too slow and it could not keep up with the drone.
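To make the detect-and-rotate loop concrete, here is a minimal sketch of the pipeline in Python. The model path, score threshold, deadband, and the motor helper functions are illustrative assumptions rather than our exact code, and the output tensor layout shown is the common SSD-style ordering, which can differ between detection models.

    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

    MODEL_PATH = "drone_detect.tflite"  # hypothetical model file
    SCORE_THRESHOLD = 0.5               # minimum confidence to count as a drone
    DEADBAND_PX = 40                    # tolerance around screen center

    def rotate_clockwise():
        # Placeholder: step the motor clockwise via the Pi's GPIO pins.
        pass

    def rotate_counterclockwise():
        # Placeholder: step the motor counterclockwise via the Pi's GPIO pins.
        pass

    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    _, in_h, in_w, _ = input_details[0]["shape"]

    cap = cv2.VideoCapture(0)  # the webcam feeding the tracker

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Resize the frame to the model's input size and match its dtype
        # (uint8 for a quantized model).
        resized = cv2.resize(frame, (in_w, in_h))
        tensor = np.expand_dims(resized, 0).astype(input_details[0]["dtype"])
        interpreter.set_tensor(input_details[0]["index"], tensor)
        interpreter.invoke()

        # Common SSD-style outputs: boxes, classes, scores, count.
        boxes = interpreter.get_tensor(output_details[0]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]

        if scores[0] > SCORE_THRESHOLD:
            # Highest-scoring box, normalized [ymin, xmin, ymax, xmax].
            ymin, xmin, ymax, xmax = boxes[0]
            center_x = (xmin + xmax) / 2 * frame.shape[1]
            error = center_x - frame.shape[1] / 2

            # Rotate toward the drone; stay still inside the deadband so
            # the system does not oscillate around a centered target.
            if error > DEADBAND_PX:
                rotate_clockwise()
            elif error < -DEADBAND_PX:
                rotate_counterclockwise()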
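The Raspberry Pi optimization hinges on the TensorFlow Lite conversion. Below is a minimal sketch of that step, assuming the trained MobileNet detector is available in SavedModel format; the paths are placeholders, and the quantization shown is TensorFlow's post-training dynamic-range variant, one of several ways to produce a quantized .tflite file.

    import tensorflow as tf

    # Hypothetical path to the trained MobileNet-based drone detector.
    converter = tf.lite.TFLiteConverter.from_saved_model("drone_detector_savedmodel")

    # Quantize weights to 8-bit integers: a smaller file and faster
    # CPU inference on the Raspberry Pi.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    tflite_model = converter.convert()
    with open("drone_detect.tflite", "wb") as f:
        f.write(tflite_model)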
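The motor calibration comes down to a speed trade-off, and one way to express it in code is a clamped proportional controller: rotate faster the farther the drone is from center, but never past a calibrated maximum. This generalizes the fixed-speed commands in the loop above; the constants are illustrative, not our measured values.

    K_P = 0.5          # proportional gain: pixels of error -> degrees/second
    MAX_SPEED = 60.0   # cap so the system cannot overshoot the drone
    DEADBAND_PX = 40   # ignore small errors so the motor does not jitter

    def motor_speed(error_px: float) -> float:
        """Map the drone's horizontal offset from center to a signed speed."""
        if abs(error_px) < DEADBAND_PX:
            return 0.0  # close enough to centered; hold position
        speed = K_P * error_px
        # Clamp to the calibrated maximum in either direction.
        return max(-MAX_SPEED, min(MAX_SPEED, speed))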