Subsystem 1 is focused on the visual identification of the drone and on controlling the rotation of our platform. Since we will be detecting the drone with a deep learning model, we can draw on a number of tools for analyzing the model's efficacy. The simplest approach is to record the model's accuracy on new drone images that were not part of the training set. This is usually a decent indicator of how well the model performs, but other factors should also be taken into account, including the trend of our chosen loss function and the possibility of overfitting to the training samples. Since we are using TensorFlow Lite as our primary training/implementation library, we should be able to collect these statistics easily, which will let us determine what changes need to be made to the model during training; a sketch of this evaluation workflow is given below.

The other portion of Subsystem 1 is controlling the rotation of the platform with the same Raspberry Pi that performs visual detection. Analysis of this portion will come from physical tests: we will need to determine at what speeds we can rotate the platform so that a moving drone stays within the frame of our camera. As long as we can keep the drone within the frame, we can ensure that our directional transmit antenna is facing the drone to perform the deterrence. We will also need to analyze the various "idle" states the platform can be in, to optimize how the system searches the surrounding area; this can likewise be evaluated through physical experiments. A rotation-control sketch follows the evaluation example below.
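As a concrete illustration of the evaluation workflow described above, the following is a minimal sketch of how per-epoch accuracy and loss on a held-out validation set could be collected in TensorFlow (Keras) before converting the model for on-device use with TensorFlow Lite. The directory layout, image size, and network architecture here are assumptions for illustration, not the project's final choices.

```python
import tensorflow as tf

# Assumed directory layout: one folder per class (e.g. "drone", "background").
# Paths, image size, and batch size are placeholders, not final project values.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)

# A small stand-in classifier; the real detector would likely be larger.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Per-epoch validation accuracy and loss expose overfitting: training loss
# keeps falling while validation loss starts to climb.
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=20,
    callbacks=[tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=3, restore_best_weights=True)],
)

# Convert the trained model for on-device inference with TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("drone_detector.tflite", "wb") as f:
    f.write(converter.convert())
```

The `history` object holds the accuracy and loss curves referenced above, and the early-stopping callback is one simple guard against over-optimizing toward the training samples.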
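For the rotation-control portion, the sketch below shows one way the Raspberry Pi might steer the platform, assuming a servo-driven pan mount controlled through the gpiozero library. The GPIO pin, gain, dead band, and sweep parameters are placeholders to be set by the physical experiments, and the `drone_seen` callback is a hypothetical stand-in for the detector's output.

```python
import time
from gpiozero import AngularServo

# Assumed hardware: a servo-driven pan platform wired to GPIO 17. The pin
# number, angle limits, gain, and timing values below are placeholders to
# be tuned during the rotation-speed tests described above.
FRAME_WIDTH = 640   # camera frame width in pixels
DEAD_BAND = 30      # pixel error tolerated before the platform moves
GAIN = 0.05         # degrees of rotation commanded per pixel of error

servo = AngularServo(17, min_angle=-90, max_angle=90)
servo.angle = 0

def track(bbox_center_x: float) -> None:
    """Nudge the platform so a detected drone stays near the frame center."""
    error = bbox_center_x - FRAME_WIDTH / 2
    if abs(error) < DEAD_BAND:
        return  # close enough to center; hold position
    # Clamp the commanded angle to the servo's mechanical limits.
    servo.angle = max(-90.0, min(90.0, servo.angle + GAIN * error))

def idle_sweep(step_deg: float = 5.0, dwell_s: float = 0.2,
               drone_seen=lambda: False) -> None:
    """Sweep back and forth across the field of view while nothing is detected."""
    direction = 1
    while not drone_seen():
        next_angle = servo.angle + direction * step_deg
        if not -90.0 <= next_angle <= 90.0:
            direction = -direction  # reverse at the end of travel
            continue
        servo.angle = next_angle
        time.sleep(dwell_s)
```

In this framing, the rotation-speed tests would effectively determine the largest gain and sweep step the platform can sustain while still keeping a moving drone in frame, and the idle-state experiments would compare different sweep patterns and dwell times.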