Milestone 2

Project Plan

The project consists of two main subsystems, each focusing on a distinct method of detecting a drone's presence, and the team will be divided between them. The first subsystem focuses on detecting the image of a drone within a live video stream. The second strives to identify the communication signals between a drone and its controller and then broadcast signals to the drone to deter it.

The team that will construct Subsystem 1 will consist of Dan, Thomas, and Tim. The team will need to complete the following tasks to fully construct this subsystem:

  1. Implement Object Detection Model on Raspberry Pi 4

    1. Gather a large collection of drone images

    2. Process and label images

    3. Train a custom model starting from a predefined model found in TensorFlow's "model zoo"

    4. Convert the custom TensorFlow (TF) model to a TF Lite model using the TensorFlow Lite Optimizing Converter (TOCO) (see the conversion sketch after this list)

    5. Test the model's performance while running on the Raspberry Pi 4

    6. Additionally, the team can incorporate the Coral Edge TPU (Tensor Processing Unit) to drastically increase the model's performance.

      1. Recompile the custom model through Coral's Edge TPU Compiler tool

  2. Rotate system to face detected drone

    1. Use the results of the custom TF model to determine the drone's location in the video stream

    2. Rotate the platform so that the detected object is in the center of the video stream (see the rotation sketch after this list)
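
As a sketch of task 1.4, the conversion step can be done with TensorFlow's converter API (TOCO has since been folded into `tf.lite.TFLiteConverter`); the model path and output filename below are placeholders:

```python
import tensorflow as tf

# Convert the trained detector to TF Lite for the Raspberry Pi.
# "saved_model/" is a placeholder path to our exported custom model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
# Optional: quantize weights so the model is smaller and faster on the Pi
# (full-integer quantization is also what the Edge TPU compiler expects).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("drone_detector.tflite", "wb") as f:
    f.write(tflite_model)
```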
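
And as a sketch of task 2, the detection results can be turned into a pan correction. This assumes the standard TFLite SSD detection output signature (boxes, classes, scores, count) and a placeholder camera field of view; the actual motor control is left as a stub:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="drone_detector.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

CAMERA_FOV_DEG = 62.0  # placeholder horizontal field of view

def pan_correction(frame):
    """Degrees to rotate so the highest-scoring detection is centered."""
    # `frame` must already be resized/typed to match the model's input tensor.
    interpreter.set_tensor(input_detail["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    # Standard TFLite SSD outputs: boxes [1,N,4] as [ymin, xmin, ymax, xmax],
    # then classes, scores, and detection count.
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    best = int(np.argmax(scores))
    if scores[best] < 0.5:
        return 0.0                     # nothing confident enough to track
    ymin, xmin, ymax, xmax = boxes[best]
    x_center = (xmin + xmax) / 2.0     # normalized 0..1 across the frame
    # Positive result means the drone is right of center: pan right.
    return (x_center - 0.5) * CAMERA_FOV_DEG
```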

The team that will construct Subsystem 2 will consist of Ben, Connor, and Dan. The team will need to complete the following tasks to fully construct this subsystem:

  1. Study the signals between the drone and its controller in a controlled environment

    1. Use Tektronix Real-time Spectrum Analyzers (RSA) to identify the shapes of various signals sent between the drone and its controller

    2. Record the details of the shapes and spectrum locations of the drone signals

    3. Write a script to watch for the scenarios recorded above using the Pluto Software Defined Radio (SDR) (see the SDR sketch after this list)

  2. While studying the drone signals, also study the effects of transmitting our own signals, such as noise, in the same frequency range

    1. Record the results of the transmission tests and determine the best signal type to transmit for deterrence

    2. Write a script, to be called by the script from Task 1.3, which will use the SDR and the directional antenna to transmit the deterrence signal

  3. Build Machine Learning model for more robust drone signal identification

    1. Gather a large dataset of drone/controller signal readings

    2. Process the recorded data

    3. Train a custom model starting from a predefined model found in TensorFlow's "model zoo"

    4. Convert the custom TensorFlow (TF) model to a TF Lite model using the TensorFlow Lite Optimizing Converter (TOCO)

    5. Test the model's performance while running on the Raspberry Pi 4

    6. Additionally, the team can incorporate the Coral Edge TPU (Tensor Processing Unit) to drastically increase the model's performance.

      1. Recompile the custom model through Coral's Edge TPU Compiler tool
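
A minimal sketch of the watch-then-trigger flow from tasks 1.3 and 2.2, using the pyadi-iio driver for the Pluto SDR; the center frequency, threshold, and trigger hook are placeholders to be filled in from the RSA measurements:

```python
import numpy as np
import adi  # pyadi-iio driver for the ADALM-Pluto

sdr = adi.Pluto("ip:192.168.2.1")   # default Pluto network address
sdr.sample_rate = int(10e6)
sdr.rx_lo = int(2437e6)             # placeholder: mid 2.4 GHz control band
sdr.rx_buffer_size = 4096

THRESHOLD_DB = -40.0                # placeholder: set from the RSA recordings

def band_power_db(samples):
    """Average capture power in dB (relative, uncalibrated)."""
    return 10 * np.log10(np.mean(np.abs(samples) ** 2))

while True:
    rx = sdr.rx()                   # one buffer of complex baseband samples
    if band_power_db(rx) > THRESHOLD_DB:
        # Placeholder hook: this is where the task 2.2 transmit script would
        # be invoked to send the deterrence signal out the directional antenna.
        print("possible drone signal detected")
```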

Additionally, the construction of the system's housing will be handled by Ben, Thomas, and Tim.


We have also made a Gantt chart to help distribute and track our tasks, which can be seen here: Gantt Chart

Concepts

  1. Image Detection - We plan on using computer vision techniques to properly identify and track drones in the sky.

  2. RF Detection - We will also have our system actively searching for radio waves between the drone and the controller at certain frequencies in order to help locate and detect drones in the vicinity.

  3. RF Jamming - Signals will be output from our system in an effort to jam the communication between the drone and the controller, in the hopes of disrupting the drone's live video feed and forcing the drone to land.

  4. Rotating Stand (Oscillating or Full 360) - A rotating stand will be used to hold the system, and this stand will either rotate a full 360 degrees or sweep back and forth in a specific direction to help the image detection and RF detection find drones in the sky.

Concept Selection

  1. Image Detection: We are using image detection in our project because we feel it is the most fundamental form of detection available to us and will work in most cases. In addition, we will use image detection to determine whether the drone is to the left or right of where the DDS is aiming.

  2. RF Detection: We are using RF detection because it will work in many situations where image detection will not (or where image detection will be less effective or accurate). For example, a very noisy background (such as a treeline) could make image detection less effective; in that case we can use RF detection to determine whether there is a drone in the area.

  3. RF Jamming: We intend to jam the connection between the drone and the controller to force the drone to land. We are concerned about the legality of jamming the drone; the main ways we intend to deal with this are either hopping frequencies or making sure that the range at which we are jamming is not large enough to affect anything other than the drone.

  4. Rotating Stand: We intend to mount the entire system on a rotating stand so that we can aim the antenna and camera at the drone based on where we detect it. This is necessary because the antenna we are using transmits a very narrow beam; if the platform did not move, the antenna would only be jamming a very narrow area that would likely not contain the drone, or not contain it for long.


Analysis

Since our project combines different technical areas with differing individual goals, our analysis of any results throughout the semester will have to be tailored to each of those goals.

Subsystem 1 is focused on the visual identification of the drone and controlling the rotation of our platform. Since we will be detecting the drone using a deep learning model, we can utilize a multitude of tools for analyzing the efficacy of our model. The simplest way of analyzing the model involves recording its accuracy on new drone images that were not part of the training set. This is usually a decent indicator of how well the model performs, but there are other factors that should be taken into account, including the trend of our chosen loss function and the possibility of overfitting to the training samples. Since we are using TensorFlow Lite as our primary training/implementation library, we should be able to easily collect these statistics, and that should allow us to properly determine any changes that need to be made to the model during training (a validation sketch follows this paragraph).

The other portion of Subsystem 1 is controlling the rotation of the platform with the same Raspberry Pi that will be performing visual detection. Analysis of this portion will come from physical tests: we will have to see at what speeds we can rotate the platform so that a moving drone stays within the frame of our camera. As long as we can keep the drone within the frame, we can ensure that our directional transmit antenna is facing the drone to perform the deterrence. We will also need to analyze various "idle" states that the platform can be in, to try to optimize the way in which the system searches the surrounding area; this can also be evaluated through physical experiments.
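
As a minimal, self-contained sketch of watching the loss trend for overfitting (the tiny model and random data here are stand-ins for our real detector and drone image set):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for our detector: real training would use labeled drone images.
x = np.random.rand(200, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=(200,))

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% of the data and stop when validation loss stops improving;
# a growing train/validation loss gap is our overfitting signal.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
history = model.fit(x, y, validation_split=0.2, epochs=50,
                    callbacks=[early_stop], verbose=0)

print("final train loss:", history.history["loss"][-1])
print("final val loss:  ", history.history["val_loss"][-1])
```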

Subsystem 2 is focused entirely on radio-frequency-based decision making and offensive techniques. One of the goals we will likely have to tackle involves determining whether a drone is present in the immediate area by looking across the expected RF spectrum. Our tests will give us an idea of the possible frequencies the drone can take and what the shape of its spectrum will be. We can use this information to train a deep learning model that can differentiate between non-drone and drone signals (since we will be operating within the same band as Bluetooth, Wi-Fi, etc.). The analysis of this model would then be in a similar vein to our analysis of the visual detection model in Subsystem 1.

The other purpose of Subsystem 2 is to develop a means of deterring the detected drone. This will involve looking deeply into the way information is transmitted between the controller and drone in order to determine the best way to affect that connection. Our basic goal is simply to jam the connection, since that will likely cause the drone to perform a landing sequence. Our analysis of these tests will be based on our ability to properly jam the section of the spectrum the drone is operating in without affecting any other devices (i.e., staying within FCC rules). We can determine whether our jamming is working appropriately by monitoring our own wireless devices while simultaneously attempting to jam the drone in a lab environment. Additionally, this portion of the analysis will require a focus on any restrictions on the distance over which we can transmit. We can evaluate this by measuring the received power of our transmitted signal at various distances from the directional transmit antenna along its line of sight (a link-budget sketch follows this paragraph). One of our possible goals further down the line could involve attempting to directly corrupt or alter the video the drone is recording. Analysis of this would involve monitoring the resulting video transmission and what video is actually received by the controller, as well as looking more into which portions of the drone's spectrum are used for transmitting video, so that we could possibly disrupt that transmission.
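
For the distance analysis, a simple free-space path loss estimate gives a first-order prediction of received power before we measure anything; the transmit power and antenna gains below are placeholder values:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / 3e8))

# Placeholder link budget at 2.4 GHz: Pr = Pt + Gt + Gr - FSPL
pt_dbm, gt_db, gr_db = 10.0, 15.0, 2.0   # assumed transmit power and gains
for d_m in (3.0, 7.6, 30.0, 100.0):      # ~10 ft, ~25 ft, and beyond
    pr_dbm = pt_dbm + gt_db + gr_db - fspl_db(d_m, 2.4e9)
    print(f"{d_m:6.1f} m: received power ~ {pr_dbm:6.1f} dBm")
```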

Testing Plan

The test plan for our project is broken into two parts: the Subsystem 1 test plan and the Subsystem 2 test plan.


Subsystem 1 test plan: The SS1 tests serve to validate the image recognition and platform movement accuracy.

  • Test 1:

    • Detect drone at 10 feet and 25 feet

    • Performed in lab setting

  • Test 2:

    • Scan field for drone

    • Stop scanning when drone is detected

    • Performed at 25 feet

    • Performed in lab

  • Test 3:

    • Control platform to turn towards drone

    • Detect which direction the platform should turn

    • Control platform with accuracy

    • Performed at 25 feet

    • Performed in lab


Subsystem 2 test plan: The SS2 tests serve to validate the RF detection and deterrence ability.

  • Test 1:

    • Start deterrence signal on received command

    • Receive TTL command and start broadcast (see the sketch after this list)

    • Stop signal on command

  • Test 2:

    • Generate a transmit signal capable of covering the drone's control signal

    • Noise generation must be confined to a 10 MHz spectrum

    • Noise power must be above the drone's signal power

  • Test 3:

    • Force landing of the drone by targeting the drone side of the link

    • Done at 10 and 25 feet
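
A minimal sketch of the TTL-triggered start/stop behavior in SS2 Test 1, using the RPi.GPIO library on the Pi (the pin number and the broadcast hooks are placeholders):

```python
import RPi.GPIO as GPIO

TRIGGER_PIN = 17   # placeholder BCM pin carrying the TTL command line

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

try:
    # Block until the TTL line goes high, start broadcasting, then stop
    # again when the line falls back low.
    GPIO.wait_for_edge(TRIGGER_PIN, GPIO.RISING)
    print("start deterrence broadcast")   # placeholder: start task 2.2 script
    GPIO.wait_for_edge(TRIGGER_PIN, GPIO.FALLING)
    print("stop deterrence broadcast")    # placeholder: stop transmission
finally:
    GPIO.cleanup()
```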