Management

Switching to TensorFlow

We began by picking apart what the previous team had done for C.A.T.S. Because the backend was based on the open source NVR software Frigate, which is built around Edge TPUs and TensorFlow, we found that the model could gain more performance if it used TensorFlow with an Edge TPU. So, we decided to build a new model to replace the PyTorch model previously in use.
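For reference, this switch implies a conversion pipeline along these lines: exporting a trained TensorFlow model to a fully integer-quantized TFLite file and then compiling it for the Edge TPU with Google's edgetpu_compiler. This is only a minimal sketch; the "saved_model" path, the input shape, and the random calibration data are placeholders, not our actual project files.

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration data; real calibration should feed
        # sample frames from the training set.
        for _ in range(100):
            yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

    # Full integer quantization is required for the Edge TPU.
    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model.tflite", "wb") as f:
        f.write(converter.convert())

    # Then, on the command line: edgetpu_compiler model.tflite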

Raspberry Pi Camera Usage

OpenCV does not work well with the serial camera made for the Pi, so we had to learn the Pi Camera library. However, this library is not well documented, and much of the existing documentation is out of date. This led us to patch together many articles, along with trial and error, in order to get our camera working properly with this library.
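For illustration, a minimal capture loop using the legacy picamera library and its PiRGBArray helper, which hands OpenCV a plain numpy array, might look like the following. The resolution, framerate, and display loop are assumptions for the sketch, not our exact settings.

    import cv2
    from picamera import PiCamera
    from picamera.array import PiRGBArray

    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.framerate = 30
    raw_capture = PiRGBArray(camera, size=camera.resolution)

    # capture_continuous reuses the same buffer, so it must be
    # cleared at the end of every frame.
    for frame in camera.capture_continuous(raw_capture, format="bgr",
                                           use_video_port=True):
        image = frame.array  # numpy array in BGR order, ready for OpenCV
        cv2.imshow("Pi Camera", image)
        raw_capture.truncate(0)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break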

Formatting New Training Data

Originally, we had used a modified version of an existing model built to identify different cat and dog breeds. Upon further research, however, we realized that to increase the probability of our model successfully identifying a cat, it also needed to successfully identify animals that are not cats. This led us to train the model on a variety of different animals, but we soon found that it was not as simple as training the model multiple times; instead, we needed to merge all the datasets and reformat some of the corresponding files to reflect the new additions. This was easier said than done, as much of the training data we had was already split into multiple sections, meaning we needed to merge the collections of images, the label maps, and the TFRecords. The images and label maps were not very difficult, but because of the nature of TFRecord files, merging several together required a special process that we had to research and implement.
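To illustrate the merging step, here is a rough sketch of how multiple TFRecord files can be concatenated by reading the raw serialized records and rewriting them into a single file. The file names are placeholders, and the sketch assumes the class IDs in each dataset have already been remapped to a single shared label map.

    import tensorflow as tf

    def merge_tfrecords(input_paths, output_path):
        """Copy every serialized record from each input file into one output."""
        with tf.io.TFRecordWriter(output_path) as writer:
            for path in input_paths:
                for record in tf.data.TFRecordDataset(path):
                    writer.write(record.numpy())

    merge_tfrecords(["cats.tfrecord", "other_animals.tfrecord"],
                    "merged.tfrecord")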

Incorporating the Coral Accelerator and Running the Model

While we were able to solve this issue rather quickly, it is worth mentioning. After finishing training the model, all that was left was testing that it worked with the accelerator, and we found documentation from Google themselves on how to do so. However, this documentation appeared to be missing some key information, which led us down a rabbit hole to figure out exactly what we were missing. We were fortunately able to get this done quickly, but it was a hassle and shows the poor quality of the documentation existing on the internet.
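For reference, a minimal detection test along the lines of Google's PyCoral examples looks something like the following. The model and image paths, the input size handling, and the score threshold are placeholders for the sketch.

    from PIL import Image
    from pycoral.adapters import common, detect
    from pycoral.utils.edgetpu import make_interpreter

    # Load a model that has been compiled for the Edge TPU.
    interpreter = make_interpreter("model_edgetpu.tflite")
    interpreter.allocate_tensors()

    # Resize the test image to the model's expected input size.
    image = Image.open("test.jpg").resize(common.input_size(interpreter))
    common.set_input(interpreter, image)

    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.4):
        print(obj.id, obj.score, obj.bbox)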