Optimization

Camera Position

In our tests, the model produced its highest-confidence predictions and correctly identified animals most often when the animal was facing head-on towards the camera. This is likely because the majority of the images in our dataset were taken from a perspective where the animal was facing the camera. To make the best use of the model, images should be captured from a similar angle. We recommend placing the camera near where the cats' heads will be while they are feeding. If a feeder mechanism of some kind is used, the camera could be mounted on top of it; if food is left on the ground, the camera could be placed just behind it.

Further Speed Increases

Based on our testing described on the delivery page, we were very close to performing classification in real time. Our goal was to achieve real-time performance through the use of the Coral Accelerator module.

The primary hindrance to utilizing the accelerator is the need to convert our model into a form it can understand, a process known as quantizing the model. The tensors in the current model use 32-bit floating point numbers, but the Coral requires 8-bit integers. We attempted to perform this conversion; however, the model lost nearly all of its accuracy in the process.

The conversion process requires supplying example images (a representative dataset), and we believe our attempt did not include enough of them (only a couple of dozen), which led to the large drop in accuracy. A sketch of the conversion is shown below.
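As an illustration of what this conversion could look like, the following sketch uses the TensorFlow Lite converter's post-training full-integer quantization with a representative dataset. The file paths, image size, and calibration directory below are assumptions for illustration, not our exact setup.

    import tensorflow as tf

    # Placeholder paths and input size; substitute the real trained model.
    SAVED_MODEL_DIR = "saved_model"   # directory containing the trained model
    IMAGE_SIZE = (224, 224)           # input resolution the model expects

    def representative_dataset():
        # Yield preprocessed sample images so the converter can calibrate
        # the value ranges used for 8-bit quantization.
        for path in tf.io.gfile.glob("calibration_images/*.jpg"):
            image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
            image = tf.image.resize(image, IMAGE_SIZE)
            image = tf.cast(image, tf.float32) / 255.0
            yield [tf.expand_dims(image, 0)]

    converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Force full integer quantization so the Edge TPU can run every op.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())

General guidance for post-training quantization suggests on the order of a few hundred representative samples, which supports our suspicion that a couple of dozen images was too few.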

Future work would entail trying again with more example images, then passing the quantized model to the special Coral compiler to generate an accelerator-compatible model.
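If the quantization succeeds, the model would then be compiled with Coral's edgetpu_compiler command-line tool (e.g. edgetpu_compiler model_quant.tflite, which produces model_quant_edgetpu.tflite) and could be loaded with the PyCoral library roughly as follows. The file name and the classify_frame helper are assumptions for illustration, not part of our current code.

    from pycoral.adapters import classify, common
    from pycoral.utils.edgetpu import make_interpreter

    # Placeholder file name for the Edge TPU compiled model.
    interpreter = make_interpreter("model_quant_edgetpu.tflite")
    interpreter.allocate_tensors()

    def classify_frame(frame):
        # frame: a numpy image already resized to the model's input size.
        # Returns the top class index and its score.
        common.set_input(interpreter, frame)
        interpreter.invoke()
        top = classify.get_classes(interpreter, top_k=1)[0]
        return top.id, top.score

Provided the compiler maps all operations to the Edge TPU, each call to classify_frame would run largely on the accelerator, which is where the real-time speedup would come from.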

Coral Testing Specifications

USB Accelerator: Google Edge TPU coprocessor (4 TOPS (int8); 2 TOPS per watt)

Other Specifications: Same as delivery specifications