We benchmark two families of popular CNN-based real-time object detection models (SSD-MobileNet and YOLOv2/v3) on Android, using Google's TensorFlow Lite library.
The source code is available on GitHub. We tested the app on a Huawei Nexus 6P running Android Oreo (8.1.0). It uses Android's Camera2 API to capture frames and the TensorFlow Lite library (downloaded automatically by the build script) to run inference.
After importing the project into Android Studio, it should build and run directly by hitting the "Run" button in the "Run" menu.
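For context, a TensorFlow Lite model is typically memory-mapped out of the APK's assets and handed to an `Interpreter`. The sketch below shows that pattern; the `DetectorLoader` class and the `detect.tflite` file name are hypothetical, but `Interpreter` and the mapping idiom come from the TensorFlow Lite Java API:

```java
import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical helper class; the model file name is an assumption.
public class DetectorLoader {
    // Memory-map the .tflite model from the APK's assets so TF Lite
    // can read the weights without copying them onto the Java heap.
    static MappedByteBuffer loadModelFile(Activity activity, String path)
            throws IOException {
        AssetFileDescriptor fd = activity.getAssets().openFd(path);
        try (FileInputStream input = new FileInputStream(fd.getFileDescriptor())) {
            FileChannel channel = input.getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    static Interpreter createInterpreter(Activity activity) throws IOException {
        // "detect.tflite" is a placeholder name, not necessarily the
        // file shipped in this repository.
        return new Interpreter(loadModelFile(activity, "detect.tflite"));
    }
}
```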
Here is a sample run of our Android application with the SSD-MobileNet model. It processes a single frame in around 300-600 ms, which corresponds to an effective frame rate of roughly 1.7-3.3 fps.
The model is trained on the MS-COCO dataset, so it can recognize 80 categories of objects.
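A single inference step with a stock TF Lite SSD detection model looks roughly like the sketch below. The four-output layout (boxes, class indices, scores, detection count) and the cap of 10 detections are assumptions based on the standard TF Lite detection post-processing op, and should be verified against the actual model; timing the `run` call this way is where per-frame latencies like the 300-600 ms above come from.

```java
import android.os.SystemClock;
import org.tensorflow.lite.Interpreter;

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class SsdStep {
    // inputBuffer holds one camera frame, resized to the model's input
    // size (300x300 for stock SSD-MobileNet) and packed as a ByteBuffer.
    static long runOnce(Interpreter interpreter, ByteBuffer inputBuffer) {
        int maxDetections = 10;  // assumption: stock post-process cap
        float[][][] boxes = new float[1][maxDetections][4]; // [ymin, xmin, ymax, xmax], normalized
        float[][] classIds = new float[1][maxDetections];   // indices into the COCO label list
        float[][] scores = new float[1][maxDetections];     // per-box confidence
        float[] count = new float[1];                       // number of valid detections

        Map<Integer, Object> outputs = new HashMap<>();
        outputs.put(0, boxes);
        outputs.put(1, classIds);
        outputs.put(2, scores);
        outputs.put(3, count);

        long start = SystemClock.elapsedRealtime();
        interpreter.runForMultipleInputsOutputs(new Object[]{inputBuffer}, outputs);
        return SystemClock.elapsedRealtime() - start;  // latency in ms
    }
}
```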
We also implement the YOLOv2 and YOLOv3 object detectors in the app. These run considerably slower, at around 1200-2000 ms per frame (well under 1 fps).
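One practical difference: stock SSD .tflite graphs usually end in a built-in detection post-processing op, whereas the YOLO models emit raw grid tensors that the app must decode (and then run non-maximum suppression on) itself. Below is a minimal sketch of the per-cell, per-anchor box decoding, assuming YOLOv2-style anchors expressed in grid-cell units; the grid size, anchor values, and tensor layout are assumptions to check against the converted model.

```java
// A minimal sketch of YOLOv2-style box decoding for one anchor in one
// grid cell; not necessarily the decoding used in this repository.
public class YoloDecode {
    static float sigmoid(float x) {
        return (float) (1.0 / (1.0 + Math.exp(-x)));
    }

    // t = raw outputs (tx, ty, tw, th, to) for one anchor in cell (cx, cy);
    // anchors are assumed to be expressed in grid-cell units.
    static float[] decodeBox(float[] t, int cx, int cy,
                             float anchorW, float anchorH, int gridSize) {
        float bx = (sigmoid(t[0]) + cx) / gridSize;               // box center x in [0, 1]
        float by = (sigmoid(t[1]) + cy) / gridSize;               // box center y in [0, 1]
        float bw = (float) (anchorW * Math.exp(t[2])) / gridSize; // box width in [0, 1]
        float bh = (float) (anchorH * Math.exp(t[3])) / gridSize; // box height in [0, 1]
        float objectness = sigmoid(t[4]);                         // confidence a box exists here
        return new float[]{bx, by, bw, bh, objectness};
    }
}
```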
This shows that object detection can run reasonably fast on a smartphone, but not yet fast enough to process every camera frame in real time.