Vision Detector

Introduction

There's no need to build an application each time. 

With Vision Detector, running CoreML models on your device is effortless.

How to use

Vision Detector performs image processing using a CoreML model on iPhones and iPads. Typically, CoreML models must be previewed in Xcode, or an app must be built with Xcode to run on an iPhone. However, Vision Detector allows you to easily run CoreML models on your iPhone.


[For iPhone/iPad]

To use the app, first prepare a machine learning model in CoreML format using Create ML or coremltools. Then copy the model into the iPhone/iPad file system, which is accessible through the iPhone's 'Files' app. This includes local storage and various cloud services (iCloud Drive, OneDrive, Google Drive, Dropbox, etc.). You can also use AirDrop to store the CoreML model in the 'Files' app. After launching the app, select and load your machine learning model. 

Models can be accessed and opened directly from the 'Files' app using the export menu.


You can choose the input source image from:

- Video captured by the iPhone/iPad's built-in camera

- Still images from the built-in camera

- The photo library

- The file system

For video inputs, continuous inference is performed on the camera feed. However, the frame rate and other parameters depend on the device.


[For Mac]

When an external video input device is connected to your Mac, it takes priority. If no external device is available, the FaceTime camera on your MacBook is used.


The supported types of machine learning models include:

- Image classification

- Object detection

- Style transfer

Object detection models that lack a non-maximum suppression layer, and models that use MultiArray for input or output data, are not supported.


In the local 'Vision Detector' documents folder, you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file is for defining custom messages to be displayed. The data should be organized into a table with two columns as follows:

(Label output by YOLO, etc.) (tab) (Message)

(Label output by YOLO, etc.) (tab) (Message)

Each row pairs a label with the message to display when that label is detected.
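As a sketch, a customMessage.tsv in this two-column format could be generated with a short script; the labels and messages below are illustrative, not part of the app:

```python
import csv

# Each row: a label emitted by the model (e.g. a YOLO class name),
# a tab, then the message Vision Detector should display for it.
rows = [
    ("dog", "A dog was detected."),
    ("person", "A person was detected."),
]

# Write the rows as tab-separated values.
with open("customMessage.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(rows)
```

Any editor that saves plain text with tab characters between the two columns works just as well.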

For the macOS version, use the file:

~/Library/Containers/jp.thyme.maclab.vision/Data/Documents/customMessage.tsv


This application does not include a machine learning model.

Privacy policy

This application does not collect or transmit any personal information.

This application does not use identifying information that can be linked to a specific device or individual.

This application does not require an Internet connection unless requested by the user.

This application uses photos or images stored on the device, but does not store or transmit the images used.

This application uses the camera functionality, but does not store or transmit the images it captures.


October 1, 2022