There's no need to build an application each time.
With Vision Detector, running CoreML models on your device is effortless.
Vision Detector performs image processing using a CoreML model on iPhones and iPads. Normally, a CoreML model can only be previewed in Xcode, or run on an iPhone by building an app with Xcode. Vision Detector lets you skip both steps and run CoreML models directly on your iPhone.
The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models that lack a non-maximum suppression layer, or that use MultiArray inputs or outputs, are not supported.
Note: This app does not include any models; you must supply your own.
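If you are unsure whether a model meets these constraints, you can inspect its inputs and outputs with the CoreML framework before loading it in the app. The following is only a sketch; the file path is a placeholder for your own model.

```swift
import CoreML

// Sketch: compile a .mlmodel and inspect its inputs/outputs to check
// whether it fits the constraints above. The path is a placeholder.
let sourceURL = URL(fileURLWithPath: "/path/to/MyModel.mlmodel")
let compiledURL = try MLModel.compileModel(at: sourceURL)   // .mlmodel -> .mlmodelc
let model = try MLModel(contentsOf: compiledURL)

for (name, input) in model.modelDescription.inputDescriptionsByName {
    print("input \(name): \(input.type)")    // .image is usable; .multiArray is not
}
for (name, output) in model.modelDescription.outputDescriptionsByName {
    print("output \(name): \(output.type)")
}
```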
[For iPhone/iPad]
To use the app, first prepare a machine learning model in CoreML format using CreateML or coremltools. Then copy the model to a location accessible through the 'Files' app on your iPhone/iPad, such as local storage or a cloud service (iCloud Drive, OneDrive, Google Drive, Dropbox, etc.). You can also use AirDrop to transfer the CoreML model into the 'Files' app. After launching the app, select and load your machine learning model.
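As one example of preparing a model, CreateML can train an image classifier on a Mac in a few lines. This is a minimal sketch, assuming a dataset folder with one subfolder per label; all paths are placeholders.

```swift
import CreateML
import Foundation

// Sketch: train an image classifier with CreateML on macOS.
// TrainingImages/ must contain one subfolder per label (placeholder paths).
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/TrainingImages"))
let classifier = try MLImageClassifier(trainingData: trainingData)
try classifier.write(to: URL(fileURLWithPath: "/path/to/MyClassifier.mlmodel"))
```

The resulting .mlmodel file can then be copied to the 'Files' app as described above.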
Models can also be opened directly from the 'Files' app using the export menu.
You can choose the input image source from:
- Video captured by the iPhone/iPad's built-in camera
- Still images from the built-in camera
- The photo library
- The file system
For video input, inference runs continuously on the camera feed; the achievable frame rate and other parameters depend on the device.
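Vision Detector's internals are not published, but continuous inference of this kind is typically built on Apple's Vision framework. The sketch below shows the standard pattern of running a CoreML classifier on each camera frame; the model loading and capture-session wiring are assumed.

```swift
import AVFoundation
import CoreML
import Vision

// Sketch: classify every camera frame with a CoreML model via Vision.
// Assumes `mlModel` is already loaded and this object is installed as the
// AVCaptureVideoDataOutput sample buffer delegate.
final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let request: VNCoreMLRequest

    init(mlModel: MLModel) throws {
        let vnModel = try VNCoreMLModel(for: mlModel)
        let req = VNCoreMLRequest(model: vnModel) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            print("\(top.identifier): \(top.confidence)")
        }
        req.imageCropAndScaleOption = .centerCrop
        request = req
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // .right is typical for a portrait-oriented back camera.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([request])
    }
}
```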
[For Mac]
When an external video input device is connected to your Mac, it takes priority. If no external device is available, the built-in FaceTime camera of your MacBook is used.
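The app's device-selection logic is not published, but this kind of priority is commonly implemented with AVCaptureDevice.DiscoverySession, which returns devices in the order its device types are listed. A sketch, assuming macOS 14 or later:

```swift
import AVFoundation

// Sketch: prefer an external camera, falling back to the built-in one.
// DiscoverySession orders results by the order of `deviceTypes`, so an
// external camera, if connected, appears first. On macOS 13 and earlier,
// use .externalUnknown instead of .external.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external, .builtInWideAngleCamera],
    mediaType: .video,
    position: .unspecified)

if let camera = discovery.devices.first {
    print("Using camera: \(camera.localizedName)")
}
```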
You can attach custom messages to the labels detected by object detection.
An empty definition file is automatically created at:
~/Library/Containers/VisionDetector/Data/Documents/customMessage.tsv
Edit this file as tab-separated text to map each detected label to a custom message.
Each line should contain two columns:
(label output by a model such as YOLO) [tab] (custom message)
Example:
person Hello, human detected
dog This looks like a dog
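For reference, a mapping file in this format can be parsed into a dictionary in a few lines. This is only a sketch; the function name and usage path are illustrative.

```swift
import Foundation

// Sketch: load a label-to-message TSV file into a dictionary.
func loadCustomMessages(from url: URL) -> [String: String] {
    guard let contents = try? String(contentsOf: url, encoding: .utf8) else { return [:] }
    var messages: [String: String] = [:]
    for line in contents.split(separator: "\n") {
        let columns = line.split(separator: "\t", maxSplits: 1)
        guard columns.count == 2 else { continue }   // skip malformed lines
        messages[String(columns[0])] = String(columns[1])
    }
    return messages
}

// Illustrative usage with the path above:
let url = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/customMessage.tsv")
print(loadCustomMessages(from: url)["person"] ?? "no custom message")
```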
Note: This feature was removed in version 2 of the iOS edition.
This application does not collect or transmit any personal information.
This application does not use identifying information that can be linked to a specific device or individual.
This application does not require an Internet connection unless requested by the user.
This application uses photos or images stored on the device, but does not store or transmit the images used.
This application uses the camera functionality, but does not store or transmit the images it captures.
October 1, 2022