https://www.youtube.com/watch?v=aeMWWvteF2U
The following is used to create a tflite model:
https://teachablemachine.withgoogle.com/
When exporting the model, select the "TensorFlow Lite" tab and the "Quantized" radio button. (Quantized models store weights as 8-bit integers, which keeps the .tflite file small and fast on the phone or hub.)
https://www.youtube.com/watch?v=1_tFzuxQ-FM&t=0s
FTC-TFoD Repo: https://github.com/ssysm/FTC-TFoD
Original FTC TFoD Repo by Google: https://github.com/google/ftc-object-...
Skystone Model: https://github.com/ssysm/skystone-model
Where to put the tflite model
(note that Cindy uses a directory named SECOND for herself instead of FIRST)
(base) ~/Desktop/FIRST/FtcRobotController/FtcRobotController/src/main/assets (master-25) $
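Once the .tflite file is in that assets directory, it can be loaded by name from an opmode. Here is a minimal sketch using the FTC SDK's TFOD API; the model file name, labels, webcam configuration name, and Vuforia key are placeholders you must replace with your own:

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

@TeleOp(name = "LoadCustomTfliteModel")
public class LoadCustomTfliteModel extends LinearOpMode {
    // Placeholders: use your own model file name and label list.
    private static final String TFOD_MODEL_ASSET = "skystone.tflite";
    private static final String[] LABELS = {"Skystone", "Stone"};

    @Override
    public void runOpMode() {
        // Vuforia supplies camera frames to the TFOD engine.
        VuforiaLocalizer.Parameters vuforiaParams = new VuforiaLocalizer.Parameters();
        vuforiaParams.vuforiaLicenseKey = "YOUR_VUFORIA_KEY"; // placeholder
        vuforiaParams.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1");
        VuforiaLocalizer vuforia = ClassFactory.getInstance().createVuforia(vuforiaParams);

        int monitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParams = new TFObjectDetector.Parameters(monitorViewId);
        TFObjectDetector tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParams, vuforia);

        // Because the .tflite file lives in src/main/assets, it is loaded by asset name.
        tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABELS);
        tfod.activate();

        waitForStart();
        while (opModeIsActive()) {
            telemetry.addData("detections", tfod.getUpdatedRecognitions());
            telemetry.update();
        }
        tfod.shutdown();
    }
}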
For the official FTC TensorFlow machine learning site, a mentor must log in to do the upload:
https://ftc-ml.firstinspires.org/
Then use Photo Booth to take the video. Note, however, that the maximum number of frames is 1000 and the video has to be less than 100 megabytes. You can leave the webcam mounted on the robot and plug it into the Mac while taking the video. QuickTime Player can be used to trim the video to make it smaller.
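For a rough sense of the frame limit: assuming the webcam records at a typical 30 fps, 1000 frames is only about 33 seconds of video (1000 / 30 ≈ 33 s), so keep clips short rather than recording up to the nominal 2-minute cap.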
https://storage.googleapis.com/ftc-ml-firstinspires-prod/docs/ftc-ml_manual_2021.pdf
https://my.firstinspires.org/Dashboard//
https://osxdaily.com/2016/12/04/record-video-mac/
Mount the webcam where it will be on the robot
Plug the webcam into a USB port on your laptop
Look for QuickTime Player in the Applications folder
Then do File->New Movie Recording
Once that window opens, your Mac will automatically choose the internal iSight camera.
Toward the right side of the control bar that comes up is a small triangle pointing downward.
Click on that triangle and a menu will come up letting you choose which camera to use, where the sound should be recorded from, the recording quality, and where to save the file.
When done recording, do File->Export As->720p
Save the QT movie somewhere (can put in Downloads or Desktop, etc.)
Then log in to https://ftc-ml.firstinspires.org/
Follow the directions in this FTC Machine Learning document
https://storage.googleapis.com/ftc-ml-firstinspires-prod/docs/ftc-ml_manual_2021.pdf
Go to the videos tab, and then click the Upload Videos button and select your QT movie
It may take several seconds for the new video to show up in the Tab Contents area. Once the new video shows up in the list, it may take several seconds for the extraction process to begin, depending on server resources. As frames are extracted, the “Extracted” column will begin to count up. Once the “Extracted” column matches the “In Video” column, the description of the video will change to a link. Clicking on the link will navigate to the Video Labeling tool where objects in video frames may be labeled.
To create a bounding box left-click on the location for one corner of the box, drag the mouse to the opposite diagonal corner for the box, and then release the mouse button. Once a bounding box is shown, a label can be added to the Region Labels area or the bounding box can be deleted using the Trash Can icon in the Region Labels area.
Once the first frame has been fully labeled, click the “Start Tracking” button on the OpenCV Object Tracking tools. It may take several seconds for the tracking process to begin. Once started, OpenCV will progress frame-by-frame, attempting to track the bounded labeled object as it moves for you.
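What "tracking" does under the hood: given the bounding box drawn on the first frame, an OpenCV tracker is initialized on that region and then asked to relocate it in every subsequent frame. The ftc-ml server does this for you; the sketch below is only a rough illustration of the idea using OpenCV's Java bindings (4.5+), with the video path and initial box made up for the example:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.video.TrackerMIL;
import org.opencv.videoio.VideoCapture;

public class TrackingSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture video = new VideoCapture("robot_footage.mp4"); // made-up path
        Mat frame = new Mat();
        video.read(frame); // first frame

        // Initialize the tracker on the first frame with the hand-drawn box.
        Rect box = new Rect(100, 120, 80, 60); // made-up initial bounding box
        TrackerMIL tracker = TrackerMIL.create();
        tracker.init(frame, box);

        // Ask the tracker to relocate the box in each later frame.
        while (video.read(frame)) {
            boolean found = tracker.update(frame, box); // box is updated in place
            System.out.println(found ? "box now at " + box : "lost track");
        }
        video.release();
    }
}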
Once the labeling process has completed, click on the FIRST Tech Challenge logo to return back to the ftc-ml main workflow page. Note that there is no “save” button; actions are saved each time a browser action occurs, and there is no way to “undo” or “redo” actions.
To create a dataset, one or more videos should be selected (checking the box to the left of each video to be combined into a single dataset) and the “Produce Dataset” action button pressed. This will open a pop-up dialog to select the number of frames for training and evaluation.
The standard is to take 80% of the frames for training the model, saving 20% for validation/evaluation/testing. Frames are randomized and separated into the two pools (Training vs Evaluation) based on this percentage. It’s not recommended to change this.
Enter a descriptive name in the “Description” field, as this will be the description for the dataset. Keep it short and to the point. When ready, press “Produce Dataset” – the ftc-ml tool will extract the frame, label, and bounding box information and build the dataset. Don’t worry if you close your window or the pop-up goes away before it’s done; when the dataset is completed it will show up in your “Datasets” Tab Content area.
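To make the 80/20 split concrete, here is a minimal sketch of how a randomized train/evaluation split works. This is an illustration of the concept, not ftc-ml's actual code, and the frame count is made up:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DatasetSplit {
    public static void main(String[] args) {
        int totalFrames = 300;        // made-up: frames extracted from your videos
        double trainFraction = 0.80;  // the 80/20 default described above

        // Collect all frame indices, then randomize their order.
        List<Integer> frames = new ArrayList<>();
        for (int i = 0; i < totalFrames; i++) frames.add(i);
        Collections.shuffle(frames);

        // Split the shuffled list into the two pools.
        int cut = (int) Math.round(totalFrames * trainFraction);
        List<Integer> training = frames.subList(0, cut);             // 240 frames
        List<Integer> evaluation = frames.subList(cut, totalFrames); // 60 frames

        System.out.println("Training frames: " + training.size());
        System.out.println("Evaluation frames: " + evaluation.size());
    }
}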
Datasets must contain AT LEAST one label. In other words, a dataset cannot contain only negative frames (frames that are unlabeled, because no actual objects being detected are present).
Datasets should be considered “whole” by themselves. While it’s possible to create datasets for individual labels, datasets cannot be “combined” to train models unless they contain exactly the same labels. For example, a dataset containing only the label “Bird” cannot later be combined with a dataset containing both labels “Bird” and “Bee” to form a model. However, a single dataset may be created out of multiple labeled videos that contain only “Bird”, multiple videos that contain both “Bird” and “Bee”, and videos that only contain negative frames all with the Video “Produce Dataset” action.
8. What are the maximum limitations imposed within the ftc-ml tool for various actions? (PER TEAM)
• Max # of Datasets: 20 (you can delete datasets to make more)
• Max # of Videos: 50 (you can delete videos to upload more)
• Max # of Videos performing tracking at once: 3 (for multiple logins doing tracking)
• Max # of Bounding Boxes per frame: 10
• Max Video Limits: 2 minutes, 1000 frames, 3840 x 2160 resolution, 100 MB
This is most likely because you are running a 6.x version of the SDK. I assume this because you are using the TFOD models/opmodes from last year. FTC-ML produces TF2 models, and 6.x only has support for TF1 models. If you wish to use the models from FTC-ML, please upgrade to 7.0. In addition, in order to compete this year you need a 7.x version of the SDK (<RS03>).
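For reference, this is roughly what the TF2-capable setup looks like in a 7.x opmode, modeled on the SDK's custom-model sample (ConceptTensorFlowObjectDetectionCustomModel). The file path and label are placeholders, and this assumes tfodMonitorViewId and vuforia are already set up as in the sample:

// Placeholder: an ftc-ml model downloaded and copied onto the Robot Controller.
private static final String TFOD_MODEL_FILE = "/sdcard/FIRST/tflitemodels/model.tflite";
private static final String[] LABELS = {"MyObject"};

TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
tfodParameters.minResultConfidence = 0.75f;
tfodParameters.isModelTensorFlow2 = true; // ftc-ml models are TF2; 6.x SDKs lack this support
tfodParameters.inputSize = 320;           // assumption: the input size used by ftc-ml models
TFObjectDetector tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);

// loadModelFromFile (rather than loadModelFromAsset) reads a model file
// that was downloaded from ftc-ml and pushed to the device's storage.
tfod.loadModelFromFile(TFOD_MODEL_FILE, LABELS);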