**This instruction is still under construction -- if you have questions or comments, please let Irene know on Slack. If you don't have any questions and thought the instruction was well written, still Slack Irene and let her know.**
Starting the GUI from Ehren's M1 MacBook:
The M1 Mac requires special install instructions, available at this page: https://github.com/DeepLabCut/DeepLabCut/pull/1430
To start the GUI on the Mac, open Terminal and call: conda activate DEEPLABCUT_M1, then call: pythonw -m deeplabcut
Starting the GUI from the PCs in the undergrad room in the lab:
In the search box in the Windows tray (bottom left), type 'Anaconda Powershell Prompt (anaconda3)' and press Enter to start it.
In the terminal, call: conda activate deeplabcut, then call: python -m deeplabcut
*NOTE: the version of DLC on all PCs is 2.2.1.1 as of 11/08/2022. When you update DLC, make sure you install the same version on all PCs. This instruction (tab names, etc.) is based on ver 2.2.1.1. If you update, please update the instruction so those details match!
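To quickly confirm which DLC version a given machine has (so all the PCs stay on the same version), here is a small stdlib-only sketch -- it assumes DeepLabCut was installed as a normal Python package (pip/conda), so its metadata is visible to importlib without importing the heavy package itself:

```python
# Check which version of DeepLabCut is installed on this machine,
# without importing the (heavy) deeplabcut package itself.
from importlib.metadata import version, PackageNotFoundError

def dlc_version() -> str:
    """Return the installed DeepLabCut version, or 'not installed'."""
    try:
        return version("deeplabcut")
    except PackageNotFoundError:
        return "not installed"

print("DeepLabCut version:", dlc_version())
```

Run it in the same conda environment you activated above; if the versions differ across PCs, update them to match before continuing.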
First, start a new project
Go to 'Manage project' tab
Click on 'create new project' - This will create the config.yaml file that keeps track of all the specifics for this particular network.
Click on 'Load Videos' - Select 12 videos that this network will specialize in scoring.
The videos for the rearing project can be found in ~Dropbox/docs(1)/docs_Dylan/Studies/Ach_Rearing/Camera Roll/<rat ID>
Press 'ctrl' on the keyboard while clicking to select multiple.
After choosing the videos, click 'Open'.
'Load Videos' will be replaced with 'Total <number of videos> Videos selected'.
Fill in the information about the project.
The project name should take the following form: '<ratID>tracker'
Experimenter's name should be 'jic'
DeepLabCut is very picky about the experimenter name. Using the same name makes it easier for us to combine frames from different trackers later as needed.
Select 'Select the directory where project will be created'
this allows you to specify where the config file will be saved (technically this is optional but STRONGLY recommended).
Recommended place to save it is ~GoogleDriveMirrors/Projects/DeepLabCut/. << be sure to use the newmanmemorylab@gmail.com Google Drive to have access to the network from Colab >>
Select 'Copy the videos'
Click 'Ok'
This will update the GUI so that there are a bunch of tabs that will allow you to push the project / network forward.
Go to 'manage project' tab
Click 'load existing project'
Click 'browse' next to 'Select the config file'
The config files can be found in: ~GoogleDriveMirrors/Projects/DeepLabCut/<rat ID>-tracker/config.yaml
Click 'Ok' -- this will update the GUI so that there are a bunch of tabs that will allow you to push the project / network forward.
To check if extraction has been performed already or not, look at individual video folders in 'labeled-data' folder.
Full path to the folders: ~GoogleDriveMirrors/Projects/DeepLabCut/<rat ID>-tracker/labeled-data/<video titles>
You will find PNG files in all folders if extraction has already been done. If not, proceed with extraction.
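If you'd rather check every video folder at once instead of clicking through them, here is a minimal stdlib-only sketch of the same check (the labeled-data layout is as described above; the root path is whatever it is on your PC):

```python
# List which video folders in 'labeled-data' already contain extracted
# PNG frames, and which still need extraction.
from pathlib import Path

def extraction_status(labeled_data_root):
    """Return {video_folder_name: True/False} -- True if any PNGs exist."""
    root = Path(labeled_data_root)
    return {
        folder.name: any(folder.glob("*.png"))
        for folder in sorted(root.iterdir())
        if folder.is_dir()
    }

# Example (path differs per PC -- see the Google Drive Mirror note at the end):
# status = extraction_status(r"D:\GoogleDriveMirrors\Projects\DeepLabCut\<rat ID>-tracker\labeled-data")
# for video, done in status.items():
#     print(video, "extracted" if done else "NOT extracted")
```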
'extract frames' for your network - this step pulls individual frames out of the videos you selected in the previous step so that you can label them as training data for your network-to-be.
Click on 'Extract frames' tab at the top of the GUI
Double check that the config file listed is the correct one
Use the following settings
Extraction method = automatic
'want to crop the frames?' = false
'Need user feedback?' = no
'Want to use OpenCV?' = yes
'Select the algorithm' = kmeans
'specify the cluster step' = 1
'specify the GUI slider width' = 10
Click 'Ok' - this will start processing the videos (and take ~2.5 min / video), during this time the GUI will be unresponsive and will say '(Not responding)' at the top of the screen. DO NOT QUIT. BE PATIENT. Look at the terminal window (the code window, the black background one) to see that progress is being made. It will say 'You can now label the frames...' when done.
Before you start labeling the frames, you must edit the config.yaml file to specify the names of the parts you wish to label. The config.yaml file can be found in your tracker folder on Google Drive.
Initially, it says
- bodypart1
- bodypart2
- bodypart3
- objectA
For non-ephys rats: update it so it says
- nose
- l_ear
- r_ear
- rump
For ephys rats: update it to say
- nose
- f_drive
- l_drive
- r_drive
- m_back
- rump
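For example, for a non-ephys rat the edited bodyparts section of config.yaml would read as follows (an excerpt of the relevant section only -- leave the rest of the file untouched):

```yaml
# config.yaml (excerpt) -- body parts for a non-ephys rat
bodyparts:
- nose
- l_ear
- r_ear
- rump
```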
Now, you will label the extracted frames with the points that you want to teach the network to label for you.
Click the 'Label Data' tab at the top of the GUI
Double check that the config file listed is the correct one
Click 'Label Data' - this will open up a new window ('DeepLabCut2.0 - Labeling ToolBox')
Click 'Load frames'
select the video folder that you want to label. The video folders can be found in ~<ratID>tracker-jic-<date>/labeled-data
Click 'select folder' - this will display the extracted images on the 'Labeling ToolBox' window.
Label bodyparts on the images:
Link to a guide for labeling: https://docs.google.com/presentation/d/1YdFiQOoqKTqGugruEIj0XiKkSkXkJwFPqSoQBeRK4F8/edit?usp=sharing
Add any frames that you thought were confusing to the Google Slide for future reference!
Use 'Zoom' to zoom in.
Use 'Home' to zoom out to the original ratio
Check 'Adjust marker size' if needed. The marker size can be adjusted using the slider on the top left corner.
Recommended marker size: 3
Right click on the body parts to label.
Label the bodyparts in order from first to last as they are listed in config.yaml (e.g., nose → l_ear → r_ear → rump for a non-ephys rat). Use the 'select a bodypart to label' box when needed.
Left-click-drag the markers to move them.
Middle mouse click the markers to remove them.
Click 'Next' to move on to the next image.
Click 'Save' to save the labels.
Click 'Quit' when you're done.
a pop-up window will ask: 'Do you want to label another data set?'. Click 'Yes', then select another video folder to label. Click 'No' to exit if you're done.
Open Google Colab (https://colab.research.google.com/) using newmanmemorylab@gmail.com account.
You will see a window with five tabs. Click on 'Google Drive' -- this will give you the list of all Colab Notebooks we have.
If there is a notebook already created for your project: Click on the notebook you want to work on.
If there isn’t a notebook already created for your project:
Create a copy of this notebook (https://colab.research.google.com/drive/1zecJfrybVV6Ip1GfUESdoRCDikasOwNm#scrollTo=MBuaMj0_1y6e) and rename the notebook to match the name of your project.
The naming convention is: ‘<project name>_RearingAnalyzer.ipynb’.
Run the first set of codes.
VERY IMPORTANT: When working on a Colab notebook...
ALWAYS run the first set of codes (the four blocks of code under “INITIALIZE DEEPLABCUT SESSION”)
If prompted (and you will be most of the time), restart runtime after running the FIRST block of code.
“Restart Runtime” button will appear at the end of the code output; click it!
The third block of code will ask you to mount the Google Drive account; choose the newmanmemorylab@gmail.com account.
If the third block gives you an error, go back and check if you restarted runtime in the first block.
Run the first block of code under “TRAIN A NEW NETWORK”
Once the first one is done running, run the second block of code.
This will run for a very long time. If it stops any sooner than 24 hours after you run it, it most likely means there was an error.
Open your notebook on Google Colab (https://colab.research.google.com/)
Run the first set of codes.
VERY IMPORTANT: ALWAYS run the first set of codes (the four blocks under “INITIALIZE DEEPLABCUT SESSION”). If prompted (and you will be most of the time), click the “Restart Runtime” button after the FIRST block, mount the newmanmemorylab@gmail.com Google Drive in the third block, and if the third block errors, check that you restarted the runtime.
Run the first block of code under “EVALUATE A TRAINED NETWORK”.
This code will create a CSV file (‘CombinedEvaluation-results.csv’) in the ‘iteration-0’ folder inside ‘evaluation-results’. Check that the file exists.
Open the DLC GUI.
Click on 'Extract/Refine Outliers' tab at the top of the GUI.
Click on 'Select videos to analyze'. Select one video you want to extract frames from.
Under 'specify the algorithm' -- select 'manual'.
Click 'Extract Frames' -- this will open up a window similar to when you're labeling.
Use the tools available to look at individual labeled frames and 'grab' erroneous frames (by clicking on the 'grab' button). Alternatively, you can create labeled videos and watch them to find errors and determine which frames to pull & refine. (The latter method is recommended, but try both and see which one works better for you!)
When grabbing error frames try to: (1) include at least 1-2 frames of each type of error (ex. if the tracker consistently labels part of the apparatus as the nose, grab at least one frame from each video where it does that); (2) don't grab too many frames from one error incident (i.e. don't grab five frames from the one time the rat forms a donut); (3) grab at least 20 frames from each video.
After hitting 'grab', check the 'labeled-data' folder -- this should create a png file in the folder.
When you're done with all videos, refine the labels following the steps laid out below. (Don't extract twice! Decide in advance whether you want to extract frames manually or have the computer do it, and choose one!)
Open your notebook on Google Colab (https://colab.research.google.com/)
Run the first set of codes.
VERY IMPORTANT: ALWAYS run the first set of codes (the four blocks under “INITIALIZE DEEPLABCUT SESSION”). If prompted (and you will be most of the time), click the “Restart Runtime” button after the FIRST block, mount the newmanmemorylab@gmail.com Google Drive in the third block, and if the third block errors, check that you restarted the runtime.
Run the second block of code under “EVALUATE A TRAINED NETWORK”. (skip the first block -- that's for evaluation)
Run the first block of code under “REFINE A TRAINED NETWORK”.
This code will create ‘machinelabel.csv’ and ‘machinelabels-iter0.h5’ files in the individual video folders in the ‘labeled-data’ folder. Check that the files exist.
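To confirm those files were created in every video folder without opening each one, here is a minimal stdlib-only sketch (it checks for the .h5 file named above; extend `required` with the CSV file name if you want to check that too):

```python
# Report any video folders in 'labeled-data' that are missing the
# refinement files created by the Colab step.
from pathlib import Path

def missing_refinement_files(labeled_data_root,
                             required=("machinelabels-iter0.h5",)):
    """Return {video_folder: [missing file names]} for incomplete folders."""
    root = Path(labeled_data_root)
    missing = {}
    for folder in sorted(p for p in root.iterdir() if p.is_dir()):
        absent = [name for name in required if not (folder / name).exists()]
        if absent:
            missing[folder.name] = absent
    return missing

# An empty dict means every video folder has the required files.
```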
Open the DeepLabCut GUI. -- SKIP TO THIS STEP IF YOU MANUALLY EXTRACTED YOUR FRAMES!!
Refer to the ‘Guide for creating a DLC project & labeling’ if you need guidance.
To open your project (NOT create, but open an existing project), follow the instructions under ‘To continue labeling on an existing project….’.
After opening your project on the GUI, click on the ‘Extract/Refine Outliers’ tab at the top of the GUI.
Click ‘launch GUI’ – this will open up a new window.
Move or delete labels as appropriate.
If you see a label for a body part that is not visible in the frame, middle mouse click the markers to remove them.
If you see labels in wrong places, left-click-drag the markers to move them.
**If you see an empty frame (no rat in the frame), please make sure to delete ALL labels! They might be in the corner, etc. Actively look for them & delete them all.**
DLC puts markers for body parts not seen in the frame on the top left corner. Make sure to check that there's no markers there!
Once you are done refining the labels, go back on Google Colab.
Run the first set of codes.
VERY IMPORTANT: ALWAYS run the first set of codes (the four blocks under “INITIALIZE DEEPLABCUT SESSION”). If prompted (and you will be most of the time), click the “Restart Runtime” button after the FIRST block, mount the newmanmemorylab@gmail.com Google Drive in the third block, and if the third block errors, check that you restarted the runtime.
Run the second, third, and fourth blocks of code under “REFINE A TRAINED NETWORK”.
We want to get the iteration number up to >1,000,000. (MAX iteration is 1,300,000.)
The training process takes a long time, and Google Colab sessions usually time out after 24 hours. To get the iteration number up to around one million, you may need to re-run training. Instructions for restarting can be found below.
You’re done! Now you can use your tracker to analyze videos!
For secondary training, you may need to run training more than once to get the iteration number up to around one million. To re-run:
Open the pose_config.yaml ('pose_cfg.yaml') file found in the 'train' folder.
Path to the 'train' folder: ~Projects\DeepLabCut\<project name>\dlc-models\iteration-1\<project name>-trainset95shuffle1\train
Change the init_weights to: /content/drive/My Drive/Projects/DeepLabCut/<project name>/dlc-models/iteration-1/<tracker name>-trainset95shuffle1/train/snapshot-<most recently saved iteration number>
This is the path to the most recently saved 'snapshot' file. DLC network is only saved every certain number of iterations. The most recently saved snapshot will have the largest number. The snapshot files can be found in the 'train' folder. Look through and see which snapshot has the largest number; use the path to that snapshot file.
Example paths:
/content/drive/My Drive/Projects/DeepLabCut/rear2tracker-jic-2022-01-26/dlc-models/iteration-1/rear2trackerJan26-trainset95shuffle1/train/snapshot-928000
/content/drive/My Drive/Projects/DeepLabCut/msopto24tracker-jic-2022-02-25/dlc-models/iteration-1/msopto24trackerFeb25-trainset95shuffle1/train/snapshot-714000
the default setting is: /usr/local/lib/python3.7/dist-packages/deeplabcut/pose_estimation_tensorflow/models/pretrained/resnet_v1_50.ckpt
VERY IMPORTANT: If you restart training without the init_weights setting changed, the training will restart from iteration 0. Be careful not to do this!
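To find the highest-numbered snapshot (and the exact value to paste into init_weights) without scrolling through the 'train' folder by hand, here is a minimal stdlib-only sketch -- it assumes the TensorFlow snapshot files are named 'snapshot-<iteration>.index' (plus .meta/.data companions), as in the example paths above:

```python
# Find the snapshot with the largest iteration number in a 'train' folder.
# The init_weights value is the snapshot path WITHOUT the file extension.
import re
from pathlib import Path

def latest_snapshot(train_folder):
    """Return the extension-less path of the highest-numbered snapshot, or None."""
    best_iter, best = -1, None
    for f in Path(train_folder).glob("snapshot-*.index"):
        m = re.fullmatch(r"snapshot-(\d+)", f.stem)
        if m and int(m.group(1)) > best_iter:
            best_iter, best = int(m.group(1)), f.with_suffix("")
    return best

# Example (fill in your own project/tracker names):
# print("init_weights:", latest_snapshot(
#     "/content/drive/My Drive/Projects/DeepLabCut/<project name>/"
#     "dlc-models/iteration-1/<tracker name>-trainset95shuffle1/train"))
```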
VERY IMPORTANT: ALWAYS run the first set of codes (the four blocks under “INITIALIZE DEEPLABCUT SESSION”). If prompted (and you will be most of the time), click the “Restart Runtime” button after the FIRST block, mount the newmanmemorylab@gmail.com Google Drive in the third block, and if the third block errors, check that you restarted the runtime.
ONLY use Google Colab for the training / (automatic) outlier extraction / label refinement / video analysis steps.
Use the NewmanMemoryLab@gmail.com account for both file storage and Colab work (mount everything onto this account!)
To use a different Google account for colab'ing:
* ONLY do this if you absolutely have to run 2+ sessions at once. (Google Colab has a 2-session limit.)
share 'DeepLabCut' folder & 'colab notebook' folder from newmanmemorylab@gmail.com Google Drive to the other account
On the other account's Drive, add shortcut to 'DeepLabCut' folder
edit the paths on colab notebooks & run the code!
normally the paths are: '/content/drive/My Drive/Projects/DeepLabCut/'+ProjectFolderName+'/~'
edit these to be: '/content/drive/My Drive/DeepLabCut/'+ProjectFolderName+'/~'
NOTE: Google Colab on a private Google account only runs for ~12 hours. From my experience, this is good enough for analyzing or even labeling. However, it tends to give you more errors than Colab Pro (on the memlab Google account) when training. It also gives you GPU connection errors often. -irene
Google Drive Mirror is Google Drive mounted onto a desktop. We have newmanmemorylab@gmail.com Google Drive mounted onto all PCs in the undergrad room.
the path to Google Drive Mirror is different on each PC. On skywalker PC, it's 'D:\GoogleDriveMirrors'.
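Since the mirror path differs per PC, a script can probe a short list of known locations -- a minimal sketch; every path other than the skywalker one is a hypothetical placeholder to replace with your PC's actual mirror path:

```python
# Find the Google Drive Mirror root on this PC by checking known locations.
from pathlib import Path

CANDIDATE_ROOTS = [
    r"D:\GoogleDriveMirrors",  # skywalker PC (from the note above)
    r"C:\GoogleDriveMirrors",  # hypothetical location on another PC
]

def find_mirror_root(candidates=CANDIDATE_ROOTS):
    """Return the first candidate path that exists on this machine, else None."""
    for root in candidates:
        if Path(root).exists():
            return Path(root)
    return None
```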
In Google Drive, your project should be in ~/MyDrive/Projects/DeepLabCut/<your tracker folder>. (Projects are created as folders!)
All videos are stored in newmanmemorylab@gmail.com Dropbox.
The videos for the rearing project can be found in ~Dropbox/docs(1)/docs_Dylan/Studies/Ach_Rearing/Camera Roll/<rat ID>
The videos for the triangle completion task can be found in ~Dropbox/docs(1)/docs_Stephen/Behavior Videos