Behavior Classification Challenge Timeline
March 8, 2020: Challenge starts
April 30, 2020: Challenge ends
May 14, 2020: Decisions announced to participants
The top participant in each task will be invited to speak at our workshop
Behavior Classification Challenge
The emergence of markerless pose estimation tools such as DeepLabCut has revolutionized the quantitative analysis of animal behavior. By applying computer vision tools to estimate the poses of freely behaving animals, researchers can hope to automate the process of detecting behaviors of interest, freeing them from the labor of frame-by-frame annotation of behavior videos, and opening the field to more high-throughput screening of animal behaviors.
Typically, automated classification methods first estimate the poses of animals in terms of a set of anatomical keypoints, then use temporal features constructed from these keypoints alongside manual annotations as training data for a supervised classifier. Unfortunately, there are very few publicly available datasets for training supervised classifiers, and the behaviors annotated in those datasets may not match the set of behaviors a particular researcher wants to study. Collecting and labeling enough training data to reliably identify a behavior of interest therefore remains a major bottleneck in the application of automated analyses to behavioral datasets.
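To make this pipeline concrete, here is a minimal sketch in Python; the array shapes, the windowed features, and the random-forest classifier are illustrative assumptions, not the challenge's reference code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: 1,000 frames, 14 keypoints (7 per mouse x 2 mice),
# each with (x, y) coordinates, plus one behavior label per frame.
keypoints = rng.normal(size=(1_000, 14, 2))
labels = rng.integers(0, 4, size=1_000)  # e.g., attack/mount/investigate/other

def temporal_features(kp, window=5):
    """Stack keypoint positions and frame-to-frame displacements over a
    sliding window of `window` past frames (wraps at the video start)."""
    flat = kp.reshape(len(kp), -1)                                  # positions
    vel = np.vstack([np.zeros((1, flat.shape[1])),                  # displacements
                     np.diff(flat, axis=0)])
    per_frame = np.hstack([flat, vel])
    return np.hstack([np.roll(per_frame, shift, axis=0)
                      for shift in range(window)])

X = temporal_features(keypoints)
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
predicted = clf.predict(X)  # in practice, evaluate on held-out videos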
One way to reduce the training cost of behavior classifiers is to employ unsupervised or semi-supervised methods to first learn more informative, disentangled representations of animals’ movements and actions. While high-quality manually annotated behavior data is scarce, unlabeled videos of animal interactions are abundant and easy to produce. How can we best capitalize on this video data to improve our ability to recognize new behaviors of interest?
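A minimal illustration of this semi-supervised strategy (with PCA standing in for a more powerful learned representation, and all arrays as random stand-ins): the representation is fit on abundant unlabeled data alone, and only the final classifier consumes the scarce labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: abundant unlabeled movement features, small annotated subset.
X_unlabeled = rng.normal(size=(20_000, 140))
X_labeled = rng.normal(size=(500, 140))
y_labeled = rng.integers(0, 4, size=500)

# Step 1: learn a low-dimensional representation from unlabeled data alone.
rep = PCA(n_components=32).fit(X_unlabeled)

# Step 2: train a supervised classifier on the few labeled frames,
# using the learned representation as input.
clf = LogisticRegression(max_iter=1000).fit(rep.transform(X_labeled), y_labeled)
```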
Challenge Dataset
*<Arxiv link to the data coming soon>*
*<link to the dataset and competition coming soon>*
We have assembled a large collection of mouse social interaction videos from our collaborators in the David Anderson laboratory at Caltech, which we have manually curated for this Challenge. All videos use a standard resident-intruder assay format: the resident is a black male mouse and the intruder is a white male or female mouse. Assays are performed in a standard laboratory mouse home cage, recorded using our previously published setup (Hong et al 2015), and filmed using top- and front-view cameras (only top-view data will be provided for this Challenge). The poses of both mice have been estimated in terms of seven anatomical keypoints using our Mouse Action Recognition System, which achieves high accuracy on videos in the Anderson lab environment (Segalin et al 2020, bioRxiv).
Competitors will be provided with frame-by-frame annotation data as well as animal pose estimates; raw video data will not be included. To reflect the fact that behavior video data is cheap to obtain compared to expert annotations, competitors will receive a large amount of unannotated data (i.e., tracked keypoints with no manual behavior annotations) as well as a smaller quantity of data with frame-by-frame behavior annotations.
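The exact file formats will be specified with the dataset release; purely as an illustration of the kind of data described above, the sketch below assumes per-frame poses for two mice with seven (x, y) keypoints each, and behavior labels available only on the annotated subset. All names and shapes are hypothetical.

```python
import numpy as np

N_FRAMES = 10_000    # hypothetical video length in frames
N_MICE = 2           # resident and intruder
N_KEYPOINTS = 7      # anatomical keypoints per mouse

# Hypothetical layout: (frame, mouse, keypoint, xy) pose array.
poses = np.zeros((N_FRAMES, N_MICE, N_KEYPOINTS, 2), dtype=np.float32)

# In the annotated subset, each frame carries a behavior label;
# the larger unannotated portion would provide poses only.
behaviors = np.full(N_FRAMES, "other", dtype=object)
behaviors[100:250] = "close_investigation"   # hypothetical labeled bout
```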
Our competition will have three tasks:
Task 1: Classic classification task. Predict bouts of attack, mounting, and close investigation from hand-labeled examples. Training and test sets will be annotated by the same individual, annotator A.
Task 2: Style transfer task. Different individuals have different rules for what they consider a behavior, particularly in the case of complex behaviors such as attack. Using the training data from annotator A above, as well as a small number of examples of the same behaviors from annotator B, train a classifier that captures annotator B’s “annotation style” (see the sketch after this list).
Task 3: New behavior task. To what extent can we take advantage of transfer learning to reduce training data demands in behavior classification? Using the training data from annotator A above, as well as a small number of examples from a new class of behavior X scored by annotator A, train a classifier that can now recognize behavior X.
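As one illustration of how the Task 2 setup might be approached (a sketch, not a provided baseline: the feature arrays are random stand-ins, and warm-started SGD is just one possible adaptation strategy), a classifier can be pre-trained on annotator A’s abundant labels and then nudged toward annotator B’s style with a few extra passes over B’s small set:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.arange(4)  # e.g., attack / mount / close investigation / other

# Stand-ins: many frames labeled by annotator A, few labeled by annotator B.
X_A, y_A = rng.normal(size=(50_000, 64)), rng.integers(0, 4, size=50_000)
X_B, y_B = rng.normal(size=(1_000, 64)), rng.integers(0, 4, size=1_000)

clf = SGDClassifier()

# Pre-train on annotator A's abundant labels...
clf.partial_fit(X_A, y_A, classes=classes)

# ...then take a few passes over annotator B's small set so the decision
# boundaries drift toward B's annotation style.
for _ in range(5):
    clf.partial_fit(X_B, y_B)

b_style_predictions = clf.predict(X_B)
```

The same warm-start pattern applies to Task 3, with B’s examples replaced by annotator A’s examples of the new behavior X.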
If you find our challenge dataset useful, please cite the following papers:
Hong W, Kennedy A, Burgos-Artizzu XP, Zelikowsky M, Navonne SG, Perona P, Anderson DJ (2015). Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning. PNAS 112(38).
Segalin C, Williams J, Karigo T, Hui M, Zelikowsky M, Sun JJ, Perona P, Anderson DJ, Kennedy A (2020). The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice. bioRxiv.
Prizes
Total: $9,000 USD cash prize
Contact