QABVR 2024 Tutorial

Champalimaud Center for the Unknown in Lisbon, Portugal

Improving performance of your classifier

How well can you detect social behaviors of interest?

Now that you've gotten the hang of classification, you can refine your classifier design and see how high you can push your performance on the MABe Task 1 dataset. You can keep working in your notebook from Part 1, or submit to the official AICrowd Challenge page to see how well your approach performs on the official test set.
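A good first step when tuning a classifier is to look at per-behavior scores rather than overall accuracy, since rare behaviors are easy to miss. Here is a minimal sketch of that evaluation using scikit-learn; the label names and the random stand-in arrays are illustrative, so substitute the frame-level annotations and predictions from your Part 1 notebook.

```python
# Sketch: per-behavior evaluation of frame-level predictions.
# y_true / y_pred stand in for the arrays produced by the Part 1
# notebook; the label ordering here is an assumption.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

labels = ["other", "attack", "mount", "investigation"]  # assumed ordering
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=1000)              # stand-in annotations
y_pred = np.where(rng.random(1000) < 0.8, y_true,
                  rng.integers(0, 4, size=1000))    # stand-in predictions

prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=range(len(labels)), zero_division=0)
for name, p, r, f, n in zip(labels, prec, rec, f1, support):
    print(f"{name:>14s}  precision={p:.2f}  recall={r:.2f}  f1={f:.2f}  n={n}")
```

Behaviors with high precision but low recall suggest your classifier is too conservative for that class, which is often where window-size or threshold changes help most.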



Train a classifier on your own data

Use Bento to annotate behavior, and train the tutorial classifier on that data

If you brought your own dataset (or have collected one here), you can use your time this afternoon to try training your own behavior classifier using the notebook from Part 1.



Try training a neural network for classification

Explore the four baseline models from the MABe 2021 benchmark.

Rather than using our pandas-based filters to characterize the dynamics of our pose features, we can train a neural network to classify behavior from the feature values within a window around the current frame.

In the MABe 2021 paper we tested four different neural network architectures to see how well they perform as behavior classifiers. The winning solution to the competition also used a neural-network-based approach.
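The windowed-classifier idea can be sketched in a few lines: stack the pose features from a window of frames around frame t, flatten the window, and feed it to a small network. This uses scikit-learn's MLPClassifier and synthetic features for brevity; the MABe baselines themselves are deeper models, and the window size here is just an example.

```python
# Sketch: classify each frame from a window of pose features around it.
# The features and labels are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_windows(features, half_window):
    """Edge-pad the (frames x features) matrix, then stack a flattened
    window of 2*half_window + 1 frames around each frame."""
    T, D = features.shape
    padded = np.pad(features, ((half_window, half_window), (0, 0)), mode="edge")
    return np.stack([padded[t:t + 2 * half_window + 1].ravel() for t in range(T)])

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))        # stand-in per-frame pose features
labels = (feats[:, 0] > 0).astype(int)   # toy behavior labels

X = make_windows(feats, half_window=5)   # each row: 11 frames x 8 features
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```

Because the network sees frames before and after the current one, it can pick up on the temporal dynamics that the hand-designed filters were meant to capture.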



Inter-annotator style differences

Study a dataset of 10 videos, each scored by 8 annotators

Annotating behavior can sometimes feel a bit subjective, and different people (even in the same lab) often settle on different mental rules for what counts as the start and stop of a behavior. While working on MARS, we ran a study in which eight labmates all annotated the same set of videos for attack, mounting, and close investigation, and looked at some of the differences in annotations that emerged.

Can we use pose features or classifiers to explain where differences in annotation style come from?
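A natural starting point is to quantify how much annotators actually disagree, for example with pairwise Cohen's kappa on frame-level labels. The sketch below uses synthetic annotations as stand-ins for the 8-annotator dataset described above, and four annotators instead of eight for brevity.

```python
# Sketch: pairwise inter-annotator agreement via Cohen's kappa.
# The annotations are synthetic stand-ins for the real study data.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
base = rng.integers(0, 3, size=2000)  # shared underlying behavior per frame
annotators = {
    f"annot_{i}": np.where(rng.random(2000) < 0.9, base,
                           rng.integers(0, 3, size=2000))
    for i in range(4)  # 4 annotators for brevity; the real study had 8
}

for a, b in combinations(annotators, 2):
    kappa = cohen_kappa_score(annotators[a], annotators[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```

Pairs with unusually low kappa are good candidates for a closer look: plotting pose features around their disagreement frames can reveal whether one annotator, say, starts a bout earlier than the other.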

