MoCA Dataset

Kinematic and Multi-View Visual Streams

of Fine-Grained Cooking Actions

Download Dataset and Code

MoCA is a bi-modal dataset with Motion Capture data and video sequences acquired from multiple views. The focus is on upper body actions in a cooking scenario. A specific goal is to investigate view-invariant action properties in both biological and artificial systems, and in this sense the dataset may be of interest to multiple research communities in the cognitive and computational domains.

The dataset consists of 20 different actions with significant diversity in motion granularity and in the composition of motion primitives, as well as in the presence of tools.
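As a starting point, the sketch below shows one way the two modalities might be loaded side by side for a single action. The directory layout, file names, marker CSV format, and view names are assumptions for illustration only, not the dataset's documented structure; adapt them to the actual release.

```python
# Minimal sketch: load one MoCap sequence and the matching multi-view videos.
# All paths and file formats below are hypothetical placeholders.
import numpy as np
import pandas as pd
import cv2

action = "cutting"  # placeholder action name

# Hypothetical MoCap file: one row per frame, columns are marker x/y/z coordinates.
mocap = pd.read_csv(f"moca/{action}/mocap.csv")
markers = mocap.to_numpy(dtype=np.float32)  # shape: (n_frames, n_markers * 3)

# Hypothetical video files, one per camera view.
view_names = ["view0", "view1"]
for name in view_names:
    cap = cv2.VideoCapture(f"moca/{action}/{name}.mp4")
    ok, frame = cap.read()  # read the first frame of this view
    if ok:
        print(f"{name}: frame {frame.shape}, mocap frames: {markers.shape[0]}")
    cap.release()
```

If the kinematic and visual streams are recorded at different rates, they would still need to be temporally aligned (e.g. by resampling the MoCap trajectories to the video frame rate) before joint analysis.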