VACE
Virtual Annotated Cooking Environment
About
We present the Virtual Annotated Cooking Environment (VACE), a new open-source virtual reality dataset and simulator for object interaction tasks in a rich kitchen environment.
We use a Unity-based VR simulator to create thoroughly annotated video sequences of a virtual human avatar performing food preparation activities. Based on the MPII Cooking 2 dataset, VACE enables the recreation of recipes for meals such as sandwiches, pizzas, and fruit salads, as well as shorter activity sequences such as cutting vegetables. For complex recipes, multiple samples are included, following different orderings of valid partially ordered plans. The dataset provides RGB and depth camera views, bounding boxes, object segmentation masks, human joint poses, and object poses, as well as ground-truth interaction data in the form of temporally labeled semantic predicates (holding, on, in, colliding, moving, cutting).
Features
VR interface
Immersive environment interaction via HTC Vive headset, controller, and chest tracker
Rich interactive kitchen environment
~80 tool, dish, and cutlery objects
~50 food objects
~20 furniture objects
Efficient sample generation
Easy-to-use sample recording process
User guidance through a step-by-step HUD recipe display based on the MPII Cooking 2 dataset
Thorough annotation
RGB view
Depth view
Object segmentation mask
Object bounding boxes
All object poses in 3D space
Logic predicates
on, in, grasping, pushing, cutting
RGB View
Depth View
Segmentation Mask
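The temporally labeled predicates lend themselves to simple interval-based queries, e.g. "which relations hold at frame t?". As a minimal sketch (the schema, field names, and values below are hypothetical illustrations, not VACE's actual annotation format), one could represent each predicate as a (predicate, subject, object, start frame, end frame) interval and filter by frame:

```python
from dataclasses import dataclass

@dataclass
class PredicateInterval:
    """One temporally labeled semantic predicate (hypothetical schema)."""
    predicate: str   # e.g. "cutting", "on", "in"
    subject: str     # acting object, e.g. "knife"
    obj: str         # affected object, e.g. "cucumber"
    start: int       # first frame in which the predicate holds
    end: int         # last frame in which the predicate holds (inclusive)

def active_at(annotations, frame):
    """Return all predicates that hold at the given frame."""
    return [a for a in annotations if a.start <= frame <= a.end]

# Toy annotation track for a "cut cucumber" sample (invented values).
track = [
    PredicateInterval("holding", "hand_right", "knife", 10, 120),
    PredicateInterval("on", "cucumber", "cutting_board", 0, 200),
    PredicateInterval("cutting", "knife", "cucumber", 40, 110),
]

print([a.predicate for a in active_at(track, 50)])
```

Queries like this make it easy to extract, for example, all frames in which a cutting interaction is active when training or evaluating an interaction-recognition model.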
Dataset Statistics
22 samples
10 x cut cucumber
4 x cut bread
4 x prepare salad
4 x prepare open-faced sandwich
Variations:
with/without washing of the ingredients
with/without tidying up after preparation
with knife/with grater
order: get tools first/get food items first
salad: with/without additional spices, with/without stirring after seasoning, with/without pouring the salad into another bowl after stirring
sandwich: bun/toast
Single Sample Description
Download
Citation
@inproceedings{koller2022new,
title={A New VR Kitchen Environment for Recording Well Annotated Object Interaction Tasks},
author={Koller, Michael and Patten, Timothy and Vincze, Markus},
booktitle={Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction},
pages={629--633},
year={2022}
}
Acknowledgements
The research leading to these results received funding from the Austrian Science Fund (FWF) under grant agreement No. I3969-N30 (InDex) and from the Doctoral College TrustRobots at TU Wien.
Contact Us
If you have a feature request, or if you have recorded samples that you would like to add to the dataset, please send us an email!
Maintainer: Michael Koller - (koller_michael@gmx.net)