RoboCup@Home-OBJECTS benchmark

Nizar Massouh, Lorenzo Brigato, Luca Iocchi

DIAG - Sapienza University of Rome, Italy


This web site contains some samples of data sets, models and results to complement the RoboCup Symposium 2019 paper.

This web site will be continuously updated and used as a community resource for RoboCup@Home teams to improve the data sets, the models, and the object recognition functionality in RoboCup@Home.


TRAINING SET

196K images DOWNLOAD (1.7 GB)

The structure of the main RoboCup@Home-Objects data set is presented next.

The data set has 8 main parent categories containing 180 child categories in total, distributed as follows:

Parent            Number of children
Cleaning_stuff    37
Containers        17
Cutlery           15
Drinks            17
Food              22
Fruits            23
Snacks            26
Tableware         23

The images were downloaded using Google, Yahoo! and Bing and cleaned of duplicates. We ended up with a total of 196K images, which we split into 80% for training and 20% for validation.
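As an illustration, a deterministic per-class 80/20 split can be written in a few lines of Python. The flat category/image directory layout and the fixed seed below are assumptions for illustration, not the exact procedure we used.

    import os
    import random

    def split_dataset(root, train_ratio=0.8, seed=0):
        """Split each category directory into train/val lists (80%/20%).

        Assumes a layout of <root>/<category>/<image files>; the fixed
        seed makes the split reproducible across runs."""
        rng = random.Random(seed)
        train, val = [], []
        for category in sorted(os.listdir(root)):
            files = sorted(os.listdir(os.path.join(root, category)))
            rng.shuffle(files)
            cut = int(len(files) * train_ratio)
            train += [(category, f) for f in files[:cut]]
            val += [(category, f) for f in files[cut:]]
        return train, val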

TRAINED MODELS

We fine-tuned an AlexNet and a GoogLeNet, both pretrained on ImageNet's ILSVRC12, on our data, using the Caffe framework on the NVIDIA Deep Learning GPU Training System (DIGITS). Only the last fully connected layer of AlexNet and the classifier after the last pooling layer of GoogLeNet learned new parameters, with their learning-rate multipliers increased by 1. We set the initial learning rate to 0.001 and stepped it down to 10% of its value every 7.5 epochs. We trained both networks for 30 epochs with a stochastic gradient descent (SGD) solver.
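For reference, that step schedule can be written as a small Python helper. This is a sketch of the learning-rate curve only, not of the full DIGITS configuration, and it assumes the step reduces the rate to 10% of its value (a factor-of-10 drop, as is typical for Caffe/DIGITS step policies).

    def learning_rate(epoch, base_lr=0.001, gamma=0.1, step_epochs=7.5):
        """Step-down schedule: multiply base_lr by gamma after every
        step_epochs epochs (a factor-of-10 drop under our assumption)."""
        return base_lr * (gamma ** int(epoch // step_epochs))

    # Over the 30 training epochs this gives four plateaus:
    # epochs [0, 7.5) -> 1e-3, [7.5, 15) -> 1e-4,
    # [15, 22.5) -> 1e-5, [22.5, 30) -> 1e-6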

Trained models (i.e., structure and weights) can be found here.

Caffe: AlexNet \ GoogLeNet

TensorFlow: AlexNet \ GoogLeNet

EXAMPLES

Google Colab notebooks, MATLAB scripts and Caffe scripts are available to demonstrate the use of the data sets and models.

  • Colab Example (Caffe)

  • Colab Example (TensorFlow)

  • MATLAB Example (by Sebastian Castro) - follow the instructions in the linked folder

  • Caffe Classification script
    Usage: the script takes as its argument the path to the image to be classified.
    The paths to the model's components must be edited inside the script; a minimal sketch of such a script is shown below.
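A minimal sketch of such a classification script, assuming Caffe's Python bindings. The file names deploy.prototxt, weights.caffemodel and labels.txt are placeholders for the model components linked above, and 'prob' is the output blob name used in the standard AlexNet/GoogLeNet deploy files.

    import sys
    import caffe

    # Placeholder paths: edit these to point at the downloaded model files.
    DEPLOY = 'deploy.prototxt'       # network structure
    WEIGHTS = 'weights.caffemodel'   # trained weights
    LABELS = 'labels.txt'            # one category name per line

    net = caffe.Net(DEPLOY, WEIGHTS, caffe.TEST)

    # Caffe expects C x H x W, BGR, pixel values in [0, 255]
    # (mean subtraction omitted for brevity).
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_channel_swap('data', (2, 1, 0))
    transformer.set_raw_scale('data', 255)

    # The image path is given as the only command-line argument.
    image = caffe.io.load_image(sys.argv[1])
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    probs = net.forward()['prob'][0]

    # Print the top-5 predicted categories with their probabilities.
    labels = [line.strip() for line in open(LABELS)]
    for i in probs.argsort()[::-1][:5]:
        print('%.3f  %s' % (probs[i], labels[i]))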


GitHub-seeded web benchmark

5.7K images DOWNLOAD (0.9 GB)

The RoboCup@Home GitHub repository contains photos of objects actually used in several competitions. We used these photos as seeds to create a benchmark (5.7K images) of visually similar images collected from the web, taking advantage of Google's reverse image search, which takes an image with an optional label and returns images that are visually and semantically similar. After collecting the competition photos of objects, we ended up with 160 seeds divided into 8 categories (our parent categories). We then used each of these photos as a seed, providing its category as the label, and downloaded the first 50 returned images.
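In outline, the collection procedure is a simple loop over the 160 seed photos. The helper reverse_image_search below is hypothetical: Google's search-by-image service has no official public API, so a real implementation would have to wrap a browser session or a third-party service.

    def build_seeded_benchmark(seeds, n_results=50):
        """seeds: list of (photo_path, parent_category) pairs.

        For each competition photo, query a reverse image search with the
        photo and its category label, then keep the first n_results hits."""
        dataset = []
        for photo, category in seeds:
            urls = reverse_image_search(photo, label=category)  # hypothetical helper
            dataset += [(url, category) for url in urls[:n_results]]
        return dataset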

Some examples of seeds vs. returned results:

Pepper Objects Benchmark

The Pepper data set has been organized along three directions <L, O, R> that allow organizing the images taken by the robot, while also accommodating contributions from other research groups.

    • L is the set of locations where the images were taken: storing this information enables comparisons across different ambient and background conditions.

    • O is the set of object categories considered in the data set: specific object instances may of course change, since the data set is collected in a distributed way.

    • R is the set of different acquisition runs for a specific object in a specific location: having more acquisition sessions enables a better statistical analysis of the results.

    • Each run contains:

      1. 1 picture taken in modality A

      2. 1 picture taken in modality B

      3. 10 pictures taken in modality C

      4. 20 pictures taken in modality D

For more details about the picture modalities, take a look at the examples below. Each run thus contains 1 + 1 + 10 + 20 = 32 pictures; a sketch of how such a data set can be indexed follows.
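A minimal indexing sketch, assuming a nested <root>/<location>/<object>/<run> directory layout; this layout is an assumption for illustration, not the published format.

    import os

    # Assumed layout: <root>/<location>/<object>/<run>/<image files>,
    # where each run holds 1 + 1 + 10 + 20 = 32 pictures (modalities A-D).
    def index_pepper_dataset(root):
        """Group image paths by their (location, object, run) triple."""
        index = {}
        for loc in sorted(os.listdir(root)):
            for obj in sorted(os.listdir(os.path.join(root, loc))):
                obj_dir = os.path.join(root, loc, obj)
                for run in sorted(os.listdir(obj_dir)):
                    run_dir = os.path.join(obj_dir, run)
                    index[(loc, obj, run)] = sorted(
                        os.path.join(run_dir, f) for f in os.listdir(run_dir))
        return index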


A sample of the data set can be found at this link (more objects and locations will be published).

Transversal light condition - Object: banana

Results

Evaluation of our models trained on the RoboCup@Home-Objects data set.



Validation accuracy

Test accuracy (GitHub-seeded test set)

Top-1 and Top-5 parent-majority accuracy of our 4 models trained on RoboCup@Home-Objects and tested on the GitHub objects (gh-o) and GitHub-seeded (gh-s) data sets; a sketch of the parent-majority metric is given below.
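One plausible reading of the parent-majority metric, sketched under the assumption that each of the 180 child classes maps to one of the 8 parents: take the model's k most probable child classes, map each to its parent, and predict the most frequent parent. The child_to_parent mapping is a hypothetical input derived from the category tree.

    from collections import Counter

    def parent_majority(child_probs, child_to_parent, k=5):
        """Predict a parent category from child-class probabilities.

        child_probs: per-child-class scores (length 180);
        child_to_parent: hypothetical mapping from child index to parent
        name. With k=1 this reduces to the parent of the top prediction."""
        topk = sorted(range(len(child_probs)), key=lambda i: -child_probs[i])[:k]
        return Counter(child_to_parent[i] for i in topk).most_common(1)[0][0]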

Test accuracy (Pepper test set)

Results of the GoogLeNet@Home-180 model on 7 different Pepper objects, reported as accuracy percentage over all acquisition procedures.

Accuracy on 5 selected objects while varying the robot acquisition behavior.

UPLOAD DATA SETS, MODELS AND TOOLS

Crowd-sourced data set collection. RoboCup@Home teams and researchers interested in developing benchmarking tools for object recognition in home robots are invited to contribute to this effort by uploading data sets, models and tools.


This feature will be enabled soon.