Challenge UP:

Multimodal Fall Detection

Overview

Winners!

  • 1st place – Hristijan Gjoreski (and team)
  • 2nd place – Egemen Sahin
  • 3rd place – Patricia Endo (and team)
  • Honorable mention – Vuko Jovicic


Details about the competition session during the IJCNN 2019 event will be available here soon!

Finalists!

We are pleased to announce the finalists of the competition. This list does not represent the ranking of the finalists; it is sorted by last name:


  • Patricia Endo (and team)
  • Hristijan Gjoreski (and team)
  • Vuko Jovicic
  • Egemen Sahin


Congratulations!


And thanks to all for participating in this competition.

See you at the competition session and awarding ceremony @ IJCNN 2019.

Description

Falls are frequent, especially among older people, and they are a major health problem according to the World Health Organization. Fall detectors can alleviate this problem and reduce the time it takes for a person who has suffered a fall to receive assistance. Recently, there has been an increase in the development of fall detection systems based mainly on sensor and/or context approaches; however, public datasets are scarce.

In that sense, we built a public multimodal dataset for fall detection to benefit researchers in the fields of wearable computing, ambient intelligence, and vision. To the best of our knowledge, no fall detection competition has been reported, especially one using a multimodal dataset. It is important for the human activity recognition and machine learning research communities to be able to fairly compare their fall detection solutions.

For the competition, we provide a raw dataset collected from 12 subjects who performed 11 activities and falls, with three attempts each, comprising information from wearable sensors, ambient sensors, and vision devices. Nine subjects are released as the labeled training set, and three subjects will be released as the unlabeled testing set. For evaluation, the F1-score will be used as the metric. The participant with the best score will be the winner.
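For reference, the following is a minimal sketch of how an F1-score could be computed for predicted activity labels using scikit-learn. The label names, example arrays, and macro averaging are illustrative assumptions only; they do not reflect the competition's official evaluation script, so participants should follow the official rules.

    # Minimal F1-score sketch, assuming scikit-learn is available.
    # Labels, arrays, and the averaging mode are illustrative only.
    from sklearn.metrics import f1_score

    # Hypothetical ground-truth and predicted activity labels for a few windows.
    y_true = ["fall", "walking", "fall", "sitting", "walking"]
    y_pred = ["fall", "walking", "walking", "sitting", "fall"]

    # Macro averaging treats every class equally; the organizers may use a
    # different scheme, so check the official evaluation details.
    score = f1_score(y_true, y_pred, average="macro")
    print(f"F1-score: {score:.3f}")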

To address this challenge, participants can experiment with different combinations of the multimodal sensors in order to determine which combination best improves the reliability and precision of fall detection systems, as illustrated by the sketch below.
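As an illustration only, one simple way to experiment with sensor combinations is to extract window-level features per modality and concatenate them before classification. The feature functions, sensor names, and random data below are hypothetical and are not part of the released dataset format.

    # Illustrative sketch of fusing features from two modalities, assuming
    # each modality yields a per-window NumPy feature matrix of equal length.
    # Sensor names and feature choices are hypothetical, not the dataset schema.
    import numpy as np

    def window_features(signal, window=100):
        """Mean and standard deviation per non-overlapping window of a 1-D signal."""
        n = len(signal) // window
        trimmed = signal[: n * window].reshape(n, window)
        return np.column_stack([trimmed.mean(axis=1), trimmed.std(axis=1)])

    # Hypothetical raw streams from two modalities (e.g., one wrist accelerometer
    # axis and one ambient sensor channel), aligned to the same length here.
    wearable = np.random.randn(1000)
    ambient = np.random.randn(1000)

    # Concatenating per-window features is one baseline fusion strategy;
    # dropping columns lets you test different sensor combinations.
    fused = np.hstack([window_features(wearable), window_features(ambient)])
    print(fused.shape)  # (number of windows, number of fused features)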

The competition is of particular interest to the growing research community working on human activity recognition and fall detection. It is also attractive to anyone interested in solving challenging signal recognition, vision, and machine learning problems, given that the multimodal dataset provided opens many experimental possibilities.

Organizing team

Hiram Ponce (hponce@up.edu.mx),

Faculty of Engineering, Universidad Panamericana, Mexico.


Lourdes Martínez-Villaseñor (lmartine@up.edu.mx),

Faculty of Engineering, Universidad Panamericana, Mexico.


León Palafox (lpalafox@up.edu.mx),

Faculty of Engineering, Universidad Panamericana, Mexico.


Karina Pérez (kperezd@up.edu.mx),

Faculty of Engineering, Universidad Panamericana, Mexico.