Challenge UP:

Multimodal Fall Detection

Data

For this competition, we present a large dataset for fall detection, named UP-Fall Detection, that includes 11 activities with three trials per activity, performed by 12 subjects. Subjects performed six simple human daily activities as well as five different types of human falls. The data were collected from these 12 subjects using a multimodal approach, i.e. wearable sensors, ambient sensors and vision devices.

At the end of this page, you will find the training dataset (see the publication date in the Important Dates section).

The data were collected over a period of four weeks, from 18th June to 13th July 2018, on the third floor of the Faculty of Engineering, Universidad Panamericana, Mexico City, Mexico. In total, 17 subjects (9 male and 8 female) took part, but only the recordings of 12 subjects are taken into account in this competition. Subjects were aged 18–24 years, with a mean height of 1.66 m and a mean weight of 66.8 kg, and each was invited to perform 11 different activities (see Table 2). The activities comprise six simple human daily activities (walking, standing, picking up an object, sitting, jumping and lying) and five human falls (falling forward using hands, falling forward using knees, falling backwards, falling sitting in an empty chair and falling sideward). All daily activities were performed for 60 seconds, except jumping, which was performed for 30 seconds, and picking up an object, which is an action done once within a 10-second period. Each fall was performed only once, within a 10-second period (see Table 2). An extra activity, labeled "in knees" (Activity ID 20), covers the case where a subject remains on their knees after falling.

Table 2: Activities performed in the dataset.
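
As a compact, machine-readable view of the activities and durations described above, the following minimal Python sketch encodes them as a lookup table. All numeric IDs except 20 (which is stated in the text) are illustrative assumptions; check the released files for the actual ID assignment.

    # Activity catalogue as described above. Durations are in seconds.
    # All IDs except 20 ("in knees", stated in the text) are hypothetical
    # placeholders for illustration only.
    ACTIVITIES = {
        1: ("Falling forward using hands", 10),
        2: ("Falling forward using knees", 10),
        3: ("Falling backwards", 10),
        4: ("Falling sideward", 10),
        5: ("Falling sitting in an empty chair", 10),
        6: ("Walking", 60),
        7: ("Standing", 60),
        8: ("Sitting", 60),
        9: ("Picking up an object", 10),
        10: ("Jumping", 30),
        11: ("Lying", 60),
        20: ("In knees after falling", 10),  # extra label mentioned in the text
    }

    def is_fall(activity_id: int) -> bool:
        """In this sketch, the five fall activities take IDs 1-5."""
        return 1 <= activity_id <= 5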

Figure 1: Distribution of the sensors. (a) Wearable sensors and EEG headset located at the human body. (b) Layout of the context-aware sensors and camera views.

We used a controlled laboratory room in which the light intensity does not vary, and the ambient sensors and cameras remained in the same position throughout the data collection process. However, the windows were left uncovered, so some camera recordings show people moving in the background. See Figures 1(a) and 1(b) for the distribution of the wearable sensors, ambient sensors and cameras.

We used five Mbientlab MetaSensor wearable sensors collecting raw data from a 3-axis accelerometer, a 3-axis gyroscope and an ambient light sensor. These wearables were located on the left wrist, under the neck, in the right trouser pocket, at the middle of the waist (on the belt), and on the left ankle. In addition, one NeuroSky MindWave electroencephalograph (EEG) headset was used to measure the raw brainwave signal from its single EEG channel located at the forehead. As context-aware sensors, we installed six infrared sensors as a grid 0.40 m above the floor of the room to detect interruptions of the optical beams, where 0 means interruption and 1 means no interruption. Lastly, two Microsoft LifeCam Cinema cameras were located 1.82 m above the floor, one for a lateral view and the other for a frontal view. Table 3 summarizes all the sensors used and the units of measurement for each channel.
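
To illustrate how the modalities described above might be assembled for analysis, here is a minimal pandas sketch. The file layout and all column names are assumptions for illustration only; consult the headers of the released CSV files for the actual schema.

    import pandas as pd

    # Hypothetical layout: one CSV per subject/activity/trial with one
    # timestamped row per sample. All column names are assumptions, not
    # the official schema of the released files.
    WEARABLE_COLS = [  # e.g. the wrist-worn sensor: accelerometer, gyroscope, light
        "wrist_acc_x", "wrist_acc_y", "wrist_acc_z",
        "wrist_gyro_x", "wrist_gyro_y", "wrist_gyro_z",
        "wrist_lux",
    ]
    EEG_COLS = ["eeg_raw"]  # single raw brainwave channel from the headset
    INFRARED_COLS = [f"infrared_{i}" for i in range(1, 7)]  # 0 = interrupted, 1 = clear

    def load_trial(path: str) -> pd.DataFrame:
        """Load one trial and keep the timestamp plus the sensor channels."""
        df = pd.read_csv(path, parse_dates=["timestamp"])
        return df[["timestamp"] + WEARABLE_COLS + EEG_COLS + INFRARED_COLS]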

Training and Testing Datasets

The competition is now closed. However, if you are interested in using the whole dataset (not only the portion released for this competition), you can visit the official website: UP-Fall Detection data set.

If you use these data for your own research, please read further details of the data set and cite the following publication:

  • Lourdes Martínez-Villaseñor, Hiram Ponce, Jorge Brieva, Ernesto Moya-Albor, José Núñez-Martínez, Carlos Peñafort-Asturiano, “UP-Fall Detection Dataset: A Multimodal Approach”, Sensors, 19(9), 1988, 2019, doi:10.3390/s19091988.


For this competition, all raw data from 9 subjects are released for training (labeled), and the data from the remaining 3 subjects are reserved for testing and evaluation (unlabeled). It is worth noting that the dataset is large because it contains several types of falls and daily activities. It is also interesting because it collects and organizes different modalities: wearable, ambient and vision sources. Using the dataset is straightforward, so we do not provide any starter code.
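
During development it may help to mimic the competition's subject-wise split, so that no subject's data leaks between the training and validation sides. Below is a minimal sketch using scikit-learn; the toy DataFrame and its subject_id column are illustrative stand-ins for the real data.

    import pandas as pd
    from sklearn.model_selection import GroupShuffleSplit

    # Toy frame standing in for the real data; "subject_id" is an
    # illustrative column name, not the official schema.
    data = pd.DataFrame({
        "subject_id": [s for s in range(1, 13) for _ in range(5)],
        "feature": range(60),
    })

    # Subject-wise hold-out: every row of a given subject lands on exactly
    # one side, mirroring the competition design (9 subjects for training,
    # 3 subjects for testing).
    splitter = GroupShuffleSplit(n_splits=1, test_size=3 / 12, random_state=0)
    train_idx, test_idx = next(splitter.split(data, groups=data["subject_id"]))
    train, test = data.iloc[train_idx], data.iloc[test_idx]
    print(sorted(test["subject_id"].unique()))  # the three held-out subjects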

The training and testing datasets will be available in the Datasets section (the training dataset is now posted).