International Workshop on Deep Learning for Human Activity Recognition

Held in conjunction with IJCAI-2019, August 10-16, 2019 in Macao, China



Human activity recognition (HAR) can be used for a number of applications, such as healthcare services and smart home applications. Many sensors have been utilized for human activity recognition, such as wearable sensors, smartphones, radio frequency (RF) sensors (WiFi, RFID), LED light sensors, cameras, etc. Owing to the rapid development of wireless sensor networks, a large amount of data has been collected for the recognition of human activities with different kinds of sensors. Conventional shallow learning algorithms, such as support vector machines and random forests, require representative features to be manually extracted from large and noisy sensory data. However, manual feature engineering requires expert knowledge and will inevitably miss implicit features.
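To make the manual feature engineering step concrete, a minimal sketch is given below: it computes a few common statistical features over a fixed-length sensor window. The window here is synthetic and the particular features are illustrative assumptions; real pipelines use many more features chosen by domain experts.

```python
import numpy as np

def handcrafted_features(window):
    """Extract simple statistical features from a 1-D sensor window
    (e.g. one axis of an accelerometer). Illustrative only."""
    return np.array([
        window.mean(),                    # average signal level
        window.std(),                     # variability of the signal
        np.abs(np.diff(window)).mean(),   # mean absolute first difference
        np.square(window).mean(),         # signal energy
    ])

# Synthetic 2-second window sampled at 50 Hz (values are made up).
rng = np.random.default_rng(0)
window = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * rng.standard_normal(100)
features = handcrafted_features(window)
print(features.shape)  # one fixed-length feature vector per window
```

Each window is thus reduced to a fixed-length feature vector before being passed to a shallow classifier such as an SVM or random forest.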

Recently, deep learning has achieved great success in many challenging research areas, such as image recognition and natural language processing. The key merit of deep learning is that it automatically learns representative features from massive data, which makes it a promising candidate for human activity recognition. Some initial attempts can be found in the literature. However, many challenging research problems, in terms of accuracy, device heterogeneity, environment changes, etc., remain unsolved.
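By contrast, representation learning trains the feature extractor and classifier jointly from raw signals. A minimal sketch is given below, using synthetic data and a tiny one-hidden-layer network trained with plain gradient descent; the two "activities" (slow vs fast motion) and all hyperparameters are illustrative assumptions, far smaller than the deep models this workshop targets.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_window(freq):
    # Synthetic 1-D "sensor" window: sine of given frequency plus noise.
    t = np.linspace(0, 1, 64)
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(64)

# Two synthetic "activities": slow motion (2 Hz) vs fast motion (8 Hz).
X = np.stack([make_window(2) for _ in range(100)] +
             [make_window(8) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer learned end-to-end from raw samples (no handcrafted features).
W1 = 0.1 * rng.standard_normal((64, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal(16); b2 = 0.0

for _ in range(500):  # full-batch gradient descent on the logistic loss
    h = np.tanh(X @ W1 + b1)                 # hidden features (learned)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # predicted P(activity = 1)
    g = (p - y) / len(y)                     # gradient of loss w.r.t. logits
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)      # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ W2 + b2)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The hidden layer plays the role that handcrafted features play in a shallow pipeline, but its weights are fit to the data rather than designed by hand.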

This workshop intends to promote state-of-the-art approaches to deep learning for human activity recognition. The organizers invite researchers to participate and submit their research papers to the Deep Learning for Human Activity Recognition Workshop. Selected papers (or extended versions) will be published in a special issue on “Deep Learning for Human Activity Recognition” in the Elsevier journal Neurocomputing (JCR Q1, IF: 3.241).


Important Dates

·         Submission deadline: May 10, 2019

·         Acceptance notification: June 10, 2019

·         Workshop dates: August 10-12, 2019



Potential topics include but are not limited to:

     Device-based HAR using deep learning

     Device-free HAR using deep learning

     Image-based HAR using deep learning

     Light-sensor-based HAR using deep learning

     Sensor fusion for HAR using deep learning

     Fusion of shallow models with deep networks for HAR

     Device heterogeneity in device-based HAR

     Environment changes in device-free HAR

     Transfer learning for HAR

     Online learning for HAR

     Semi-supervised learning for HAR

     Surveys of deep-learning-based HAR



 Authors should follow the IJCAI paper preparation instructions, including the page limit (6 pages plus 1 extra page for references).



Organizers:

·         Xiaoli Li (Nanyang Technological University/A*STAR, Singapore)

·         Peilin Zhao (Tencent AI Lab, P.R.C)

·         Zhenghua Chen (A*STAR, Singapore)

·         Le Zhang (Advanced Digital Sciences Center, Singapore)



Program Committee:

·         Ming-Ming Cheng (Nankai University, P.R.C)

·         Xi Peng (Sichuan University, P.R.C)

·         Vincent Zheng (Advanced Digital Sciences Center, Singapore)

·         Sinno Pan (Nanyang Technological University, Singapore)

·         Joey Tianyi Zhou (A*STAR, Singapore)

·         Wenyu Zhang (Cornell University, USA)

·         Jinming Xu (Purdue University, USA)

·         Han Zou (University of California, Berkeley, USA)

·         Xiaoxuan Lu (University of Oxford, UK)

·         Zenglin Shi (University of Amsterdam, The Netherlands)

·         Min Wu (A*STAR, Singapore)

·         Karl Surmacz (McLaren Applied Technologies, UK)