AIME 2017 Tutorial: Latest speech & signal processing for affective and behavioral computing in mHealth
21 June 2017
This tutorial will focus on the latest speech & signal processing techniques within affective and behavioral computing (ABC) as utilised in mHealth. It will comprise three main stages: 1) an introduction to the topic, including the history of affective and behavioral computing, an overview of past and upcoming paralinguistic challenges, and the role of affective computing techniques within ongoing European mHealth projects; 2) an overview of advanced speech processing and analysis techniques applicable to affective and behavioral computing in mHealth (acoustic and linguistic feature extraction, types of acoustic features, modelling of cross-speaker, cross-gender, and cross-cultural differences, and machine learning techniques); 3) practical implementations, in which attendees will receive training on running established open-source feature extraction and machine learning toolkits, including the state-of-the-art openSMILE and the novel openXBOW multisensorial feature extraction software.
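To give a flavour of the frame-level acoustic feature extraction covered in the practical stage, here is a minimal, hypothetical sketch in plain NumPy (not openSMILE itself, which is configured via its own config files): it computes two classic low-level descriptors, short-time energy and zero-crossing rate, over a sliding window. The frame length and hop size below are illustrative defaults (25 ms / 10 ms at 16 kHz), not values prescribed by the tutorial.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Compute short-time energy and zero-crossing rate per frame.

    Illustrative low-level descriptors; real ABC pipelines (e.g. openSMILE)
    extract much larger feature sets plus statistical functionals.
    """
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))                       # short-time energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))  # zero-crossing rate
        feats.append((energy, zcr))
    return np.array(feats)

# Toy usage: 1 second of a 440 Hz tone at a 16 kHz sampling rate
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
feats = frame_features(sig)  # shape: (num_frames, 2)
```

Per-frame descriptors like these are typically summarised over an utterance (mean, variance, percentiles) before being fed to a classifier, which is the pattern the openSMILE and openXBOW toolkits generalise.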
Aims: At the end of this tutorial, participants will be able to:
- Familiarise themselves with ABC and its applications in medicine
- Apply ABC technologies for passive, non-invasive, and non-intrusive smart monitoring
- Process speech for cost-efficient, automated, objective diagnosis, as well as for monitoring various health states
Presenters:
- Prof. Björn Schuller, Imperial College London, UK
- Dr. Bogdan Vlasenko, University of Passau, Germany (bogdan.vlasenko(at)uni-passau.de)
- Dr. Hesam Sagha, audEERING GmbH, Gilching, Germany
Registration
To register for the tutorial, please use the conference website (Registration Form).
Associated projects