Organisers

Fabien Ringeval, Université Grenoble Alpes, CNRS, France, fabien.ringeval@imag.fr

Fabien Ringeval has been Associate Professor at the Laboratoire d'Informatique de Grenoble (LIG), CNRS, Université Grenoble Alpes, France, since 2016. His research interests concern digital signal processing and machine learning, with applications to the understanding of human behaviours from multimodal data. Dr. Ringeval (co-)authored more than 70 publications leading to more than 2,100 citations (h-index = 19). He co-organised several workshops and challenges, including the INTERSPEECH 2013 ComParE challenge, and the International Audio/Visual Emotion Challenge and Workshop (AVEC) series since 2015. He serves as Area Chair for ACM MM 2019, Senior Program Committee member for ICMI 2019 and ACII 2019, Grand Challenge Chair for ACM ICMI 2018, Publication Chair for ACII 2017, and as a reviewer for funding agencies (ANR, NSERC) and several leading journals, conferences, and workshops in the field.

Björn W. Schuller, Imperial College London/University of Augsburg, UK/Germany, schuller@ieee.org

Björn W. Schuller is Full Professor and Head of the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, Reader at Imperial College London, UK, and Chief Executive Officer (CEO) and Co-Founder of audEERING, Germany. He is best known for his work advancing machine learning for affective computing and multimedia retrieval. Prof. Schuller (co-)authored more than 700 publications leading to more than 18,000 citations (h-index = 65). He is Editor-in-Chief of the IEEE Transactions on Affective Computing, serves as General Chair for ACII 2019, ACII Asia 2018, and ICMI 2014, and initiated and co-organised several international challenges, including the INTERSPEECH ComParE challenge series and the Audio/Visual Emotion Challenge and Workshop (AVEC) series. He is President-Emeritus of the AAAC, Fellow of the IEEE, and Senior Member of the ACM.

Michel F. Valstar, University of Nottingham, UK, michel.valstar@nottingham.ac.uk

Michel F. Valstar is Associate Professor in the Mixed Reality Lab of the University of Nottingham, UK. He works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions. Dr. Valstar (co-)authored more than 110 publications leading to more than 6,500 citations (h-index = 34). He co-organised the premier, first-of-its-kind Audio/Visual Emotion Challenge and Workshop (AVEC) series, and further initiated and organised the FERA competition series – another first-of-its-kind event in the Computer Vision community, focussing on Facial Action Recognition. He also serves as an Associate Editor for the IEEE Transactions on Affective Computing.

Nicholas Cummins, University of Augsburg, Germany, nicholas.cummins@ieee.org

Nicholas Cummins is a habilitation candidate at the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg. He received his Ph.D. in Electrical Engineering from UNSW Australia in February 2016. He is currently involved in the Horizon 2020 projects DE-ENIGMA, RADAR-CNS, and TAPAS. His current research interests include multisensory signal analysis, affective computing, and computer audition, with a particular focus on the understanding and analysis of different health states. He has (co-)authored over 50 conference and journal papers (over 450 citations, h-index = 12). Dr. Cummins is a frequent reviewer for IEEE, ACM, and ISCA journals and conferences, and serves on program and organisational committees. He is a member of the ACM, ISCA, IEEE, and the IET.

Roddy Cowie, Queen's University Belfast, UK, r.cowie@qub.ac.uk

Roddy Cowie studied Philosophy and Psychology and received his PhD from the University of Sussex on relationships between human and machine vision. He joined the psychology department at Queen's University Belfast in 1975 and became a Full Professor in 2003. He has applied computational methods to the study of complex perceptual phenomena in a range of areas – perceiving pictures, the subjective experience of deafness, and the information that speech conveys about the speaker. Recently, he has focussed on the perception of emotion through a series of EC projects, and was co-ordinator of the HUMAINE network of excellence. Prof. Cowie (co-)authored more than 260 publications leading to more than 9,600 citations (h-index = 42).

Maja Pantic, Imperial College London/University of Twente, UK/The Netherlands, m.pantic@imperial.ac.uk

Maja Pantic is Full Professor and Head of the Intelligent Behaviour Understanding Group (iBUG), Imperial College London, UK, working on machine analysis of human non-verbal behaviour and its applications to HCI. She is also a part-time Professor of Affective & Behavioural Computing at EEMCS, University of Twente, the Netherlands. Prof. Pantic (co-)authored more than 430 publications leading to more than 23,000 citations (h-index = 71). She serves as Editor-in-Chief of the Image and Vision Computing Journal (IVCJ). She organised and co-organised various symposia on Automatic Human Behaviour Analysis and Synthesis in conjunction with IEEE SMC (2004), ACM Multimedia (2005, 2010), ACM ICMI (2006-2009), and IEEE CVPR (2008-2010).

Data Chairs

Mohammad Soleymani, University of Southern California, USA, soleymani@ict.usc.edu

Mohammad Soleymani is a Research Assistant Professor in computer science at the USC Institute for Creative Technologies (ICT) and the USC Viterbi School of Engineering. At ICT, he leads the effort on non-verbal behaviour understanding and multimodal machine learning. He has served on multiple conference organising committees and in editorial roles, most notably as Associate Editor for the IEEE Transactions on Affective Computing and Technical Program Chair for ACM ICMI 2018 and ACII 2017. He is one of the founding organisers of the MediaEval multimedia retrieval benchmarking campaign and President-Elect of the Association for the Advancement of Affective Computing (AAAC). His main line of research involves developing automatic emotion recognition and behaviour understanding methods using physiological signals and facial expressions. He is also interested in understanding subjective attributes in multimedia content, e.g., predicting whether an image is interesting from its pixels, or automatic recognition of music mood from acoustic content.

Maximilian Schmitt, University of Augsburg, Germany, maximilian.schmitt@informatik.uni-augsburg.de

Maximilian Schmitt is a research assistant at the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany. He received his diploma degree (Dipl.-Ing.) in Electrical Engineering and Information Technology from RWTH Aachen University, Germany, in 2012. His research focusses on signal processing, machine learning, and multimodal affect recognition, with special interests in intelligent audio analysis and computational paralinguistics. He has served as a reviewer for the IEEE Transactions on Affective Computing (T-AffC), IEEE Signal Processing Letters (SPL), IEEE Transactions on Cybernetics, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Elsevier Computer Speech & Language (CSL), and Elsevier Knowledge-Based Systems (KNOSYS).

Eva-Maria Messner, Ulm University, Germany, eva-maria.messner@uni-ulm.de

Eva-Maria Messner is a researcher at the Department of Psychology and Psychotherapy at Ulm University. Her background is in Psychology, Systemic Family Therapy, and Sport Sciences. Her research mainly focusses on the use of technology to foster health behaviour or to improve health. Furthermore, she specialises in passive sensing and the use of personal narratives to assess and influence mood and well-being.