Organisers

Fabien Ringeval, Université Grenoble Alpes, CNRS, France, fabien.ringeval@imag.fr

Fabien Ringeval has been an Associate Professor at the Laboratoire d'Informatique de Grenoble (LIG), CNRS, Université Grenoble Alpes, France, since 2016. His research interests concern digital signal processing and machine learning, with applications to the automatic recognition of paralinguistic information from multimodal data. Dr. Ringeval (co-)authored more than 60 publications leading to more than 1,300 citations (h-index = 15). He co-organised several workshops and challenges, including the INTERSPEECH 2013 ComParE challenge and, since 2015, the International Audio/Visual Emotion Challenge and Workshop (AVEC) series. He served as Grand Challenge Chair for ACM ICMI 2018 and Publication Chair for ACII 2017, as a project reviewer for funding agencies (ANR, NSERC), and as a reviewer for several leading journals, conferences, and workshops in the field.

Björn W. Schuller, Imperial College London/University of Augsburg, UK/Germany, schuller@ieee.org

Björn W. Schuller is Full Professor and Head of the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, Reader at Imperial College London, UK, and Chief Executive Officer (CEO) and Co-Founder of audEERING, Germany. He is best known for his work advancing machine learning for affective computing and multimedia retrieval. Prof. Schuller (co-)authored more than 700 publications leading to more than 18,000 citations (h-index = 65). He is Editor-in-Chief of the IEEE Transactions on Affective Computing, served as General Chair for ICMI 2014 and ACII Asia 2018, serves as General Chair for ACII 2019, and initiated and co-organised several international challenges, including the INTERSPEECH ComParE challenge series and the Audio/Visual Emotion Challenge and Workshop (AVEC) series. He is President-Emeritus of the AAAC, a Fellow of the IEEE, and a Senior Member of the ACM.

Michel F. Valstar, University of Nottingham, UK, michel.valstar@nottingham.ac.uk

Michel F. Valstar is Associate Professor in the Mixed Reality Lab at the University of Nottingham, UK. He works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions. Dr. Valstar (co-)authored more than 110 publications leading to more than 6,500 citations (h-index = 34). He co-organised the first-of-its-kind and premier Audio/Visual Emotion Challenge and Workshop (AVEC) series, and further initiated and organised the FERA competition series, another first-of-its-kind event in the computer vision community, focussing on facial action recognition. He also serves as an Associate Editor for the IEEE Transactions on Affective Computing.

Roddy Cowie, Queen's University Belfast, UK, r.cowie@qub.ac.uk

Roddy Cowie studied Philosophy and Psychology and received his PhD from the University of Sussex on relationships between human and machine vision. He joined the psychology department at Queen's University Belfast in 1975 and became a Full Professor in 2003. He has applied computational methods to the study of complex perceptual phenomena in a range of areas, including the perception of pictures, the subjective experience of deafness, and the information that speech conveys about the speaker. More recently, he has focussed on the perception of emotion through a series of EC projects, and was co-ordinator of the HUMAINE Network of Excellence. Prof. Cowie (co-)authored more than 260 publications leading to more than 9,600 citations (h-index = 42).

Maja Pantic, Imperial College London/University of Twente, UK/The Netherlands, m.pantic@imperial.ac.uk

Maja Pantic is Full Professor and Head of the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London, UK, working on machine analysis of human non-verbal behaviour and its applications to HCI. She is also a part-time Professor of Affective & Behavioural Computing in the EEMCS faculty of the University of Twente, the Netherlands. Prof. Pantic (co-)authored more than 430 publications leading to more than 23,000 citations (h-index = 71). She serves as Editor-in-Chief of the Image and Vision Computing Journal (IVCJ). She organised or co-organised various symposia on Automatic Human Behavior Analysis and Synthesis in conjunction with IEEE SMC (2004), ACM Multimedia (2005, 2010), ACM ICMI (2006-2009), and IEEE CVPR (2008-2010).

Data Chairs

Heysem Kaya, Namık Kemal University, Turkey, hkaya@nku.edu.tr

Heysem Kaya completed his PhD thesis on computational paralinguistics and multimodal affective computing at the Computer Engineering Department, Boğaziçi University, in 2015. His research interests include mixture model selection, speech processing, computational paralinguistics, affective computing, multi-view/multi-modal learning, and intelligent biomedical applications. He is a member of the editorial board of SPIIRAS Proceedings, and serves as a reviewer for IEEE Transactions on Affective Computing; IEEE Transactions on Neural Networks and Learning Systems; Computer Speech & Language; Neurocomputing; Pattern Recognition; Pattern Recognition Letters; Information Fusion; Digital Signal Processing; and IEEE Signal Processing Letters.

Nicholas Cummins, University of Augsburg, Germany, nicholas.cummins@ieee.org

Nicholas Cummins is a habilitation candidate at the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg. He received his Ph.D. in Electrical Engineering from UNSW Australia in February 2016. He is currently involved in the Horizon 2020 projects DE-ENIGMA, RADAR-CNS, and TAPAS. His current research interests include multisensory signal analysis, affective computing, and computer audition, with a particular focus on the understanding and analysis of different health states. He has (co-)authored over 50 conference and journal papers leading to over 450 citations (h-index = 12). Dr Cummins is a frequent reviewer for IEEE, ACM, and ISCA journals and conferences, and serves on program and organisational committees. He is a member of ACM, ISCA, IEEE, and the IET.

Maximilian Schmitt, University of Augsburg, Germany, maximilian.schmitt@informatik.uni-augsburg.de

Maximilian Schmitt is a research assistant at the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany. He received his diploma degree (Dipl.-Ing.) in Electrical Engineering and Information Technology from RWTH Aachen University, Germany, in 2012. His research focuses on signal processing, machine learning, and intelligent audio analysis, with special interests in computational paralinguistics and multimodal affect recognition. He has served as a reviewer for IEEE Transactions on Affective Computing (T-AffC), IEEE Signal Processing Letters (SPL), IEEE Transactions on Cybernetics, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Elsevier Computer Speech & Language (CSL), and Elsevier Knowledge-Based Systems (KNOSYS).

Shahin Amiriparian, University of Augsburg, Germany, shahin.amiriparian@informatik.uni-augsburg.de

Shahin Amiriparian received his master's degree (M.Sc.) in Electrical Engineering and Information Technology from Technische Universität München (TUM), Germany. He began working towards his doctoral degree as a researcher in the Machine Intelligence and Signal Processing Group at TUM, focusing his research on novel deep learning methods for audio processing. From 2014 to 2017, he was a doctoral researcher at the Chair of Complex and Intelligent Systems at the University of Passau, Germany, and he is currently pursuing his doctoral degree at the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany. His main research focus is deep learning for audio understanding and image processing.