Life-Long Learning for Spoken Language Systems

ASRU 2019

Description

This is the webpage of the Life-Long Learning for Spoken Language Systems Workshop, co-located with ASRU 2019 in Singapore.

The workshop will bring together experts in spoken language systems whose research focuses on the continual improvement of speech processing systems such as conversational AI. Specifically, it will give attendees an overview of existing approaches from various disciplines, including but not limited to active learning, few-shot learning, and data augmentation, and enable them to distill principles that are more generally applicable. It will also discuss the main challenges in bringing speech technology systems to the masses and in continuously improving such systems. The target audience consists of researchers and practitioners in related areas.


Registration, Venue Info

The registration desk will open at 8:30 am on December 14, 2019 at the Revelry Hall, Events Centre Level 2, Village Hotel Sentosa. Please refer to the venue map for the exact location: http://asru2019.org/wp/wp-content/uploads/Venue_modified.png


Important Dates

  • Call For Papers: September 1, 2019
  • Deadline for submission: October 28, 2019
  • Notification of acceptance: November 15, 2019
  • Deadline for camera-ready version: November 30, 2019
  • Workshop Date: December 14, 2019

Invited Speakers

Alexander Waibel is Professor of Computer Science at Carnegie Mellon University (USA) and at the Karlsruhe Institute of Technology (Germany). He is director of the International Center for Advanced Communication Technologies. Waibel is known for his work on AI, Machine Learning, Multimodal Interfaces, and Speech Translation Systems. He proposed early neural-network-based speech and language systems, including the TDNN, the first shift-invariant “convolutional” network. Combining advances in machine learning with work on better multimodal interfaces, Waibel and his team developed pioneering solutions for cross-lingual communication, including some of the first consecutive and simultaneous translation systems, mobile speech translators, multimodal smart rooms, and human-robot collaboration. He has published extensively in the field, received many awards, and founded more than 10 companies in an effort to transfer academic results to practical deployment.

Waibel is a member of the National Academy of Sciences of Germany and a Fellow of the IEEE. He received his BS degree from MIT and his MS and PhD degrees from CMU.


Dr. Satoshi Nakamura is a Professor at the Nara Institute of Science and Technology (NAIST), a Team Leader at RIKEN AIP, and an Honorary Professor at the Karlsruhe Institute of Technology, Germany. He received his B.S. from Kyoto Institute of Technology in 1981 and his Ph.D. from Kyoto University in 1992. He was Associate Professor at the Graduate School of Information Science, NAIST, from 1994 to 2000, Department Head and Director of the ATR Spoken Language Communication Research Laboratories from 2000 to 2004 and from 2005 to 2008, respectively, and Vice President of ATR from 2007 to 2008. He was Director General of the Keihanna Research Laboratories and Executive Director of the Knowledge Creating Communication Research Center, National Institute of Information and Communications Technology, Japan, from 2009 to 2010. He is currently Director of the Data Science Center and a full professor in the Information Science Division, Graduate School of Science and Technology, NAIST, and Team Leader of the Tourism Information Analytics Team at the Center for Advanced Intelligence Project (AIP), RIKEN, Japan. His research interests include modeling and systems for speech processing, spoken dialog systems, natural language processing, and big data analytics. He is one of the world leaders in speech-to-speech translation research and has served on various speech-to-speech translation research projects. He was a committee member of the IEEE SLTC from 2016 to 2018 and is currently an elected board member of the International Speech Communication Association (ISCA). He received the Antonio Zampolli Prize in 2012 and holds the titles of ATR Fellow, IPSJ Fellow, and IEEE Fellow.


Haizhou Li is a Professor at the Department of Electrical and Computer Engineering, National University of Singapore, and a Bremen Excellence Chair Professor at the University of Bremen, Germany. His research interests include speech information processing, natural language processing, and neuromorphic computing. Professor Li has served as the Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech, and Language Processing (2015-2018), the President of the International Speech Communication Association (ISCA, 2015-2017), and the President of the Asia Pacific Signal and Information Processing Association (APSIPA, 2015-2016). He is a Fellow of the IEEE and the ISCA.

Nancy F. Chen received her Ph.D. from MIT and Harvard in 2011. She conducted her Ph.D. research in multilingual speech processing at MIT Lincoln Laboratory. She currently leads research efforts in conversational AI and natural language generation with applications to healthcare, education, journalism, and finance at the Institute for Infocomm Research (I2R), A*STAR. Dr. Chen led a cross-continent team in low-resource spoken language processing that was one of the top performers in the NIST Open Keyword Search Evaluations (2013-2016), funded by the IARPA Babel program. Dr. Chen is an IEEE Senior Member, an elected member of the IEEE Speech and Language Technical Committee (2016-2018, 2019-2021), an Associate Editor of IEEE Signal Processing Letters (2019-2021), and a guest editor for the special issue on “End-to-End Speech and Language Processing” in the IEEE Journal of Selected Topics in Signal Processing (2017). Dr. Chen has received numerous awards, including the Best Paper Award at APSIPA ASC (2016), the Singapore MOE Outstanding Mentor Award (2012), the Microsoft-sponsored IEEE Spoken Language Processing Grant (2011), and the NIH Ruth L. Kirschstein National Research Service Award (2004-2008). In addition to her academic endeavors, Dr. Chen has consulted for various companies, ranging from startups to multinational corporations, in the areas of emotional intelligence (Cogito Health), speech recognition (Vlingo, acquired by Nuance), and defense and aerospace (BAE Systems).


Dr. Anthony Larcher is the head of the Computer Science Institute at Le Mans Université (France). His research interests include speaker recognition, speaker diarization, and acoustic modeling, as well as lifelong learning and explainability for speech processing. He is currently coordinating the ALLIES and Extensor projects and has authored more than 60 international publications in major conferences and journals.


Florian Metze is an Associate Research Professor at Carnegie Mellon University, in the School of Computer Science’s Language Technologies Institute. His work covers many areas of speech recognition and multimedia analysis with a focus on end-to-end deep learning. Currently, he works on multimodal processing of speech in how-to videos and information extraction from medical interviews. He has also worked on low-resource and multilingual speech processing, speech recognition with articulatory features, large-scale multimedia retrieval and summarization, and recognition of personality and similar metadata from speech.

Sponsors