Every week, we host leading scientists who share insights on cutting-edge research topics in Conversational AI. This event is open to everyone!
More info here.
The IEEE MLSP is the main and oldest technical event sponsored by the MLSP Technical Committee of the IEEE Signal Processing Society. Its primary objective is to bring together leading experts, researchers, and industry professionals to explore the latest advancements, challenges, and opportunities in machine learning for signal processing (MLSP).
Event date: August 31-September 3, 2025, Istanbul, Turkey
More info here.
The ANNPR 2024 workshop is a forum for international researchers and practitioners working in all areas of neural network- and machine learning-based pattern recognition to present and discuss the latest research, results, and ideas in these areas.
The workshop will be held at Concordia University on October 10-12, 2024.
More info here.
The first workshop on Explainable Machine Learning for Speech and Audio aims at fostering research in the field of interpretability for audio and speech processing with neural networks. The workshop focuses on fundamental and applied challenges to interpretability in the audio domain, tackling formal descriptions of interpretability, model evaluation strategies, and novel interpretability techniques for explaining neural network predictions.
Video Recordings available here.
SpeechBrain has established itself as a leading deep learning toolkit for speech processing in recent years, with impressive usage statistics to back it up. With an average of 100 daily clones and 1,000 daily downloads, along with over 6,000 stars and 1,100 forks on its GitHub repository, SpeechBrain is a popular choice among speech processing experts.
In this summit, we are excited to share the latest developments and updates on the SpeechBrain project, engage in an open and collaborative discussion with the community, and introduce it to a broader audience of speech professionals. We would like participants to stay up to date with the latest advancements in SpeechBrain and speech processing technology. We also wish to gather feedback from the community more interactively, to better plan future developments. The event will take place on August 28th, four days after the main conference.
Video Recordings available here.
I'm happy to co-organize this workshop at ICML 2020 on "self-supervision in audio and speech".
With our initiative, we wish to foster more progress on this important topic, and we hope to encourage a discussion among experts and practitioners from both academia and industry. Furthermore, we plan to extend the debate to multiple disciplines, encouraging discussions on how insights from other fields (e.g., computer vision and robotics) can be applied to speech, and how findings on speech can be transferred to other sequence processing tasks. The workshop is conceived to further promote communication and exchange of ideas between the machine learning and speech communities (people who attend conferences such as Interspeech and ICASSP).
8th December 2018 – Montreal
I was more than happy to co-organize this successful event. With this workshop, we tried to further foster progress on the interpretability and robustness of modern deep learning techniques, with a particular focus on audio, speech, and NLP technologies.
June - August 2019 – Montreal
In the context of JSALT 2019, I co-organized a team working on "Cooperative Ad-hoc Microphone Arrays for ASR". The team was composed of 18 speech experts (professors, researchers, post-docs, PhD students, as well as graduate and undergraduate students) and worked for six weeks on distant speech recognition in challenging acoustic environments. A noise-robust version of a self-supervised multi-task speech encoder was developed during this workshop.