Chief Research Scientist / Team Leader, Speech & Language Team, SB Intuitions Corp.
yui.sudo_at_sbintuitions.co.jp
Curriculum Vitae (updated Sep. 2025)
Yui Sudo is a chief research scientist and team leader at SB Intuitions Corp. His expertise encompasses a wide range of topics, including automatic speech recognition and robot audition. He has published over 50 papers and received prestigious awards, including the IEEE SLT Best Paper Award, the IEEE ASRU Best Reviewer Award, the JSAI Incentive Award, and the JSPE Young Researcher Award. He is a member of IEEE and ISCA.
🎓️Education
Mar. 2021: Ph.D. degree in Engineering, Tokyo Institute of Technology, Japan.
Mar. 2011: M.S. degree in Engineering, Keio University, Japan.
Mar. 2009: B.S. degree in Engineering, Keio University, Japan.
💼Work experience
Feb. 2025 - Present: SoftBank
Apr. 2025 - Present: Chief Research Scientist / Team Leader, SB Intuitions Corp.
(Apr. 2025 - Present: Partially seconded to SoftBank Corp.)
Feb. 2025 - Mar. 2025: Senior Research Scientist, SB Intuitions Corp.
Apr. 2011 - Jan. 2025: Honda
Dec. 2020 - Jan. 2025: Senior Engineer, Honda Research Institute Japan Co., Ltd.
Feb. 2019 - Nov. 2020: Staff Engineer, Honda R&D Co., Ltd.
Apr. 2012 - Jan. 2019: Staff Engineer, Honda Engineering Co., Ltd.
(Apr. 2016 - Sep. 2016: Honda Engineering North America Inc.)
Apr. 2011 - Mar. 2012: Staff Engineer, Honda Motor Co., Ltd.
🏆Awards
2024: IEEE SLT Best Paper Award (first author)
2024: JSAI Incentive Award (co-author)
2012: JSPE Young Researcher Award (first author)
🔬Research Topics
Automatic Speech Recognition
Selected Papers:
Y. Sudo et al., “Contextualized Automatic Speech Recognition with Dynamic Vocabulary”, in Proc. SLT, 2024. (🏆IEEE SLT Best Paper Award🏆)
Y. Sudo et al., “DYNAC: Dynamic Vocabulary based Non-Autoregressive Contextualization for Speech Recognition”, in Proc. INTERSPEECH, 2025.
Y. Sudo et al., “OWSM-Biasing: Contextualizing Open Whisper-Style Speech Models for Automatic Speech Recognition with Dynamic Vocabulary”, in Proc. INTERSPEECH, 2025.
Invited Talk:
“Contextualized End-to-End Automatic Speech Recognition Based on Deep Biasing”, Applied Acoustics and Engineering Acoustics Joint Workshop, ASJ & IEICE, 2024 (in Japanese).
Speech Language Models
Selected Papers:
Y. Fujita, T. Mizumoto, A. Kojima, L. Liu, and Y. Sudo, “AC/DC: LLM-based Audio Comprehension via Dialogue Continuation”, in Proc. INTERSPEECH, 2025.
T. Mizumoto, Y. Fujita, H. Shi, L. Liu, A. Kojima, and Y. Sudo, “Evaluating Japanese Dialect Robustness across Speech and Text-based Large Language Models”, in Proc. ASRU, 2025.
H. Shi, Y. Fujita, T. Mizumoto, L. Liu, A. Kojima, and Y. Sudo, “Serialized Output Prompting for Large Language Model-based Multi-Talker Speech Recognition”, in Proc. ASRU, 2025.
Robot Audition
Selected Papers:
Y. Sudo et al., “Online Adaptation of Fourier Series-Based Acoustic Transfer Function Model and Its Application to Sound Source Localization and Separation”, Advanced Robotics, 2024.
Y. Sudo et al., “Multichannel Environmental Sound Segmentation with Separately Trained Spectral and Spatial Features”, Applied Intelligence, 2021.
Invited Talk:
“Deep-Learning-based Environmental Sound Segmentation: Integration of Sound Source Localization, Separation, and Classification”, Tokyo BISH Bash #05, online, 2021 (in Japanese).