Can Large Audio Language Models Understand Audio Well?
SSEU-Bench: Speech, Scene and Events Understanding Benchmark for LALMs
"A comprehensive audio understanding benchmark for LALMs"
Han Yin and Jung-Woo Choi
Smart Sound Systems Lab, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
Abstract
Recently, Large Audio Language Models (LALMs) have progressed rapidly, demonstrating strong efficacy in universal audio understanding through cross-modal integration. To evaluate the audio understanding performance of LALMs, researchers have proposed various benchmarks. However, key aspects of real-world interaction remain underexplored in existing benchmarks: audio signals typically contain both speech and non-speech components, and the energy levels of these components can vary significantly across scenarios. Moreover, most benchmarks do not consider the joint understanding of speech, scene, and events within the same audio clip. In this work, we introduce SSEU-Bench, the first versatile audio understanding benchmark that explicitly accounts for energy differences between speech and non-speech audio, with both independent and joint understanding settings for speech, scene, and events. Furthermore, we show that some LALMs degrade on certain tasks in the joint understanding setting. To address this issue, we introduce Chain-of-Thought prompting, which effectively improves LALMs' joint audio understanding performance by decomposing complex tasks into simpler reasoning steps.
Dataset Overview
As shown in Fig. 1, each audio sample in SSEU-Bench is created by mixing a clean speech clip with a non-speech background clip at different SNRs (a minimal mixing sketch is given after the dataset description below). Clean speech clips are sourced from the VCTK corpus [1], while non-speech background clips are sampled from DESED [2] and MAESTRO-Real [3]. All recordings are converted to a single channel at a sampling rate of 44.1 kHz. VCTK provides accurate text transcriptions for each speech clip, and DESED and MAESTRO-Real provide event labels with precise timestamps.
After removing overlapping event categories between the two datasets, we retain 18 distinct sound event classes: Blender, Cat, Electric Shaver/Toothbrush, Dog, Running Water, Vacuum Cleaner, Alarm Bell Ringing, Frying, Cutlery and Dishes, Children Voices, Footsteps, Car, Large Vehicle, Brakes Squeaking, Metro Approaching, Metro Leaving, Bird Singing, and Wind Blowing.
DESED primarily covers the Home scene, whereas MAESTRO-Real covers five different scenes: Cafe Restaurant, City Center, Grocery Store, Metro Station, and Residential Area.
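To make the dataset construction concrete, below is a minimal sketch of how a speech clip and a background clip can be mixed at a target SNR. The file names, the 0 dB example, and the looping of short backgrounds are illustrative assumptions rather than the exact pipeline used to build SSEU-Bench.

```python
import numpy as np
import librosa
import soundfile as sf

def mix_at_snr(speech, background, snr_db):
    """Scale the background so the speech-to-background energy ratio
    matches the target SNR (in dB), then sum the two signals."""
    # Loop or trim the background to match the speech length.
    if len(background) < len(speech):
        reps = int(np.ceil(len(speech) / len(background)))
        background = np.tile(background, reps)
    background = background[:len(speech)]

    speech_power = np.mean(speech ** 2) + 1e-12
    bg_power = np.mean(background ** 2) + 1e-12
    # Gain applied to the background to reach the target SNR.
    gain = np.sqrt(speech_power / (bg_power * 10.0 ** (snr_db / 10.0)))
    return speech + gain * background

sr = 44100  # all clips are converted to mono at 44.1 kHz
speech, _ = librosa.load("vctk_speech.wav", sr=sr, mono=True)           # placeholder path
background, _ = librosa.load("desed_background.wav", sr=sr, mono=True)  # placeholder path

mixture = mix_at_snr(speech, background, snr_db=0.0)  # e.g., mix at 0 dB SNR
sf.write("mixture.wav", mixture, sr)
```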
Audio Understanding Evaluation
To evaluate the audio understanding capability of LALMs, we present three tasks: Automatic Speech Recognition (ASR), Acoustic Scene Classification (ASC), and Audio Tagging (AT). A pre-trained text embedding model (OpenAI's text-embedding-3-large [4]) is applied to obtain confidence scores for the different classes.
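The exact scoring procedure is detailed in the paper; as a hedged illustration, the snippet below shows one way an LALM's free-form answer could be compared against a set of class labels using text-embedding-3-large via the official OpenAI Python client. The example answer, the label subset, and the cosine-similarity scoring are assumptions for illustration only.

```python
import numpy as np
from openai import OpenAI  # official OpenAI Python package

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    """Return L2-normalized text-embedding-3-large embeddings."""
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Illustrative scene labels from SSEU-Bench.
scene_labels = ["Home", "Cafe Restaurant", "City Center",
                "Grocery Store", "Metro Station", "Residential Area"]

lalm_answer = "It sounds like a busy cafe with people chatting."  # placeholder LALM output

label_emb = embed(scene_labels)
answer_emb = embed([lalm_answer])[0]

# Cosine similarity to each label serves as a per-class confidence score.
confidences = label_emb @ answer_emb
print(scene_labels[int(np.argmax(confidences))], confidences)
```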
For the three tasks (ASR, ASC, and AT), we use Word Error Rate (WER), macro-average Accuracy (mACC), and macro-average Average Precision (mAP) as the metrics, respectively, following previous studies. All metrics are reported as percentages.
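For reference, here is a minimal sketch of how the three metrics can be computed with common Python libraries (jiwer for WER, scikit-learn for accuracy and average precision). The labels and scores are placeholders, and the official evaluation code may differ in detail.

```python
import numpy as np
from jiwer import wer                                   # Word Error Rate for ASR
from sklearn.metrics import accuracy_score, average_precision_score

# ASR: WER over reference/hypothesis transcript pairs (placeholders).
references = ["please call stella", "ask her to bring these things"]
hypotheses = ["please call stella", "ask her to bring this thing"]
print("WER (%):", 100 * wer(references, hypotheses))

# ASC: per-class accuracy averaged over classes (macro-average accuracy).
y_true = np.array([0, 0, 1, 2, 2, 1])   # ground-truth scene indices
y_pred = np.array([0, 1, 1, 2, 0, 1])   # predicted scene indices
per_class_acc = [accuracy_score(y_true[y_true == c], y_pred[y_true == c])
                 for c in np.unique(y_true)]
print("mACC (%):", 100 * np.mean(per_class_acc))

# AT: macro-averaged Average Precision over multi-label event scores.
event_true = np.array([[1, 0, 1], [0, 1, 0]])                 # multi-hot labels
event_score = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.3]])    # per-class confidences
print("mAP (%):", 100 * average_precision_score(event_true, event_score, average="macro"))
```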
We have tested four LALMs, including LTU-AS [5], Qwen2-Audio-Instruct [6], Kimi-Audio [7], and Step-Audio 2 Mini [8], on our SSEU-Bench. For more detailed results, please refer to our paper on arXiv.
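As noted in the abstract, joint understanding can be improved by Chain-of-Thought prompting that decomposes the task into simpler steps. The exact prompts used in SSEU-Bench are given in the paper; the snippet below is only an illustrative sketch of such a decomposition compared with a direct instruction.

```python
# Illustrative prompts only; the actual SSEU-Bench prompt wording may differ.
DIRECT_PROMPT = (
    "Transcribe the speech, classify the acoustic scene, and list the sound "
    "events in the audio clip."
)

COT_PROMPT = (
    "Listen to the audio clip and reason step by step.\n"
    "Step 1: Transcribe the speech content.\n"
    "Step 2: Identify the acoustic scene (e.g., Home, Cafe Restaurant, Metro Station).\n"
    "Step 3: List the sound events present (e.g., Dog, Footsteps, Alarm Bell Ringing).\n"
    "Finally, report the transcription, the scene, and the sound events."
)
```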
References
[1] Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald, “CSTR VCTK Corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92) [sound],” University of Edinburgh, The Centre for Speech Technology Research (CSTR), 2019, https://doi.org/10.7488/ds/2645.
[2] DESED: https://project.inria.fr/desed/
[3] Irene Martín-Morató and Annamaria Mesaros, “Strong labeling of sound events using crowdsourced weak labels and annotator competence estimation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 31, pp. 902–914, 2023.
[4] https://platform.openai.com/docs/models/text-embedding-3-large
[5] Yuan Gong, Alexander H Liu, Hongyin Luo, Leonid Karlinsky, and James Glass, “Joint audio and speech understanding,” in IEEE Automatic Speech Recognition and Understanding Workshop. IEEE, 2023, pp. 1–8.
[6] Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al., “Qwen2-audio technical report,” arXiv preprint arXiv:2407.10759, 2024.
[7] Ding Ding, Zeqian Ju, Yichong Leng, Songxiang Liu, Tong Liu, Zeyu Shang, Kai Shen, Wei Song, Xu Tan, Heyi Tang, et al., “Kimi-audio technical report,” arXiv preprint arXiv:2504.18425, 2025.
[8] Boyong Wu, Chao Yan, Chen Hu, Cheng Yi, Chengli Feng, Fei Tian, Feiyu Shen, Gang Yu, Haoyang Zhang, Jingbei Li, et al., “Step-audio 2 technical report,” arXiv preprint arXiv:2507.16632, 2025.