We designed a listening test that perceptually assesses whether our proposed two-dimensional affective model of valence and arousal indeed captures affect from soundscape recordings. Ultimately, the test sheds light on the soundness of audio-content descriptors for retrieving soundscapes from large audio archives along affective dimensions. A secondary objective is to understand how the number of low-level audio descriptors used to query the dataset affects the capture of these affective dimensions. Given the redundancy among low-level audio descriptors, and supported by the ranking established in our research, we compare soundscapes retrieved from audio description vectors built with the entire set of adopted descriptors against those built with only the five best-ranked descriptors.
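The comparison above can be sketched as a simple nearest-neighbour retrieval over descriptor vectors, once with the full descriptor set and once restricted to the top-ranked five. This is a minimal illustration, not our actual pipeline: the archive, query, and descriptor indices below are random stand-ins.

```python
import numpy as np

def retrieve(query_vec, archive, k=1):
    """Return indices of the k archive soundscapes whose descriptor
    vectors lie closest (Euclidean distance) to the query vector."""
    dists = np.linalg.norm(archive - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Toy archive: 6 soundscapes x 8 low-level descriptors (random stand-ins).
rng = np.random.default_rng(0)
archive = rng.random((6, 8))
query = rng.random(8)

# Retrieval using the full descriptor set.
full_hit = retrieve(query, archive)

# Retrieval restricted to the five best-ranked descriptors
# (the indices here are purely illustrative, not our actual ranking).
top5 = [0, 2, 3, 5, 7]
reduced_hit = retrieve(query[top5], archive[:, top5])
```

In the listening test, the two retrievals need not agree: how often their results are perceived as affectively similar is precisely what the participants' ratings measure.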
Each of the five soundscapes per question relates to one of the following categories:
All the soundscape recordings can be accessed here:
Exercise 1)
Emo-Soundscapes
Emo-Soundscapes (opposite quadrant)
MScaper
MScaper (using 5 audio descriptors)
MScaper (opposite quadrant)
Exercise 2)
Emo-Soundscapes
Emo-Soundscapes (opposite quadrant)
MScaper
MScaper (using 5 audio descriptors)
MScaper (opposite quadrant)
Exercise 3)
Emo-Soundscapes
Emo-Soundscapes (opposite quadrant)
MScaper
MScaper (using 5 audio descriptors)
MScaper (opposite quadrant)
Exercise 4)
Emo-Soundscapes
Emo-Soundscapes (opposite quadrant)
MScaper
MScaper (using 5 audio descriptors)
MScaper (opposite quadrant)
Exercise 5)
Emo-Soundscapes
Emo-Soundscapes (opposite quadrant)
MScaper
MScaper (using 5 audio descriptors)
MScaper (opposite quadrant)