The VerBIO dataset is a multimodal bio-behavioral dataset of individuals' affective responses while performing public speaking tasks in real-life and virtual settings. The data were collected as part of a research study (EiF grant 18.02) jointly conducted by the HUBBS Lab and the CIBER Lab at the University of Colorado Boulder and Texas A&M University. The aim of the study is to understand the relationship between bio-behavioral indices and public speaking anxiety in both real-world and virtual learning environments. The study also explores time-continuous detection of stress using multimodal bio-behavioral signals. The dataset contains audio recordings, physiological signals, self-reported measures, and time-continuous stress ratings from 344 public speaking sessions, in which 55 participants delivered short speeches on topics drawn from newspaper articles in front of a real or virtual audience. You can find more details on the dataset in the following papers:


M. Yadav, M. N. Sakib, E. H. Nirjhar, K. Feng, A. Behzadan, and T. Chaspari, "Exploring individual differences of public speaking anxiety in real-life and virtual presentations," in IEEE Transactions on Affective Computing, vol. 13, no. 3, pp. 1168-1182, 1 July-Sept. 2022, doi: 10.1109/TAFFC.2020.3048299.


E. H. Nirjhar and T. Chaspari, "Modeling Gold Standard Moment-to-Moment Ratings of Perception of Stress from Audio Recordings," in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2024.3435502.


We are releasing an updated version of the VerBIO dataset. This version adds moment-to-moment (i.e., time-continuous) ratings of stress from four annotators, along with their aggregated ratings, which can be used for continuous stress detection.
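
For reference, below is a minimal Python sketch of one common way to fuse multiple annotators' traces into a single gold-standard trace, using an evaluator-weighted estimator (EWE). The file name, column layout, and sampling assumptions are hypothetical; this illustrates the general technique, not the dataset's prescribed aggregation method (the release already includes aggregated ratings).

# Minimal sketch: evaluator-weighted estimator (EWE) over four annotators.
# The file name and column names below are hypothetical; consult the
# dataset documentation for the actual format of the released ratings.
import numpy as np
import pandas as pd

# Hypothetical layout: one CSV per session, one column per annotator,
# rows sampled at a fixed rate over the speech.
ratings = pd.read_csv("session_001_ratings.csv")  # columns: annotator_1 .. annotator_4
X = ratings.to_numpy(dtype=float)                 # shape: (time_steps, 4)

# Z-normalize each annotator's trace to remove individual scale/offset biases.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Weight each annotator by the correlation of their trace with the mean
# of the remaining annotators (higher agreement -> larger weight).
weights = np.empty(X.shape[1])
for k in range(X.shape[1]):
    others = np.delete(X, k, axis=1).mean(axis=1)
    weights[k] = np.corrcoef(X[:, k], others)[0, 1]
weights = np.clip(weights, 0.0, None)  # drop annotators anti-correlated with the rest
weights /= weights.sum()

# The weighted average gives one gold-standard trace for the session.
gold_standard = X @ weights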

The first version of the VerBIO dataset was released in 2021. It includes the audio recordings, physiological signal time series from wearable sensors (Empatica E4 and Actiwave), and self-reported measures (e.g., trait anxiety, state anxiety, demographics).
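
For orientation, here is a minimal sketch of reading one physiological channel, assuming the time series follow Empatica's standard E4 CSV export convention (first row: Unix start timestamp, second row: sampling rate in Hz, remaining rows: samples). The file name is hypothetical, and the actual layout of the released files may differ.

# Minimal sketch of loading one E4 channel into a timestamped series,
# assuming Empatica's standard single-column CSV export convention.
import pandas as pd

def load_e4_channel(path: str) -> pd.Series:
    raw = pd.read_csv(path, header=None)
    start = float(raw.iloc[0, 0])   # session start, Unix time (UTC)
    fs = float(raw.iloc[1, 0])      # sampling rate in Hz (e.g., 4 Hz for EDA)
    values = raw.iloc[2:, 0].astype(float).reset_index(drop=True)
    index = pd.to_datetime(start + values.index / fs, unit="s")
    return pd.Series(values.to_numpy(), index=index, name=path)

eda = load_e4_channel("EDA.csv")    # electrodermal activity trace (file name hypothetical)
print(eda.head())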

The dataset is available for academic research only, upon request. To obtain the dataset, please fill out the request submission form after agreeing to the terms and conditions. Please use an email address affiliated with an academic institution to submit the request. Once your request has been verified, we will send a download link for the dataset to the provided email address. If you have any questions or comments, please contact Ehsanul Haque Nirjhar (nirjhar71 at tamu dot edu).


Terms and Conditions:

This dataset will be provided to the requestor for academic research purposes only, after the submitted form has been verified. The dataset cannot be used for commercial purposes. After receiving the data, the requestor may not redistribute the data, share it with a third party, or post it on a public website. If you publish research work using this dataset, please cite the following paper:

M. Yadav, M. N. Sakib, E. H. Nirjhar, K. Feng, A. Behzadan, and T. Chaspari, "Exploring individual differences of public speaking anxiety in real-life and virtual presentations," in IEEE Transactions on Affective Computing, vol. 13, no. 3, pp. 1168-1182, 1 July-Sept. 2022, doi: 10.1109/TAFFC.2020.3048299.

If you are using the moment-to-moment ratings in your research, please also cite the following paper:

E. H. Nirjhar and T. Chaspari, "Modeling Gold Standard Moment-to-Moment Ratings of Perception of Stress from Audio Recordings," in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2024.3435502.

