Publications
[1] Sakib, M. N., Hagen, E., Mazza, N., Rani, N., Nirjhar, E. H., Chu, S. L., Chaspari, T., Behzadan, A. H., & Arthur Jr., W. (2024). Capitalizing on strengths and minimizing weaknesses of veterans in civilian employment interviews: Perceptions of interviewers and veteran interviewees. Military Psychology, 1-13.
[2] Nirjhar, E. H., Arthur, W., & Chaspari, T. (2024, November). Perception of Stress: A Comparative Multimodal Analysis of Time-Continuous Stress Ratings from Self and Observers. In Proceedings of the 26th International Conference on Multimodal Interaction (pp. 397-406).
[3] Nirjhar, E. H., Behzadan, A. H., & Chaspari, T. (2021, October). Knowledge- and data-driven models of multimodal trajectories of public speaking anxiety in real and virtual settings. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 712-716).
[4] Nirjhar, E. H., Behzadan, A., & Chaspari, T. (2020, May). Exploring bio-behavioral signal trajectories of state anxiety during public speaking. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1294-1298). IEEE.
[5] Nirjhar, E. H., Sakib, M. N., Hagen, E., Rani, N., Chu, S. L., Arthur, W., Behzadan, A. H., & Chaspari, T. (2022, October). Investigating the interplay between self-reported and bio-behavioral measures of stress: A pilot study of civilian job interviews with military veterans. In 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 1-8). IEEE.
[6] Nirjhar, E. H., & Chaspari, T. (2024). Modeling Gold Standard Moment-to-Moment Ratings of Perception of Stress from Audio Recordings. IEEE Transactions on Affective Computing. DOI: 10.1109/TAFFC.2024.3435502
[7] Raether, J., Nirjhar, E. H., & Chaspari, T. (2022, November). Evaluating Just-In-Time Vibrotactile Feedback for Communication Anxiety. In Proceedings of the 2022 International Conference on Multimodal Interaction (pp. 117-127).
[8] Agarwal, A., Nirjhar, E. H., Behzadan, A. H., & Chaspari, T. (2021). Evaluating in-the-moment feedback in virtual reality based on physiological and vocal markers for personalized speaking training. In International Conference on Construction Applications of Virtual Reality (CONVR) (pp. 94-103).
[9] Verrap, R., Nirjhar, E. H., Nenkova, A., & Chaspari, T. (2022, December). “Am I Answering My Job Interview Questions Right?”: A NLP Approach to Predict Degree of Explanation in Job Interview Responses. In Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI) (pp. 122-129).
[10] Wendt, C. J., Nirjhar, E. H., & Chaspari, T. (2025, April). Linguistic Analysis of Veteran Job Interviews to Assess Effectiveness in Translating Military Expertise to the Civilian Workforce. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop) (pp. 343-355).
[11] Aggarwal, P., Mahajani, G., Malasani, P. K., Jamadagni, V., Wendt, C. J., Nirjhar, E. H., & Chaspari, T. Investigating the reasoning abilities of large language models for understanding spoken language in interpersonal interactions. Submitted.
[12] Yarlagadda, R. C., Aggarwal, P., Jamadagni, V., Mahajani, G., Malasani, P. K., Nirjhar, E. H., & Chaspari, T. (2024, November). An AI-Powered Interactive Interface to Enhance Accessibility of Interview Training for Military Veterans. In Companion Proceedings of the 26th International Conference on Multimodal Interaction (pp. 82-84).
[13] Chu, S. L., Karcz, M., Hashky, A., Rani, N., Chaspari, T., Arthur Jr., W., & Ragan, E. D. User Judgment of an AI Model is Biased by its Description: Investigating User Perceptions in Decision-Making Systems Through Job Interview Training. Submitted.
[14] Nirjhar, E. H. (2024). Expression and perception of stress in interpersonal communication through the lens of multimodal signals. Ph.D. Thesis. Texas A&M University, College Station, TX.
[15] Sakib, M. N. (2022). The Future Workforce: Exploring the Role of Artificial Intelligence and Technology in Workforce Skilling. Ph.D. Thesis. Texas A&M University, College Station, TX.
[16] Raether, J. (2022). Investigating the effects of physiology-driven vibro-tactile biofeedback for mitigating state anxiety during public speaking. M.S. Thesis. Texas A&M University, College Station, TX.
[17] Agarwal, A. (2021). Exploring real-time bio-behaviorally aware feedback interventions for mitigating public speaking anxiety. M.S. Thesis. Texas A&M University, College Station, TX.
[18] Myscich, A. K. (2022). A linguistic analysis to quantify over-explanation and under-explanation in job interviews. B.S. Thesis. Texas A&M University, College Station, TX.
[19] Fithian, E. (2025). Designing multimodal generative transformers for estimating job interview performance. B.S. Thesis. University of Colorado Boulder, Boulder, CO.
[20] Hagen, E., Sakib, M. N., Rani, N., Nirjhar, E. H., Chu, S. L., Chaspari, T., Behzadan, A. H., & Arthur Jr., W. (2022). Interviewer Perceptions of Veterans in Civilian Employment Interviews and Suggested Interventions. Paper presented at the International Military Testing Association (IMTA) Conference, Raleigh, NC.