Dr. Qin's current research interests are sensing and learning from the environment and humans. Her research provides theory-grounded, data-driven, and technology-assisted solutions for improving the performance, safety, and well-being of humans interacting with complex engineered systems, ranging from intelligent vehicles and transportation infrastructure to built environments.
Recently, she has focused on research problems arising from:
Sensing, perception, planning, and control for autonomous vehicles
Image and video analysis for fast condition monitoring and assessment of transportation infrastructure
Human sensing, understanding, and performance metrology for human-technology systems
AI-Assisted Analysis of Materials in Recycling Streams
AI Innovation Institute at Stony Brook University
01/01/2025-05/31/2026
Building the Data Infrastructure for Monitoring Rural Roads
USDOT through Rural Safe Efficient Advanced Transportation (R-SEAT) Center
06/01/2025-12/31/2026
Understanding Safety-critical Driving Scenes in Rural Roads
USDOT through Rural Safe Efficient Advanced Transportation (R-SEAT) Center
06/01/2025-12/31/2026
GAANN: Fellowship Program in Civil Engineering for Advancing Smart Civil Infrastructure Systems (SCIS) [flyer]
Department of Education
10/1/2021-09/30/2026
Most of my papers can be found on Google Scholar. The following is a selection of my recent work.
Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS)
Liang, K., Li, K., Hu, X., & Qin*, R. (2026). CrashChat: A multimodal large language model for multitask traffic crash video analysis. In Proceedings of the 28th International Conference on Pattern Recognition (ICPR'26). Lyon, France. August 17-22, 2026. [manuscript] [code]
Li, K., Zhang, C., & Qin*, R. (2025). Multi-label scene classification for autonomous vehicles: Acquiring and accumulating knowledge from diverse datasets. [manuscript] [code].
Li, K., Yang, T., Liang, K., Hu, X., & Qin*, R. (2026). HMPDM: A diffusion model for driving video prediction with historical motion priors. In Proceedings of IEEE Intelligent Vehicle Symposium (IV) 2026. Detroit, MI, USA. June 22-25, 2026. [manuscript] [code]
Karim, M.M., Qin, R., & Wang, R. (2024). Fusion-GRU: A deep learning model for future bounding box prediction of traffic agents in risky driving videos. Transportation Research Record, 2678(9), 699-709.
Karim, M.M., Qin*, R., & Yin, Z. (2023). An attention-guided multistream feature fusion network for early localization of risky traffic agents in driving videos. IEEE Transactions on Intelligent Vehicles 9(1), 1792-1803. [manuscript] [code]
Karim, M.M., Li, Y., Qin*, R., & Yin, Z. (2022). A dynamic spatial-temporal attention network for early anticipation of traffic accidents. IEEE Transactions on Intelligent Transportation Systems, 23(7), 9590-9600. [manuscript] [code]
Karim, M.M., Li, Y., & Qin*, R. (2022). Toward explainable artificial intelligence (XAI) for early anticipation of traffic accidents. Transportation Research Record, 2676(6), 743-755. [manuscript] [code]
Li, Y., Karim, M.M., Qin*, R., Sun, Z., Wang, Z., & Yin, Z. (2021). Crash report data analysis for creating scenario-wise, spatio-temporal attention guidance to support computer vision-based perception of fatal crash risks. Accident Analysis & Prevention 151, 105962. DOI: 10.1016/j.aap.2020.105962. [paper]
Transportation Infrastructure
Zhang, C., Liu, C., Li, K., Yin, Z., & Qin*, R. (2025). Inspector gaze-guided multitask learning for explainable structural damage assessment. Computer-Aided Civil and Infrastructure Engineering, 40(30), 5824-5841. [paper] [code]
Zhang, C., Huang, S., & Qin*, R. (2025). When segment anything model meets inventorying of roadway assets. International Journal of Transportation Science and Technology, 20, 1-14. [paper]
Zhang, C., Li, K., Yin, Z., & Qin*, R. (2024). Weakly-supervised structural component segmentation via scribble annotations. Computer-Aided Civil and Infrastructure Engineering, 40(5), 561-578. [paper] [code]
Zhang, C., Yin, Z., & Qin*, R. (2024). Attention-Enhanced Co-Interactive Fusion Network (AECIF-Net) for automated structural condition assessment in visual inspection. Automation in Construction 159, 105292. DOI: 10.1016/j.autcon.2024.105292 [manuscript] [code]
Zhang, C., Karim, M.M., & Qin*, R. (2023). A multitask deep learning model for parsing bridge elements and segmenting defects in bridge inspection images. Transportation Research Record. DOI: 10.1177/0361198123115541. [manuscript] [code]
Zhang, C., Karim, M.M., Yin, Z., & Qin*, R. (2022, June 5-8). A deep neural network for multiclass bridge element parsing in inspection image analysis. In Proceedings of the 8th World Conference on Structural Control and Monitoring (8WCSCM). Orlando, FL, USA. [manuscript]
Karim, M.M., Qin*, R., Yin, Z., & Chen, G. (2022). A semi-supervised self-training method to develop assistive intelligence for segmenting multiclass bridge elements from inspection videos. Structural Health Monitoring 21(3), 835-852. DOI: 10.1177/14759217211010422. [manuscript] [code]
Zhao, T., Yin, Z., Qin, R., & Chen, G. (2019, August 4-7). Image data analytics to support engineers’ decision-making. In Proceedings of the 9th International Conference on Structural Health Monitoring of Intelligent Infrastructure (SHMII-9), St. Louis, MO, USA. [manuscript]
Human-Technology Systems
Li, Y., Zhang, D., Dong, P., Yao, S., & Qin*, R. (2025). A surface electromyography-based deep learning model for guiding semi-autonomous drones in road infrastructure inspection. Computer-Aided Civil and Infrastructure Engineering. [paper] [code]
Sultana, J., Qin, R. & Yin, Z. (2024). Seeing through expert's eyes: Leveraging radiologist eye gaze and speech report with graph neural networks for chest X-ray image classification. In Proceedings of the Asian Conference on Computer Vision (ACCV), pp. 2579-2595. [paper]
Li, Y., Parson, A., Wang, B., Dong, P., Yao, S., & Qin*, R. (2022). A multi-tasking model of speaker-keyword classification for keeping human in the loop of drone-assisted inspection. Engineering Applications of Artificial Intelligence, 117 (Part A), 105597. [manuscript] [code]
Li, Y., Karim, M.M., & Qin*, R. (2022). A virtual reality-based training and assessment system for inspector-drone cooperative bridge inspection. IEEE Transactions on Human Machine Systems, 52(4), 591-601. DOI: 10.1109/THMS.2022.3155373. [manuscript] [code]
Li, Y., Wang, B., Li, W., & Qin*, R. (2022). Simulation study of passing drivers’ responses to the automated truck-mounted attenuator system in road maintenance. Transportation Research Record. DOI:10.1177/03611981221144281. [manuscript] [code]
Li, Y., Karim, M.M., & Qin*, R. (2023). A gaze data-based comparative study to build a trustworthy human-AI collaboration in crash anticipation. ASCE International Conference on Transportation Development (ICTD'23). Austin, TX, USA. June 14-17, 2023. [manuscript] [code]
Al-Amin, M., Qin*, R., Tao, W., Doell, D., Lingard, R., Yin, Z., & Leu, M.C. (2022). Fusing and refining CNN models for assembly action recognition in smart manufacturing. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 236(4), 2046-2059. DOI: 10.1177/09544062209315. [paper]
Al-Amin, M., Qin*, R., Moniruzzaman, M., Yin, Z., Tao, W., & Leu, M.C. (2023). An individualized system of skeletal data-based CNN classifiers for action recognition in manufacturing assembly. Journal of Intelligent Manufacturing, 34, 633-649. DOI: 10.1007/s10845-021-01815-x. [manuscript]
Al-Amin, M., Tao, W., Doell, D., Lingard, R., Yin, Z., Leu, M.C., & Qin*, R. (2019). Action recognition in manufacturing assembly using multimodal sensor fusion. Procedia Manufacturing 39, 158-167. DOI: 10.1016/j.promfg.2020.01.288. [manuscript]