🎖️ As Lead PI of the ORAU Innovation Partnerships Grant, we are organizing the upcoming event:
🎯 "LLMs Nexus: Bridging Technical Innovation and Ethical Horizons"
🗓 May 14, 2025 [8:30 am–1:00 pm CT]
This event will foster meaningful dialogue on the advancements and societal implications of Large Language Models (LLMs), bringing together academic researchers, industry leaders, and policy experts to build a foundation for responsible AI development.
Distinguished Speaker: Prof. M. Hadi Amini [8:30–9:10 am CT]
Title: (Distributed) AI for Interdependent Cyberphysical Systems
Abstract: The increasing integration of advanced computing and communication technologies requires secure and efficient computational methods for complex decision-making problems. In centralized settings, control centers must solve large-scale learning and optimization problems on behalf of end-users, which increases computational complexity and requires extensive information sharing.
This talk presents a comprehensive overview of the role of (distributed) AI for interdependent cyberphysical systems, where interdependencies among networks (ranging from power and transportation infrastructures to public safety) pose unique challenges for real-time decision-making and learning. The first part of this talk is devoted to motivating the development of distributed/decentralized learning methods for interdependent decision-making. These algorithms offer major advantages over centralized solutions, e.g., reducing the computational complexity of large-scale machine learning problems and enabling scalability. The second part of this talk is devoted to two major research contributions of our group: I. Coupled Learning and Optimization for Interdependent Networks, where we introduce decentralized optimization and reinforcement learning algorithms that account for network interdependencies and physical constraints; II. AI for Public Safety, focusing on how we can leverage AI to identify anomalous activities. The final part of the talk outlines emerging directions, particularly Securing and Decentralizing Large Language Models (LLMs), as two promising research areas.
Brief bio: Dr. M. Hadi Amini is an Assistant Professor at the Knight Foundation School of Computing and Information Sciences at Florida International University. He is the founding director of the Security, Optimization, and Learning for InterDependent networks laboratory (www.solidlab.network) and Associate Director of the USDOT National Center for Transportation Cybersecurity and Resiliency (TraCR). He received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University in 2019. He conducts research in federated learning, distributed optimization and learning algorithms, and their applications to real-world problems such as cybersecurity, interdependent cyberphysical systems, and public safety. He received the 2025 IEEE Big Data Security Junior Research Award for excellent contributions to Big Data Security in Cyber Physical Systems. He is a Senior Member of IEEE and the recipient of the Best Paper Award at the 2019 IEEE Conference on Computational Science & Computational Intelligence, the 2021 Best Journal Paper Award from the Springer Nature Operations Research Forum journal, the 2025 FIU College of Engineering and Computing Faculty Excellence in Mentorship Award, the 2024 FIU Top Scholar Award (Research and Creative Activities, Junior Faculty with Significant Grants, Sciences), and the 2023 FIU Faculty Senate Excellence in Teaching Award. He serves as Associate Editor of IEEE Transactions on Information Forensics and Security and IEEE Transactions on Machine Learning in Communications and Networking.
Speaker: Dr. Ahmed Imteaj [9:10–9:30 am CT]
Title: Multimodal Large-Language Models: Current and Future Research Trends
Abstract: Multimodal Large Language Models (MLLMs) represent a significant advancement in AI by enabling systems to process and reason over both visual and textual data. In this talk, I will provide an overview of MLLMs, their core architecture and processing pipeline, and discuss why improving their security is becoming increasingly critical. I will highlight their growing impact in real-world applications, particularly in transportation, and conclude with key research directions that aim to enhance their reliability, efficiency, and adaptability in the future.
Brief bio: Dr. Ahmed Imteaj is a tenure-track Assistant Professor of Computer Science at Southern Illinois University Carbondale, where he leads the Security, Privacy, and Intelligence for Edge Devices Lab (SPEED Lab). Over the past few years, Dr. Imteaj has made remarkable strides in both research and teaching. He is the recipient of multiple prestigious awards, including the NSF CRII Grant, a U.S. Department of Homeland Security Grant, the ORAU Research Innovation Partnership Grant, and the 2024 SIUC Outstanding Teacher of the Year Award, along with nominations for the Rising Scholar Award and the Early Career Faculty Excellence Award at SIU. Dr. Imteaj earned his Ph.D. in Computer Science from Florida International University in 2022 and was recognized with the prestigious FIU Real Triumph Graduate Award. During his time at FIU, he also earned his M.Sc. degree, recognized with the Outstanding Master’s Degree Graduate Award. He holds a B.Sc. degree in Computer Science and Engineering.
Dr. Imteaj’s research spans Robust and Trustworthy AI, Federated Learning, Large Vision-Language Models (VLMs), and Cybersecurity. His contributions have been recognized with numerous other accolades, including the 2022 Outstanding Graduate Scholar of the Year at FIU, the 2021 Best Graduate Student in Research Award at FIU, and a Best Paper Award at the 2019 IEEE CSCI conference. Dr. Imteaj has authored over 70 peer-reviewed publications in journals and conferences and has published a book as lead author.
Speaker: Dr. Minhaj Nur Alam [9:30–10:00 am CT]
Title: Language and Vision-Language Models for Ophthalmology
Bio: Dr. Minhaj Nur Alam is a biomedical imaging scientist with expertise in ophthalmic imaging biomarkers and artificial intelligence (AI). Specifically, his research focuses on quantitative imaging biomarker development and the application of AI in medicine and healthcare. He is currently an Assistant Professor of Medical Imaging and AI in the Department of Electrical and Computer Engineering (ECE) at the University of North Carolina (UNC) at Charlotte, where he directs the Quantitative Imaging and AI Lab. His group is currently working on advanced AI algorithms, such as federated learning (FL) and self-supervised learning, for applications in ophthalmology. These algorithms, along with ophthalmic LLMs and VLMs, have great potential to improve clinical and vision care. Dr. Alam's research is funded by multiple grants from the National Eye Institute (NEI) at the National Institutes of Health (NIH). Dr. Alam holds a PhD in Bioengineering from the University of Illinois at Chicago (UIC) and received his postdoctoral training at Stanford University School of Medicine, where he was affiliated with the Departments of Biomedical Data Science and Ophthalmology and the Center for AI in Medical Imaging.
Speaker: Dr. Abdur Rahman Bin Shahid [10:10–10:30 am CT]
Bio: Dr. Abdur Rahman Bin Shahid is an Assistant Professor of Computer Science at Southern Illinois University, Carbondale, IL, USA. His research focuses on cybersecurity, deep learning, adversarial ML, multimodal AI, usable security and privacy, generative AI, Cyber-Physical Systems, and Internet of Things.
Abstract: Wearable AI systems, particularly those used for Human Activity Recognition (HAR), are becoming increasingly central to applications in healthcare, security, and personal fitness, driven by the proliferation of smart devices and sensor-rich wearables. However, this growing reliance on machine learning introduces new vulnerabilities, most notably poisoning attacks that can compromise model integrity and system reliability. In this talk, we explore the potential of Large Language Models (LLMs) as zero-shot reasoning agents for detecting and sanitizing such poisoning attacks in sensor-based HAR systems. Building on our ongoing research in integrating LLMs with cyber-physical systems, we examine how LLMs can interpret raw sensor data to defend against adversarial manipulation, without requiring retraining or large labeled datasets. We present a case study evaluating the effectiveness of prominent LLMs (ChatGPT-3.5, ChatGPT-4, and Gemini) in identifying and correcting poisoned labels in HAR datasets. Our results provide early evidence supporting the feasibility of using LLMs for real-time, context-aware defense mechanisms that enhance data integrity and trustworthiness in wearable AI systems.
Speaker: John Gounley [10:30–11:00 am CT]
Computational Scientist, Oak Ridge National Laboratory
Title: Democratizing AI for Cancer with Privacy-Preserving Synthetic Data Generation for Cancer Case Identification
Bio: John Gounley is a computational scientist in the Computational Sciences and Engineering Division at Oak Ridge National Laboratory, where he leads the Scalable Biomedical Modeling group.
Speaker: Dr. Deepti Gupta [11:00–11:30 am CT]
Title: A Semantic Framework for Vendor Privacy Policy Compliance Using LLMs
Abstract: Ensuring privacy policy compliance with evolving data protection regulations is increasingly complex, especially when organizations rely on third-party vendors. Regulatory documents are often lengthy and difficult to interpret, while vendor privacy policies frequently lack the detail required for full legal compliance. In this talk, I will present a novel framework that combines Large Language Models (LLMs) and a domain-specific knowledge graph to automatically verify the alignment between organizational privacy policies and regulatory requirements. Using Retrieval-Augmented Generation, the system identifies relevant sections of a privacy policy that correspond to specific regulations, achieving a correctness score of 0.9. The extracted information is structured within a semantic knowledge graph that supports efficient querying and contextual interpretation.
Bio: Dr. Deepti Gupta is an Assistant Professor at Texas A&M University-Central Texas. After receiving her Ph.D., she joined Goldman Sachs as a Cloud Security Architect. She also served as a faculty member in the Department of Computer Science at Huston-Tillotson University, Austin. She received her Ph.D. and M.S. degrees in Computer Science from the University of Texas at San Antonio (UTSA) and worked as an Adjunct Faculty member in the Department of Computer Science at St. Edward's University, Austin. Dr. Gupta’s research interests lie in security and privacy for the Internet of Things (IoT), leveraging cloud and edge computing. Her research also includes the application of AI and machine learning to secure IoT and CPS infrastructures in application domains such as smart healthcare, wearable IoT, and smart homes. She is also interested in designing federated learning algorithms that handle non-IID data using game theory, and she has developed novel anomaly detection models and fine-grained access control models to build secure infrastructure for IoT. She has several conference and journal publications and continually serves as an expert reviewer for various journals and on technical program committees for conferences and workshops. Dr. Gupta has received National Science Foundation (NSF) and Department of Defense (DoD) awards. She is an active team member of IEEE ComSoc Young Professionals, AnitaB.org, and WiCyS, and co-chair of the N2Women fellowship.
For more information or to explore potential research collaborations, feel free to reach out at: 📧 imteaj[at]ieee.org
🤝 Always open to meaningful conversations, innovative ideas, and collaborative opportunities!