SHIELD LAB
Secure and Trustworthy Intelligent Systems Lab
🔐 Intelligence Meets Integrity: Securing the Future of AI-Driven Systems
How can we design, deploy, and audit intelligent systems, especially those powered by large language and multimodal AI models, in ways that are secure, privacy-preserving, and trustworthy across their entire lifecycle of use, interaction, and adaptation?
Welcome to the cybersecurity research group at the School of Computing at Southern Illinois University Carbondale, led by Dr. Abdur Rahman Bin Shahid. Our lab advances the security, privacy, and trustworthiness of intelligent systems in a rapidly evolving digital landscape. As AI becomes deeply embedded in physical, social, and cognitive environments, from wearable sensors and mobile apps to autonomous systems and immersive virtual platforms, new risks emerge in how users interact with these technologies and how systems learn from and act upon human data. We study how these AI-driven technologies introduce vulnerabilities, behavioral privacy risks, and governance challenges. Our research spans the full lifecycle of intelligent systems, from initial design and deployment to real-world interaction and long-term adaptation.
Our Current Research Focus Areas:
🔍 Behavioral privacy risks in multimodal human-AI interaction (e.g., inference from handwriting, voice, gesture, and biosignals)
🛡️ Trustworthy AI design for resource-constrained, real-time systems operating under uncertainty (e.g., wearables, robots, autonomous vehicles)
♻️ Lifecycle-based security and auditing frameworks, ensuring system integrity from data collection to model deployment and adaptation
⚖️ Robustness, fairness, and explainability as actionable properties, not just ideals, especially for AI deployed in high-stakes contexts such as healthcare, autonomous systems, and critical infrastructure
🔧 Our research integrates core advances in AI, security, and systems to examine how large-scale models and emerging interfaces reshape trust and privacy in human-AI interaction. We focus on technologies such as large language models (LLMs), vision-language models (VLMs), and generative AI forensics, investigating their behavioral risks, misuse potential, and forensic traceability. Our work spans mobile and wearable platforms, as well as immersive technologies like augmented reality (AR) and virtual reality (VR), where multimodal inputs introduce new vectors for leakage and manipulation. We also explore federated and distributed learning frameworks that support collaborative intelligence while preserving data sovereignty and reducing centralized vulnerabilities.
🔗 [Explore Our Work] · [Meet the Team] · [Join Us]
Secure and Trustworthy Intelligent Systems (SHIELD) Lab
EGRA-409D
1230 Lincoln Dr, Carbondale, IL 62901