The Cognitive & Network Security Laboratory (CNS Lab) advances security at the intersection of cognitive intelligence and cloud-native networked systems. We design explainable, actionable, and scalable defenses that transform runtime signals into autonomous and trustworthy decisions, ensuring security without compromising performance.
Human-centric / immersive security in social VR (visual/UI deception, behavior safety)
"I would be victimized without realizing it": Design and Evaluation of Deceptive UI Attacks in Social VR (under review)
Social VR platforms are rapidly growing but face emerging risks from deceptive virtual interfaces. This paper introduces four novel UI-based attacks that covertly manipulate users into performing unintended actions. Implemented on VRChat and validated through a 30-participant user study, the work exposes how adversarial content can exploit perception and interaction cues in immersive environments—highlighting new challenges in human-centric cognitive security.
HarassGuard: Detecting Harassment Behaviors in Social Virtual Reality with Vision-Language Models (under review)
HarassGuard introduces a vision–language model (VLM)–based framework for detecting physical harassment in social VR environments using only visual input. Unlike prior biometric or reactive solutions, it ensures privacy by avoiding voice or motion data. Using a newly constructed IRB-approved harassment dataset and VLM fine-tuning with prompt engineering, the system effectively recognizes ambiguous or avatar-specific behaviors. Experiments show that HarassGuard outperforms traditional vision models, enabling accurate and privacy-preserving safety detection in immersive virtual worlds.
LLM/RAG/agentic pipelines for incident analysis and response; reliability/hallucination auditing
Measuring Hallucination in Large Language Models for Cyber Threat Intelligence: An Exploratory Study (under review)
Large Language Models (LLMs) are increasingly applied to cybersecurity analysis but can generate hallucinations: false or misleading statements not grounded in evidence. This paper introduces an automated framework that measures and visualizes hallucination in LLM outputs using domain-specific NER and factual similarity metrics. Using 4,940 real-world cybersecurity articles, the study reveals how hallucinations manifest in practice and provides a reproducible methodology for improving the reliability and trustworthiness of AI-driven security systems.
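The core idea of entity-grounded hallucination measurement can be illustrated with a minimal sketch. This is not the paper's framework: the regex patterns below stand in for a trained cybersecurity NER model, and the score is a simple ungrounded-entity ratio rather than the paper's factual similarity metrics.

```python
import re

# Toy stand-in for domain-specific NER: pull CVE IDs and IPv4 addresses.
# (Illustrative only; the actual framework uses a trained NER model.)
ENTITY_PATTERNS = [
    r"CVE-\d{4}-\d{4,7}",
    r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
]

def extract_entities(text: str) -> set[str]:
    found = set()
    for pat in ENTITY_PATTERNS:
        found.update(re.findall(pat, text))
    return found

def hallucination_score(llm_output: str, source: str) -> float:
    """Fraction of entities in the LLM output that are NOT grounded
    in the source article (1.0 = fully hallucinated, 0.0 = fully grounded)."""
    out_ents = extract_entities(llm_output)
    if not out_ents:
        return 0.0
    ungrounded = out_ents - extract_entities(source)
    return len(ungrounded) / len(out_ents)

source = "The advisory covers CVE-2023-1234 exploited from 10.0.0.5."
output = "The attack used CVE-2023-1234 and CVE-2021-9999 via 10.0.0.5."
print(hallucination_score(output, source))  # one of three output entities is ungrounded
```

Scoring at the entity level, rather than over whole sentences, lets the framework point at which specific claims lack support in the source.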
Host/runtime cryptojacker defense and lightweight response
CryptoGuard: Lightweight Hybrid Detection and Response to Host-based Cryptojackers in Linux Cloud Environments, ASIACCS 2025 (paper)
Cryptojacking attacks in Linux-based cloud environments are notoriously stealthy and costly. CryptoGuard introduces a lightweight hybrid defense that integrates detection and remediation through scalable eBPF-based monitoring. Using sketch and sliding-window syscall profiling, it captures behavioral patterns with minimal overhead and applies a two-phase deep learning classifier for precise identification. Evaluated on 123 real-world samples, CryptoGuard achieved F1-scores of 92–96% while maintaining only 0.06% CPU overhead, demonstrating practical and scalable protection for cloud hosts.
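The sketch-plus-sliding-window profiling idea can be shown in a few lines. This is a minimal illustrative count-min sketch over recent syscall events, not CryptoGuard's eBPF implementation; the width, depth, and window sizes are arbitrary assumptions.

```python
import hashlib
from collections import deque

class SyscallSketch:
    """Count-min sketch over a sliding window of syscall events.
    Minimal sketch for illustration; the real profiler runs in eBPF
    in kernel space with calibrated parameters."""

    def __init__(self, width=64, depth=4, window=1000):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        self.window = deque(maxlen=window)  # oldest events fall out

    def _buckets(self, syscall: str):
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{syscall}".encode(), digest_size=4)
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def observe(self, syscall: str):
        if len(self.window) == self.window.maxlen:  # evict oldest event
            for row, col in self._buckets(self.window[0]):
                self.table[row][col] -= 1
        self.window.append(syscall)
        for row, col in self._buckets(syscall):
            self.table[row][col] += 1

    def estimate(self, syscall: str) -> int:
        # min over rows bounds the overcount from hash collisions
        return min(self.table[row][col] for row, col in self._buckets(syscall))

sk = SyscallSketch(window=5)
for s in ["read", "write", "read", "mmap", "read"]:
    sk.observe(s)
print(sk.estimate("read"))  # at least 3 (count-min may overestimate)
```

The fixed-size table keeps memory constant regardless of how many distinct syscalls appear, which is what makes per-host monitoring cheap enough to leave on continuously.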
Serverless repository/supply-chain measurement at scale
The Hidden Dangers of Public Serverless Repositories: An Empirical Security Assessment, ESORICS 2025 (paper)
Public serverless repositories accelerate application development but introduce new supply-chain attack surfaces. This work presents the first large-scale security assessment of serverless components and IaC templates, analyzing 2,758 components and 125,936 IaC scripts across multiple repositories. The study uncovers systemic risks such as outdated packages, misused secrets, exploitable configurations, and opportunities for malicious code injection. This paper also provides practical mitigation guidelines and establishes a foundational understanding of the security posture of the serverless ecosystem.
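One class of risk the assessment surfaces, hardcoded secrets in IaC templates, can be sketched as a simple pattern scan. The regex and the sample template below are illustrative assumptions, not the paper's actual detection pipeline or dataset.

```python
import re

# Minimal IaC secret scan in the spirit of the paper's assessment.
# The pattern list is illustrative, not exhaustive.
SECRET_RE = re.compile(
    r"(?i)(aws_secret_access_key|password|api[_-]?key)\s*[:=]\s*['\"]?\S+"
)

# Hypothetical serverless IaC fragment with a committed credential.
template = """
provider: aws
aws_secret_access_key: "AKIAEXAMPLEKEY123"
memory: 128
"""

findings = [line.strip() for line in template.splitlines()
            if SECRET_RE.search(line)]
print(findings)  # the credential line is flagged; config lines are not
```

Scaling this idea to 125,936 scripts is what turns an anecdote into a measurement of how widespread such misuses are across the ecosystem.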
Traffic obfuscation and censorship resistance
MUFFLER: Secure Tor Traffic Obfuscation with Dynamic Connection Shuffling and Splitting, INFOCOM 2025 (paper)
MUFFLER introduces a dynamic connection-level traffic obfuscation system to strengthen Tor’s resistance to flow correlation attacks. Unlike prior padding- or delay-based defenses, it remaps real user connections into multiple virtual paths in real time, generating distinct egress traffic patterns with minimal cost. Experiments demonstrate that MUFFLER reduces powerful correlation attacks to a 1% true positive rate at a false positive rate of 10⁻², while incurring only 2.17% bandwidth overhead and far lower latency than existing methods.
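The shuffling-and-splitting principle can be sketched abstractly: each real connection is mapped to a random subset of virtual egress links, re-drawn every epoch, and its cells are sprayed across that subset. The function names and parameters below are hypothetical; this is not MUFFLER's implementation.

```python
import random

def remap_epoch(real_conns, n_virtual, split_factor, rng):
    """Assign each real connection a random subset of virtual egress
    links for this epoch; re-shuffling per epoch breaks the ingress/
    egress pattern correspondence that correlation attacks exploit."""
    return {c: rng.sample(range(n_virtual), split_factor) for c in real_conns}

def route_cell(conn, mapping, rng):
    """Split traffic: pick one of the connection's virtual links per cell."""
    return rng.choice(mapping[conn])

rng = random.Random(42)  # seeded only to make the sketch reproducible
mapping = remap_epoch(["circ-A", "circ-B"], n_virtual=4, split_factor=2, rng=rng)
links = [route_cell("circ-A", mapping, rng) for _ in range(6)]
```

Because the remapping is over connections rather than individual packets, the defense avoids the per-packet padding and delay costs of earlier schemes.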
High-performance fabric security (RDMA/SmartNIC/MEC)
Noisy Neighbor: Exploiting RDMA for Resource Exhaustion Attacks in Containerized Clouds, SecAssure@ESORICS 2025 (paper)
This work uncovers new resource exhaustion attacks targeting RDMA-enabled container clouds. Through experiments on NVIDIA BlueField-3, it identifies two critical threats—state saturation and pipeline saturation—that can cause up to 93.9% bandwidth loss and over 1,000× latency increase in co-located containers. To mitigate these issues, Noisy Neighbor introduces a threshold-driven telemetry framework that classifies RDMA resources into adaptive tiers and throttles abusive workloads in real time, restoring predictable and secure multi-tenant performance.
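The mitigation's tier-and-throttle logic can be reduced to a small sketch. The utilization thresholds and cap values here are illustrative placeholders, not the calibrated values from the BlueField-3 experiments.

```python
def classify_tier(util: float) -> str:
    """Map an RDMA resource's utilization into an adaptive tier.
    Thresholds are illustrative, not the paper's calibrated values."""
    if util < 0.5:
        return "normal"
    if util < 0.8:
        return "watch"
    return "abusive"

def throttle(workloads: dict[str, float]) -> dict[str, float]:
    """Return a per-workload rate cap (as a share of line rate):
    abusive tenants are throttled so co-located containers keep
    predictable bandwidth."""
    caps = {}
    for name, util in workloads.items():
        caps[name] = 0.5 if classify_tier(util) == "abusive" else 1.0
    return caps

print(throttle({"tenant-a": 0.95, "tenant-b": 0.3}))
```

Driving this loop from telemetry rather than static quotas is what lets the framework react to saturation attacks in real time instead of provisioning for the worst case.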
SDN control/data-plane security, protocol fuzzing, topology deception, and policy consistency
Ambusher: Exploring the Security of Distributed SDN Controllers Through Protocol State Fuzzing, IEEE Transactions on Information Forensics and Security, 2024 (paper)
This paper introduces a protocol state fuzzing framework called Ambusher for uncovering hidden vulnerabilities in distributed SDN and SD-WAN controllers. By inferring simplified protocol state machines from complex controller clusters, it efficiently explores attack states that traditional fuzzers miss. Evaluated on real SD-WAN deployments across campus and enterprise networks, Ambusher discovered six previously unknown vulnerabilities, revealing new attack surfaces in distributed control-plane architectures.
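The essence of protocol state fuzzing is exploring message orderings against an inferred state machine and flagging sequences the implementation never anticipated. The toy membership-protocol FSM below is entirely hypothetical; Ambusher infers such machines automatically from live controller clusters.

```python
from itertools import product

# Toy inferred state machine for a cluster-membership protocol.
# States and messages are hypothetical, for illustration only.
FSM = {
    ("INIT", "hello"): "JOINED",
    ("JOINED", "sync"): "SYNCED",
    ("JOINED", "leave"): "INIT",
    ("SYNCED", "leave"): "INIT",
}

def run(seq):
    """Replay a message sequence; undefined transitions fall into ERROR."""
    state = "INIT"
    for msg in seq:
        state = FSM.get((state, msg), "ERROR")
    return state

# Enumerate short input sequences and keep those reaching ERROR,
# i.e. message orders outside the protocol's expected state space.
alphabet = ["hello", "sync", "leave"]
suspects = [seq for seq in product(alphabet, repeat=2) if run(seq) == "ERROR"]
```

Working over an inferred, simplified machine keeps the sequence space small enough to enumerate, which is how the fuzzer reaches states that random mutation-based fuzzers rarely hit.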
EqualNet: A Secure and Practical Defense for Long-term Network Topology Obfuscation, NDSS 2022 (paper)
EqualNet addresses the problem of topology information leakage in networks, which can enable large-scale Link Flooding Attacks (LFAs). It introduces a proactive network obfuscation defense that equalizes traceroute flow distributions across nodes and links, preventing adversaries from identifying critical bottlenecks while maintaining operational visibility for network operators. Built and evaluated on a Software-Defined Networking (SDN) prototype, EqualNet demonstrates effective long-term obfuscation in networks of various scales and offers significantly stronger resilience against topology inference attacks.
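The equalization idea can be sketched as deciding how many virtual links to split each physical link into so that every visible link carries a similar number of traceroute flows. This is a simplified illustration with made-up numbers; EqualNet realizes the split transparently in the SDN data plane.

```python
def equalize(link_flows: dict[str, int]) -> dict[str, int]:
    """Split each physical link into enough virtual links that every
    visible link carries roughly the same number of traceroute flows,
    hiding which physical links are the real bottlenecks.
    Assumes every link carries at least one flow (illustrative only)."""
    target = min(link_flows.values())          # flows per virtual link
    return {link: -(-flows // target)          # ceiling division
            for link, flows in link_flows.items()}

# Hypothetical flow counts: "core-1" would stand out to an adversary.
print(equalize({"core-1": 90, "edge-7": 10, "edge-9": 12}))
```

After the split, a heavily-traversed core link presents many modestly-loaded virtual links to traceroute probes, so flow-density inference no longer reveals it as a flooding target.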