Rising Star Symposium Series
The IEEE TCCN Special Interest Group for AI and Machine Learning in Security conducts a Rising Star Symposium Series in which emerging scholars (e.g., senior PhD candidates and postdocs) present their research to a broader audience, with the aim of fostering mentorship, collaboration, and employment opportunities between the speakers and the audience.
Upcoming events
TBA for 2025
Past events
Recorded videos are available on our YouTube channel: https://www.youtube.com/channel/UCsDvVnQCC5QclwpyL7J1FFA
Tao Li
NYU, USA
Title: Towards Agent-based Autonomous Network Security
Date: November 21, 2024; Time: 12PM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJAlcOyuqjwjHNbYOhmRtdzliwTlTXaZHhxX
Abstract: Security of cyber-physical network systems, such as 5G/6G communication networks, vehicular networks, and the Internet of Things, has become increasingly critical. Traditional security mechanisms rely primarily on manual operations, which can be slow, expensive, and ineffective in the face of a dynamic landscape of adversarial threats. This problem will only be exacerbated as attackers leverage artificial intelligence (AI) to automate their workflows. As a countermeasure, safeguarding critical network systems also calls for autonomous defensive operations that delegate security decisions to AI agents. This talk presents our agent-based framework for autonomous attack detection and response using reinforcement learning (RL) and large language models (LLMs). To address conventional RL's reactive nature, we propose a new RL paradigm, conjectural online RL (coRL), to equip the security agent with predictive power when dealing with the agent's epistemic uncertainty over the attacker's presence and actions. The intuition behind coRL is to endogenize epistemic uncertainty as part of the RL process: the agent maintains an internal world model as a conjecture about the uncertainty, and the learned conjecture produces predictions consistent with the environment feedback induced by that uncertainty. To mitigate the RL agent's reliance on stylized modeling and textual data pre-processing, we further incorporate LLMs into the agentic framework to deliver end-to-end autonomous cyber operations. We conclude the talk by discussing the path ahead to building fully autonomous security agents.
Bio: Tao Li is a Ph.D. candidate in Electrical Engineering at New York University, affiliated with the NYU Center for Cybersecurity. He received his B.S. in mathematics from Xiamen University in 2018. His research focuses on game theory and multi-agent learning theory, advancing novel methodologies and frameworks for predictive reinforcement learning, non-equilibrium analysis, and meta-learning control for secure and resilient cyber-physical system design, defense, and management. His research has won him the Dante Youla Award for research excellence at NYU and has led to publications in control, robotics, and security conferences such as ICRA, CDC, and INFOCOM, as well as journals including IEEE TIFS, TSPN, TITS, and TRC.
Dr. Junyuan Hong
UT Austin, USA
Title: GenAI-Based Chatbot for Early Dementia Intervention
Date: September 19, 2024; Time: 1PM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJElcOCqqDsiE9ObhH7OHOW7ZsgYwqaCOqXZ
Abstract: Mild Cognitive Impairment (MCI) is a prodromal stage of Alzheimer's Disease and related dementias (AD/RD). Its detection is essential for early intervention and for trial cohort enrichment in AD/RD research. A recent clinical trial, I-CONECT, demonstrated that engaging in frequent cognitively stimulating conversations can be an effective strategy against social isolation and cognitive decline due to MCI. However, widespread deployment of such interventions faces challenges, particularly the need for trained human interviewers to conduct the conversations. We propose using an AI-based chatbot to replace human interviewers, thereby improving the accessibility of this therapeutic approach. Given the high-stakes and high-cost nature of such aging research, we developed an automatic interactive benchmark, dubbed the AI-CONECT Virtual Benchmark, to thoughtfully and scalably investigate whether Large Language Models (LLMs) can implement the essential protocols used in the I-CONECT intervention trial for stimulating cognitive function through cognitively demanding and engaging conversations. Driven by the benchmark, we designed an AI-based chatbot for early dementia intervention, demonstrating the potential of AI for older-adult healthcare. At the same time, we emphasize that such an application demands comprehensive consideration of privacy and safety. We present thorough benchmarks of LLMs and show that their trustworthiness still demands substantial effort.
Bio: Junyuan Hong is a postdoctoral fellow at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin, under the guidance of Dr. Zhangyang Wang. He earned his Ph.D. in Computer Science and Engineering from Michigan State University, where he was mentored by Dr. Jiayu Zhou. He also holds a B.S. in Physics and an M.S. in Computer Science from the University of Science and Technology of China. He was recognized as one of the MLCommons Rising Stars in 2024 and was a finalist for the VLDB 2024 best paper award. Junyuan's long-term research vision is to develop Holistic Trustworthy AI for Healthcare. His recent work addresses the pressing challenges in AI for Dementia Healthcare, focusing on Privacy-Centric Trustworthy Machine Learning. His research emphasizes the importance of fairness, robustness, security, and inclusiveness, all within the framework of privacy constraints.
Christo Thomas
Virginia Tech, USA
Title: Next-Generation AI for Next-Generation Wireless Networks
Date: August 13, 2024; Time: 11AM ET (tentative)
Registration: Please register at https://gmu.zoom.us/meeting/register/tJwsd-morjkrG9cywihKI5AyF3g-FqDYFz2t
Format: Virtual, and in person at Stevens Institute of Technology, USA
Abstract: Despite the basic premise that next-generation wireless networks (e.g., 6G) will embrace artificial intelligence (AI) integration, current efforts mostly extend existing "AI for wireless" paradigms qualitatively or incrementally. Creating AI-native wireless networks faces technical hurdles due to the limitations of data-driven, training-intensive AI, such as black-box models, limited reasoning and adaptability, data dependency, and energy inefficiency. In this talk, we propose a forward-looking framework grounded in causal reasoning that fosters explainable, reasoning-aware AI-native wireless networks to overcome these challenges. We also discuss why incorporating neuro-symbolic AI into future wireless networking holds great promise, as it combines an understanding of the relations among intricate wireless concepts (the symbolic component) with the expressive power of neural networks. Further, we illustrate the potential of causal reasoning and neuro-symbolic AI frameworks through the example of an emerging field, semantic communications, which crafts a nuanced semantic language between the communicating nodes. This language aims to compute a minimalistic and generalizable semantic representation that enhances communication efficiency by incorporating advanced reasoning components at both ends of the communication process. Finally, we touch upon the fundamental principles of developing "universal foundation models" (i.e., wireless-specific generative AI models) driven by three distinct characteristics: 1) integration of multi-modal sensing data, 2) grounding of sensory input via causal reasoning and retrieval-augmented generation (RAG), and 3) instructibility to environmental feedback through logical and mathematical reasoning enabled by neuro-symbolic AI.
Bio: Christo Kurisummoottil Thomas is a postdoctoral fellow in the Electrical and Computer Engineering Department at Virginia Tech. He received his PhD from EURECOM, France, in 2020. He also has several years of industry experience with Qualcomm, Intel, and Broadcom, developing physical-layer algorithms for 4G and 5G wireless modem devices. His research interests include building reasoning-native semantic communications for next-generation networks, generalizable and explainable AI for wireless systems, and variational Bayesian inference. His research has led to several publications in journals such as IEEE TWC, JSAIT, VTM, and OJCOMS, and IEEE conferences such as GLOBECOM, ICC, ICASSP, and many others.
Nasim Soltani
Northeastern University, USA
Title: Deep Learning for Next-G Wireless Communications
Date: April 18, 2024; Time: 11AM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJArfuiopzwqGNDlj7_5VA1fDjOGJjmMB8GD
Abstract: Wireless communications has been revolutionized by the use of deep learning for physical-layer applications. The benefits of applied AI/ML for wireless have inspired researchers and innovators to propose a fully AI-based paradigm for 6G communications. In this talk, we introduce the advantages and address the challenges of using AI/ML in two key areas of the physical layer: (i) spectrum sensing and (ii) signal reception and decoding. In the first area, we show the power of deep learning for signal detection and localization in high-noise regimes in the Citizens Broadband Radio Service (CBRS) band, and we introduce a deep learning method for radio frequency (RF) fingerprinting of hovering unmanned aerial vehicles (UAVs). In the second area, we show how deep learning can be leveraged to design waveforms with reduced communication overhead that increase communication throughput.
Bio: Nasim Soltani is a PhD candidate in the Electrical and Computer Engineering Department at Northeastern University. Her research interests broadly cover applied AI/ML for wireless communications. She has worked on applications of deep learning for spectrum sensing and signal classification, including RF fingerprinting, as well as on neural-network-based wireless receivers for next-G communication systems. Her work appears in various IEEE venues, including IEEE TWC, TVT, TMC, JSAC, ComMag, WirelessComMag, IoTMag, INFOCOM, and others.
Dr. André Gomes
Commonwealth Cyber Initiative, USA
Title: The Road to Ultra-reliability in Future Mobile Networks
Date: March 7, 2024; Time: 10AM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJYsc-ippzsuGd33xdDjdl5t5i1trH_1PmO4
Abstract: A key difference between today's and tomorrow's wireless networks is the increasing need for ultra-reliability to support emerging mission-critical communication services such as URLLC (in 5G) and HRLLC (in 6G). However, supporting these services network-wide is challenging because of their stringent ultra-reliability and latency requirements (e.g., reliability is often ≥ 99.999% and maximum latency ≤ 1 ms) and the stochastic nature of wireless networks. A natural way of increasing reliability and reducing latency is to provision additional network resources (e.g., spectrum, network density) to compensate for adverse network conditions (e.g., fading, interference, mobility, time-varying load). This talk will address what it takes to support network-wide ultra-reliable communication and introduce a framework for network dimensioning based on meta-distributions. Our analysis shows that the required magnitude of resources can be beyond what is typically available (or even practical) in today's networks. We will discuss multi-operator connectivity sharing (mobiles multi-connect to operators in a sharing arrangement) as an alternative to facilitate network-wide ultra-reliable communication.
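To make the reliability arithmetic behind multi-connectivity concrete, here is a minimal back-of-the-envelope sketch (not the talk's meta-distribution framework): assuming independent link failures, it computes how many operator links a mobile must multi-connect to before the combined reliability reaches a five-nines target.

```python
import math

def links_needed(per_link_reliability: float, target: float) -> int:
    """Minimum number of independent links (e.g., operators in a sharing
    arrangement) so that at least one link succeeds with the target
    probability. Independence is an idealizing assumption."""
    p_fail = 1.0 - per_link_reliability
    # Need (1 - r)^n <= 1 - target  =>  n >= log(1 - target) / log(1 - r)
    return math.ceil(math.log(1.0 - target) / math.log(p_fail))

# A single 99%-reliable link falls far short of 99.999%,
# but a few independent links close the gap.
print(links_needed(0.99, 0.99999))  # → 3
```

The exponential gain from independent links is what makes multi-operator sharing attractive compared with over-provisioning a single network.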
Bio: André Gomes recently obtained his Ph.D. in Computer Engineering from Virginia Tech and is currently a Postdoctoral Researcher with the Commonwealth Cyber Initiative (cyberinitiative.org). His research interests lie in designing, building, and evaluating networks for reliability: networks that can perform under stringent performance requirements and withstand failures, attacks, and disasters. He has worked on topics such as ultra-reliable communication, network softwarization, and reconfigurable intelligent surfaces.
Changgang Zheng
University of Oxford, UK
Title: Toward In-Network ML on Programmable Network Devices
Date: February 13, 2024; Time: 11AM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJ0qf-mrrTIvHtUkFhhvr5FYLp7Xh5Ifx8Jf
Abstract: Machine learning is widely used by data-intensive applications. However, standard accelerators struggle to handle the volume of data and to meet low-latency requirements. In-network machine learning, the offloading of machine learning tasks to run within network devices, is an emerging solution to this problem. This presentation will introduce the concept of in-network machine learning, its enabling technology, and its implementation, including three general model-mapping methodologies. Because in-network machine learning is a resource-constrained machine learning problem, two solutions will be introduced: a distributed deployment solution and a hybrid deployment solution. The talk will explore a range of security-related applications of in-network machine learning, including anomaly detection in a wide area network, traffic analysis at the IoT edge, bot detection, and others. Finally, the talk will introduce a framework for rapid deployment of in-network machine learning on a range of targets, as well as several open-source projects for community use.
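One common model-mapping idea, illustrated here as a toy sketch rather than any specific methodology from the talk, is to translate a small decision tree into staged table lookups, mirroring the match-action pipeline of a programmable switch. All thresholds, features, and labels below are hypothetical.

```python
import bisect

# Stage 1: per-feature range tables (sorted split thresholds,
# as a compiler might extract them from a trained tree).
PKT_LEN_SPLITS = [64, 512, 1500]   # packet-length thresholds (bytes)
IAT_SPLITS = [0.001, 0.1]          # inter-arrival-time thresholds (s)

def range_code(value, splits):
    """Index of the range the value falls into (a table 'match')."""
    return bisect.bisect_right(splits, value)

# Stage 2: code-pair -> class table (one entry per reachable leaf).
LEAF_TABLE = {
    (0, 0): "attack",   # tiny packets arriving in a burst
    (1, 0): "benign",
    (1, 1): "benign",
    (2, 2): "benign",
}

def classify(pkt_len, iat):
    """Two lookups replace the tree traversal, as a switch would do."""
    key = (range_code(pkt_len, PKT_LEN_SPLITS), range_code(iat, IAT_SPLITS))
    return LEAF_TABLE.get(key, "benign")  # default action

print(classify(40, 0.0005))  # small, bursty packet → "attack"
```

Because each stage is a plain lookup, inference cost on the device is constant per packet regardless of the tree's depth, which is what makes this mapping attractive for line-rate traffic analysis.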
Bio: Changgang Zheng is a final-year DPhil (PhD) student in Engineering Science at the University of Oxford. Before joining Oxford, he was a research assistant at the UESTC Data Mining Lab. His research interests include networking, in-network computing, and machine learning. A primary focus is in-network machine learning and its use in a range of applications, from cybersecurity to financial transactions. Changgang's research has led to publications at ACM CoNEXT, IEEE HPSR, the IEEE IoT Journal, and others.
Mohammad Saidur Rahman
Rochester Institute of Technology, USA
Title: Machine Learning for Cyber Defense: From Network Security and Endpoint Security Perspectives
Date: January 17, 2024; Time: 4PM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJUtfumrqjgoGNTuKR5H56xBJ6FVQPMGQffQ
Abstract: With the emergence of Machine Learning (ML) as a solution to complex problems across various domains, its application in providing intelligent and automated security solutions is crucial, especially in addressing the shortage of cybersecurity workforce. However, ML in cybersecurity faces unique challenges, including managing diverse data types, handling sparse data, adapting to data distribution shifts over time, and addressing the multifaceted nature of security data. To address these challenges, the state of the art focuses on investigating vulnerabilities and defenses enabled by machine learning in two key areas of security – network security and endpoint security – with a particular emphasis on malware analysis and network traffic analysis. In this talk, the speaker will present two research projects that represent the breadth of his research interests: 1) advancing adversarial machine learning to defend against website fingerprinting attacks, a type of traffic analysis attack that deanonymizes the client of a privacy-enhancing technology such as Tor with high confidence, a significant concern in network security; and 2) developing continual learning systems for dynamic and intelligent malware classification to tackle the influx of massive numbers of malware and benign software instances, a critical endpoint security challenge.
Bio: Mohammad Saidur Rahman is a Ph.D. candidate in Computing and Information Sciences at the Rochester Institute of Technology (RIT) and a security research intern at Cisco Quantum Lab, Cisco Research. His primary research focus includes machine learning for security and privacy, particularly in the problem spaces of malware analysis and network traffic analysis. He is also working on quantum key distribution (QKD) enabled solutions to protect network and computer systems for the post-quantum era at Cisco Quantum Lab. His research has led to publications in several security conferences and journals, including IEEE S&P, ACM CCS, IEEE TIFS, PoPETS, and machine learning conferences like CoLLAs.
Dr. Nadia Yoza Mitsuishi
National Institute of Standards and Technology, USA
Title: 5G and 4G Coexistence Measurements Using Software-Defined Radio
Date: December 12, 2023; Time: 1PM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJMpduivrD8tEtALG_XwSHnYQjyDOUX0YUyr
Abstract: We present a test setup to evaluate the performance of three wireless coexistence scenarios. The first case consists of intercell interference (ICI) between two 4G long-term evolution (LTE) downlink channels from adjacent cells, the second case represents mutual downlink interference between LTE and Wi-Fi, and the third case is based on mutual ICI between LTE and 5G. LTE and 5G are configured in frequency-division duplexing (FDD) mode, and the setup is based on software-defined radios (SDRs) running open-source software, while the Wi-Fi system is based on development boards. Our setup provides precise adjustment of communication and ICI channel gains, measurement of SDR receiver internal noise power and noise figure, and measurement of link signal-to-interference-and-noise ratio (SINR). Network performance is measured under varying mutual interference conditions. This setup can be used to study further coexistence scenarios.
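For readers less familiar with interference measurements, the link SINR computation behind such a setup can be sketched as follows; the power values are illustrative placeholders, not measurements from the talk.

```python
import math

def sinr_db(signal_dbm: float, interference_dbm: float,
            noise_dbm: float) -> float:
    """Link SINR in dB from measured powers in dBm. Interference and
    noise add in linear (mW) units, so convert, sum, and convert back."""
    def dbm_to_mw(p_dbm: float) -> float:
        return 10 ** (p_dbm / 10)
    denom_mw = dbm_to_mw(interference_dbm) + dbm_to_mw(noise_dbm)
    return signal_dbm - 10 * math.log10(denom_mw)

# Example: -70 dBm signal against -95 dBm interference and -100 dBm noise.
print(round(sinr_db(-70, -95, -100), 1))  # → 23.8
```

Note that because the interference here sits 5 dB above the noise floor, it dominates the denominator; sweeping the ICI channel gain in such a setup traces out the SINR-versus-interference curves the abstract describes.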
Bio: Nadia Yoza Mitsuishi is a postdoctoral researcher in the Shared Spectrum Metrology group at the National Institute of Standards and Technology (NIST). She holds PhD and MS degrees in Telecommunications from the University of Colorado Boulder. Her areas of interest are spectrum sharing, radio propagation, and software-defined radio.
Dr. Chia-Yi Yeh
Massachusetts Institute of Technology (MIT)
and Brown University, USA
Title: Absolute Security in Terahertz Wireless Links
Date: November 15, 2023; Time: 2PM ET
Registration: Please register at https://gmu.zoom.us/meeting/register/tJEqdOytqz0iHNEAz6mEKbZAf9R4vi4Zg_Ak
Abstract: Information-theoretic security will be key for post-quantum security, as it holds regardless of the eavesdropper's computational capabilities. In this talk, I will share our proposed Absolute Security scheme, which achieves information-theoretic security for general wideband high-frequency links. Our design is a hybrid approach that relies on both active manipulation of physical-layer radiation and linear secure coding to achieve security. In the first step, we utilize antenna physics to engineer a large spatial region in which Eve is "blind" in at least one frequency channel, regardless of which one, while Bob always receives all frequencies. Next, we design a linear secure coding scheme so that Eve fails to solve the linear system once she misses even one frequency channel and thus obtains no information about the message, realizing information-theoretic security. With this approach, we show, in both theory and experiments, that increasing the number of frequency channels enhances the data rate and the blind region simultaneously, which is a unique advantage among security schemes.
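The "missing one channel yields zero information" property can be illustrated with a toy XOR-based linear code, a simplified stand-in for the scheme in the talk rather than its actual construction: one channel symbol is chosen so the XOR of all symbols equals the message, so any k-1 symbols are uniformly random and independent of it.

```python
import secrets
from functools import reduce

def encode(message: int, k: int, nbits: int = 8) -> list[int]:
    """Spread an nbits-wide message over k frequency channels so that
    all k symbols are required for recovery: k-1 symbols are uniform
    random, and the last makes the XOR of all symbols equal the message."""
    symbols = [secrets.randbelow(1 << nbits) for _ in range(k - 1)]
    symbols.append(reduce(lambda a, b: a ^ b, symbols, message))
    return symbols

def decode(symbols: list[int]) -> int:
    """Bob, receiving every channel, XORs all symbols to recover m."""
    return reduce(lambda a, b: a ^ b, symbols, 0)

msg = 0xA7
channels = encode(msg, k=4)
assert decode(channels) == msg
# Eve, blind in any one channel, sees symbols whose joint distribution
# is uniform regardless of the message -- zero information leaked.
```

This is the one-time-pad flavor of the idea; the scheme in the talk pairs a linear code of this spirit with the antenna-physics step that guarantees Eve actually misses a channel.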
Bio: Chia-Yi Yeh is currently a Postdoctoral Associate in the Department of Electrical Engineering and Computer Science (EECS) at MIT and the School of Engineering at Brown University, under Prof. Muriel Médard and Prof. Daniel M. Mittleman. She received her Ph.D. and M.S. in Electrical and Computer Engineering from Rice University in 2021 and 2017 under the supervision of Prof. Edward W. Knightly, and her B.S. in Electrical Engineering from National Taiwan University in 2014. Her research interests are the design, implementation, and experimental demonstration of next-generation wireless systems for communication, security, and sensing based on theoretical foundations, for systems including massive MIMO, millimeter wave, and terahertz networks.