Research Focus
Exploring the transformative potential of large language models (LLMs) like ChatGPT and Claude.ai, our research investigates their diverse applications in enhancing the cybersecurity of smart grid systems. This includes leveraging Generative AI (GenAI) for proactive anomaly detection, zero-day threat identification, and response optimization in power systems. By integrating GenAI models into anomaly detection systems (ADS), our work demonstrates how advanced AI algorithms can significantly outperform traditional machine learning (ML) models in detecting and mitigating evolving cyber threats. The research further evaluates the technical capabilities and constraints of GenAI models in handling large-scale data typical of smart grid infrastructures. Our contributions include creating frameworks for real-time cybersecurity monitoring, applying in-context learning, and using retrieval-augmented generation (RAG) techniques to adapt and respond to specific threats dynamically. Additionally, our work highlights the importance of human-in-the-loop (HITL) approaches to refine the decision-making capabilities of AI, ensuring contextual awareness and accuracy in critical security scenarios. By bridging the gap between advanced AI technologies and practical power system security requirements, this research area lays the groundwork for more robust, adaptable, and scalable smart grid cybersecurity solutions in the era of increasingly sophisticated cyberattacks.
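To make the retrieval-augmented workflow concrete, the sketch below shows one way an LLM could triage a grid telemetry event against retrieved past incidents. It is a minimal illustration, not the framework described above: the OpenAI-compatible client, the model id, and helper names such as retrieve_context() and triage_event() are assumptions, and the embeddings are taken as precomputed.

```python
# Minimal sketch of retrieval-augmented anomaly triage (in-context learning + RAG).
# Assumes an OpenAI-compatible chat client and precomputed embeddings; all names
# and the model id are illustrative, not the published framework.
import numpy as np
from openai import OpenAI  # any chat-completion client with a similar API would do

client = OpenAI()

def retrieve_context(query_vec, kb_vecs, kb_texts, k=3):
    """Return the k past incident descriptions most similar to the query event."""
    sims = kb_vecs @ query_vec / (
        np.linalg.norm(kb_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return [kb_texts[i] for i in np.argsort(sims)[::-1][:k]]

def triage_event(event_log, query_vec, kb_vecs, kb_texts):
    """Ask the LLM whether a grid telemetry event is anomalous, grounded in
    the retrieved incidents supplied as in-context examples."""
    context = "\n".join(retrieve_context(query_vec, kb_vecs, kb_texts))
    messages = [
        {"role": "system", "content": "You are a smart-grid security analyst."},
        {"role": "user", "content": f"Past incidents:\n{context}\n\n"
                                    f"New event:\n{event_log}\n\n"
                                    "Label the event NORMAL or ANOMALOUS and explain."},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content
```

In a human-in-the-loop deployment, the returned label and explanation would be reviewed by an operator before any protective action is taken.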
This research area outlines key contributions to advancing cybersecurity for IEC 61850-based digital substations, focusing on vulnerability assessments of multicast messages such as GOOSE and Sampled Values (SV) communications. It introduces automated cybersecurity testing frameworks capable of identifying anomalies through simulated cyberattacks such as spoofing, replay, and injection attacks. The results highlight the shortcomings of commercial Intelligent Electronic Devices (IEDs) in detecting fabricated packets, emphasizing the importance of proactive testing methods. The research also develops anomaly detection for energy management systems (EMS) based on analysis of human-machine interface (HMI) screens, integrating advanced algorithms to detect abnormal system behavior and cyber intrusions in power system HMIs and reinforcing operational reliability and security through innovative detection methodologies. This work contributes significantly to the fields of digital substations and EMS by proposing robust solutions for real-time anomaly detection and cybersecurity resilience.
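As a simplified illustration of the kind of check such a testing framework exercises, the sketch below flags replayed or spoofed GOOSE publications by verifying that the state number (stNum) and sequence number (sqNum) advance as IEC 61850-8-1 prescribes. It is a hedged example under stated assumptions, not the tested framework itself; counter rollover and timing constraints are deliberately omitted.

```python
# Illustrative stateful check for GOOSE replay/spoof detection (not the
# published testing framework): stNum increments on a state change and sqNum
# restarts at 0, then increments on retransmissions of the same state.
from dataclasses import dataclass

@dataclass
class GooseState:
    st_num: int = -1
    sq_num: int = -1

def check_goose(state: GooseState, st_num: int, sq_num: int) -> str:
    """Return 'ok' for a plausible next message, 'anomaly' otherwise."""
    if st_num < state.st_num:
        return "anomaly"            # stNum rolled backwards -> likely replay
    if st_num == state.st_num and sq_num <= state.sq_num:
        return "anomaly"            # duplicate or out-of-order retransmission
    if st_num > state.st_num and sq_num != 0:
        return "anomaly"            # a new state change must restart sqNum at 0
    state.st_num, state.sq_num = st_num, sq_num
    return "ok"
```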
The growing integration of distributed energy resources (DERs) into power distribution networks has put pressure on utilities to improve their system-wide awareness and implement load control techniques in behind-the-meter (BTM) systems. Utilities need to monitor household consumption by predicting the aggregated load pattern, using intelligent algorithms that assess household energy consumption accurately while preserving customer privacy and security when an attack or anomalous behavior occurs. We have developed advanced load forecasting methods utilizing machine learning techniques such as Long Short-Term Memory (LSTM) networks and Stacked Autoencoders (SAE). These models are designed to accurately predict energy consumption for BTM DERs, facilitating more efficient energy management and grid reliability.
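A minimal sketch of the LSTM-based forecasting idea is given below, assuming hourly consumption arranged into sliding windows. The 24-step lookback, layer sizes, and training settings are illustrative choices, not the published configuration.

```python
# Minimal sketch of an LSTM load forecaster for BTM consumption data.
# Window length and architecture are illustrative assumptions.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24):
    """Turn a 1-D consumption series into (samples, lookback, 1) inputs
    and next-step targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y

def build_lstm_forecaster(lookback=24):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(lookback, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),              # next-hour load
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Example usage on synthetic data:
# X, y = make_windows(np.random.rand(1000).astype("float32"))
# build_lstm_forecaster().fit(X, y, epochs=5, batch_size=32)
```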
Figure: Normal operation of smart meters (SMs) in households without abnormal activity.
For autonomous vehicle (AV) perception modules, adversarial attacks can cause machine learning algorithms to predict inaccurate output labels. As new kinds of attacks emerge, it is crucial to enhance the AI security of AVs so that they can defend themselves against a variety of attack scenarios using appropriate mitigation strategies. AVs that are resilient to adversaries can navigate roadways more safely by decreasing the probability that they incorrectly identify road signs or other objects. Our research has focused on enhancing the security of AV perception systems through integrated threat analysis and context-aware methods. These approaches aim to identify and mitigate potential vulnerabilities, thereby improving the resilience and safety of AVs.
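To make the threat model concrete, the sketch below uses the well-known fast gradient sign method (FGSM) to perturb a traffic-sign image so that a classifier mislabels it. The classifier is any generic tf.keras model and the perturbation budget epsilon is an illustrative value; this is not the specific attack or defense studied in our work.

```python
# Hedged illustration of an adversarial perturbation (FGSM) against a generic
# image classifier; model and epsilon are assumptions for demonstration only.
import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of a batched image tensor."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image, training=False)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)   # step along the loss gradient sign
    return tf.clip_by_value(adversarial, 0.0, 1.0)       # keep pixels in valid range
```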
Traffic Sign Recognition and Object Classification in Autonomous Driving Systems
This research introduces innovative classification techniques designed to address the challenges posed by unexpected noise intrusions in autonomous vehicle (AV) systems. Noise intrusions, arising from environmental factors such as extreme weather conditions, sensor malfunctions, or adversarial attacks, can severely impair the ability of AV systems to accurately interpret their surroundings. Recognizing these challenges, our work proposes novel approaches that enhance the robustness and reliability of AV perception systems.
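One simple, widely used idea consistent with this goal is to inject noise during training so the classifier learns features that survive sensor noise and weather-like corruption. The sketch below is a minimal example of that idea; the small CNN architecture, the 43-class setup (as in common traffic-sign benchmarks), and the noise level are assumptions, not the techniques published in this work.

```python
# Minimal sketch of noise-robust training via Gaussian noise injection.
# Architecture, class count, and noise level are illustrative assumptions.
import tensorflow as tf

def build_noise_robust_classifier(num_classes=43, noise_std=0.1):
    """Small CNN for 32x32 RGB traffic-sign patches with noise injection."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.GaussianNoise(noise_std),   # active only during training
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
```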
Context-Awareness Methodology in Autonomous Vehicles
This research explores the integration of active inference and context awareness to address cyber-physical security challenges in autonomous vehicles (AVs). It highlights vulnerabilities in the machine learning (ML) algorithms used for traffic sign recognition (TSR) and object classification (OC), emphasizing their limitations in handling unexpected anomalies and partial observations, and introduces active inference models based on Partially Observable Markov Decision Processes (POMDPs) to improve decision-making in partially observable and uncertain scenarios. The work presents a comprehensive analysis of TSR and OC algorithms, evaluates the role of context-aware safety measures, and proposes solutions for managing abnormal driving scenarios such as fallen traffic signs, rotated signs, and adversarial attacks. By incorporating partial observations and Bayesian inference, the active inference model demonstrates its potential to enhance prediction accuracy and mitigate cyber threats, offering a robust framework for safe and reliable autonomous navigation.
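At the core of any POMDP-based formulation is a Bayesian belief update over hidden states given noisy observations. The sketch below shows that single step in isolation; the three hidden states (e.g., intact sign, rotated or fallen sign, adversarial tampering), the likelihood matrix, and the transition matrix are illustrative placeholders, not the model parameters used in this research.

```python
# Compact sketch of the Bayesian belief update underlying a POMDP/active-
# inference formulation; matrices below are illustrative assumptions.
import numpy as np

def belief_update(belief, obs_idx, likelihood, transition):
    """One step of Bayesian filtering: predict with the transition model,
    then condition on the observation likelihood P(obs | state)."""
    predicted = transition.T @ belief                 # prior over the next state
    posterior = likelihood[:, obs_idx] * predicted    # Bayes rule (unnormalised)
    return posterior / posterior.sum()

# Example with 3 hidden states and 3 observation symbols:
# belief = np.ones(3) / 3
# likelihood = np.array([[0.8, 0.1, 0.1],
#                        [0.1, 0.7, 0.2],
#                        [0.2, 0.2, 0.6]])              # rows: states, cols: observations
# transition = np.full((3, 3), 0.05) + np.eye(3) * 0.85  # mostly persistent states
# belief = belief_update(belief, obs_idx=1, likelihood=likelihood, transition=transition)
```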