Mohammad Yusufi
Nasir Plummer
Raymond Solis
Advisor: Yong Shi
CS 4850
Spring 2025
Malware Generation: https://colab.research.google.com/drive/19RDmnB33Am8Li8vtSgNytWvtvj2pGhHZ?usp=sharing
Malicious URL Generation: https://colab.research.google.com/drive/1k6OSBRf5qhasMHStH-XkHsKT4PJI2gKV?usp=sharing
Email Phishing Generation: https://colab.research.google.com/drive/1p0PNnfihs0MOJgFETOcgmAAhRZGuap3d?usp=sharing
Threat Detection and Anomaly Generation: https://colab.research.google.com/drive/1hsIyLJp87ZqyAxtCgK4CTL3_3RJ-fTbX?usp=sharing
Adversarial Attacks Generation: https://colab.research.google.com/drive/1VajriuaRimPlvG8W_R_VG1Kc76PKoysI?usp=sharing
GitHub: https://github.com/Raymond-Solis/YS2-GenAI-CyberSec.git
The rapid growth of technology has driven a dramatic increase in the use and development of artificial intelligence. These advances have been accompanied by digital and cyber threats that often outpace efforts to mitigate them. This research project investigates the transformative role of generative AI in enhancing cybersecurity defenses, such as anomaly detection, intrusion detection systems, malware analysis, and incident response automation, while also exploring its potential to exacerbate threats such as deepfake phishing, polymorphic malware, and adversarial attacks.
Artificial Intelligence (AI) has emerged as a transformative technology, offering innovative solutions for data analysis, decision-making, and automation across various domains, including computer vision, malware detection, and drug discovery. Among the many branches of AI, Generative AI stands out as a powerful subfield that leverages machine learning models to learn from existing data and generate new, synthetic data. This capability has opened new possibilities in creative content generation, natural language processing, and even cybersecurity. Generative AI models, such as Large Language Models (LLMs) like ChatGPT and Google’s Gemini, have demonstrated remarkable potential in understanding and generating human-like text, code, and other forms of data.
In the realm of cybersecurity, the rapid evolution of digital technologies has led to increasingly sophisticated cyber threats, ranging from malware and phishing attacks to advanced persistent threats (APTs). Traditional cybersecurity measures often struggle to keep pace with these evolving threats, creating a pressing need for innovative solutions. Generative AI has emerged as a double-edged sword in this context. On one hand, it offers powerful tools to enhance cybersecurity defenses, such as anomaly detection, intrusion detection systems (IDS), email filtering, and automated incident response. On the other hand, it also introduces new risks, including the generation of deepfake phishing campaigns, polymorphic malware, and adversarial attacks that exploit vulnerabilities in AI systems.
The intersection of Generative AI and cybersecurity is a rapidly evolving field, with significant implications for both defense and offense. While Generative AI can automate and enhance cybersecurity processes, it can also be weaponized by malicious actors to launch more sophisticated and scalable attacks. This duality underscores the importance of understanding the capabilities, limitations, and ethical implications of Generative AI in cybersecurity.
The motivation behind this project stems from the dual nature of Generative AI in cybersecurity. On the one hand, Generative AI holds immense potential to revolutionize cybersecurity by automating threat detection, improving incident response, and enhancing vulnerability management. For example, Generative AI can analyze vast amounts of data to identify patterns indicative of cyber threats, generate secure code, and even automate patch management. These capabilities can significantly reduce the workload of cybersecurity professionals and improve the overall resilience of digital systems.
On the other hand, Generative AI also introduces new challenges and risks. Malicious actors can exploit Generative AI to create highly convincing phishing emails, generate polymorphic malware that evades detection, and launch adversarial attacks that manipulate AI systems. The rise of deepfake technology and synthetic identities further complicates the cybersecurity landscape, making it increasingly difficult to distinguish between legitimate and malicious activities. These threats highlight the urgent need for robust defenses and ethical guidelines to mitigate the risks associated with Generative AI.
Given this context, there is a critical need to explore how Generative AI can be harnessed to improve cybersecurity defenses while also addressing the threats it poses. This project aims to bridge this gap by developing a system that demonstrates both the benefits and risks of Generative AI in cybersecurity, along with potential solutions to mitigate these risks.
The primary objective of this project is to explore the intersection of Generative AI and cybersecurity, focusing on both its defensive and offensive applications. Specifically, the project aims to:
Understand the Fundamentals of AI and Generative AI: Begin by learning the foundational concepts of AI, machine learning, and Generative AI, including the underlying models and algorithms that enable these technologies.
Explore Generative AI in Cybersecurity: Investigate how Generative AI can be applied to enhance cybersecurity defenses, including areas such as anomaly detection, intrusion detection systems (IDS), email filtering, malware analysis, and automated incident response (a minimal anomaly-detection sketch follows this list).
Identify Threats Posed by Generative AI: Examine the potential risks associated with Generative AI, such as deepfake phishing, automated phishing campaigns, malware generation, polymorphic malware, adversarial attacks, and social engineering (an adversarial-attack sketch also follows this list).
Develop a Python-Based System: Implement a system in Python (using Google Colab) that demonstrates the dual role of Generative AI in cybersecurity. The system will showcase how Generative AI can improve cybersecurity defenses while also simulating potential threats.
Propose Solutions to Mitigate Risks: Based on the findings, propose and implement solutions to prevent or mitigate the risks posed by Generative AI in cybersecurity. This includes developing ethical guidelines, improving model robustness, and enhancing detection mechanisms.
Promote Ethical and Secure AI Practices: Emphasize the importance of ethical considerations, transparency, and responsible AI development in cybersecurity applications. This includes addressing issues such as bias, misinformation, and the misuse of Generative AI.
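To make the defensive side of these objectives concrete, the following is a minimal, illustrative anomaly-detection sketch in the spirit of the project's threat detection notebook. It uses scikit-learn's IsolationForest on synthetic network-flow features; the feature names, value ranges, and contamination rate are assumptions chosen for illustration, not the notebook's actual configuration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" flows: (bytes sent, duration in seconds, packets/sec).
# These features and distributions are illustrative assumptions.
normal = rng.normal(loc=[500, 2.0, 20], scale=[100, 0.5, 5], size=(1000, 3))

# A few anomalous flows resembling large, fast exfiltration-like transfers.
anomalies = rng.normal(loc=[50000, 30.0, 400], scale=[5000, 5.0, 50], size=(10, 3))

# Fit on normal traffic only; the forest isolates statistical outliers.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# predict() returns +1 for inliers and -1 for flagged outliers.
print(model.predict(anomalies[:5]))  # expected: mostly -1 (anomalous)
print(model.predict(normal[:5]))     # expected: mostly +1 (normal)

Because the model is trained only on normal traffic, any flow it cannot isolate cheaply is flagged, which is the same unsupervised framing used for detecting previously unseen threats.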
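On the offensive side, the sketch below illustrates the idea behind an FGSM-style adversarial attack against a simple classifier. The "malware classifier" here is a logistic regression on synthetic feature vectors; the data, perturbation budget eps, and feature dimensionality are assumptions for demonstration only, not the adversarial-attacks notebook itself.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for extracted benign/malicious feature vectors.
X_benign = rng.normal(0.0, 1.0, size=(200, 10))
X_malicious = rng.normal(1.5, 1.0, size=(200, 10))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# FGSM step: for logistic regression with true label 1, the sign of the
# input gradient of the loss is -sign(w), so subtracting eps * sign(w)
# pushes a malicious sample toward the benign side of the boundary.
eps = 1.0
w = clf.coef_[0]
x = X_malicious[0]
x_adv = x - eps * np.sign(w)

print("original prediction:", clf.predict(x.reshape(1, -1))[0])      # expected: 1 (malicious)
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])  # likely flips to 0 (benign)

Even this toy example shows why the model-robustness work in the mitigation objective matters: small, structured perturbations can flip a classifier's decision without substantially changing the underlying sample.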
By achieving these objectives, this project aims to provide a comprehensive understanding of the role of Generative AI in cybersecurity, highlighting both its potential benefits and risks. The ultimate goal is to contribute to the development of more secure, ethical, and resilient cybersecurity systems in the age of Generative AI.