The increasing sophistication of SQL injection (SQLi) attacks poses a persistent threat to database security. Machine learning-based detection methods have emerged as a viable solution, offering the ability to identify malicious patterns within web traffic dynamically. This study compares the performance of two machine learning architectures, a Long Short-Term Memory (LSTM) network and a Kolmogorov-Arnold Network (KAN), in detecting known SQLi attacks. LSTM models, a variant of Recurrent Neural Networks, are well suited to temporal-sequential data and have demonstrated efficacy in various natural language processing tasks and in SQLi detection. Conversely, Kolmogorov-Arnold Networks leverage mathematical insights from functional approximation theory, offering a promising but less explored approach to machine learning. In addition to evaluating these models on datasets containing legitimate and malicious SQL queries, this research investigates their ability to detect simulated 0-Day SQLi attacks after training. Key metrics analyzed include detection accuracy, computational efficiency, and robustness against 0-Day and obfuscation techniques. By providing a comprehensive comparative analysis, this study seeks to identify the strengths and limitations of each model, contributing to the development of more effective and adaptive cybersecurity-focused machine learning systems.
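As background for the comparison above, the Kolmogorov-Arnold representation theorem on which KAN architectures are based can be stated as follows (a standard formulation; KANs generalize it by making the univariate functions learnable rather than fixed):

```latex
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right),
```

where $f:[0,1]^n \to \mathbb{R}$ is any continuous multivariate function and $\Phi_q$, $\phi_{q,p}$ are continuous univariate functions. The practical appeal for detection tasks is that a multivariate decision function can be composed entirely from learned one-dimensional functions.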
This work assesses the performance and capability of commercially available Large Language Models (LLMs) to automatically map organizational cybersecurity policies to the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) Version 2.0. Currently, this mapping process is conducted manually, which is resource-intensive, inconsistent, and difficult to scale, prompting an increased interest in AI-assisted solutions. Using a previously completed, manually verified gold-standard dataset, each model was tested under identical conditions and measured for accuracy, coverage, and consistency when aligning policy text with NIST CSF functions and subcategories.
Analysis of the results reveals that all three models successfully identified high-level NIST CSF 2.0 functions. However, each model struggled with precise subcategory mappings, exhibited over-mapping tendencies, and produced inconsistent outputs across repeated runs. These results show that commercial out-of-the-box LLMs can provide speed benefits in first-pass mapping, but they still require a human-in-the-loop approach that combines AI-generated mappings with expert validation.
Unmanned Aerial Systems (UAS), or drones, are increasingly used in modern warfare to devastating effect. Current conflicts in Ukraine and the Middle East have illustrated how small drones pose a significant threat to infantry. The ability to counter these threats effectively will be a crucial component of any modern fighting force that wants to protect its soldiers and give them the means to prevail on the battlefield.
This study evaluates Prowler, an open-source security auditing framework, for its effectiveness in automated compliance validation within Amazon Web Services (AWS) environments. A controlled AWS sandbox incorporating IAM, EC2, S3, and CloudTrail resources was provisioned, and deliberate misconfigurations were introduced to replicate realistic cloud security drift. Baseline compliance was assessed using Prowler’s multi-framework scanning engine, which maps findings to CIS Benchmarks, NIST SP 800-53, ISO/IEC 27001, and related controls. All findings were manually verified against AWS Console configurations and remediated through the application of security hardening actions. Results validate Prowler as a lightweight, technically robust auditing tool suitable for DevSecOps workflows, continuous compliance pipelines, and cost-constrained organizations requiring transparent, standards-aligned cloud security assessments.
In this capstone project, a scoping literature review of cybersecurity measures in renewable energy systems is presented. The review uses 63 peer-reviewed articles published between 2022 and 2025 from the IEEE Xplore and ACM Digital Library to evaluate the strengths, weaknesses, and general effectiveness of current security protocols. The analysis shows that most research focuses on a few high-impact threats, especially False Data Injection and Denial-of-Service attacks, which mainly exploit vulnerabilities in communication networks and Distributed Energy Resource (DER) components. Standards such as IEC 61850 and the NIST Cybersecurity Framework appear recurrently in the literature. Three main research directions are identified: improving system resilience through redundancy and fault tolerance, enhancing threat detection with machine learning methods, and designing adaptive protection schemes for systems with high DER penetration. Although there is a clear effort to safeguard renewable energy infrastructure, important gaps remain, particularly in practical implementation challenges, security issues of new technologies, and the human factors in operation. Future research should address SCADA/ICS environments and specific critical infrastructure contexts and include rigorous testing under realistic conditions. These steps are vital for increasing the cybersecurity resilience of renewable energy systems against evolving threats.
There is a growing threat to the United States (U.S.) in the cyber domain. This paper examines what a cyber-attack is and how it fits within Article 2(4) of the UN Charter. The proper justification for cyber warfare and the moral obligation of the U.S. to improve its cyber capabilities are also discussed. Lastly, research into the cyber threat landscape and the cost of cyber incidents is presented. The focus is to enable a proper U.S. response to cyber threats that aligns with international treaties and law.
Artificial Intelligence and Machine Learning represent the future of computing, but they also represent a long-standing goal in computing theory. Through the analysis of over 50 scholarly articles and conference proceedings, this study provides a brief overview of these technologies by examining their origins, the differences between the two, and the various advantages and disadvantages their union brings to the cyber-sphere. Despite often being discussed as one overarching technology, Artificial Intelligence and Machine Learning are better understood as a familial branch of technologies that build on each other to create a self-learning technological wonder. In cybersecurity, these technologies are often leveraged for the defensive AI systems active today; however, various proof-of-concepts show just how dangerous they can be when misused. Leveraging this research, the study outlines potential future implications for the cyber-sphere as these technologies grow.
This paper investigates the feasibility of using consumer-grade AI services such as ChatGPT, Google’s Gemini, X’s Grok, and Microsoft’s Copilot to determine whether emails are likely to be phishing attempts. An experiment is set up in which each AI engine is given emails to examine, and the results of the various chat engines are compared.
Deepfake technology, initially developed for entertainment, has increasingly become a significant threat in digital forensics, misinformation, and cybercrime. This paper evaluates the effectiveness of the Autopsy deepfake detection plug-in, a forensic tool designed to identify AI-generated manipulated images and videos using Support Vector Machine (SVM) algorithms. Testing involved analyzing authentic and manipulated media within realistic forensic workflows. Results indicated that the plug-in successfully detected approximately 45.5% of manipulated images but exhibited a concerning 40% false-positive rate for authentic media. Additionally, video detection capabilities were found non-functional, and the tool lacked the integration of critical metadata analysis, limiting its forensic utility. Comparisons with specialized deepfake detection tools, such as Resemble AI, Deepware Scanner, and Sensity AI, highlighted the Autopsy plug-in’s inconsistent detection accuracy and limitations in practical scenarios. The findings underscore the need for further development of comprehensive, reliable forensic tools capable of addressing the evolving challenges posed by advanced deepfake technologies.
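Detection and false-positive rates of the kind reported above can be computed directly from labeled test results. A minimal sketch follows; the label vectors are illustrative stand-ins, not the study's actual data.

```python
def detection_metrics(y_true, y_pred):
    """Compute detection rate (true-positive rate) and false-positive rate.

    y_true / y_pred use 1 for manipulated (deepfake) media, 0 for authentic.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    n_fake = sum(y_true)                 # total manipulated samples
    n_real = len(y_true) - n_fake        # total authentic samples
    detection_rate = tp / n_fake if n_fake else 0.0
    false_positive_rate = fp / n_real if n_real else 0.0
    return detection_rate, false_positive_rate

# Hypothetical counts chosen only to reproduce the reported figures:
# 11 manipulated files with 5 flagged (~45.5%), 10 authentic with 4 flagged (40%).
y_true = [1] * 11 + [0] * 10
y_pred = [1] * 5 + [0] * 6 + [1] * 4 + [0] * 6
dr, fpr = detection_metrics(y_true, y_pred)
print(round(dr, 3), round(fpr, 2))  # -> 0.455 0.4
```

The false-positive rate matters here as much as the detection rate: a tool that flags 40% of authentic media creates substantial manual review burden in a forensic workflow.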
This study focused on the application of natural language processing (NLP), particularly GPT models, to improve database security, an area where traditional security approaches fall short. The work presents the configuration, fine-tuning, and assessment of a customized GPT model with domain-specific data for automating tasks such as threat identification, encryption, and compliance testing. Iterative prompt engineering ensured that the model was appropriately fine-tuned to handle difficult database security issues with precision and applicability. The research pilot-tested the model with 16 participants consisting of database administrators, cybersecurity practitioners, and computer science students. Feedback was collected through surveys and structured tests, and analyses were performed on parameters such as accuracy, relevance, and user satisfaction. Results showed that the GPT model customized for database security generated recommendations that outperformed the generic GPT. The study showed that AI has great potential as a database security solution; however, it has drawbacks, including the limited dataset size and challenges in niche settings. The recommendations for future work in this thesis are to include larger datasets; to combine inputs from vision, language generation, and other modalities; and to address further ethical concerns. This work contributes to enhancing database security by demonstrating the ability of AI models to counter novel security threats.
University cybersecurity labs often require isolated environments to support malware analysis, penetration testing, and secure networking coursework. However, these same isolation requirements hinder remote access, making it difficult for students with off-campus obligations to participate fully. The COVID-19 (Coronavirus Disease 2019) pandemic has highlighted the need for secure and flexible remote access to educational labs. This paper presents the design and evaluation of a secure, scalable, and cost-effective remote access model tailored to academic cybersecurity environments. Implemented at the TB 206 cybersecurity lab at Austin Peay State University (APSU), the solution integrates pfSense, an open-source firewall and router platform, Cisco-managed switches for VLAN segmentation, and Tailscale to enable zero-trust, identity-based remote connectivity [12][15]. The solution enforces strict access controls through both pfSense firewall rules and Tailscale Access Control Lists (ACLs), while leveraging open-source tools and commodity hardware. A phased deployment strategy ensured operational stability at each stage, from VLAN design to remote testing. Results demonstrate that segmented, role-based remote access can be securely implemented without exposing internal services to the public internet. The proposed methodology serves as a replicable blueprint for other institutions seeking to modernize their lab infrastructure.
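To illustrate the Tailscale ACL component described above, a minimal policy sketch in Tailscale's HuJSON format is shown below. The group, user, and tag names are hypothetical placeholders, not APSU's actual configuration.

```json
{
  "groups": {
    "group:students": ["student1@example.edu"],
    "group:admins":   ["labadmin@example.edu"]
  },
  "tagOwners": {
    "tag:lab-vm": ["group:admins"]
  },
  "acls": [
    // Students may reach lab VMs only on SSH and RDP ports.
    {"action": "accept", "src": ["group:students"], "dst": ["tag:lab-vm:22,3389"]},
    // Admins may reach any device and port in the tailnet.
    {"action": "accept", "src": ["group:admins"], "dst": ["*:*"]}
  ]
}
```

Because Tailscale ACLs default-deny anything not explicitly accepted, a policy of this shape confines each role to exactly the services it needs, complementing the pfSense firewall rules at the network layer.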
Jonathan D. Lensert (Dec 2025). Generative AI Security: Understanding Concepts and Foundations of Security
To be Updated.