LLM-CyberSec: The 1st Workshop on Large Language Models and Cybersecurity
co-located with IEEE TPS 2024
October 28 - 30, 2024, Washington D.C., USA
co-located with IEEE TPS 2024
October 28 - 30, 2024, Washington D.C., USA
Overview
The rapid advancement and widespread adoption of large language models (LLMs) present significant opportunities and challenges in cybersecurity. Understanding and mitigating their vulnerabilities is crucial as these models become integral to various applications. This workshop aims to address the security concerns surrounding LLMs, focusing on vulnerability analysis, security attacks and defenses, and enhancing model robustness. By bringing together experts and researchers, we seek to foster a collaborative environment for developing innovative solutions to safeguard these powerful tools. The workshop will cover identifying potential weaknesses in LLMs, exploring sophisticated attack techniques, and formulating robust defense mechanisms to ensure resilience against emerging threats.
Topics
LLM-CyberSec workshop aims to cover the interdisciplinary topics. Prospective participants are expected to focus on recent progress or breakthroughs on both research and industrial results, or a research vision/position statement, or empirical studies as well as experience reports on the following topics of interests including wishing critical infrastructure protection and resilience context, but are not limited to:
Security of Large Language Models
Vulnerability Analysis: Identifying and mitigating vulnerabilities in LLMs.
Security Attacks and Defenses: Techniques for attacking LLMs and defences against such attacks.
Model Robustness: Enhancing the robustness of LLMs against various security threats.
Use of Large Language Models for Cybersecurity
Threat Detection: Leveraging LLMs for detecting and responding to cyber threats.
Anomaly Detection: Using LLMs to identify unusual patterns and potential security breaches.
Code Vulnerability Detection: Leveraging LLMs to detect programming code (e.g., C, Python) vulnerabilities.
Phishing and Fraud Detection: Applying LLMs to identify phishing attempts and fraudulent activities.
Incident Response: Enhancing incident response capabilities with LLM-driven automation and insights.
Security Policy and Governance: Developing security policies and governance frameworks using LLMs.
Trust and Privacy in Large Language Models
Data Privacy: Ensuring the privacy of training data and preventing data leakage.
Ethical and Responsible AI: Addressing ethical concerns and promoting responsible use of LLMs.
IP Protection: Protecting the LLM itself and tracing/detecting generated content.
Trustworthy AI: Building trust in LLMs through transparency, explainability, and reliability.
Privacy-Preserving Techniques: Methods to protect user data and maintain privacy in LLMs.