Hilton Marco Island Beach Resort and Spa
Marco Island, Florida, USA
May 17-20, 2026
Security, Privacy, and Ethics in AI
Special Track at the 39th International FLAIRS conference
Call for Papers
Artificial intelligence (AI) is rapidly reshaping every aspect of society, transforming industries, public services, and personal life. While AI promises efficiency, innovation, and unprecedented opportunities, it also brings with it significant challenges—ranging from security vulnerabilities and privacy breaches to algorithmic bias, social inequality, and ethical dilemmas. These risks, if left unaddressed, threaten to erode public trust and undermine the safe adoption of AI.
The Security, Privacy, and Ethics in AI special track provides a dedicated forum to examine and address the multifaceted risks that accompany AI’s progress. We will explore how to secure AI systems against malicious or unintended consequences, safeguard personal data, and ensure AI development and deployment remain aligned with fairness, transparency, and accountability. This requires not only technological advancements but also thoughtful policy, regulatory, and societal responses.
Topics of interest include, but are not limited to, technical strategies to strengthen AI system security and resilience against cyber threats; privacy-preserving machine learning and data protection mechanisms; approaches to mitigate bias and discrimination in AI datasets and models; ethical frameworks for responsible AI design and deployment; policy and legal measures to address emerging risks in AI and smart systems; case studies on AI-related incidents and regulatory responses; and educational as well as capacity-building strategies for embedding AI ethics in practice.
The track encourages contributions from researchers, practitioners, policymakers, and industry experts, fostering an interdisciplinary dialogue to bridge technical, legal, and ethical perspectives. Sessions will address the urgent need for practical solutions—ranging from secure system design and privacy safeguards to regulatory reforms and international governance models.
In recent years, the global community has moved toward establishing AI ethics principles and guidelines. However, the challenge lies in translating these principles into enforceable policies and operational practices. This special track will serve as a platform to share cutting-edge research, best practices, and forward-looking strategies for embedding ethical considerations into AI innovation. By bringing together diverse voices, we aim to develop actionable insights that can shape a safe, fair, and trustworthy AI society.
Topics
The track will explore the following topics and welcomes all discussions related to artificial intelligence security, privacy, and ethics:
Security and Privacy
Security and privacy risks, and their preservation, in the ML model-building process
Security in reusing/sharing AI, ML, and DL models (e.g., model poisoning)
Leakage of confidential recordings from smart speakers/devices/platforms
Security risk events detection, security intelligence
Semantic models and knowledge engineering for security and privacy
Security and privacy behavior modeling for autonomous intelligent systems
Security and privacy intelligence and analytics for big data
Security and privacy-aware decision support
Security and privacy mechanisms for intelligent IoT
Fake news, content tampering, voice alteration, truthfulness of data
AI trustworthiness
AI Ethics
Explainable and interpretable AI
Long-term impact and sustainable AI
Unfairness, biases, racism, ethical issues in intelligent agents and AI models
Responsible and accountable AI
Human autonomy and control over AI
Adversarial and threat actors in deep learning and their ethical guidelines
Process transparency and accessibility in AI algorithms
Privacy and data collection ethics
Discriminatory AI models for housing, employment, and other decision-making
Online risk behavior detection algorithms (e.g., trolling, hate speech, suicide risk)
Digital divide and stakeholder engagement
AI Policy and Regulations
AI in government
Regulation and governance on AI technology
Predictive policing, court sentencing, incarceration
Information and psychological warfare
Privacy, intellectual property, and data protection
Public-private cooperation norms
International cooperation norms
AI standardization
AI education and distribution policy
Public perception and communication
Online attacks/storms, social media bullies, and other online ills
Knowledge Infrastructure for Security, Privacy & Trust
Security/privacy/trust ontology
Knowledge Graph for security/privacy/fraud reasoning and detection
Reasoning and inference for security, privacy, and trust assessment
Design requirements for enhancing security, privacy, and trust in AI systems
Knowledge Production Value Chain protection
Workforce Education and training for AI security, privacy, and trust
Security, privacy, and trust solutions in applied AI systems, in domain-specific areas including but not limited to:
AI/GenAI risk and impact assessment in enterprise/business systems
AI for cyber security, intrusion detection, network security
Self-driving cars
Robotics, chatbots
AI security for medicine and healthcare
Privacy, ethics, and trust in connected healthcare (wearables, IoT)
AI-driven smart cities, transportation, public safety
Scientific discovery, bioinformatics
Prediction and profiling models in business and supply chains
Governments & the non-profit sector
National security and Cyber Defense Applications
Education, e-learning, student tracking
Mobile security, location privacy
Embedded systems, grid systems, cyber-physical systems security
Multimedia content sharing
Cloud-based systems
Social media, behavior tracking
Entity tracking, surveillance
Authentication, biometrics
Forensics, intrusion detection
Face recognition, image/NLP/video manipulations, etc.
Submission Guidelines
Submitted papers must be original and not submitted concurrently to a journal or another conference. Reviewing is double-blind, so submitted papers must use fake author names and affiliations. Papers must follow the FLAIRS template guidelines (FLAIRS-39 Call for Papers) and be submitted as a PDF through the EasyChair conference system. (Do NOT use a fake name for your EasyChair login; your EasyChair account information is hidden from reviewers.)
FLAIRS will not accept any paper that, at the time of submission, is under review for, or has already been published or accepted for publication in, a journal or another conference. Authors are also required not to submit their papers elsewhere during FLAIRS's review period. These restrictions apply only to journals and conferences, not to workshops and similar specialized presentations with a limited audience and without archival proceedings. Authors will be required to confirm that their submissions conform to these requirements at the time of submission.
Important Dates
Abstract submission deadline: January 19, 2026 (abstract submission is mandatory in order to submit a full paper)
Paper submission deadline: January 26, 2026
Paper acceptance notifications: March 9, 2026
Camera ready version due: April 6, 2026
Program Committee
Track Chair
Hun-Yeong Kwon, Korea University, South Korea, (khy0@korea.ac.kr)
Program Committee Members
Sang-kyun Lee, Korea University, South Korea
Junghee Lee, Korea University, South Korea
Beop-Yeon Kim, Korea University, South Korea
Sang-Pil Yoon, Korea University, South Korea
Ji-Hun Lim, Korea University, South Korea
Hyesung Park, Korea University, South Korea
Sang-Hyuk Cha, Pace University, USA
Loni Hagen, University of South Florida, USA
Lisa Webley, University of Birmingham, United Kingdom
Mi-Ryang Kim, Sungkyunkwan University, South Korea
Tai-Won Oh, Kyungil University, South Korea
Teryn Cha, Essex County College, USA
Mario Fritz, CISPA Helmholtz Center for Information Security, Germany
Yoon-Seok Ko, National Information Society Agency, South Korea
Further Information