Daytona Beach, Florida, USA, May 20-23, 2025
Navigating AI:
Security, Privacy, Ethics, and Regulation
Special Track at the 38th International FLAIRS conference
Call for Papers
Artificial intelligence (AI) is becoming an integral part of our daily lives. AI is driving innovation across various industries, permeating not only the public sector but also our homes and workplaces. However, AI can also give rise to problems such as social inequality driven by data bias, breaches of privacy, and security and safety risks, causing widespread apprehension across society.
To prepare for a safe and trustworthy AI era, it is essential not only to improve AI technology but also to focus on how to achieve technological innovation while minimizing the risks associated with AI.
To this end, it is necessary to ensure the security of AI systems, prevent malicious or unintended consequences caused by AI, respect privacy, and develop and use AI fairly and ethically through technical, social, and institutional responses.
This special track aims to explore solutions to the security, privacy, and ethical risks faced by AI society, with the ultimate goal of realizing public values based on safety and trust. It seeks to inspire innovative ideas from an interdisciplinary community.
The track will be a venue for sharing security, privacy, and ethical issues that arise throughout the AI model-building cycle and in the use of AI systems, and for discussing technical, social/behavioral, and policy solutions to counter cyber threats to AI and machine learning systems from design through use.
In addition, the track will offer sessions to discuss the adverse effects of AI and ethical values, and to review policy and regulatory directions for AI/smart systems. In recent years, numerous ethical side effects have surfaced as AI services have been adopted. The international community has begun to establish ethical principles and guidelines to address these issues, and there is a growing demand for concrete regulatory and policy measures in response to ethical failures. Furthermore, we aim to share new ideas on various strategies, including educational approaches, for embedding and practicing AI ethics.
Topics
The track will explore the following topics and welcomes all discussions related to artificial intelligence security, privacy, and ethics:
Security and Privacy
Security and privacy risks and preservation in the ML model-building process
Security in reusing/sharing AI, ML, DL models (e.g. model poisoning)
Leakage of confidential recordings from smart speakers/devices/platforms
Security risk events detection, security intelligence
Semantic models and knowledge engineering for security and privacy
Security and privacy behavior modeling for autonomous intelligent systems
Security and privacy intelligence and analytics of Big data
Security and privacy-aware decision support
Security and privacy mechanisms for intelligent IoT
Fake news, content tampering, voice alteration, truthfulness of data
AI trustworthiness
AI Ethics
Explainable and interpretable AI
Long-term impact and sustainable AI
Unfairness, biases, racism, ethical issues in intelligent agents and AI models
Responsible and accountable AI
Human autonomy and control over AI
Adversarial and threat actors in deep learning and their ethical guidelines
Process transparency and accessibility in AI algorithms
Privacy and data collection ethics
Discriminatory AI models in housing, employment, and other decision-making
Online risk behavior detection algorithms (e.g., trolling, hate speech, suicide risk)
Digital divide and stakeholder engagement
AI Policy and Regulations
AI in government
Regulation and governance of AI technology
Predictive policing, court sentencing, incarceration
Information and psychological warfare
Privacy, intellectual property, and data protection
Public-private cooperation norms
International cooperation norms
AI standardization
AI education and distribution policy
Public perception and communication
Online attacks/storms, social media bullies, and other online ills
Knowledge Infrastructure for Security, Privacy & Trust
Security/privacy/trust ontology
Knowledge Graph for security/privacy/fraud reasoning and detection
Reasoning and inference for security, privacy, and trust assessment
Design requirements for enhancing security, privacy, and trust in AI systems
Knowledge Production Value Chain protection
Workforce Education and training for AI security, privacy, and trust
Security, privacy, and trust solutions in applied AI systems, in the following domain-specific areas (but not limited to):
AI, GenAI risk/impact assessment in Enterprise/Business Systems
AI for cyber security, intrusion detection, network security
Self-driving cars
Robotics, chatbots
AI security for medicine and healthcare
Privacy, ethics, and trust in connected healthcare (wearables, IoT)
AI-driven smart cities, transportation, and public safety
Scientific discovery, bioinformatics
Prediction and profiling models in Business, supply chain
Governments & the non-profit sector
National security and Cyber Defense Applications
Education, e-learning, student tracking
Mobile security, location privacy
Embedded systems, grid systems, cyber-physical systems security
Multimedia content sharing
Cloud-based systems
Social media, behavior tracking
Entity tracking, surveillance
Authentication, biometrics
Forensics, intrusion detection
Face recognition, image/NLP/video manipulations, etc.
Submission Guidelines
Submitted papers must be original, and not submitted concurrently to a journal or another conference. Double-blind reviewing will be provided, so submitted papers must use fake author names and affiliations. Papers must follow the FLAIRS template guidelines (https://www.flairs-38.info/call-for-papers) and be submitted as a PDF through the EasyChair conference system. (Do NOT use a fake name for your EasyChair login; your EasyChair account information is hidden from reviewers.)
FLAIRS will not accept any paper which, at the time of submission, is under review for or has already been published or accepted for publication in a journal or another conference. Authors are also required not to submit their papers elsewhere during FLAIRS's review period. These restrictions apply only to journals and conferences, not to workshops and similar specialized presentations with a limited audience and without archival proceedings. Authors will be required to confirm that their submissions conform to these requirements at the time of submission.
Important Dates
Abstract submission deadline: January 20, 2025 (Abstract submission is mandatory in order to submit a full paper!)
Paper submission deadline: January 27, 2025
Paper acceptance notifications: March 10, 2025
Camera ready version due: April 9, 2025
Program Committee
Track Chair
Hun-Yeong Kwon, Korea University, South Korea, (khy0@korea.ac.kr)
Program Committee Members
Junghee Lee, Korea University, South Korea
Moon-Ho Joo, Korea University, South Korea
Sang-Pil Yoon, Korea University, South Korea
Teryn Cha, Essex County College, USA
Loni Hagen, University of South Florida, USA
Sang-kyun Lee, Korea University, South Korea
Lisa Webley, University of Birmingham, United Kingdom
Beop-Yeon Kim, Korea University, South Korea
Mi-Ryang Kim, Sungkyunkwan University, South Korea
Tai-Won Oh, Kyungil University, South Korea
Seung-Youn Dho, Kwangwoon University, South Korea
Kyung Jin CHA, Hanyang University, South Korea
Ji-Hun Lim, Korea University, South Korea
Further Information