1st International Workshop on Fortifying Networks with Trustworthy AI (F-NetAI)
in conjunction with IEEE PerCom 2025, March 17 – 21, 2025, Washington DC, USA (Walter E. Washington Convention Center)
Accepted papers will be published in IEEE Xplore.
The deadline for paper submission has been extended to December 1, 2024.
Important Dates:
· Paper submission deadline: December 1, 2024
· Acceptance notification: January 8, 2025
· Camera-ready deadline: February 2, 2025
Paper Submission and Review Process:
· Originality: Submissions must be original and not currently under consideration elsewhere.
· Formatting: Papers must be submitted in electronic PDF format, adhering to the IEEE LaTeX or Microsoft Word templates.
· Length: 6 pages of technical content (10pt font, 2-column format), including figures, tables, and references.
· Review Process: All submissions undergo a rigorous double-blind review; authors must anonymize their papers to ensure fairness.
Submission link (EasyChair): https://easychair.org/my/conference?conf=percom2025
Topics of interest: The workshop will address the growing need for Trustworthy Artificial Intelligence (AI) in Network Communication, Computing, Applications, and Security of Cloud, Fog, Edge, and Distributed Computing environments. As AI plays an increasingly central role in optimizing network performance and detecting threats, concerns about its trustworthiness come to the forefront.
The workshop will explore a range of technical issues related to trustworthy AI in networks, including (but not limited to):
Explainability and Interpretability of AI models: How can we understand the decision-making process of AI models, particularly those based on deep learning and foundation models, used for network traffic analysis, intrusion detection, and other security functions? How can we make these models more transparent to foster trust and facilitate debugging within the network communication and computing environment?
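By way of illustration, the sketch below computes a simple gradient-based saliency attribution for a hypothetical flow-level intrusion classifier; the model architecture, feature count, and class layout are assumptions made for this example, not a reference implementation:

```python
# Minimal sketch: gradient saliency for a hypothetical flow classifier.
# Model, feature count, and class indices are illustrative assumptions.
import torch
import torch.nn as nn

# Toy detector over 10 normalized flow features (duration, bytes, ...)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def saliency(model, flow):
    """Return |d(malicious logit)/d(feature)| as a rough attribution."""
    x = flow.clone().requires_grad_(True)
    model(x)[1].backward()        # gradient of the "malicious" logit
    return x.grad.abs()

flow = torch.randn(10)            # stand-in for one flow record
print(saliency(model, flow))      # larger values = more influential features
```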
Bias and Fairness in AI-powered Network Management: As AI models become more complex, including Generative AI models, how do we ensure they are fair and unbiased in network management tasks such as resource allocation or anomaly detection? Can these models discriminate against specific types of users or traffic, potentially impacting network security?
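As one concrete starting point, the sketch below checks a demographic-parity-style metric, comparing how often a hypothetical anomaly detector flags traffic across two illustrative traffic classes; the data and group labels are invented for the example:

```python
# Minimal sketch: demographic-parity check for a hypothetical anomaly
# detector. Groups ("res"/"ent") and flag decisions are illustrative.
import numpy as np

flags = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = flow flagged anomalous
group = np.array(["res", "ent"] * 4)          # traffic class per flow

for g in np.unique(group):
    rate = flags[group == g].mean()
    print(f"{g}: flagged at rate {rate:.2f}")  # large gaps hint at bias
```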
Data Security and Privacy in AI-driven Network Computing: How can we ensure the security of sensitive data used to train and operate network AI models, especially when considering the potential for data poisoning attacks? How can we protect user privacy while leveraging AI for network functionalities, particularly when utilizing zero/one-shot learning techniques that may require less data?
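To make the privacy side concrete, the sketch below shows a DP-SGD-style update (per-example gradient clipping plus Gaussian noise) on a toy flow classifier; the noise scale is illustrative and not calibrated to a formal (epsilon, delta) privacy guarantee:

```python
# Minimal sketch of DP-SGD-style training: clip each example's gradient
# and add Gaussian noise before updating, so no single record dominates
# what the model learns. All names and constants are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)                  # toy flow classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
CLIP, SIGMA = 1.0, 0.5

x = torch.randn(32, 10)                   # stand-in training flows
y = torch.randint(0, 2, (32,))

summed = [torch.zeros_like(p) for p in model.parameters()]
for i in range(len(x)):                   # per-example gradients
    opt.zero_grad()
    F.cross_entropy(model(x[i:i+1]), y[i:i+1]).backward()
    norm = torch.sqrt(sum((p.grad**2).sum() for p in model.parameters()))
    scale = min(1.0, (CLIP / (norm + 1e-12)).item())
    for s, p in zip(summed, model.parameters()):
        s += scale * p.grad               # clipped contribution

opt.zero_grad()
for s, p in zip(summed, model.parameters()):
    noise = SIGMA * CLIP * torch.randn_like(s)
    p.grad = (s + noise) / len(x)         # noisy averaged gradient
opt.step()
```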
Robustness and Vulnerability of AI Models in Network Computing: How can we make AI models deployed for network computing tasks and applications resilient to adversarial attacks, including those specifically crafted to exploit vulnerabilities in Generative AI models? How can we detect and mitigate potential vulnerabilities that could disrupt network operations or compromise data security?
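For reference, the sketch below implements the canonical one-step FGSM evasion attack against a hypothetical flow classifier; the model, feature count, and epsilon budget are illustrative assumptions:

```python
# Minimal sketch of a one-step FGSM perturbation against a hypothetical
# flow classifier, the canonical example of an adversarial evasion attack.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm(model, x, label, eps=0.05):
    """Nudge features in the gradient-sign direction to flip the verdict."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x).unsqueeze(0), torch.tensor([label])).backward()
    return (x + eps * x.grad.sign()).detach()

flow = torch.randn(10)                            # one benign flow record
adv = fgsm(model, flow, label=0)
print(model(flow).argmax(), model(adv).argmax())  # verdicts may now disagree
```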
Generative AI and Foundation Models for Security and Networking: Generative AI models capable of creating realistic data can be misused for malicious purposes. How can we secure these models to prevent them from being used to generate adversarial examples or other forms of network attacks? Can foundation models, which act as the backbone for various AI applications, be secured to prevent exploitation across different functionalities? How can techniques like adversarial training and explainability methods be leveraged to improve the robustness and transparency of Generative AI models used in network security applications?
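One widely used defense named above is adversarial training; the sketch below augments each training batch with FGSM-perturbed copies of the inputs so the model learns to resist small crafted perturbations (data, model, and epsilon are illustrative assumptions):

```python
# Minimal sketch of adversarial training: each batch is augmented with
# FGSM-perturbed copies so the classifier learns to resist small, crafted
# perturbations. Data, model, and epsilon are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
EPS = 0.05

for step in range(100):
    x = torch.randn(64, 10)               # stand-in batch of flow records
    y = torch.randint(0, 2, (64,))
    # Craft FGSM perturbations against the current model
    x_req = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + EPS * x_req.grad.sign()).detach()
    # Train on the clean and adversarial views together
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
```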
Human-in-the-Loop AI for Network Management with Zero/One-Shot Learning: How can we best integrate human expertise with AI algorithms to create a collaborative environment for network management and security, especially when utilizing zero/one-shot learning techniques that may necessitate human guidance? How can human oversight address potential biases or vulnerabilities in AI models?
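A simple human-in-the-loop pattern consistent with this topic is a confidence gate that escalates uncertain verdicts to an analyst rather than acting automatically; the threshold, queue, and model in the sketch below are illustrative assumptions:

```python
# Minimal sketch of a human-in-the-loop gate: low-confidence verdicts are
# escalated to an analyst queue instead of triggering automated action.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
THRESHOLD = 0.9
analyst_queue = []                        # stands in for a review workflow

def triage(flow):
    probs = F.softmax(model(flow), dim=-1)
    conf, label = probs.max(dim=-1)
    if conf.item() < THRESHOLD:
        analyst_queue.append(flow)        # defer uncertain cases to a human
        return "escalated"
    return "malicious" if label.item() == 1 else "benign"

print(triage(torch.randn(10)))
```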
Case Studies and Real-World Applications: Showcasing successful case studies of trustworthy AI implementations in pervasive network computing and security operations, and fostering discussion of the practical challenges and best practices for deploying trustworthy AI solutions in real-world network environments.
Contact:
If you have any questions, please email us at: ieeefnetai@gmail.com