Call for Papers
Submission Details:
In this workshop, we aim to stimulate the exchange of novel ideas and interdisciplinary perspectives. To this end, we accept two types of submissions:
Full papers, presenting novel and original work (max. 14 pages, excluding references)
Abstracts of already published work or interactive contributions, such as demos (max. 2 pages, excluding references)
Instructions:
Papers can be submitted through this link: https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Track/25/Submission/Create
All papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files, and the copyright form can be downloaded here.
Up to 10 MB of additional material (e.g., proofs, audio, images, video, data, or source code) can be uploaded with your submission. The reviewers and the program committee reserve the right to judge the paper solely on the basis of the main paper; consulting any additional material is at the discretion of the reviewers and is not required.
All papers must be anonymized on a best-effort basis. We strongly encourage making code and data available anonymously (e.g., in an anonymous GitHub repository via Anonymous GitHub, or in a Dropbox folder). The authors may have a (non-anonymous) pre-print published online, but it should not be cited in the submitted paper, in order to preserve anonymity. Reviewers will be asked not to search for it.
Authors of full papers can opt to have their papers included in the ECML PKDD post-workshop proceedings, published in the CCIS series.
At least one author of each accepted paper is required to attend the workshop and present. Depending on the local venue's capabilities, we plan to have regular talks as well as poster presentations for accepted papers, to foster further discussion.
Topics of Interest:
We invite contributions that deal with bias and fairness in various types of learning tasks (including but not limited to supervised learning, unsupervised learning, reinforcement learning, ranking, and generative models) and ML systems, using any type of data (tabular, text, images, videos, speech, multimodal, etc.) and any learning setup (batch, non-i.i.d., federated, etc.). We especially welcome interdisciplinary work bridging Computer Science with fields such as Human-Computer Interaction, Law, and the Social Sciences.
Contributions may concern the fairness auditing/assessment of ML systems, covering topics such as:
Auditing practices and tools
Best practices and legal frameworks around audits
Explainable AI (XAI) for understanding/auditing biases
Visual analytics for understanding/auditing biases
Society’s perception of algorithmic fairness
Case studies
Privacy-aware fairness audits
Other contributions may deal with the design of fairer algorithms:
Fairness-aware data collection
Fairness-aware data processing
Human-in-the-loop approaches for fairness
Fairness-aware learning in multimodal and multi-attribute data
Case studies on fairness-aware learning
Important Dates:
Paper Submission Deadline: 12.06.2023
Paper Acceptance Notification: 12.07.2023
Workshop Date: 22.09.2023