Privacy-preserving data analysis has become essential in the age of Machine Learning (ML), where access to vast amounts of data can yield gains over carefully tuned algorithms. A large proportion of user-contributed data comes from natural language, e.g., text transcriptions from voice assistants.
It is therefore important to curate NLP datasets while preserving the privacy of the users whose data is collected, and to train ML models that retain only non-identifying user data.
The workshop aims to bring together practitioners and researchers from academia and industry to discuss the challenges and approaches to designing, building, verifying, and testing privacy-preserving systems in the context of Natural Language Processing.
Information about the workshop's topics of interest can be found in the Call for Papers.
- Abstract Deadline: November 30, 2019
- Submission Deadline: December 7, 2019
- Acceptance Notification: December 27, 2019
- Camera-ready versions: January 10, 2020
- Workshop: February 7, 2020