Workshop on Language Resources for Responsible AI

Traditionally, Artificial Intelligence (AI) systems have been developed to maximize accuracy on benchmark tasks and datasets. However, when these systems are deployed in the real world, ethical considerations must be taken into account in order to earn users' trust and to ensure that the systems do not harm individuals or society. With the emergence of societal awareness about the need for responsible AI, new regulations and standards are being released, such as the European Union's General Data Protection Regulation (GDPR, enforced since 2018), China's Cyber Security Law and General Principles of the Civil Law (2017), and Canadian national standards for the ethical design and use of Artificial Intelligence (2019). However, the technology in its current state lacks the tools AI developers need to comply with these regulations. There is an urgent need for tools that can help:


    • Researchers - to investigate how ethical considerations should be taken into account when designing AI systems;
    • Companies - to ensure their products meet ethical requirements, to apply ethics-by-design frameworks, and to earn the trust of their clients;
    • End users - to understand and, when necessary, challenge automatic decisions;
    • Policy makers and governments - to audit and scrutinize AI systems for compliance with policies and regulations.


This one-day workshop will provide a forum to present and discuss research on the creation and use of language resources and tools specifically designed to ensure the ethical behavior of AI systems. For more information, see the Call for Papers.


THE WORKSHOP AT LREC-2020 HAS BEEN CANCELLED. IT WILL BE MOVED TO ANOTHER VENUE.