This last module will focus on different audit techniques and socio-technical approaches to algorithmic discrimination. We consider algorithms as socio-technical systems because their creation, implementation and supervision depend on a network of actors and are constrained by cultural and social contexts. In the EU, the approach to Trustworthy AI encompasses a set of expectations about these systems: accountability, explainability, interpretability, transparency and oversight. However, from a socio-technical perspective, these systems face several limitations that may keep them from complying with these principles. We will explore the main concepts and requirements regarding discrimination and bias in AI from this high-level point of view, present diverse toolkits for algorithmic auditing, and discuss the main concepts of so-called Trustworthy AI.
Printed materials:
Ada Lovelace Institute - Expert critique of the AI Act
Nick Seaver - Knowing Algorithms (11 pages)
Open Government Partnership - Executive report on algorithmic accountability in the public sector
MIT Technology Review - Hundreds of AI tools have been built to catch covid. None of them helped