Increasingly, the public calls for algorithmic accountability via public audits, but full transparency may expose a company to privacy and security breaches, and may also make content moderation and the blocking of adversarial AI harder. How might we introduce transparency while protecting privacy and security?
A large body of work on AI ethics has emerged that focuses on moral values. Similarly, a body of work on transparent documentation, such as factsheets and model cards, is beginning to emerge. The former relates to ‘what ought to be,’ and the latter to ‘what is.’ Bringing ‘what is’ closer to ‘what ought to be’ requires the two literatures to be expressed in the same language, but such a common language seems to be largely missing.
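To make the ‘what is’ side concrete, the sketch below shows one way a minimal model card might be represented programmatically; the structure and field names are illustrative assumptions, not any standard factsheet or model card schema. It records descriptive facts about a model, and notably encodes nothing about the moral values of ‘what ought to be.’

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model-card structure. The field names are
# illustrative assumptions, not a standard schema. It captures 'what is'
# (descriptive facts about a model) but not 'what ought to be' (values).
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="loan-approval-v2",
    intended_use="Ranking loan applications for human review",
    training_data="Historical applications, 2015-2020",
    evaluation_metrics={"accuracy": 0.91, "auc": 0.88},
    known_limitations=["Not evaluated on applicants under 21"],
)
print(card.model_name, card.evaluation_metrics)
```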
There is often a trade-off between privacy, profiling, and the ability to measure performance by sub-group to assess whether there is algorithmic bias. What options are available when sub-group metadata is unavailable or inappropriate to collect? There cannot be fairness through unawareness, but can there be both fairness and unawareness?
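As one illustration of why this trade-off bites, the sketch below computes disaggregated accuracy per sub-group from hypothetical records; if the sub-group field is not collected, this measurement simply cannot be made. All names and numbers are invented.

```python
from collections import defaultdict

# Hypothetical records: (true_label, predicted_label, sub_group).
# Without the sub_group field, the per-group breakdown below,
# and hence this bias check, is impossible.
records = [
    (1, 1, "A"), (0, 0, "A"), (1, 0, "A"), (1, 1, "A"),
    (1, 1, "B"), (0, 1, "B"), (0, 1, "B"), (1, 1, "B"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, group in records:
    correct[group] += int(true_label == predicted)
    total[group] += 1

for group in sorted(total):
    print(f"sub-group {group}: accuracy = {correct[group] / total[group]:.2f}")
# sub-group A: accuracy = 0.75
# sub-group B: accuracy = 0.50
```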
Debates abound when it comes to explainability, and there are several overlapping or competing concepts, such as interpretability, actionable recourse, rationalization, and verification. I will explore tensions between these different concepts, including their varying goals and uses, and why it can be confusing to discuss an area as broad (and contested) as explainability.
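To give a flavor of how these concepts diverge, the toy sketch below uses a hypothetical linear credit scorer: a feature-attribution view ranks which features drove a denial, while an actionable-recourse view asks how little a single feature would need to change to flip the decision. The model, weights, and numbers are invented purely for illustration.

```python
# Toy linear scorer; weights and threshold are invented for illustration.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
threshold = 1.0

applicant = {"income": 2.0, "debt": 1.5, "years_employed": 1.0}

score = sum(weights[f] * applicant[f] for f in weights)
decision = "approve" if score >= threshold else "deny"
print(f"score = {score:.2f} -> {decision}")

# Attribution / interpretability view: which features contributed most?
contributions = {f: weights[f] * applicant[f] for f in weights}
for f, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  contribution of {f}: {c:+.2f}")

# Actionable-recourse view: what is the smallest change to one feature
# that flips the decision? For a linear model this is a direct calculation.
if decision == "deny":
    gap = threshold - score
    for f, w in weights.items():
        delta = gap / w
        print(f"  change {f} by {delta:+.2f} to reach the approval threshold")
```

The two views answer different questions: the attribution ranking explains the score that was produced, whereas the recourse deltas tell the applicant what would have to change, which is part of why discussions of "explainability" so easily talk past one another.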