Motivated by the problems faced by under-served communities and under-resourced settings, we are working to find new ways to define and quantify fairness in machine learning and resource allocation. We are also building intelligent decision-making and resource allocation systems that effectively trade off fairness, efficiency, and transparency while remaining robust to uncertainty and noise in the data.
How can we balance the inherent trade-offs between fairness, efficiency, and transparency in a way that reflects the priorities of stakeholders such as policy-makers?
How can we build intelligent systems that are fair and interpretable so that they can be deployed in socially sensitive contexts? How can we measure and control biases in automated decision-making systems?
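As one concrete illustration of measuring bias, a simple fairness metric is demographic parity: the difference in positive-outcome rates between groups. The sketch below is a minimal, hypothetical example (the function name and data are illustrative, not from any specific system of ours):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary decisions (0/1); group: binary group membership (0/1).
    A value of 0 means both groups receive positive outcomes at equal rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Illustrative data: group 0 receives 3/4 positive decisions, group 1 only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Metrics like this make bias quantifiable, which is a prerequisite for controlling it, though no single metric captures every stakeholder's notion of fairness.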