Trusted AI Toolkits
Authors: Vijay Arya, Karthikeyan Natesan Ramamurthy, and Prasanna Sattigeri
AI models exceed human performance in a wide variety of application domains today. However, in the absence of complementary algorithms that help bolster human trust in these models, they seldom succeed in practical deployments. This work will showcase three open source AI toolkits that address different dimensions of user trust: AI Fairness 360 (AIF360), AI Explainability 360 (AIX360), and Uncertainty Quantification 360 (UQ360).
The AIX360 toolkit supports the interpretability of datasets and machine learning models and includes ten diverse state-of-the-art explainability methods, two evaluation metrics, and an extensible software architecture that organises these methods according to their use in the AI modelling pipeline. The AIF360 toolkit helps users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. The UQ360 toolkit gives data science practitioners and developers access to state-of-the-art algorithms that streamline the process of estimating, evaluating, improving, and communicating the uncertainty of machine learning models, promoting these steps as common practices for AI transparency.
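To make the workflow concrete, the following is a minimal sketch of how a developer might use AIF360 to measure and then mitigate bias in a dataset; the toy DataFrame, column names, and privileged/unprivileged group definitions are invented for illustration, and in practice one would start from a real dataset such as those bundled with the toolkit.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical toy data: 'sex' is the protected attribute, 'label' the binary outcome.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5],
    'label': [1, 1, 0, 1, 1, 0, 0, 0],
})

# Wrap the DataFrame in AIF360's structured dataset abstraction.
dataset = BinaryLabelDataset(
    df=df,
    label_names=['label'],
    protected_attribute_names=['sex'],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Quantify bias before mitigation.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('Statistical parity difference:', metric.statistical_parity_difference())

# Pre-processing mitigation: reweigh instances to balance favorable outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

# Re-check the metric on the reweighed dataset.
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('After reweighing:', metric_transf.statistical_parity_difference())
```

AIX360's explainers and UQ360's uncertainty estimators follow broadly similar scikit-learn-style Python APIs, which is what allows the three toolkits to be combined within a single model development pipeline.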
Author Bios
Vijay Arya is a senior researcher in IBM Research AI at the IBM India Research Lab, where he works on problems related to Trusted AI. Vijay has 15 years of combined experience in research and software development. His research spans machine learning, energy and smart grids, network measurement and modelling, wireless networks, algorithms, and optimization. His work has received outstanding technical achievement awards at IBM and has been deployed by power utilities in the USA. Before joining IBM, Vijay worked as a researcher at National ICT Australia (NICTA). He received his PhD in Computer Science from INRIA, France, and a Master's degree from the Indian Institute of Technology (IIT) Delhi. He has served on the program committees of IEEE, ACM, and IFIP conferences, is a senior member of the IEEE and ACM, and has more than 60 conference and journal publications and patents.
Karthikeyan Natesan Ramamurthy is a research staff member in IBM Research AI at the Thomas J. Watson Research Center, Yorktown Heights, NY. He is intrigued by the interplay between humans, machines, and data, and by the societal implications of machine learning. He has been involved in the development of the open source trustworthy machine learning toolkits AI Fairness 360 and Uncertainty Quantification 360. He holds a PhD in electrical engineering from Arizona State University.
Prasanna Sattigeri is a Research Staff Member at IBM Research. His research interests include Bayesian inference, deep generative modelling, uncertainty quantification, and related subareas of machine learning and AI. His current work focuses on developing theory and practical systems for machine learning applications that demand constraints such as reliability, fairness, and interpretability. He completed his PhD in Electrical Engineering at Arizona State University in 2014; his dissertation research was on learning latent structure in data using unsupervised methods. He obtained his Bachelor's (B.Tech.) degree in Electronics and Communication Engineering from the National Institute of Technology, India, in 2008.