Explainable AI

As the field of Artificial Intelligence (AI) grows, it is becoming harder to explain the reasoning behind the complex mathematical decisions its models make. This opacity is inherent to the latest techniques, such as ensembles and Deep Neural Networks, and was not present in the previous wave of AI (namely, expert systems and rule-based models). As these models are increasingly employed to make important predictions in critical contexts like healthcare, autonomous driving, and finance, the demand for explainability is growing.

Explainable Artificial Intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output produced by machine learning algorithms. XAI can help troubleshoot and improve model performance while helping users understand the behavior of AI models.

Various XAI methods exist today. We are working on post-hoc local explainability. A post-hoc XAI method receives an already-deployed AI model as input and generates useful approximations of the model's inner workings. Local interpretation methods explain a specific prediction and the effect of each feature value on that prediction.
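
To make this concrete, the sketch below builds a minimal post-hoc local explanation in the spirit of LIME (Ribeiro et al., 2016): it perturbs a single instance, queries the black-box model on the perturbed neighbourhood, and fits a proximity-weighted linear surrogate whose coefficients approximate the local effect of each feature value. The dataset, black-box model, and all parameter values are illustrative assumptions chosen for this sketch, not a prescribed setup.

```python
# A minimal sketch of post-hoc local explanation in the spirit of LIME.
# The dataset, black-box model, and parameter values are illustrative
# assumptions, not a prescribed setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black-box" model that we will explain post hoc.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(x, predict_proba, scale, n_samples=5000, seed=0):
    """Return per-feature weights approximating the local effect of each
    feature value on the black-box prediction for the single instance x."""
    rng = np.random.default_rng(seed)
    # Sample a neighbourhood around x with noise scaled to feature spread.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the black box on the perturbed samples.
    preds = predict_proba(Z)[:, 1]
    # Weight each sample by its proximity to x (exponential kernel; the
    # width heuristic matches LIME's default of 0.75 * sqrt(d)).
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    kernel_width = 0.75 * np.sqrt(x.shape[0])
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Fit a weighted linear surrogate on standardized perturbations so
    # that coefficients are comparable across features.
    surrogate = Ridge(alpha=1.0).fit((Z - x) / scale, preds, sample_weight=weights)
    return surrogate.coef_

scale = X.std(axis=0)
local_weights = explain_instance(X[0], black_box.predict_proba, scale)
top = np.argsort(np.abs(local_weights))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {local_weights[i]:+.4f}")
```

Running this prints the five features with the largest local weights for the chosen instance. In practice one would use an established library such as LIME or SHAP, which handle sampling, weighting, and feature selection more carefully than this sketch.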