KDD 2023 Tutorial On

"AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases"

With the growing adoption of AI, trust and explainability have become critical. This has attracted substantial research attention over the past decade and has led to the development of many popular AI explainability libraries such as AIX360, Alibi, and OmniXAI. Despite this, applying explainability techniques in practice often poses challenges such as inconsistency between explainers, semantically incorrect explanations, and poor scalability. Furthermore, one key modality that remains underexplored, both algorithmically and in practice, is time series. Many application domains involve time series, including Industry 4.0, asset monitoring, supply chain, and finance, to name a few.
The AIX360 library is incubated under the Linux Foundation AI & Data open-source projects and has gained significant popularity: its public GitHub repository has over 1.3K stars, and it has been broadly adopted in academic and applied settings. Motivated by industrial applications, large-scale client projects, and deployments in software products in areas such as IoT, asset management, and supply chain, the AIX360 library has recently been expanded significantly to address the above challenges. AIX360 now supports the time-series modality, introducing time-series explainers such as TS-LIME, TS-Saliency, and TS-ICE. It also introduces improvements for generating model-agnostic, consistent, diverse, and scalable explanations, as well as new algorithms for tabular data.
In this hands-on tutorial, we provide an overview of the library, focusing on the latest additions: the time-series explainers and use cases such as forecasting, time-series anomaly detection, and classification. The hands-on demonstrations are based on industrial use cases selected to illustrate practical challenges and how they are addressed. The audience will learn to evaluate different types of explanations, with a focus on practical aspects motivated by real deployments.
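To give a flavor of the perturbation-based explanations covered in the hands-on segments, the following is a minimal, self-contained sketch of a LIME-style explanation for a time series: contiguous segments are randomly masked, the (toy) model is queried on the perturbed series, and a linear surrogate assigns each segment an importance weight. This is an illustrative sketch of the general technique only, not the AIX360 TS-LIME API; the toy model and all parameter names are assumptions for the example.

```python
import numpy as np

# Toy "black-box" model: scores a univariate series by the mean of its last 10 points
def model(x):
    return x[..., -10:].mean(axis=-1)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6 * np.pi, 100)) + 0.1 * rng.standard_normal(100)

# LIME-style setup: split the series into contiguous segments and sample binary masks
n_segments, n_samples = 10, 500
seg_len = series.size // n_segments
masks = rng.integers(0, 2, size=(n_samples, n_segments))  # 1 = keep segment, 0 = mask it
baseline = series.mean()  # masked segments are replaced by the series mean

perturbed = np.repeat(series[None, :], n_samples, axis=0)
for i in range(n_segments):
    off = masks[:, i] == 0
    perturbed[off, i * seg_len:(i + 1) * seg_len] = baseline

# Query the model on the perturbed samples and fit a linear surrogate by least squares;
# each segment's coefficient approximates its importance to the prediction
y = model(perturbed)
X = np.hstack([masks, np.ones((n_samples, 1))])  # last column is the intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
importance = coef[:n_segments]

# For this toy model, the final segment (the last 10 points) dominates
print(importance.round(3))
```

In AIX360, the time-series explainers expose this kind of workflow behind a common explainer interface; the sketch above only shows the underlying idea of local surrogate fitting over segment perturbations.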

Tutorial Outline

At a high level, we plan to split the tutorial into two sessions. Session I provides an introduction, an overview of the AIX360 library and its latest additions, and motivation from practical deployments in areas such as Industry 4.0 and asset management, followed by a more detailed discussion of the new time-series techniques and other extensions added to AIX360. After a short break, Session II consists of several hands-on segments demonstrating the use of the new explainers in real applications and use cases from the supply chain and Industry 4.0 domains.

Session I: Introduction and overview of new capabilities 

[ 10 - 15 min ] Break

Session II: Hands on applications using explainers from AIX360

Tutors' Short Bios

Amaresh Rajasekharan 

Amaresh Rajasekharan is a data scientist and executive architect with the IBM Sustainability Software group. Over his career of more than 26 years, Amaresh has worked on a wide range of solutions and products across industry verticals such as Electronics, Automotive, Aerospace, and Retail & Distribution. In his current role, he is responsible for infusing AI and machine learning capabilities into IBM's Maximo Application Suite for industry-specific use cases.

Amit Dhurandhar

Dr. Dhurandhar’s research has primarily focused on understanding AI methods through their statistical/output behaviors. He has worked on projects spanning multiple industries such as semiconductor manufacturing, oil and gas, procurement, retail, utilities, airlines, and health care. His current research proposes methods for enhancing trust in AI systems by explaining or understanding their behaviors. His recent work was featured in Forbes and PC Magazine, with corresponding technical contributions in leading venues such as NeurIPS, Science, and Nature Communications. His research has received the AAAI Deployed Application Award and an AAAI HCOMP Best Paper Honorable Mention, and has twice been selected as Best of ICDM. For his work on explainability, he has been invited to Schloss Dagstuhl seminars and has given multiple invited talks, including an industry keynote at ACM CODS-COMAD 2021. He also co-led the creation of the AI Explainability 360 open-source toolkit. Beyond research impact, his work has been incorporated into IBM products, and he has received an Outstanding Technical Achievement Award as well as an IBM Corporate Award. He has been an Area Chair and SPC member for top AI conferences and has served on National Science Foundation (NSF) panels for the Small Business Innovation Research (SBIR) program.

Giridhar Ganapavarapu

Giridhar Ganapavarapu is an Advisory Research Software Engineer at IBM Research, Yorktown Heights, NY, focusing on the design and development of scalable AI lifecycle applications and systems. In the past few years, he has worked in various domains such as IoT, manufacturing, taxes, telecom, and digital marketing for retail. His expertise spans multiple technical areas, including applied machine learning, AI lifecycle management, scalable systems, and software engineering. He received his M.S. in Computer Science from the University of Florida, USA, in 2016 and his B.E. in Electronics & Communications Engineering from Birla Institute of Technology and Science, Pilani, Hyderabad, in 2012.

Kanthi Sarpatwar

Kanthi Sarpatwar is a senior research staff member at the IBM T. J. Watson Research Center. He holds a Ph.D. in Computer Science from the University of Maryland, College Park, and has been with IBM Research for the past 8 years. Dr. Sarpatwar’s research focus has been algorithms and optimization, with a special focus on machine learning applications. He has worked on different pillars of trustworthy AI, such as differential privacy, explainability, and homomorphic encryption. He has published more than 20 papers in top theory and ML conferences such as NeurIPS, SODA, AAMAS, and ESA, and has filed more than 20 patents (including 10 granted) with the USPTO. He has served as a PC member for multiple ML/AI conferences such as NeurIPS, ICML, UAI, and AAAI.

Natalia Martinez Gil

Natalia Martinez received her Ph.D. in Electrical and Computer Engineering from Duke University, Durham, NC, USA, in 2022, advised by Dr. Guillermo Sapiro; her research focused on efficient fairness solutions for machine learning applications. She previously received her B.S. in Electrical Engineering from Universidad de la Republica, Montevideo, Uruguay, in 2015. Her research interests span different aspects of trustworthy AI, and her work has been published in venues such as IEEE, ICML, and NeurIPS. After completing her Ph.D., she joined IBM Research as an AI Research Scientist and has been actively involved in developing explainability capabilities for the AIX360 time-series library.

Roman Vaculin

Roman Vaculin is a principal research scientist and senior manager at IBM Research, where he leads a team of researchers and software engineers focusing on artificial intelligence in domains including asset management, IoT, Industry 4.0, and supply chain. His research interests lie at the intersection of distributed systems, artificial intelligence, blockchain, and advanced analytics. He was a Fulbright scholar at Carnegie Mellon University and a DAAD scholar at Saarland University. He received his PhD in Computer Science from Charles University, Prague. He was named an IBM Master Inventor for his significant contributions to the IBM patent portfolio, has published over 100 research articles and patents, and his work has been recognized with multiple IBM Outstanding Technical Accomplishment awards. Dr. Vaculin has made contributions to several areas, including data-centric business processes, blockchain, social media analytics, and financial services analytics. His work has impacted products such as IBM Maximo, Watson Analytics, Client Insight for Wealth Management, and IBM Case Manager. He has served as a principal investigator on several government projects and has led client projects with major global companies.

Sumanta Mukherjee

Sumanta Mukherjee is a researcher at IBM Research, India. He received his Ph.D. in Applied Mathematics from the Indian Institute of Science, Bangalore, India. His recent work focuses on time-series analysis and explainability, and he is an active contributor to AIX360's time-series explainability. He has presented his research at several conference venues, including KDD, IEEE BigData, and CODS-COMAD.

Vijay Arya

Vijay Arya is a Senior Researcher at IBM Research India and part of the IBM Research AI group, where he works on problems related to trusted AI. Vijay's research spans machine learning, energy, networking, algorithms, and optimization. He is an author of IBM Research's AI Explainability 360 open-source toolkit, and his work applying machine learning algorithms to improve models of power distribution networks has been implemented by US utilities. Before joining IBM, Vijay worked as a researcher at National ICT Australia and received his PhD in Computer Science from INRIA, France. He has served on the program committees of IEEE, ACM, NeurIPS, ICML, ICLR, KDD, AAAI, and IJCAI conferences. He is a senior member of IEEE & ACM and has more than 70 conference and journal publications and patents.

