Date: January 2-4, 2021, Bangalore, India

During the last decade, traditional data-driven deep learning (DL) has shown remarkable success in essential natural language processing tasks such as relation extraction. Yet, challenges remain in developing artificial intelligence (AI) methods for real-world cases that require explainability through human-interpretable and traceable outcomes. The scarcity of labeled data for downstream supervised tasks, together with the entangled embeddings produced by self-supervised pre-training objectives, further hinders interpretability and explainability. Additionally, data labeling in many unstructured domains, particularly healthcare and education, is expensive, as it requires a pool of human experts. Consider Education Technology, where AI systems fall along a “capability spectrum” depending on how extensively they exploit resources such as academic content, granular signals of student engagement, academic domain experts, and knowledge bases to identify concepts that help students achieve knowledge mastery. Likewise, assessing human health from online conversations challenges current statistical DL methods, because those discussions evolve and are culture- and context-specific. Hence, there is a need for strategies that merge AI with stratified knowledge to identify concepts that delineate patterns in healthcare conversations and help healthcare professionals make decisions. Such technological innovations are imperative, as they provide consistency and explainability in outcomes. This tutorial discusses the notion of explainability and interpretability through the use of knowledge graphs in (1) Healthcare on the Web and (2) Education Technology. It details knowledge-infused learning algorithms and their contribution to explainability in these two applications, using techniques that can be applied to any other domain that uses knowledge graphs.

For more information, visit https://aiisc.ai/xaikg/

Recent advances in statistical and data-driven deep learning demonstrate significant success in natural language understanding without using prior knowledge, especially in structured and generic domains where data is abundant. On the other hand, in text processing problems that are dynamic and impact society at large, existing data-dependent, state-of-the-art deep learning methods remain vulnerable to veracity considerations and, especially, to high volume that masks small, emergent signals. Statistical natural language processing methods have shown poor performance in capturing: (1) human well-being online, especially during evolving events (e.g., mental health communications on Reddit and Twitter), (2) culture- and context-specific discussions on the web (e.g., humor detection, extremism on social media), (3) social network analysis (e.g., of help-seekers and care-providers) during pandemic or disaster scenarios, and (4) explainable methods of learning that drive technological innovations and inventions for community betterment. In such social hypertext, leveraging the semantic-web concept of knowledge graphs is a promising approach to enhancing deep learning and natural language processing.
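As a minimal, hypothetical sketch of how such a knowledge graph could be represented and consulted (the concepts, relations, and triples below are illustrative assumptions, not a resource used in this tutorial), consider grounding tokens from a social media post in a tiny directed graph of labeled triples:

```python
# Hypothetical sketch of a domain knowledge graph (KG) as labeled triples.
# The concepts and relations are illustrative only, not from an actual KG resource.
import networkx as nx

kg = nx.DiGraph()
triples = [
    ("insomnia", "symptom_of", "depression"),
    ("hopelessness", "symptom_of", "depression"),
    ("depression", "risk_factor_for", "suicidal_ideation"),
    ("social_isolation", "associated_with", "depression"),
]
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

def ground_post(tokens):
    """Link tokens from a social media post to KG concepts and
    return the triples they participate in (a crude form of grounding)."""
    matched = [t for t in tokens if t in kg]
    evidence = []
    for concept in matched:
        for _, tail, data in kg.out_edges(concept, data=True):
            evidence.append((concept, data["relation"], tail))
    return matched, evidence

post = "cant sleep again , insomnia and hopelessness all week".split()
concepts, evidence = ground_post(post)
print(concepts)   # ['insomnia', 'hopelessness']
print(evidence)   # [('insomnia', 'symptom_of', 'depression'), ...]
```

Even this shallow lookup turns free text into traceable triples that a model or a domain expert can inspect, which is the kind of explainable evidence motivated above.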

According to Piagetian human learning theory, the activation of existing schemata guides the apprehension of experience and supports the generation of context-sensitive responses. Activating prior knowledge connects current and past experience, which supports identifying relations, explaining, reducing ambiguity, structuring new knowledge, and applying it to novel material. Further, human learning does not necessarily rely on large numbers of (annotated) examples to proceed. Because prior knowledge is so powerful in human learning, its incorporation at various levels of abstraction in deep learning could benefit outcomes. Examples of the desiderata include compensating for data limitations, improving inductive bias, generating explainable outcomes, and enabling trust. These are particularly useful for data-limited but otherwise complex, evolving problems in domains such as mental healthcare, online social threats, and epidemics/pandemics.

Despite the general agreement that structured prior knowledge should be combined with the tacit knowledge (the inferred outcome of a model) resulting from deep learning, there has been little progress. Recent debates on Neuro-Symbolic AI, the inclusion of innate priors in deep learning, and AI fireside chats have identified knowledge-infused learning as a way to improve explainability, interpretability, and trust in AI systems.

In this tutorial, we take use cases from the aforementioned two social-good applications (Mental Health, Radicalization) and multimodal aspects of social media (e.g., scene understanding from the images, video, and text (hypermedia/hypertext) often found in documentation of critical events) to explore the modern aspect of hypertext, using the semantic web in the form of Knowledge Graphs (KGs). Specifically, the tutorial will provide a detailed walkthrough of knowledge graphs and their utility in developing knowledge-infusion techniques for interpretable and explainable learning over text, video, images, and graphical data on the web, with the following agenda: (1) motivate the novel paradigm of knowledge-infused learning using computational learning and cognitive theories; (2) describe the different forms of knowledge, methods for automatically modeling KGs, and infusion methods in deep/machine learning; (3) discuss application-specific evaluation methods, specifically for explainability and reasoning, using benchmark datasets and knowledge resources that show promise in advancing the capabilities of deep learning; and (4) outline future directions for KGs and robust learning for the Web and society.
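To make the notion of an infusion method concrete, the following is a minimal, hypothetical sketch of one shallow form of knowledge infusion: a KG-derived concept vector is fused with a learned text representation before classification. The module, dimensions, and late-fusion strategy are illustrative assumptions, not the specific algorithms presented in the tutorial.

```python
# Hypothetical sketch of shallow knowledge infusion: a pooled embedding of
# KG concepts grounded in the text is concatenated with a text embedding
# before classification. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class KnowledgeInfusedClassifier(nn.Module):
    def __init__(self, text_dim=768, kg_dim=64, num_classes=2):
        super().__init__()
        self.fuse = nn.Linear(text_dim + kg_dim, 128)
        self.out = nn.Linear(128, num_classes)

    def forward(self, text_emb, kg_emb):
        # text_emb: (batch, text_dim) from any encoder (e.g., a pretrained LM)
        # kg_emb:   (batch, kg_dim) pooled embedding of grounded KG concepts
        fused = torch.relu(self.fuse(torch.cat([text_emb, kg_emb], dim=-1)))
        return self.out(fused)

model = KnowledgeInfusedClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 2])
```

Deeper infusion strategies operate inside the network rather than at the input; the tutorial surveys this spectrum, and even a late-fusion baseline like the one above exposes which KG concepts contributed to a prediction.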