Title: Scaling Knowledge Graph Inference and Ontology Integration
Keywords: Knowledge graphs, Graph neural networks, scalability
Abstract: Graph ML models (such as graph neural networks) have been widely applied to graph analysis with promising results. Among the fundamental tasks are (a) Ontology Matching (OM), which aims to find similar concepts in two given ontologies, and (b) Graph Inference, which invokes the inference process of a trained graph ML model to obtain the output for test graphs. Both tasks remain time-consuming for large-scale graph analysis. In this talk, I will introduce our recent work on tackling these two related challenges. (1) We present a graph compression method that generates small graphs from a large graph; these can be queried directly to obtain the output of graph neural networks with little or no decompression overhead. The method uses an inference equivalence relation to identify and cluster "inference-indistinguishable" nodes, thereby reducing the graph size without affecting the inference results when queried directly. We demonstrate that this technique enables us to scale graph inference to billion-scale graphs. (2) We introduce Kroma, an LLM-enhanced OM framework that incrementally maintains a concept graph (a provably smallest, unique hierarchical graph representation) from a stream of text corpora. We conclude by highlighting the connection between these methods and scientific workflow management.
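The core idea of clustering "inference-indistinguishable" nodes can be illustrated with a small sketch. The code below is an illustrative approximation, not the paper's actual algorithm: it groups nodes by a 1-WL-style color refinement (nodes with the same features and indistinguishable neighborhoods, which a message-passing GNN cannot tell apart, share a color) and contracts each class to one supernode. The names `compress_graph`, `adj`, and `feat` are hypothetical.

```python
from collections import defaultdict

def compress_graph(adj, feat, rounds=2):
    """Sketch of inference-equivalence compression: cluster nodes a
    message-passing GNN cannot distinguish (same feature, same multiset
    of neighbor colors), then contract each class to a supernode."""
    # Initial color = node feature.
    color = {v: (feat[v],) for v in adj}
    # Refine: a node's new color combines its old color with the
    # sorted multiset of its neighbors' colors (1-WL refinement).
    for _ in range(rounds):
        color = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
                 for v in adj}
    # Group nodes into equivalence classes by final color.
    classes = defaultdict(list)
    for v, c in color.items():
        classes[c].append(v)
    # Pick one representative per class.
    rep = {v: min(cls) for cls in classes.values() for v in cls}
    # Build the compressed adjacency over representatives
    # (self-loops kept, since merged neighbors still pass messages).
    small = defaultdict(set)
    for v, nbrs in adj.items():
        small.setdefault(rep[v], set())
        for u in nbrs:
            small[rep[v]].add(rep[u])
    return dict(small), rep
```

On a 4-node path with identical features, the two endpoint nodes collapse into one supernode and the two interior nodes into another, halving the graph while preserving what a 2-layer message-passing model would see.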
Related Papers
Inference-Friendly Graph Compression for Graph Neural Networks https://www.researchgate.net/publication/390893144_Inference-friendly_Graph_Compression_for_Graph_Neural_Networks