1st Workshop co-located with EKAW-24, Amsterdam, Netherlands
Language models (LMs) have shown promise in numerous knowledge engineering (KE) tasks, such as knowledge extraction, knowledge base construction, and curation. However, their adoption introduces new challenges for evaluation. The assessment of LM-generated results remains limited: it lacks a comprehensive, formally defined framework and relies heavily on human effort, making it difficult to compare methods and reproduce experiments.
The ELMKE workshop addresses this critical gap by spearheading a community-driven effort to standardize evaluation. It seeks to unite expertise, perspectives, and pioneering work to advance novel paradigms for evaluating LMs in KE. The workshop will showcase innovative papers on evaluation methods for diverse KE tasks, including completion and generative tasks. Discussions will also explore challenges related to transparency, human evaluation, and broader reflections on the implications of evaluation methods. By establishing plans for platforms and dashboards for collaborative work, participants and the wider community can contribute to the design and implementation of robust evaluation methods and benchmarks, fostering targeted discussions and long-term collaboration.
Papers Submission Deadline: September 27, 2024
Papers Notification: October 15, 2024
Early Bird Registration: October 17, 2024
Papers Camera Ready: November 20, 2024
Conference Days: November 26-28, 2024
All submission deadlines are 23:59:59 AoE (Anywhere on Earth).
Valentina Tamma is an associate professor at the University of Liverpool, where she leads the Knowledge Engineering group. Her research lies in ontology and knowledge graph engineering, especially in open and distributed environments. She is interested in Artificial Intelligence methods for investigating mechanisms for ontology engineering and dynamic knowledge evolution and adaptation. In this context she investigates ontology design, ontology management, semantic integration, ontology evolution, and knowledge acquisition, and she has authored numerous publications on ontologies and knowledge sharing. She is co-chair of the Knowledge Graphs Interest Group at the Alan Turing Institute, which aims to facilitate research and innovation on Knowledge Graphs in the UK and beyond, and she serves as Area Editor for the newly established Transactions on Graph Data and Knowledge journal.
In this talk, Valentina will argue that LLMs can play an important role in ontology engineering, and can be effective enough to enable novel, more flexible approaches in which LLMs support ontology engineers and domain experts alike in reaching agreement on how they want to model their knowledge. In particular, she will discuss how LLMs can help reduce the effort involved in acquiring requirements and formulating competency questions, either to kickstart ontology creation or to validate or refactor an ontology.
Welcome & Workshop Opening Session
13:40 - 14:05
[Presentation] Large Language Model for Ontology Learning In Drinking Water Distribution Network Domain
Yiwen Huang, Erkan Karabulut and Victoria Degeler
14:05 - 15:00
[Keynote] Ontology Engineering Revisited: The Rise of The LLMs
Valentina Tamma, University of Liverpool
15:00 - 15:30
Coffee Break
15:30 - 15:55
[Presentation] Investigating Vividness Bias In Language Models Through Art Interpretations
Laura Samela, Enrico Daga and Paul Mulholland
15:55 - 16:20
[Presentation] LLMs4Life: Large Language Models for Ontology Learning in Life Sciences
Nadeen Fathallah, Steffen Staab and Alsayed Algergawy
16:20 - 16:45
[Presentation] From Text to Knowledge: Leveraging LLMs and RAG for Relationship Extraction in Ontologies and Thesauri
Antonios Georgakopoulos, Jacco van Ossenbruggen and Lise Stork
16:45 - 17:00
Discussion & Feedback Session
Jiaoyan Chen, University of Manchester
Hang Dong, University of Exeter
Jacopo de Berardinis, University of Manchester
Nitisha Jain, King's College London
Ioannis Reklos, King's College London
Paola Espinoza Arias, BASF
Xue Li, University of Amsterdam
Laura Menotti, University of Padua
Daniil Dobriy, Vienna University of Economics and Business
Nicolas Lazzari, University of Bologna
Samah Alkhuzaey, University of Liverpool
George Hannah, University of Liverpool
Novel evaluation approaches for LMs in generative KE tasks, including (but not limited to) ontology description and summarization generation
Novel evaluation approaches for LMs in completion tasks in KE, including (but not limited to) subsumption inference (an illustrative scoring sketch follows this list)
Challenges and limitations of existing evaluation methods for LMs in KE, through reviewing, revisiting, and comparing different methods
Examination of human factors in the evaluation of LMs in KE tasks, novel human-centered evaluation principles, strategies, paradigms, and interfaces
Exploration of hybrid evaluation methods that consider both quantitative and qualitative results
Datasets and benchmarks for LM evaluation in KE tasks, including novel methods for dataset and benchmark creation
Methods and metrics for evaluating the trustworthiness, interpretability, and explainability of LM-generated results
Methods for detecting and mitigating bias
Techniques for detecting and evaluating hallucinations and inconsistencies in the output of language models, with a focus on enhancing experiment replicability
Examination of data leakage and its impact on the evaluation of language models
Criteria for identifying effective benchmarks, evaluations, and metrics
Tasks and scenarios within knowledge engineering where LMs can potentially contribute, along with the associated evaluation methods
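As a concrete illustration of the completion-task topic above, the following is a minimal, self-contained Python sketch of one way LM-predicted subsumption axioms could be scored against a gold standard. It is an assumption-laden example, not a workshop-prescribed benchmark: the function name, the set-based precision/recall/F1 metrics, and the toy water-distribution axioms are all illustrative.

# Illustrative sketch: score LM-predicted subsumption axioms against a gold
# standard using set-based precision, recall, and F1. All names and toy data
# here are assumptions for illustration, not a prescribed benchmark.
def score_subsumptions(predicted, gold):
    """Compare predicted (subclass, superclass) pairs against gold pairs."""
    predicted, gold = set(predicted), set(gold)
    true_positives = predicted & gold
    precision = len(true_positives) / len(predicted) if predicted else 0.0
    recall = len(true_positives) / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy axioms an LM might propose for a water-distribution ontology.
gold = [("Valve", "NetworkComponent"), ("Pipe", "NetworkComponent"),
        ("Pump", "NetworkComponent")]
predicted = [("Valve", "NetworkComponent"), ("Pipe", "NetworkComponent"),
             ("Sensor", "Pipe")]  # one spurious axiom, one missed
print(score_subsumptions(predicted, gold))  # precision, recall, F1 each 2/3

Set-based matching is deliberately strict; submissions to the workshop might instead explore softer criteria, such as counting a predicted axiom as correct when it is entailed by the gold ontology.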
Accepted papers will be published in the CEUR-WS.org proceedings!
Submission format and guidelines:
All submissions must be formatted using the template for submissions to the CEUR Workshop Proceedings (single-column format). An Overleaf template for LaTeX users is available here.
All papers have to be submitted electronically via EasyChair.
We welcome submissions in two formats:
Full papers: 10 - 15 pages, including references
Short papers: 5 - 9 pages, including references
Submissions to workshops must be original. Papers that have been previously published or are under review for another journal, conference, or workshop will not be considered for publication.
Further, at least one author of each accepted workshop paper has to register for the conference. Please note that workshop attendance is only granted to registered conference participants.