ELMKE: Evaluation of Language Models in Knowledge Engineering

2nd Workshop co-located with ESWC-25, Portorož, Slovenia

Language models (LMs) show promise for numerous knowledge engineering (KE) tasks, such as knowledge extraction, knowledge base construction, and curation. However, their adoption introduces new challenges for evaluation. The assessment of LM-generated results remains limited: it lacks a comprehensive, formally defined framework and relies heavily on human effort, making it difficult to compare methods and reproduce experiments.

The ELMKE workshop addresses this critical gap by spearheading a community-driven effort to standardize evaluation. It seeks to unite expertise, perspectives, and pioneering work to advance novel paradigms for evaluating LMs in KE. The workshop will showcase innovative and previously published papers on evaluation methods for diverse KE tasks, including completion and generative tasks. Discussions will also explore challenges related to transparency and human evaluation, alongside broader reflections on the implications of evaluation methods. By laying out plans for shared platforms and dashboards, participants and the wider community can contribute to the design and implementation of robust evaluation methods and benchmarks, fostering targeted discussions and long-term collaboration.

Important Dates

All submission deadlines are 23:59:59 AoE (Anywhere on Earth).