4th Workshop co-located with ISWC 2026, Bari, Italy
Language models (LMs) have shown promise in numerous knowledge engineering (KE) tasks, such as knowledge extraction, knowledge base construction, and curation. However, their adoption introduces new challenges for evaluation. The assessment of LM-generated results remains limited: it lacks a comprehensive, formally defined framework and relies heavily on human effort, making it difficult to compare methods and reproduce experiments.
The ELMKE workshop addresses this critical gap by spearheading a community-driven effort to standardize evaluation. It seeks to unite expertise, perspectives, and pioneering work to advance novel paradigms for evaluating LMs in KE. The workshop will showcase innovative work, including published papers, on evaluation methods for diverse KE tasks, such as completion and generative tasks. Discussions will also explore challenges related to transparency, human evaluation, and broader reflections on the implications of evaluation methods. By establishing plans for platforms and dashboards for collaborative work, participants and the community can contribute to the design and implementation of robust evaluation methods and benchmarks, fostering targeted discussions and long-term collaboration.
Papers Submission Deadline: July 24, 2026
Papers Notification: August 7, 2026
Papers Camera Ready: August 14, 2026
Early Bird Registration: to be announced in 2026
Workshop Day: October 25-26, 2026
All submission deadlines are 23:59:59 AoE (Anywhere on Earth).