The conference venue is the Pullman Hotel in Hanoi, Vietnam.
Given the large body of recent research in this area, we believe that this tutorial is extremely timely. It is especially relevant to the Knowledge Representation and Reasoning (KR) community, particularly regarding the use of large language models (LLMs) for annotation tasks in NLP and knowledge representation. The tutorial covers the following areas, which illustrate its relevance to the community:
(1) Semantic Annotation and Knowledge Representation: Semantic annotation plays a crucial role in KR tasks, providing the labeled data on which tasks such as knowledge graph construction rely.
(2) Evaluation Methods for LLM-Generated Annotations: Evaluating these annotations is crucial for assessing their suitability and for understanding their strengths and limitations.
(3) Auto-Labeling Tools for Annotation Tasks: By exploring how LLMs can automate the annotation process, KR conference attendees can gain insights into emerging tools and methodologies for enhancing KR efforts.
(4) Limitations of LLMs: Understanding the limitations of LLMs is essential for KR practitioners considering integrating them into knowledge representation and reasoning systems.
We assume the following background:
(1) Language models: exposure to the basics of Transformers [1] and of language models such as BERT [2] and GPT-2.
(2) Basic knowledge of recent trends in large language models.
(3) Basic understanding of NLP and KR tasks and benchmarks.
However, even if attendees are not familiar with these concepts, we will provide a brief overview during the tutorial before delving into the main content.
[1] Vaswani et al., Attention Is All You Need, NeurIPS 2017.
[2] Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019.
[3] Brown et al., Language Models are Few-Shot Learners, NeurIPS 2020.
[4] Chung et al., Scaling Instruction-Finetuned Language Models, JMLR 2024.
[5] Li et al., CAMEL: Communicative Agents for “Mind” Exploration of Large Language Model Society, NeurIPS 2023.
[6] Su et al., Selective Annotation Makes Language Models Better Few-Shot Learners, ICLR 2023.
[7] Li and Qiu, MoT: Memory-of-Thought Enables ChatGPT to Self-Improve, EMNLP 2023.
[8] Santurkar et al., Whose Opinions Do Language Models Reflect?, ICML 2023.
[9] Zheng et al., Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, NeurIPS 2023.
University of Bonn, Germany
TU Berlin, Germany
Microsoft, India
University of Bonn, Germany