1st Workshop on Reliable Evaluation of LLMs for Factual Information (REAL-Info)

June 3, 2024
Collocated with ICWSM 2024

LLMs can influence the information tasks of millions of users, from personal content creation to education, financial advice, and mental health support. However, there is growing concern about LLMs' ability to identify and generate factual information, and there is currently no standardized approach for evaluating the factuality of LLMs. This half-day workshop will enable a broad and diverse conversation about assessing the factuality of content generated by LLMs and their overall performance on factuality-related tasks, e.g., fact-checking, misinformation detection, rumor detection, and stance detection. The objective is to encourage the development of new evaluation approaches, metrics, and benchmarks that can better gauge LLMs' performance in terms of factuality. The workshop will also explore human-centric approaches for mitigating and correcting inaccuracies to enhance LLMs' factuality in critical applications. It will facilitate interactions among academic and industry researchers in computational social sciences, natural language processing, human-computer interaction, data science, and social computing.


Organizing Committee

If you have any questions, please reach out to the organizing committee.

Sarah Preum

Dartmouth College

Björn Ross

University of Edinburgh 

Syed Ishtiaque Ahmed

University of Toronto 

Daphne Ippolito

Carnegie Mellon University