This website provides the supplementary materials for the paper "Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models". It presents the detailed research workflow and experiment results omitted from the paper due to the page limit.
The website is organized as follows:
Home page: The motivation for why a study on risk assessment for LLMs is urgently needed, followed by an illustration and introduction of our research workflow.
Prompt: The prompt template used in the paper, as well as supplementary results on Llama2.
NLP Task: Main results for the NLP-related tasks.
Code Task: Main results for the code-related tasks, as well as supplementary results for RQ4.
Concrete Example: Example responses under different uncertainty estimation methods.
Our Code: The replication package for our paper with detailed documentation.
The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains. However, erroneous generations, such as false predictions, misinformation, and hallucinations made by LLMs, have also raised severe concerns about the trustworthiness of LLMs, especially in safety-, security-, and reliability-sensitive scenarios, potentially hindering real-world adoption. While uncertainty estimation has shown its potential for interpreting the prediction risks of general machine learning (ML) models, little is known about whether and to what extent it can help explore an LLM's capabilities and counteract its undesired behavior. To bridge the gap, in this paper, we initiate an exploratory study on the risk assessment of LLMs from the lens of uncertainty. In particular, we experiment with twelve uncertainty estimation methods and four LLMs on four prominent natural language processing (NLP) tasks to investigate to what extent uncertainty estimation techniques could help characterize the prediction risks of LLMs. Our findings validate the effectiveness of uncertainty estimation for revealing LLMs' uncertain/non-factual predictions. In addition to general NLP tasks, we extensively conduct experiments with four LLMs for code generation on two datasets. We find that uncertainty estimation can potentially uncover buggy programs generated by LLMs. Insights from our study shed light on the future design and development of reliable LLMs, facilitating further research toward enhancing their trustworthiness.
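To give a concrete sense of what an uncertainty estimation method for an LLM can look like, the following is a minimal, illustrative sketch of one common family of measures: token-level predictive entropy averaged over a generation. It is not the paper's exact implementation or one of its twelve methods verbatim; the model name "gpt2" is a placeholder chosen so the snippet runs out of the box, and the prompt is a hypothetical example.

```python
# Illustrative sketch (not the paper's exact implementation): estimating an LLM's
# uncertainty for a single generation via mean token-level predictive entropy.
# "gpt2" is a placeholder model; the paper studies larger LLMs such as Llama2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=16,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,  # per-step logits over the vocabulary
    )

# Mean entropy of the next-token distribution across generated steps:
# higher values suggest the model is less certain about its own output.
entropies = []
for step_logits in out.scores:  # one (1, vocab_size) tensor per generated token
    probs = torch.softmax(step_logits, dim=-1)
    entropies.append(-(probs * torch.log(probs + 1e-12)).sum(dim=-1))
mean_entropy = torch.stack(entropies).mean().item()

answer = tokenizer.decode(out.sequences[0], skip_special_tokens=True)
print(f"generation: {answer!r}")
print(f"mean token entropy (uncertainty proxy): {mean_entropy:.3f}")
```

In practice, such a score would be computed for each model output and then correlated with correctness (e.g., wrong answers or buggy programs) to assess how well the measure flags risky predictions; see Our Code for the full replication package.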
Workflow summary of subject tasks, LLMs and selected uncertainty estimation methods