29-30 July 2024
ELLIS Robust LLMs Workshop
Oxford, UK
The ELLIS Workshop on Robustness in Large Language Models (RobustLLMs) is a two-day event hosted by the Oxford Department of Statistics and held at Keble College. The workshop will feature keynotes, invited talks, discussions, and poster sessions focused on the role of robustness in improving factuality and reasoning, defending against adversarial inputs, enhancing reliability for real-world applications, and dealing with hallucinations in LLMs.
Funding to cover travel and attendance costs is available for Europe-based students and early-career researchers. Application deadline: 31st May.
Confirmed Speakers
Alexander (Sasha) Rush (Cornell Tech, HuggingFace)
Frank Hutter (University of Freiburg)
Iryna Gurevych (TU Darmstadt)
Jonas Geiping (MPI Tübingen)
Mrinmaya Sachan (ETH Zurich)
Sharon Li (UW Madison)
Subbarao Kambhampati (Arizona State)
Yarin Gal (University of Oxford)
[more to come!]
Call for Posters
We encourage you to share your research and contribute to a vibrant exchange of ideas by presenting a poster at the workshop.
Submit your contribution via OpenReview by 31st May.
Please submit a title, an abstract, and a PDF of your contribution (either a poster or a paper). Accepted submissions will have their titles and abstracts published on the workshop website. We welcome submissions of already published work, work in progress, work under submission, and late-breaking results.
Topics
Implications of Robustness on Safety, Hallucinations, Factuality and Reasoning:
Investigating robustness against adversarial inputs, such as prompt injections, specifically tailored to deceive LLMs.
Addressing robustness in the context of distributional shifts and their impact on LLM performance.
The role of robustness in improving the factuality and reasoning of LLMs, and in detecting and mitigating hallucinations.
Enhancing LLM Reliability for Real-world Applications:
Techniques for uncertainty quantification in LLMs to improve decision-making reliability.
The role of diverse and challenging datasets in assessing and enhancing the reliability of LLMs.
Verification of LLM properties to ensure reliability and trustworthiness in real-world applications.
Societal, Ethical, and Legal Considerations:
Legal aspects concerning the robustness of LLMs, including liabilities related to misinterpretations and erroneous outputs.
Strategies for ensuring that LLMs are developed with fairness and without bias, promoting ethical AI practices.
Examining the robustness requirements of LLMs in high-stakes sectors such as law, healthcare, and content moderation.
Innovation and Future Directions:
Novel approaches to improving the robustness of LLMs through architectural innovations, training methodologies, and data augmentation techniques.
The potential of hybrid models that combine the strengths of LLMs with other AI techniques for enhanced robustness.
Anticipating future challenges and opportunities in the evolving landscape of LLM robustness and reliability.
Contact
If you have any questions, please contact ellis DOT robustmlworkshop AT gmail DOT com