Failure Modes in the Age of Foundation Models

In 2023, foundation models have disrupted the ML field. What are the limits of these models, and how can we learn from negative results?

Key Dates

Paper Submission Deadline - October 4, 2023 (Anywhere on Earth), extended from October 1, 2023

Review Deadline - October 23, 2023 (Anywhere on Earth)

Paper Acceptance Notification - October 27, 2023 (Anywhere on Earth)

Camera Ready & Poster Submission - November 30, 2023 (Anywhere on Earth)

In-Person Workshop - December 16, 2023

In the past year, tools such as ChatGPT, Stable Diffusion and Segment Anything have had an immediate impact on our everyday lives. Many of these tools have been built using foundation models, that is, very large models (with billions or even trillions of parameters) trained on vast amounts of data (Bommasani et al., 2021). The excitement around these foundation models and their capabilities might suggest that all the interesting problems have been solved and that artificial general intelligence is just around the corner (Wei et al., 2022; Bubeck et al., 2023).

At this year's I Can't Believe It's Not Better workshop we invite papers to coolly reflect on this optimism and to demonstrate that there are in fact many difficult and interesting open questions. The workshop will specifically focus on failure modes of foundation models, especially unexpected negative results. In addition, we invite contributions that help us understand current and future disruptions of machine learning subfields, as well as instances where these powerful methods merely complement, rather than displace, existing approaches in a subfield.


Contributions on the failure modes of foundation models might consider: 


Besides failure modes of foundation models, this workshop also considers their impact on the ML ecosystem and potential problems that remain to be solved by these new systems. In this context, relevant questions include: 

I Can't Believe It's Not Better

This workshop is part of the larger series of I Can't Believe It's Not Better (ICBINB) activities. We are a diverse group of researchers promoting the idea that there is more to machine learning research than tables with bold numbers. We believe that understanding in machine learning can come through more routes than iteratively improving upon previous methods, and as such this workshop focuses on understanding through negative results. Previous workshops have focused on ideas motivated by beauty and on gaps between theory and practice in probabilistic ML. We also run a monthly seminar series that aims to crack open the research process and showcase what goes on behind the curtain. Read more about our activities and our members here.

Accessibility

ICBINB aims to foster an inclusive and welcoming community. If you have any questions, comments, or concerns, please reach out to us.

Whilst we will have a range of fantastic speakers appearing in person at the workshop, we understand that many people are not able to travel to NeurIPS at this time. It is our aim to make this workshop accessible to all: all talks will be viewable remotely.

References

Bommasani, R., et al. (2021). On the Opportunities and Risks of Foundation Models. arXiv:2108.07258.

Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv:2303.12712.

Wei, J., et al. (2022). Emergent Abilities of Large Language Models. Transactions on Machine Learning Research.