I Can't Believe It's Not Better:
Challenges in Applied Deep Learning
Workshop at ICLR 2025
Why don’t deep learning approaches always deliver as expected in the real world?
Dive deep into the pitfalls and challenges of applied deep learning.
Key Dates
Paper Submission Deadline: February 3rd, 2025
Review Period: February 4th, 2025 - February 28th, 2025
Paper Acceptance Notification: March 5th, 2025
Camera Ready & Poster Submission: March 19th, 2025
In-Person Workshop: April 27th, 2025 or April 28th, 2025 (TBD)
All deadlines are 11:59 pm AoE (Anywhere on Earth).
In recent years, we have witnessed a remarkable rise of deep learning (DL), whose impressive performance on benchmark tasks has led to increasing ambitions to deploy DL in real-world applications across all fields and disciplines [1, 2, 3, 4, 5]. However, despite its potential, DL still faces many challenges during deployment in dynamic, real-world conditions, exposing practical limitations that are often overlooked in controlled benchmarks.
Current publication mechanisms tend to prioritize solutions that work well on standard benchmarks, leaving no platform for systematically collecting real-world failure cases. Moreover, discussions of these failures are usually confined to specific domains, with limited cross-domain interaction, even though the failures may share underlying causes. Establishing a platform for collecting and sharing real-world challenges and failures of DL can address fundamental issues, facilitate more successful deployment of DL across domains, and deepen understanding of theoretical and empirical weaknesses in machine learning (ML) research.
Building such a platform and fostering this community has been the ongoing goal of our I Can’t Believe It’s Not Better (ICBINB) initiative. As DL systems become increasingly present in everyday life, including for people outside the scientific community, we now want to put a special focus on real-world applications. In this proposed ICBINB workshop, we therefore aim to explore the challenges, unexpected outcomes, and common principles underlying similar issues and failure modes encountered across fields and disciplines when deploying DL models in real-world scenarios. We will focus the discussion on:
Challenges & failure modes: We will invite papers from diverse fields including but not limited to healthcare, scientific discovery, robotics, education, equality & fairness, and social sciences to discuss the challenges and failure modes when deploying DL models for domain-specific applications as well as the underlying reasons. The failure modes may include suboptimal performance, concerns with the safety and reliability of applying DL models in unpredictable real-world applications, as well as ethical and societal challenges.
Common challenges across domains & underlying reasons: We aim to discuss common reasons or patterns in challenges and failure modes across disciplines, which may include, but are not limited to, data-related issues (e.g., distribution shift, bias, label quality), model limitations (e.g., ethics, fairness, interpretability, scalability, domain alignment), and deployment challenges (e.g., computational demands, hardware constraints).
I Can't Believe It's Not Better Initiative
This workshop is part of a series organized by the larger I Can't Believe It's Not Better (ICBINB) initiative. We are a diverse group of researchers promoting the idea that there is more to machine learning research than tables with bold numbers. We believe that understanding in machine learning can come through more routes than iteratively improving upon previous methods, and this workshop therefore focuses on understanding through negative results. Previous workshops have focused on ideas motivated by beauty and on gaps between theory and practice in probabilistic ML. We also run a monthly seminar series that aims to crack open the research process and showcase what goes on behind the curtain. Read more about our activities and our members here.
Accessibility and Contact
ICBINB aims to foster an inclusive and welcoming community. If you have any questions, comments, or concerns, please contact us at: cant.believe.it.is.not.better@gmail.com
While we will have a range of fantastic speakers appearing in person at the workshop, we understand that many people are unable to travel to ICLR at this time. It is our aim to make this workshop accessible to all, so all talks will be viewable remotely.
Subscribe to the ICBINB mailing list here: The ICBINB Initiative - Google Groups
References
[1] Anthony Hu, Lloyd Russell, Hudson Yeo, Zak Murez, George Fedoseev, Alex Kendall, Jamie Shotton, and Gianluca Corrado. Gaia-1: A generative world model for autonomous driving. arXiv preprint arXiv:2309.17080, 2023.
[2] Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, Payal Chandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific discovery in the age of artificial intelligence. Nature, 620(7972):47–60, 2023.
[3] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583–589, 2021.
[4] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. URL https://openai.com/research/video-generation-models-as-world-simulators.
[5] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o.