Papers
Accepted Papers
Beyond Erdős-Rényi: Generalization in Algorithmic Reasoning on Graphs Dobrik Georgiev, Pietro Liò, Jakub Bachurski, Junhua Chen, Tunan Shi (Poster)
Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models Anmol Nayak, Hariprasad Timmapathini (Poster)
Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning? Joy Hsu, Gabriel Poesia, Jiajun Wu, Noah Goodman (Poster)
The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" Lukas Berglund, Meg Tong, Maximilian Kaufmann, Mikita Balesni, Asa Stickland, Tomasz Korbak, Owain Evans (Poster)
A Study on the Calibration of In-context Learning Hanlin Zhang, YiFan Zhang, Yaodong Yu, Eric Xing, Himabindu Lakkaraju, Sham Kakade (Poster)
Can LLM-Generated Misinformation Be Detected? Canyu Chen, Kai Shu (Poster)
Why Does ChatGPT Fall Short in Providing Truthful Answers? Shen Zheng, Jie Huang, Kevin Chang (Poster)
How Many Raters Do You Need? Power Analysis for Foundation Models Christopher M Homan, Shira Wein, Chris Welty, Lora Aroyo (Poster)
Structure-Aware Path Inference for Neural Finite State Transducers Weiting Tan, Chu-Cheng Lin, Jason Eisner (Poster)
Adversarial Attacks and Defenses in Large Language Models: Old and New Threats Leo Schwinn, David Dobre, Stephan Günnemann, Gauthier Gidel (Poster)
On the performance of Multimodal Language Models Utsav Garg, Erhan Bas (Poster)
Transformer-Based Large Language Models Are Not General Learners: A Universal Circuit Perspective Yang Chen, Yitao Liang, Zhouchen Lin (Poster)
How (not) to ensemble LVLMs for VQA Lisa Alazraki, Lluis Castrejon, Mostafa Dehghani, Fantine Huot, Jasper Uijlings, Thomas Mensink (Poster)
A Natural Experiment on LLM Data Contamination in Code Generation Manley Roberts, Himanshu Thakur, Christine Herlihy, Colin White, Samuel Dooley (Poster)
Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination Haoqiang Kang, Xiao-Yang Liu (Poster)
Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation Yuhui Zhang, Brandon McKinzie, Zhe Gan, Vaishaal Shankar, Alexander Toshev (Poster)
A Negative Result on Gradient Matching for Selective Backprop Lukas Balles, Cedric Archambeau, Giovanni Zappella (Poster)
Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months Fady Rezk, Antreas Antoniou, Henry Gouk, Timothy Hospedales (Poster)
Towards Better Understanding of Domain Shift on Linear-Probed Visual Foundation Models Eric Heim (Poster)
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations Aleksandar Petrov, Philip Torr, Adel Bibi (Poster)
Exploring DINO: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery Joseph Alejandro Gallego Mejia, Anna Jungbluth, Laura Martínez-Ferrer, Francisco Dorr, Matthew Allen, Freddie Kalaitzis, Raúl Ramos-Pollán (Poster)
Filter bubbles and affective polarization in user-personalized large language model outputs Tomo Lazovich (Poster)
Interactive Model Correction with Natural Language Yoonho Lee, Michelle Lam, Helena Vasconcelos, Michael Bernstein, Chelsea Finn (Poster)
SentimentPulse: Temporal-Aware Custom Language Models vs. GPT-3.5 for Consumer Sentiment Lixiang Li, Nagender Aneja, Alina Nesen, Bharat Bhargava (Poster)
Self-Evaluation Improves Selective Generation in Large Language Models Jie Ren, Yao Zhao, Tu Vu, Peter Liu, Balaji Lakshminarayanan (Poster)
A Study on Improving Reasoning in Language Models Yuqing Du, Alexander Havrilla, Sainbayar Sukhbaatar, Pieter Abbeel, Roberta Raileanu (Poster)
Zero-shot capabilities of visual language models with prompt engineering for images of animals Andrea Tejeda Ocampo, Eric Orenstein, Kakani Young (Poster)
Do Language Models Know When They're Hallucinating References? Ayush Agrawal, Mirac Suzgun, Lester Mackey, Adam Kalai (Poster)
Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models Adhithya Prakash Saravanan, Rafal Kocielnik, Roy Jiang, Pengrui Han, Anima Anandkumar (Poster)
Compositional Generalization in Vision-Language Models uses the Language Modality only Chenwei Wu, Patrick Haffner, Li Li, Stefano Ermon, Rong Ge (Poster)
Surprising Deviations from Bayesian View in In-Context Learning Madhur Panwar, Kabir Ahuja, Navin Goyal (Poster)
Are large language models good annotators? Jay Mohta, Kenan Ak, Yan Xu, Mingwei Shen (Poster)
Exploring and Improving the Spatial Reasoning Abilities of Large Language Models Manasi Sharma (Poster)
Can Segment Anything Model Improve Semantic Segmentation? Maryam Qamar, Donghoon Kim, Muhammad Salman Ali, Chaoning Zhang, Sung-Ho Bae (Poster)
Segment Anything Model (SAM) Enhances Pseudo-Labels for Weakly Supervised Semantic Segmentation Tianle Chen, Zheda Mai, Ruiwen Li, Wei-Lun Chao (Poster)