Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang
How can we harness the collective capabilities of multiple Large Language Models (LLMs) to create an even more powerful model? This question forms the foundation of our research, where we propose an innovative approach to weak-to-strong (w2s) generalization—a critical problem in AI alignment. Our work introduces an easy-to-hard (e2h) framework for studying the feasibility of w2s generalization, where weak models trained on simpler tasks collaboratively supervise stronger models on more complex tasks. This setup mirrors real-world challenges, where direct human supervision is limited. To achieve this, we develop a novel AdaBoost-inspired ensemble method, demonstrating that an ensemble of weak supervisors can enhance the performance of stronger LLMs across classification and generative tasks on difficult QA datasets. In several cases, our ensemble approach matches the performance of models trained on ground-truth data, establishing a new benchmark for w2s generalization. We observe an improvement of up to 14% over existing baselines and average improvements of 5% and 4% for binary classification and generative tasks, respectively. This research points to a promising direction for enhancing AI through collective supervision, especially in scenarios where labeled data is sparse or insufficient.
This figure illustrates the complete pipeline of our EnsemW2S method for easy-to-hard (e2h) generalization via w2s generalization. In a realistic scenario, weak teachers are adept at answering easy questions but must supervise strong models to tackle hard problems. The leftmost portion shows that we train weak models on easy data, strong models on hard data, and transfer models on pseudo-labels generated by the weak models on hard data. Ultimately, we aim to increase the Performance Gap Recovered (PGR). On the right, we depict how our EnsemW2S algorithm chooses the correct answer at the token level. At the bottom, we provide an example of easy and hard data from the Quartz dataset for e2h generalization, highlighting the importance of distinguishing between easy and hard data for realistic w2s generalization.
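To make the token-level selection concrete, here is a minimal sketch of how weighted voting among weak models could pick the next token. The function name, the dictionary-based token distributions, and the uniform weights are illustrative assumptions; the paper's exact combination rule may differ.

```python
from collections import defaultdict

def ensemble_next_token(token_dists, alphas):
    """Combine next-token distributions from several weak models.

    token_dists: one dict per weak model, mapping token -> probability.
    alphas: per-model vote weights (e.g., AdaBoost-style alphas).
    Returns the token with the highest weighted score.
    """
    scores = defaultdict(float)
    for dist, alpha in zip(token_dists, alphas):
        for tok, p in dist.items():
            scores[tok] += alpha * p
    return max(scores, key=scores.get)

# Two toy weak models disagree on the next token; the weighted vote decides.
m1 = {"Paris": 0.6, "London": 0.4}
m2 = {"London": 0.55, "Paris": 0.45}
print(ensemble_next_token([m1, m2], [1.0, 1.0]))  # "Paris"
```

With equal weights, the token favored by the ensemble's combined probability mass wins, even when the individual models disagree.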
Aggregated results over all weak and strong model pairs for the binary classification task on the SciQ dataset and the supervised fine-tuning task on the Quartz dataset. Our approach (gray) outperforms the baseline (blue) on all datasets.
Comparing the performance of a single weak model (darker shade) with that of the combined weak models (the lighter shade shows the improvement), we observe that smaller models improve more. This is expected, since boosting works best when the weak learners are diverse: with EnsemW2S, smaller models can diversify through the data-sampling step, whereas larger models tend to absorb all the available information and cannot learn something new in each round.
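The diversification via data sampling mentioned above follows the classic AdaBoost recipe: after each round, examples the current weak model got wrong gain sampling weight, so the next weak model focuses on them. Below is a hedged sketch of that textbook update; the function name and interface are hypothetical, and the paper adapts this idea to LLM token predictions rather than plain binary labels.

```python
import math

def boost_round(weights, correct):
    """One AdaBoost-style update of per-example sampling weights.

    weights: current sampling weights over training examples.
    correct: booleans, whether the current weak model got each example right.
    Returns (alpha, new_weights): the model's vote weight and renormalized
    example weights that emphasize mistakes for the next round.
    """
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    err = min(max(err, 1e-10), 1 - 1e-10)        # guard against err = 0 or 1
    alpha = 0.5 * math.log((1 - err) / err)      # model vote weight
    new_w = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new_w)                               # renormalize to a distribution
    return alpha, [w / z for w in new_w]

# The one misclassified example gains weight for the next round.
alpha, w = boost_round([0.25, 0.25, 0.25, 0.25], [True, True, True, False])
print(alpha > 0 and w[3] > w[0])  # True
```

A large model that answers nearly everything correctly drives `err` toward zero, so reweighting barely changes the training distribution between rounds, which matches the observation that bigger models gain less from the ensemble.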
@article{agrawal2024ensemw2s,
title={EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?},
author={Agrawal, Aakriti and Ding, Mucong and Che, Zora and Deng, Chenghao and Satheesh, Anirudh and Langford, John and Huang, Furong},
journal={arXiv preprint arXiv:2410.04571},
year={2024}
}