Voyager: A Training Free Approach for Generating Diverse Datasets using LLMs
Avinash Amballa, Yashas Malur Saidutta, Chi-Heng Lin,
Vivek Kulkarni, Srinivas Chappidi
Samsung Research America
Large language models (LLMs) are increasingly used to generate synthetic datasets for evaluating and training downstream models. However, prior work has noted that such generated data lacks diversity. In this paper, we propose Voyager, a novel, principled approach for generating diverse datasets. Our approach is iterative and directly optimizes a mathematical measure of dataset diversity using the machinery of determinantal point processes. Furthermore, our approach is training-free, applicable to closed-source models, and scalable. In addition to providing theoretical justification for our method, we demonstrate through comprehensive experiments that Voyager significantly outperforms popular baseline approaches, yielding a 1.5-3x improvement in diversity.
Algorithm
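The full algorithm is described in the paper; the determinantal point process (DPP) machinery it builds on can be illustrated with a minimal greedy selection sketch. Under a DPP, the diversity of a subset is scored by the log-determinant of the corresponding submatrix of a similarity kernel, and a standard greedy procedure picks items one at a time to maximize that score. All names and details below are illustrative, not the paper's implementation:

```python
import numpy as np

def greedy_dpp_select(embeddings, k):
    """Greedily pick k items whose similarity-kernel submatrix has
    maximal log-determinant, a standard proxy for DPP diversity.
    (Illustrative sketch, not the authors' exact procedure.)"""
    # Normalize so the kernel L is a cosine-similarity Gram matrix.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    L = X @ X.T + 1e-6 * np.eye(len(X))  # jitter for numerical stability
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            # Score the candidate subset by log det of its kernel submatrix.
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
    return selected
```

Because near-duplicate items make the kernel submatrix nearly singular (determinant close to zero), the greedy step naturally avoids redundant selections and favors items pointing in new directions of the embedding space.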
If you find our project useful, please consider citing:
@misc{amballa2025voyagertrainingfreeapproach,
title={VOYAGER: A Training Free Approach for Generating Diverse Datasets using LLMs},
author={Avinash Amballa and Yashas Malur Saidutta and Chi-Heng Lin and Vivek Kulkarni and Srinivas Chappidi},
year={2025},
eprint={2512.12072},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.12072},
}