Evaluating the Utilities of Large Language Models in Single-cell Data Analysis

scEval presents a systematic evaluation of the effects of hyper-parameters, initial settings, and stability on training single-cell Large Language Models (LLMs), and provides guidelines for pre-training and fine-tuning. Our work summarizes the current state of single-cell LLMs and points to their limitations and avenues for future investigation.

Links

More information about the project can be found at these links:

scEval Platform - a Python module that implements the metrics and wraps the methods used in the evaluation.
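
The platform's exact interface is not reproduced here, but as a flavor of the kind of metric such an evaluation computes, here is a minimal, self-contained sketch that scores predicted cell clusters against annotated cell types using scikit-learn directly; the toy labels are illustrative, and this is not the scEval API itself.

```python
# Sketch of clustering-agreement metrics of the kind used when evaluating
# single-cell embeddings. Uses scikit-learn directly; NOT the scEval API.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Ground-truth cell-type annotations and cluster assignments predicted from
# a model's cell embeddings (toy data for illustration only).
true_labels = ["B", "B", "T", "T", "NK", "NK"]
pred_clusters = [0, 0, 1, 1, 1, 2]

# Both scores reach 1.0 for a perfect match and fall toward 0 for random
# assignments, making them convenient for comparing embeddings across models.
print(f"ARI: {adjusted_rand_score(true_labels, pred_clusters):.3f}")
print(f"NMI: {normalized_mutual_info_score(true_labels, pred_clusters):.3f}")
```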

Acknowledgements

We appreciate the suggestions from Yingxin Lin, Haotian Cui, Christina Theodoris, Rex Ying and Minsheng Hao.

Citation

If any part of the project is useful for your work, please cite:

Liu, T., Li, K., Wang, Y., Li, H. and Zhao, H. (2023). Evaluating the Utilities of Large Language Models in Single-cell Data Analysis. Preprint. DOI: https://doi.org/10.1101/2023.09.08.555192
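
For convenience, the same reference in BibTeX form (the entry key is arbitrary; first names are taken from the author list below):

```bibtex
@article{liu2023evaluating,
  title  = {Evaluating the Utilities of Large Language Models in Single-cell Data Analysis},
  author = {Liu, Tianyu and Li, Kexing and Wang, Yuge and Li, Hongyu and Zhao, Hongyu},
  year   = {2023},
  note   = {Preprint},
  doi    = {10.1101/2023.09.08.555192}
}
```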

© 2023 Tianyu Liu, Kexing Li, Yuge Wang, Hongyu Li, Hongyu Zhao

Page Designed by Hongyu Li