Investigate natural language processing schemes for private LLM development. Our aim is to explore state-of-the-art natural language processing algorithms using LLMs to achieve effective long-term memory and private systems (LLM/RAG) and to improve performance, as in:
Finetuned LLM for Private LLM/RAG
1. Motivation
- Using third-party LLMs like ChatGPT gives the best performance on open-domain tasks but carries the risk of exposing sensitive client data
- Using an open LLM like LLaMA2 on a privately hosted server avoids this risk but results in lower performance
- Thus, there is a need to develop private LLMs that perform better on the tasks clients request while maintaining the privacy of client data
2. Research Goal and Issues
- Goal: Develop a private LLM/RAG (Retrieval-Augmented Generation) system
Develop finetuning methods using multitask learning to improve the response generation ability of the private LLM
Improve the performance of the LLM by combining it with the ISPL RAG model
- Issues
MoE (Mixture-of-Experts) models suffer from parameter complexity and training instability
Hypernetworks pose scalability challenges
3. Approach
- Develop a private LLM
Collect and process data to build the training dataset
Finetune the LLM using multitask learning (see the first sketch after this list)
Integrate the finetuned LLM with RAG (see the second sketch after this list)
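A minimal sketch of the multitask finetuning step, assuming a HuggingFace Transformers/Datasets stack; the base checkpoint, task names, prompt format, and hyperparameters are illustrative assumptions, not the actual ISPL setup:

```python
# Minimal multitask-finetuning sketch: examples from several tasks are
# mixed into one training set, each prefixed with its task name.
from datasets import Dataset, concatenate_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical task datasets; each example is an instruction/response pair.
tasks = {
    "qa":        [{"instruction": "What is RAG?", "response": "..."}],
    "summarize": [{"instruction": "Summarize: ...", "response": "..."}],
}

def to_text(example, task):
    # The task-name prefix lets one model learn all tasks jointly.
    return {"text": f"[{task}] {example['instruction']}\n{example['response']}"}

mixed = concatenate_datasets([
    Dataset.from_list(rows).map(lambda ex, t=name: to_text(ex, t))
    for name, rows in tasks.items()
]).shuffle(seed=42)

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512,
                    padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal-LM objective
    return out

train = mixed.map(tokenize, batched=True, remove_columns=mixed.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="private-llm-mtl",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=train,
)
trainer.train()
```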
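And a minimal sketch of the integration step, with a plain TF-IDF retriever standing in for the ISPL RAG model (whose internals are not described in this post); the document store and the "private-llm-mtl" checkpoint path are hypothetical:

```python
# RAG sketch: retrieve private documents, then condition the finetuned
# LLM on the retrieved context. TF-IDF is a stand-in retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# Private document store kept on the locally hosted server (illustrative).
docs = [
    "Client contract X renews on 2025-01-01.",
    "Internal policy: all embeddings are computed on-premise.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)

# Hypothetical path to the checkpoint produced by the finetuning sketch.
generator = pipeline("text-generation", model="private-llm-mtl")

def rag_answer(question: str, top_k: int = 1) -> str:
    # Rank documents by similarity to the question and keep the top_k.
    sims = cosine_similarity(vectorizer.transform([question]), doc_vecs)[0]
    context = "\n".join(docs[i] for i in sims.argsort()[::-1][:top_k])
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=64)[0]["generated_text"]

print(rag_answer("When does client contract X renew?"))
```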
4. Result
- In the performance evaluation, the fine-tuned model clearly outperformed the pre-trained model (a sketch of one plausible comparison follows)
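The post does not state which metric was used; one plausible way to run such a comparison is held-out perplexity, as in this sketch (model names and the held-out set are placeholders):

```python
# Perplexity comparison sketch: lower perplexity on held-out client
# text indicates a better-adapted language model.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name, texts):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    losses = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt").input_ids
            # With labels == inputs, a causal LM returns mean NLL per token.
            losses.append(model(input_ids=ids, labels=ids).loss.item())
    return math.exp(sum(losses) / len(losses))

held_out = ["..."]  # private held-out client texts (not shown)
for name in ("meta-llama/Llama-2-7b-hf", "private-llm-mtl"):
    print(name, perplexity(name, held_out))
```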