Blogs

More blog posts are coming soon; stay tuned!

Feb 7, 2024

Over the past few years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite their success in modalities such as natural language processing and computer vision, foundation models for time series forecasting have lagged behind. We introduce Lag-Llama: the first open-source foundation model for time series forecasting!
Model weights · Demo · GitHub · Paper · Tweet
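
As the name suggests, the model tokenizes a univariate series through its lagged values, which serve as covariates for the transformer backbone. The snippet below is a minimal sketch of building such lag features; the function name and lag set are illustrative, not Lag-Llama's actual preprocessing code.

```python
import numpy as np

def make_lag_features(series: np.ndarray, lags: list[int]) -> np.ndarray:
    """Build a (time, len(lags)) feature matrix from a univariate series.

    Row t holds series[t - lag] for every lag in `lags`; time steps
    before the largest lag are dropped. This mirrors the general idea
    of lag-based covariates, not the model's exact preprocessing.
    """
    max_lag = max(lags)
    return np.asarray(
        [[series[t - lag] for lag in lags] for t in range(max_lag, len(series))]
    )

# Toy usage: a noisy series with weekly seasonality.
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * np.arange(200) / 7) + 0.1 * rng.standard_normal(200)
X = make_lag_features(y, lags=[1, 2, 7, 14])  # illustrative lag set
print(X.shape)  # (186, 4)
```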


Dec 30, 2023

This is a progress update (stay tuned: the next update is coming mid-January) on our CL-FoMo (Continually-Learned Foundation Models) suite of open-source LLMs, which contains four 410M models and four 9.6B models trained on Pile, SlimPajama (SP), a mixed Pile+SP dataset, and continually on Pile then SP. On most benchmarks, our 9.6B continual model is competitive with the baseline joint model trained on the mixed Pile+SP data (we call it IID Pile+SP to underscore that samples are drawn i.i.d. from the mix), and attains performance similar to RedPajama 7B and Pythia 12B on featured benchmarks.
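
To make the two regimes concrete: the joint (IID) baseline draws every batch from the pooled Pile+SP data, while the continual model first trains on Pile and then switches to SP. Below is a minimal sketch of the two data-ordering schemes; the sampling details and split point are illustrative, not the suite's actual training code.

```python
import random

def iid_mixture_batches(pile, sp, n_batches, batch_size, seed=0):
    """Joint (IID) training: each batch is drawn i.i.d. from the pooled data."""
    rng = random.Random(seed)
    pool = pile + sp
    return [rng.sample(pool, batch_size) for _ in range(n_batches)]

def continual_batches(pile, sp, n_batches, batch_size, seed=0):
    """Continual training: batches come from Pile first, then from SP."""
    rng = random.Random(seed)
    switch = n_batches // 2  # split point is illustrative
    return ([rng.sample(pile, batch_size) for _ in range(switch)]
            + [rng.sample(sp, batch_size) for _ in range(n_batches - switch)])

# Toy usage with integers standing in for tokenized documents.
pile, sp = list(range(1000)), list(range(1000, 2000))
assert len(iid_mixture_batches(pile, sp, 10, 8)) == 10
assert len(continual_batches(pile, sp, 10, 8)) == 10
```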


Dec 24, 2023

The Robin team is proud to present Robin, a suite of Visual Language Models (VLMs). These models are competitive with state-of-the-art models of similar scale.

Models: https://huggingface.co/agi-collective

Code: https://github.com/AGI-Collective/Robin/releases/tag/v1.0.0
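
To enumerate the released checkpoints programmatically rather than browsing the org page, one can use the `huggingface_hub` client. A minimal sketch (assumes the package is installed; the output depends on what the org has published):

```python
from huggingface_hub import list_models

# List checkpoints published under the AGI Collective org linked
# above; model ids are whatever the org has released.
for model in list_models(author="agi-collective"):
    print(model.id)
```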

Oct 31, 2023

Hindi is one of the most widely used languages, spoken by more than 600 million people; however, no high-quality open-source Hindi LLM has been built so far, perhaps due to the relative scarcity of available training datasets. We use continual pretraining with 600B Hindi/English tokens on top of the 9.6B model from the CL-FoMo suite pretrained on the English-language Pile dataset; as a result: