Multimodal Foundation Models:
Time-Series and Beyond
Thursday, October 9, 6650 Rue Saint-Urbain, Agora, Montreal
Context is Key: A Benchmark for Forecasting with Essential Textual Information
Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs
Chronos: Learning the Language of Time Series
Foundation Models for Time Series Analysis: A Tutorial and Survey
VideoGPT: Video Generation using VQ-VAE and Transformers
UniTS: A Universal Time Series Analysis Framework Powered by Self-Supervised Representation Learning
Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training
10:00 - 10:15 Introduction: Recent Progress in Time Series Foundation Models, Irina Rish (CERC-AAI Lab, UdeM/Mila)
10:15 - 10:40 Context is Key: A Benchmark for Forecasting with Essential Textual Information, (Slides), Andrew Williams & Arjun Ashok (Mila)
10:40 - 11:05 Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs, (Slides), Arjun Ashok (Mila)
11:05 - 11:15 Coffee break
11:15 - 11:40 Mantis: Towards a Powerful Foundation Model for Time Series Classification, (Slides), Vasilii Feofanov (Paris Noah's Ark Lab)
11:40 - 12:05 The Importance of Proper Tokenization to Get the Most Out of Your Time-Series, (Slides), Alexis Roger (Mila)
12:05 - 12:30 DELLMPHI: A Multi-Turn Method for Multi-Agent Forecasting, (Slides), Andrew Williams (Mila)