The LOT school will provide 20 hours of lectures.
8:30 - 9:00
9:00 - 11:00
Title: "Neural Propagation in the Framework of Cognidynamics"
Coffee Break 11:00 - 11:30
11:30 - 13:30
Title: "On continual learning and the dynamics of forgetting"
Lunch Break 13:30 - 14:30
14:30 - 16:30
Title: "Collectionless AI: The World of NARNIAN"
Coffee Break 16:30 - 17:00
17:00 - 18:00
Dinner 20:00
9:00 - 11:00
Title: "Test-Time Training: Toward a Post-Dataset World"
Coffee Break 11:00 - 11:30
11:30 - 13:30
Title: "New Avenues in Long-Sequence Processing without Attention"
Abstract: When applied to sequential data, transformers face a fundamental challenge: their attention mechanism incurs quadratic complexity with respect to sequence length. The issue extends to graph transformers, where complexity scales quadratically with the number of nodes in the network. Today, we'll explore theoretically grounded alternatives to the attention mechanism that hinge on carefully parametrized recurrent neural networks that are linear in the hidden state. Unlike the more commonly known LSTMs and GRUs, these new RNNs are both provably expressive and GPU-efficient. This efficiency boost makes it possible to scale up linear RNNs and achieve competitive performance together with linear (rather than quadratic) inference time. We will walk through the structure of some of these modern linear RNNs and discuss applications to language modeling, vision, and DNA processing.
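The complexity argument in the abstract can be made concrete with a minimal sketch (toy parametrization, not any specific model from the talk): a recurrence that is linear in the hidden state costs a constant amount of work per step, so processing a length-T sequence is O(T), in contrast to attention's O(T^2) pairwise comparisons.

```python
import numpy as np

# Minimal linear-RNN sketch (hypothetical matrices A, B, C for
# illustration): the state update h_t = A h_{t-1} + B x_t is linear in
# the hidden state, so each step costs O(d^2) and a length-T sequence
# costs O(T d^2) -- linear in T, unlike attention's O(T^2).

def linear_rnn(x, A, B, C):
    """x: (T, d_in) input sequence; returns outputs y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]      # linear (in h) state update
        ys.append(C @ h)          # linear readout
    return np.stack(ys)

rng = np.random.default_rng(0)
T, d_in, d, d_out = 6, 3, 4, 2
# Stable diagonal A (eigenvalues inside the unit circle), echoing the
# structured state matrices used by many modern linear RNNs.
A = np.diag(rng.uniform(0.5, 0.95, size=d))
B = 0.1 * rng.normal(size=(d, d_in))
C = rng.normal(size=(d_out, d))
y = linear_rnn(rng.normal(size=(T, d_in)), A, B, C)
print(y.shape)  # (6, 2)
```

Because the update is linear in h, it can also be evaluated with a parallel scan on GPUs, which is the efficiency property the abstract alludes to.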
Lunch Break 13:30 - 14:30
14:30 - 16:30
Title: "Reservoir Computing Methods for Time Series Classification"
Abstract: An introduction to reservoir computing models, which leverage the power of untrained recurrent networks for time series classification tasks. After a brief overview of the topic, students will implement state-of-the-art reservoir computing models using a provided implementation and compete to develop the best predictor.
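The core idea of reservoir computing can be previewed with a toy echo-state-network sketch (hyperparameters and the toy task are assumptions for illustration, not the provided course implementation): the recurrent "reservoir" stays random and untrained, and only a linear readout is fit.

```python
import numpy as np

# Toy echo-state-network sketch: random, untrained reservoir; only the
# linear readout is trained, here by ridge regression on the final state.

def reservoir_state(X, W_in, W, leak=0.5):
    """X: (T, d_in) sequence -> final reservoir state h_T."""
    h = np.zeros(W.shape[0])
    for x in X:
        h = (1 - leak) * h + leak * np.tanh(W_in @ x + W @ h)
    return h

rng = np.random.default_rng(0)
n_res, d_in = 50, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, d_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

# Toy classification task: sine waves (label 0) vs. noise (label 1).
t = np.linspace(0, 4 * np.pi, 30)
seqs = [np.sin(t + rng.uniform(0, np.pi))[:, None] for _ in range(20)]
seqs += [rng.normal(size=(30, 1)) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

H = np.stack([reservoir_state(s, W_in, W) for s in seqs])
# Ridge-regression readout -- the only trained part of the model.
w = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_res), H.T @ (2 * labels - 1))
preds = (H @ w > 0).astype(int)
print((preds == labels).mean())
```

Rescaling the spectral radius below 1 is the standard heuristic for the echo-state property, i.e. the reservoir's state eventually forgets its initial condition.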
Coffee Break 16:30 - 17:00
17:00 - 18:00
Dinner 20:00
9:00 - 11:00
Title: "Machine Learning Security: Lessons Learned and Future Challenges"
Coffee Break 11:00 - 11:30
11:30 - 13:30
Title: "Beyond Zero-Shot Generalization: Strengthening Vision-Language Models with Adaptation & Personalization"
Abstract: Groundbreaking achievements in vision-language pretraining have increased the interest in crafting Vision-Language Models (VLMs) that can understand visual content alongside natural language, enabling a new definition of zero-shot classification. Despite huge pretraining datasets, VLMs still face limitations: performance degrades when train and test distributions differ substantially, and highly generalizing textual templates must be carefully designed. Test-Time Adaptation (TTA) and Few-Shot Adaptation (FSA) can effectively improve the robustness of VLMs by adapting a given model to online inputs or to a specific downstream task given a few annotated examples. In this talk, I will present an overview of a few recent works from my research group on TTA, FSA, and other strategies aimed at improving the robustness of VLMs across various downstream tasks.
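The zero-shot classification the abstract refers to can be sketched schematically (the embeddings below are toy stand-ins, not outputs of a real VLM): CLIP-style models classify an image by comparing its embedding against text embeddings built from templates such as "a photo of a {class}", and the quality of those templates is exactly what TTA and FSA aim to compensate for.

```python
import numpy as np

# Schematic zero-shot classification sketch (toy vectors standing in for
# encoder outputs): pick the class whose text embedding has the highest
# cosine similarity with the image embedding.

def zero_shot_classify(img_emb, text_embs):
    """img_emb: (d,); text_embs: (n_classes, d); returns class index."""
    img = img_emb / np.linalg.norm(img_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Hypothetical embeddings for two prompt templates.
text_embs = np.array([[1.0, 0.1, 0.0],   # "a photo of a cat"
                      [0.0, 1.0, 0.2]])  # "a photo of a dog"
img_emb = np.array([0.9, 0.2, 0.05])     # image nearest the first text
print(zero_shot_classify(img_emb, text_embs))  # 0
```

Under this view, test-time adaptation tweaks the model (or the prompts) on the incoming unlabeled images, while few-shot adaptation fits lightweight parameters on a handful of labeled downstream examples.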
Lunch Break 13:30 - 14:30
14:30 - 16:30
Focoos AI, Fabio Cermelli
ContinualIST, Vincenzo Lomonaco
Motus ML, Giacomo Ziffer
Coffee Break 16:30 - 17:00
17:00 - 18:00
Dinner 20:00
9:00 - 11:00
Title: "The Online Learning Paradigm of Machine Learning"
Abstract: Online Convex Optimization is the mathematical framework underlying the design and analysis of online learning algorithms. In online learning, models are trained sequentially on data streams, which makes the paradigm well suited to applications where new data is generated continuously. In this talk, I will describe the main algorithmic tools of online convex optimization, derive mathematical guarantees on their performance, and show connections to other related areas.
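The prototypical algorithmic tool of online convex optimization is online gradient descent, sketched below on a toy stream (step-size schedule and task are assumptions for illustration): at each round the learner predicts, pays a convex loss, and takes a gradient step, with a decaying step size that yields O(sqrt(T)) regret for bounded convex losses.

```python
import numpy as np

# Online gradient descent (OGD) sketch: process a stream of examples one
# at a time, pay a squared loss each round, and update with step size
# proportional to 1/sqrt(t) -- the classic schedule behind O(sqrt(T))
# regret bounds in online convex optimization.

def ogd(stream, d, step=0.1):
    """stream yields (x_t, y_t); returns final weights and total loss."""
    w = np.zeros(d)
    total = 0.0
    for t, (x, y) in enumerate(stream, start=1):
        pred = w @ x
        total += 0.5 * (pred - y) ** 2       # loss paid this round
        grad = (pred - y) * x                # gradient of the round's loss
        w -= step / np.sqrt(t) * grad        # decaying step size
    return w, total

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])               # hypothetical target
xs = rng.normal(size=(500, 2))
stream = ((x, float(w_true @ x)) for x in xs)
w, loss = ogd(stream, d=2)
print(np.round(w, 2))
```

On this noiseless linear stream the iterates approach the target weights, illustrating why the paradigm suits settings where data keeps arriving: the model is updated on the fly, with no pass over a stored dataset.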
Coffee Break 11:00 - 11:30
11:30 - 12:30
12:30 - 13:30
Lunch Break 13:30 - 14:30