Andrea Asperti
Full Professor at the Department of Computer Science and Engineering,
University of Bologna.
Andrea Saracino
Associate Professor at the Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna.
Alessandro Benfenati
Assistant Professor at the Department of Environmental Science and Policy,
University of Milan.
Enrico Mensa
Research Associate at the Department of Computer Science, University of Turin.
Elena Loli Piccolomini
Full Professor at the Department of Computer Science and Engineering,
University of Bologna.
Federico Pilati
Research Associate at the Department of Sociology and Social Research,
University of Milano-Bicocca.
In this module, students will be introduced to the main families of generative models, including GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and diffusion models. These approaches have become central tools in Generative AI, each offering different strengths in how they learn and produce realistic data such as images, text, or audio. The session will present the core ideas behind each model, the types of problems they are suited for, and the architectures commonly used—such as convolutional networks, encoder-decoder frameworks, and attention-based designs. In addition to the theoretical overview, the module includes hands-on activities in Python, where students will implement and experiment with generative models to better understand how they work in practice.
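The encoder-decoder idea mentioned above can be previewed with a minimal sketch in plain NumPy. This toy example shows only the decoder half of a VAE-style model and the data flow from latent space to data space; the decoder weights here are random and untrained, and all dimensions are illustrative, not taken from the course material.

```python
import numpy as np

# Toy sketch of the generative (decoder) half of a VAE-style model:
# sample a latent vector z from a standard Gaussian prior, then map it
# to data space. Real models learn the decoder weights from data; here
# the weights are random, so only the data flow is illustrated.

rng = np.random.default_rng(0)

latent_dim, data_dim = 2, 8                   # illustrative sizes
W = rng.normal(size=(data_dim, latent_dim))   # decoder weights (untrained)
b = np.zeros(data_dim)                        # decoder bias

def decode(z):
    """Map a latent code z to data space with a linear layer + tanh."""
    return np.tanh(W @ z + b)

z = rng.normal(size=latent_dim)   # sample from the N(0, I) prior
x = decode(z)                     # a "generated" sample in data space
print(x.shape)
```

In the hands-on sessions, the same pipeline reappears with learned, multi-layer decoders; the prior-sample-then-decode structure is what all the latent-variable models in this module share.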
TBA
This module offers a focused but foundational overview of the mathematics underlying Generative AI, with the goal of equipping students to understand how generative models work. We begin with probabilistic modeling, focusing on how complex data can be represented through latent variables and joint distributions. The session then explores approximate inference techniques, particularly variational methods, which are central to training models when exact computation is not feasible. Optimization plays a key role, and we discuss gradient-based methods and useful tricks that make training possible. The lecture also touches on the structure and geometry of latent spaces, which are essential for understanding how generative models produce realistic outputs. Finally, we offer an introduction to recent advances such as diffusion models, highlighting their unique approach to data generation through noise modeling. The lecture balances mathematical depth with intuitive explanations, preparing students to engage with a range of modern generative approaches.
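One of the "useful tricks" alluded to above is the reparameterisation trick used in variational autoencoders: instead of sampling z directly from N(mu, sigma^2), which is not differentiable with respect to mu and sigma, one samples eps from N(0, 1) and sets z = mu + sigma * eps, which is. A minimal numerical sketch, with illustrative values for mu and sigma (not taken from the lecture):

```python
import numpy as np

# Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, 1)
# gives samples distributed as N(mu, sigma^2), while keeping z a
# differentiable function of the parameters mu and sigma.

rng = np.random.default_rng(42)

mu, sigma = 1.5, 0.5               # illustrative encoder outputs
eps = rng.normal(size=10_000)      # noise, independent of the parameters
z = mu + sigma * eps               # reparameterised samples

# Empirically, the samples match the target N(mu, sigma^2):
print(z.mean(), z.std())
```

Because the randomness is isolated in eps, gradients of any loss with respect to mu and sigma can flow through z, which is what makes gradient-based training of the variational objective possible.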
"An introduction to LLMs and the alignment problem" by Enrico Mensa
This presentation offers an accessible introduction to Large Language Models (LLMs). We'll explore what LLMs actually are and how they learn to understand and generate human language by training on enormous amounts of text from the internet, books, and other sources.
We will focus on the training process and, in particular, on alignment and the crucial problem that arises when it fails: misalignment. While these models become remarkably good at producing human-like text, they don't always do what we actually want them to do. We'll discuss how LLMs can generate convincing but false information, reflect harmful biases present in their training data, and exhibit concerning behaviors such as sycophancy, where models tell users what they think they want to hear rather than providing accurate or helpful responses. Using examples and experiments from the research literature, we'll examine why these alignment challenges arise from the fundamental training process and what they mean for everyday users of AI tools.
In this presentation, we explore medical imaging as a domain where deep learning, and more recently Generative AI, comes into play. Medical imaging encompasses technologies such as Computed Tomography, Magnetic Resonance Imaging, Positron Emission Tomography, and ultrasound, all of which play a vital role in modern diagnostics and treatment planning. These methods generate complex, high-dimensional data that demand advanced tools for interpretation and enhancement. Generative AI offers powerful solutions: it can synthesize realistic images to augment datasets, improve image resolution and clarity, and reconstruct or complete scans from incomplete data. This module highlights how generative models are not just theoretical tools, but practical assets already shaping the future of medical imaging.
This presentation will navigate the evolving landscape of AI-generated image detection, tracing the technological arms race from early generative networks to today's sophisticated diffusion models. We'll explore what deepfake detectors are and how they learn to distinguish real photos from synthetic ones by training on large datasets of both authentic and AI-generated images. We will focus on the training process and, in particular, on a crucial problem: generalization. While these models become good at spotting fakes from generators they have been trained on, they don't always work on images created by new, unseen AI systems. We'll discuss how a detector trained on specific technologies can experience a significant performance drop when faced with fakes from fundamentally different architectures. This failure occurs because detectors often learn the specific "fingerprints" of known generators, leaving them unprepared for the different artifacts produced by novel techniques. Using a temporal evaluation framework that simulates this technological arms race, we'll examine why these generalization challenges arise and what they mean for building robust, future-proof detection tools.
This presentation explores the impact of Generative AI on propaganda strategies and disinformation in today’s hybrid media ecosystems. Focusing on the interplay between AI-driven content and social media platforms, it examines the use of synthetic images in political communication—such as recent cases in the United States—and the rise of deepfakes, moving from perceived fears to concrete risks. Particular attention is given to new forms of information manipulation, including coordinated inauthentic behavior powered by AI agents. All of this is framed within the dynamics of the attention economy, which shapes the circulation and effectiveness of political messaging in contemporary communication environments.