When: May 23, 2023, 4:00–6:00 PM
Where: Room 5161.0222 (Bernoulliborg)
Abstract: In recent years, Transformer-based language models have achieved remarkable progress on most language generation and understanding tasks. However, the internal computations of these models are difficult to interpret due to their highly nonlinear structure, hindering their use in mission-critical applications that require trustworthiness and transparency guarantees. This presentation will introduce interpretability methods for tracing the predictions of language models back to their inputs and discuss how these can be used to gain insights into model biases and behaviors. Throughout the presentation, several concrete examples of language model attributions will be shown using the Inseq interpretability library.
Plan of the presentation:
Why is interpretability needed in NLP?
Feature attribution: Why's and How's
How do Transformers Encode Factual Knowledge? A Case Study
Future of Language Model Interpretability & Discussion
Bio: Gabriele Sarti is a Ph.D. student in the Computational Linguistics Group (GroNLP) at the University of Groningen, Netherlands. Previously, he worked as a research intern at Amazon Translate NYC, a research scientist at Aindo, and a research assistant at the ItaliaNLP Lab (CNR-ILC, Pisa). His research aims to improve our understanding of the inner workings of generative neural language models in order to enhance the controllability and robustness of these systems for human-AI collaboration.
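For the curious, here is a minimal sketch of the kind of attribution the talk demonstrates, using the Inseq Python package; the model name, attribution method, and prompt below are illustrative choices, not necessarily those shown in the talk:

```python
# Minimal sketch: token-level feature attribution with the Inseq library.
# Assumes `pip install inseq`; model, method, and prompt are illustrative.
import inseq

# Load a small generative model paired with an attribution method.
model = inseq.load_model("gpt2", "integrated_gradients")

# Attribute a generation: which input tokens drove each generated token?
out = model.attribute(input_texts="The capital of the Netherlands is")

# Display the attribution scores as a token-by-token heatmap.
out.show()
```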
When: May 9, 2023, 4:00–6:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: Aquaponics is a sustainable farming technique that grows plants while harvesting fish by exploiting the nitrogen cycle in a closed-loop water circuit. Such a system requires careful management of water pressure, fish food, water quality, indoor temperature, light, and other variables. Although standard industrial control techniques perform reasonably well in small-scale aquaponics, they do not account for objectives such as plant growth or fish comfort. Hence, we consider a reinforcement learning (RL) approach to adaptive control that incorporates these objectives. However, training an RL agent is difficult, since sampling the environment is costly in terms of system safety. In this talk, we will therefore discuss potential strategies for addressing safety during the training of an RL agent in aquaponics.
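To make the safety concern concrete, here is a toy sketch of one such strategy, shielding, in which actions predicted to be unsafe are filtered out during exploration. The environment, safety rule, and hyperparameters are all invented for illustration and are not from the talk:

```python
# Toy sketch of "shielded" Q-learning: unsafe actions are filtered out
# before the agent chooses. Environment and safety rule are invented.
import random
from collections import defaultdict

class ToyTankEnv:
    """Invented stand-in: state is a water-quality level 0..4; actions
    nudge it; level 0 is lethal for the fish, level 3 is best for plants."""
    actions = (-1, 0, 1)

    def reset(self):
        self.level, self.t = 2, 0
        return self.level

    def step(self, action):
        self.level = max(0, min(4, self.level + action))
        self.t += 1
        reward = 1.0 if self.level == 3 else 0.0
        return self.level, reward, self.t >= 20

def is_unsafe(state, action):
    # Safety shield: forbid any action that could drop quality to 0.
    return state + action <= 0

def safe_actions(state, actions):
    allowed = [a for a in actions if not is_unsafe(state, a)]
    return allowed or list(actions)  # fail open if everything is flagged

def q_learning(env, episodes=200, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = safe_actions(state, env.actions)
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward, done = env.step(action)
            best_next = max(Q[(nxt, a)] for a in safe_actions(nxt, env.actions))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

Q = q_learning(ToyTankEnv())
print(max((-1, 0, 1), key=lambda a: Q[(2, a)]))  # learned action at level 2
```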
When: April 25, 2023, at 4:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: In the last decade, we have witnessed an explosion in the performance and rise to fame of artificial neural networks (ANNs): from the early success of AlexNet in image classification to the very recent and hotly debated large language models like GPT-4 and ChatGPT from OpenAI, and Google's PaLM. This performance, though, comes at an enormous cost for training and inference, due to the very high number of parameters of these models (more than 1 trillion in some cases). Model compression (MC) encompasses a variety of techniques for reducing the number of parameters of machine learning (ML) models, ANNs included, in order to lower computational requirements or aid generalization. In this seminar, we will cover MC for various ML models: starting from simpler examples connected to linear regression and decision trees, we will shift towards state-of-the-art MC techniques for ANNs, while keeping the discussion accessible to attendees with basic knowledge of ML.
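As a flavor of what such techniques look like, here is a minimal sketch of magnitude pruning, one of the simplest MC methods for ANNs; the weight matrix and sparsity level are invented for illustration:

```python
# Minimal sketch of magnitude pruning: weights with small absolute value
# are zeroed out. The matrix and sparsity target are invented examples.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_sparse = magnitude_prune(W, sparsity=0.75)  # keep only the largest 25%
print(W_sparse)
```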
When: April 11, 2023, at 4:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: The presence of several timescales is ubiquitous in nature: from the large scale of climate evolution to the microscopic scale of bursting neurons. Mathematicians model such phenomena by means of singular perturbations. The aim of this talk is to introduce some basic terminology related to slow-fast systems and to show, mostly graphically, the peculiarities of singular perturbations. We will use the language of dynamical systems; however, no prior knowledge of the subject is required.
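For reference, the standard textbook form of such a system (not necessarily the notation used in the talk) is

$$\varepsilon\,\dot{x} = f(x, y, \varepsilon), \qquad \dot{y} = g(x, y, \varepsilon), \qquad 0 < \varepsilon \ll 1,$$

where $x$ is the fast variable and $y$ the slow variable; setting $\varepsilon = 0$ yields the singular limit whose peculiarities the talk illustrates graphically.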
When: March 28, 2023, at 4:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: Every day we reason about our uncertain world by using evidence. In most areas of our lives, the stakes are low. In the domain of law, however, the stakes can be high: a mistake in reasoning with evidence might mean that an innocent person goes to jail or a guilty person walks free. Bayesian networks have been proposed as an explicit and normative way to formalize reasoning under uncertainty with evidence in the legal domain. Yet existing approaches have many problems. In this talk, I will discuss the (mis)use of proposed methods for applying Bayesian networks in criminal law.
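To see the core computation such networks formalize, here is a minimal pure-Python sketch of Bayes' rule applied to a piece of evidence; all probabilities are invented for illustration:

```python
# Sketch of the core computation behind Bayesian reasoning with evidence:
# posterior = likelihood * prior / marginal. All numbers are invented.
def posterior_guilt(prior, p_evidence_if_guilty, p_evidence_if_innocent):
    """P(guilty | evidence) via Bayes' rule for a binary hypothesis."""
    numerator = p_evidence_if_guilty * prior
    denominator = numerator + p_evidence_if_innocent * (1 - prior)
    return numerator / denominator

# A forensic match that is ~100x more likely if the suspect is guilty
# still leaves substantial doubt when the prior is low:
print(posterior_guilt(prior=0.01,
                      p_evidence_if_guilty=0.99,
                      p_evidence_if_innocent=0.01))  # exactly 0.5
```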
When: March 14, 2023, at 4:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: In this talk, we will explore the fascinating world of generative adversarial networks (GANs), a type of neural network architecture that has gained immense popularity in recent years for its ability to generate high-quality pictures. I will start by introducing the basics of neural networks and deep learning. I will then delve into the details of GANs, explaining their architecture, loss functions, and training process. I will also discuss how GANs have been used in various fields, such as art, gaming, and healthcare, and showcase some impressive examples of GAN-generated images. By the end of this talk, you will have a solid understanding of GANs and their potential applications. This talk is suitable for beginners with no prior knowledge of neural networks or GANs.
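For readers who want to see the training process in code, below is a minimal GAN sketch in PyTorch that learns to mimic a one-dimensional Gaussian; the architecture and hyperparameters are illustrative, not those from the talk:

```python
# Minimal GAN sketch in PyTorch: the generator learns to mimic samples
# from N(3, 1). Architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) + 3.0   # "real" data from N(3, 1)
    fake = G(torch.randn(64, 8))      # generator maps noise to samples

    # Discriminator step: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should be close to 3.0
```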
When: February 28, 2023, at 4:00 PM
Where: Room 5161.0289 (Bernoulliborg)
Abstract: This presentation is a personal project motivated by informal discussions with colleagues and by my own experience with courses on ethics in research, which usually target the legal implications of scientific misconduct but fall short of addressing the underlying ethical considerations. The aim is therefore to look together at the philosophy behind the concept of (scientific) integrity and at several components that approximate ethical conduct in an academic context.
I will start with the notion of scientific ethos as conceptualized by sociologist Robert Merton (1957), and then move on to more recent discussions on the social epistemology of science (such as bioethicist David Resnik's Ethics in Science and philosopher Heather Douglas's Values in Science). I will conclude with some thoughts on the epistemic virtues of resilience and intellectual humility.
I suspect many of us are in need of some orientation when it comes to academic integrity beyond legal concerns. One of the first steps towards this goal is to develop a vocabulary that describes what we find problematic and why, and philosophy is the ideal resource for this. If you are interested, please join and think along.
When: February 14, 2023, at 4:00 PM
Where: Room 5161.0222 (Bernoulliborg)
Abstract: In this discussion-based session, Guillaume and Steve will introduce the problem of induction and argue that it is a problem that recurs in AI and demands a solution. They will outline how this problem is perceived by mainstream machine learning research: through the no-free-lunch theorems and the practice of explicitly designing inductive biases into ML systems. This leads into an open discussion of how the problem of induction is perceived in different subfields of AI (e.g., logic, meta-learning, AGI) and outside of AI (e.g., philosophy, physics), and how we might approach its (re)solution.
When: January 31, 2023, at 4:00 PM
Where: Room 5161.0222 (Bernoulliborg)
Abstract: Myside bias is a well-documented cognitive bias in the evaluation of arguments, in which reasoners in a discussion tend to overvalue arguments that confirm their prior beliefs while undervaluing arguments that attack them. After motivating a Bayesian model of myside bias at the level of individual reasoning, this Bayesian model is implemented in an agent-based model of group discussion among myside-biased agents. The agent-based model is then used to run a number of experiments that study whether myside bias hinders or enhances the ability of groups to collectively track the truth, that is, to reach the correct answer to a given binary issue. An analysis of the results suggests the following. First, the truth-tracking ability of groups is neither helped nor hindered by myside bias, unless the strength of the bias is differentially distributed across subgroups of discussants holding different beliefs. Second, small groups are more likely to track the truth than larger groups, suggesting that increasing group size has a detrimental effect on collective truth-tracking through discussion.
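A toy sketch of this kind of agent-based model is shown below; the update rule, argument strength, and all parameters are invented for illustration, and the talk's model may differ:

```python
# Toy agent-based model: agents hold a credence on a binary issue and
# update on arguments; myside-biased agents tend to ignore arguments
# that attack their current lean. All parameters are invented.
import random

def bayes_update(c, supports_true, strength=0.7):
    """Standard Bayesian update of credence c on a binary issue."""
    like_true = strength if supports_true else 1 - strength
    like_false = 1 - like_true
    return c * like_true / (c * like_true + (1 - c) * like_false)

def simulate(n_agents=9, n_arguments=20, bias=0.5, seed=None):
    rng = random.Random(seed)
    credences = [0.5] * n_agents
    for _ in range(n_arguments):
        supports_true = rng.random() < 0.7  # arguments tend to favor truth
        for i, c in enumerate(credences):
            confirms = supports_true == (c >= 0.5)
            # Myside bias: disconfirming arguments ignored with prob `bias`.
            if confirms or rng.random() > bias:
                credences[i] = bayes_update(c, supports_true)
    # The group tracks the truth if a majority believes the true answer.
    return sum(c > 0.5 for c in credences) > n_agents / 2

runs = 1000
hits = sum(simulate(bias=0.5, seed=k) for k in range(runs))
print(f"truth-tracking rate: {hits / runs:.2f}")
```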
When: January 17, 2023, at 4:00 PM
Where: Room 5161.0289 (Bernoulliborg)
Abstract: No. Definitely not. But computers can definitely help us check that a mathematical proof is correct! In this talk, we will take a shallow dive into the fascinating world of proof assistants and how they can be used to help a pure mathematician, an applied cryptographer and a safety systems engineer.
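As a taste of the genre, here are two tiny statements that a proof assistant checks mechanically; Lean 4 is used here as an illustrative choice, not necessarily the system featured in the talk:

```lean
-- Two tiny theorems that Lean 4 verifies mechanically.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Reusing a library lemma: addition on naturals is commutative.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```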
When: December 6, 2022, at 4:00 PM
Where: Room 5161.0289 (Bernoulliborg)
Abstract: Do you like drawing diagrams to solve your problems? Are you a fan of abstract concepts that connect seemingly unrelated ideas? Do you shiver when you hear "set theory"? Then join us for an introduction to category theory - a unifying subject at the intersection of mathematics and logic. And just before you conclude that this approach looks pretty but utterly useless, I will try to convince you otherwise with some applications in computer science.
When: November 22, 2022, at 4:00 PM
Where: Room 5115.0014 (Nijenborgh 4)
Abstract: One of the main objectives of formal semantics is the translation of natural language expressions such as individual words, phrases, or whole sentences into logical expressions while preserving structural and semantic constraints. This talk gives an introduction to the field and presents homogeneity as a puzzling phenomenon that arises from the meaning of plural definite descriptions.
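A classical example of such a translation, in standard textbook notation rather than necessarily the talk's exact formalization, is

$$\text{Every student smiled} \;\rightsquigarrow\; \forall x\,(\mathrm{student}(x) \to \mathrm{smiled}(x)).$$

Homogeneity then shows up with plural definites: "The books are blue" is judged true iff $\forall x\,(\mathrm{book}(x) \to \mathrm{blue}(x))$ and judged false iff $\forall x\,(\mathrm{book}(x) \to \neg\mathrm{blue}(x))$; when some but not all of the books are blue, the sentence is neither clearly true nor clearly false.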
When: November 8, 2022, at 5:00 PM
Where: Room 5161.0041B (Bernoulliborg)
Abstract: Computational social choice is an interdisciplinary field that lies mainly at the intersection of artificial intelligence and microeconomics. This presentation introduces some key directions and concepts of social choice theory through intuitive examples. To provide deeper insight into the field, two specific directions, liquid democracy and matching theory, are presented in more detail.
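As a concrete taste of matching theory, here is a short sketch of the Gale-Shapley deferred-acceptance algorithm, a classic result in the field; the preference data below are invented:

```python
# Sketch of Gale-Shapley deferred acceptance for the stable matching
# problem. Preference lists are invented toy data.
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}."""
    free = list(proposer_prefs)                     # all proposers start free
    next_choice = {p: 0 for p in proposer_prefs}    # next reviewer to try
    rank = {r: {p: i for i, p in enumerate(prefs)}  # reviewers' rankings
            for r, prefs in reviewer_prefs.items()}
    engaged = {}                                    # reviewer -> proposer
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                          # tentative acceptance
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                 # reviewer trades up
            engaged[r] = p
        else:
            free.append(p)                          # rejected; try next choice
    return {p: r for r, p in engaged.items()}

proposers = {"a": ["x", "y"], "b": ["y", "x"]}
reviewers = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(proposers, reviewers))  # {'a': 'x', 'b': 'y'}
```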
When: October 1, 2022, at 5:00 PM
Where: Room 5161.0222 (Bernoulliborg)
Abstract: 'Formalization' is the core logical method for defining and studying natural or philosophical concepts. Beginning with several primary notions, a mathematical model is constructed to provide a basis for the target concept. A formal language is then given for expressing the concept and other related notions. Reasoning about the valid formulas of this language shows how the concept can be used rationally in a formal way.
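A standard illustration of this method, though not necessarily the talk's own example, is the formalization of knowledge: starting from the primary notions of a set of possible worlds $W$ and an accessibility relation $R$, one defines

$$M, w \models K\varphi \quad \text{iff} \quad M, v \models \varphi \ \text{for every } v \text{ with } w\,R\,v,$$

so that reasoning about the valid formulas of this language makes rational talk of "knowledge" precise.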