AILo Talks

Summer Break

Thank you all for a great year of talks! See you next academic year!

What are AILo Talks?

AILo (Artificial Intelligence and Logic) is a series of talks in which PhD students in AI and Logic present, in an introductory fashion, general concepts and perspectives connected to our research. It is a chance for us to get to know each other's research, to discover topics and areas we might not know at all, and to bring us together and connect. The series is meant to be introductory: its aim is to let colleagues learn some basic concepts in each other's fields.


Afterwards, there are drinks!

Past Talks

Is your concurrent program correct? - by Juan Camilo Jaramillo Londoño

When: June 6, 2023, 4:00–6:00 PM

Where: Room 5161.0222 (Bernoulliborg)

Abstract: Software systems play a crucial role in many aspects of our lives, from streaming our favorite songs to facilitating secure communication between pilots and air traffic controllers for safe landings. However, even experienced programmers make mistakes, and software errors can be expensive or even catastrophic. Furthermore, most software systems nowadays are collections of programs that interact with each other concurrently by exchanging messages, which increases the chances of errors. This presentation will introduce, in an accessible way, some rigorous mathematical models that serve as foundations for concurrent computation, and show how they allow us to address the problem of program correctness.
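
For a concrete taste of why concurrent correctness is subtle, here is a minimal Python sketch, illustrative only and not taken from the presentation, of the classic lost-update race between two threads sharing a counter:

```python
# Illustrative race condition: two threads perform an unsynchronized
# read-modify-write on a shared counter, so increments can be lost.
# "counter += 1" is really a separate load, add, and store; a thread
# switch between those steps drops an update.
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:       # mutual exclusion: always correct
                counter += 1
        else:
            counter += 1     # data race: updates can be lost

threads = [threading.Thread(target=increment, args=(100_000, False))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock this may print less than 200000, depending on how
# the interpreter interleaves the two threads.
print(counter)
```

Formal models of concurrency give us a precise vocabulary for stating when interleavings like this one can or cannot violate a program's specification.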

Interpreting Language Models with Feature Attribution - by Gabriele Sarti

When: May 23, 2023, 4:00–6:00 PM

Where: Room 5161.0222 (Bernoulliborg)

Abstract: In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding tasks. However, the internal computations of these models are hardly interpretable due to their highly nonlinear structure, hindering their usage for mission-critical applications requiring trustworthiness and transparency guarantees. This presentation will introduce interpretability methods used for tracing the predictions of language models back to their inputs and discuss how these can be used to gain insights into model biases and behaviors. Throughout the presentation, several concrete examples of language model attributions will be presented using the Inseq interpretability library.
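
As a small taste of the Inseq library mentioned above, the following sketch follows its public quickstart; the model ("gpt2"), the attribution method ("integrated_gradients"), and the prompt are illustrative choices, not material from the talk:

```python
# A minimal sketch of feature attribution with the Inseq library.
import inseq

# Wrap a generative language model with an attribution method.
model = inseq.load_model("gpt2", "integrated_gradients")

# Generate a continuation and attribute it back to the input tokens.
out = model.attribute("The weather in Groningen is")

# Render token-level importance scores.
out.show()
```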


Bio: Gabriele Sarti is a Ph.D. student in the Computational Linguistics Group (GroNLP) at the University of Groningen, Netherlands. Previously, he worked as a research intern at Amazon Translate NYC, a research scientist at Aindo, and a research assistant at the ItaliaNLP Lab (CNR-ILC, Pisa). His research aims to improve our understanding of the inner workings of generative neural language models in order to enhance their controllability and robustness for human-AI collaboration.

Concept Design of Reinforcement Learning Architecture for Aquaponics - by Juan Diego Cardenas Cartagena

When: May 9, 2023, 4:00–6:00 PM

Where: Room 5161.0041B (Bernoulliborg)

Abstract: Aquaponics is a sustainable farming technique that aims to grow plants while harvesting fish by exploiting the nitrogen cycle in a closed-loop water circuit. Such a system requires careful management of water pressure, fish food, water quality, indoor temperature, light, and other variables. Although standard industrial control techniques perform reasonably well in small-scale aquaponics, they do not account for objectives such as plant growth or fish comfort. Hence, we consider a reinforcement learning (RL) approach as adaptive control to include these objectives. However, training an RL agent is difficult: sampling the environment is expensive, and exploratory actions can compromise the safety of the system. Therefore, in this talk, we will discuss potential strategies for addressing safety during the training of an RL agent in aquaponics.
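
To make the idea of safe exploration concrete, here is a minimal, illustrative Q-learning sketch in which the agent may only sample actions from a safety-filtered set; the toy environment, its bounds, and all parameters are hypothetical stand-ins, not an aquaponics model from the talk:

```python
# Safe exploration via action masking: the agent never tries an action
# that would push the (toy) system outside its safety bounds.
import random

class ToyTankEnv:
    """Hypothetical toy environment: keep a water level near a setpoint."""
    def __init__(self):
        self.level = 5                       # discrete level in 0..10

    def safe_actions(self):
        # Safety filter: mask out actions that would cross the bounds 1..9.
        return [a for a in (-1, 0, 1) if 1 <= self.level + a <= 9]

    def step(self, action):
        self.level = max(0, min(10, self.level + action))
        reward = -abs(self.level - 5)        # closer to the setpoint is better
        return self.level, reward

env = ToyTankEnv()
Q = {(s, a): 0.0 for s in range(11) for a in (-1, 0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.2            # learning rate, discount, exploration

state = env.level
for _ in range(5000):
    actions = env.safe_actions()             # explore only within the safe set
    if random.random() < eps:
        action = random.choice(actions)      # epsilon-greedy exploration
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = env.step(action)
    best_next = max(Q[(next_state, a)] for a in env.safe_actions())
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print("Greedy action at the setpoint:", max((-1, 0, 1), key=lambda a: Q[(5, a)]))
```

Masking is only one of several strategies; shielding, constrained RL, and offline pretraining on a simulator are common alternatives when unsafe samples are costly.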