The design of AI systems to assist human decision-making typically requires labels to train and evaluate supervised models. Frequently, however, these labels are unknown, and different ways of estimating them involve unverifiable assumptions or arbitrary choices. In this talk, I introduce the concept of “label indeterminacy” and derive its implications for high-stakes AI-assisted decision-making. I present findings from an empirical study in a healthcare context, focusing specifically on predicting the recovery of comatose patients after resuscitation from cardiac arrest. This study shows that label indeterminacy can result in models that perform similarly when evaluated on patients with known labels but vary drastically in their predictions for patients where labels are unknown. After demonstrating crucial ethical implications of label indeterminacy in this high-stakes context, I discuss takeaways for evaluation, reporting, and design, as well as potential ways forward.
Social media platforms are rife with actors spreading malicious content, including spam, scams, propaganda, disinformation, and misinformation. Many approaches to combating malicious content rely heavily on machine learning to detect this "bad stuff" by analysing the content itself or associated metadata. While these ML-based detectors are valuable, they are often insufficient on their own. In this talk, I argue that to effectively combat malicious content, we must go beyond detection and focus on understanding the underlying mechanisms and manipulative tactics that malicious actors use to spread or amplify their content. By uncovering these mechanisms, we can improve ML-based approaches by (1) building more tailored and robust detectors and (2) identifying intervention points to disrupt attacks at their source. I will demonstrate how combining ML-based methods with an understanding of adversarial tactics can lead to more robust defences.
Shapley values, as computed via SHAP, are a widely used tool for computing local feature importance in machine learning models, thanks to their strong game-theoretic foundation and model-agnostic nature, which make them highly flexible.
Shapley Interactions extend Shapley values to interactions of any order, allowing higher-order effects to be disentangled from lower-order ones.
To compute these values, Shapley Interaction Quantification (SHAP-IQ) has recently been proposed as an extension of SHAP.
This seminar will begin by introducing the game-theoretic foundations of Shapley values (SHAP), then transition to Shapley Interactions (SHAP-IQ), and conclude with a hands-on overview of the shapiq Python library.
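For readers unfamiliar with the underlying formula, the Shapley value of a player (or feature) is its marginal contribution averaged over all coalitions. The sketch below is a minimal, self-contained illustration of that definition only; it is not how SHAP or shapiq compute values in practice, since both rely on efficient approximations rather than this exponential enumeration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values via the subset-weighted formula:
    each player's marginal contribution, averaged over coalitions."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy game: a coalition earns 1 only if it contains both players 1 and 2.
v = lambda S: 1.0 if {1, 2} <= S else 0.0
print(shapley_values([1, 2, 3], v))  # players 1 and 2 split the value; player 3 gets 0
```

By symmetry, players 1 and 2 each receive 0.5, and the dummy player 3 receives 0, which previews why the interaction between players 1 and 2 is exactly what Shapley Interactions are designed to expose.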
The increasing adoption of tree-based ensemble models (such as Random Forests and XGBoost) in machine learning has introduced challenges in model interpretability, particularly in complex classification tasks. In this seminar, we explore Decision Predicate Graphs (DPG), a novel graph-based approach designed to enhance the interpretability of ensemble models in a global fashion. DPG provides a structured representation of decision-making processes by transforming decision paths into a directed weighted graph, preserving relationships among features and decision predicates.
Beyond theory, this session will present a real-world case study where DPG was applied to analyze a model tasked with classifying fruit maturity levels. By leveraging DPG, we uncovered how the model prioritizes features, evaluates decision constraints, and differentiates between varying degrees of ripeness. This analysis offers valuable insights into model transparency and decision logic, aiding researchers and practitioners in refining machine learning models for agricultural and food industry applications. Finally, we will discuss DPG's future enhancements and its wider applicability across machine learning domains.
With the continuous progress of the autonomous decision-making ability of artificially intelligent systems, how to endow intelligent agents with sufficient ethical considerations in decision-making has become an important challenge that has attracted widespread attention. The key approach to solving this problem is to establish machine ethics, which embeds human ethical values and moral norms into artificially intelligent systems, enabling them to have ethical alignment capability. Machine ethics is based on human ethics but has different fundamental characteristics. First, current intelligent machines lack agency and experience in the sense of realism, manifesting as weak agency in ethical decision-making. Second, the decisions of machines should reflect the ethical considerations of human stakeholders affected by their actions, so the ethical decision-making of machines needs to balance the values of different stakeholders, i.e., machines should have social balancing capability. Third, machines are easily influenced by cultural factors in ethical decision-making and should be able to reflect cultural differences. Finally, machines need to explain ethical decisions to human agents, understand emotional expressions, and perform responsibility attribution, so good human–machine interaction is required. In this talk, I will introduce the philosophical foundations and basic characteristics of machine ethics, as well as three kinds of approaches to implementing machine ethics: knowledge-driven approaches, data-driven approaches, and hybrid approaches. More specifically, explainable AI methods based on norms, argumentation, and inductive logic programming will be introduced. Finally, I will discuss open problems and prospects.
4 April 2024 - 17:30 - 20:30
Linnaeusborg, room 5173.0055
17:30 - Walk-in & Pizza
18:15 - Welcome by GroningenML, RuG, AI Hub Noord
18:30 - Reducing fuel consumption in platooning systems through reinforcement learning
Rafael Fernandes Cunha
Abstract: Adaptive Cruise Control (ACC) in platooning systems, vital for economic and environmental efficiency in transport, uses a key time gap parameter for vehicle spacing. To optimize fuel consumption, our study employs Reinforcement Learning, specifically the proximal policy optimization (PPO) algorithm, to dynamically adjust this time gap in response to traffic conditions. Simulations demonstrate PPO's superiority in enhancing fuel efficiency over static and threshold-based ACC controls.
19:15 - Uncertainty in Machine Learning
Prof. Matias Valdenegro
Abstract: What if we train a model to classify dogs and cats, but it is later tested with an image of a human? Generally the model will output either dog or cat; it has no way to signal that the image belongs to no class it can recognize.
This is because classical neural networks have no means of estimating their own uncertainty (so-called epistemic uncertainty), and this has practical consequences for the use of these models, for example for safety when cooperating with humans, in autonomous systems like robots, and in computer vision systems. A possible solution is the Bayesian neural network.
In this talk I will cover the basic concepts of Bayesian neural networks and how they can help us produce safer models, including applications in explainable AI and computer vision.
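As a toy illustration of epistemic uncertainty (not the speaker's method), one can mimic a Bayesian neural network with an ensemble of classifiers whose weights are drawn from a distribution: disagreement across the draws flags inputs the model cannot confidently place in any class. All numbers below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a Bayesian neural network: a "posterior" over the weights
# of a one-dimensional logistic classifier. Each draw is one plausible
# network; spread between the draws' predictions is epistemic uncertainty.
def sample_predictions(x, n_samples=200):
    w = rng.normal(loc=1.0, scale=0.2, size=n_samples)  # weight draws
    b = rng.normal(loc=0.0, scale=0.2, size=n_samples)  # bias draws
    return 1.0 / (1.0 + np.exp(-(w * x + b)))           # P(class = "dog")

def epistemic_spread(x):
    """Standard deviation of the prediction across posterior samples."""
    return sample_predictions(x).std()

# Far from the decision boundary the sampled networks agree; at an
# ambiguous (e.g. out-of-distribution) input they disagree.
print(epistemic_spread(5.0))  # small spread: samples agree
print(epistemic_spread(0.0))  # larger spread: samples disagree
```

A deterministic network collapses this ensemble to a single draw and so cannot express the second kind of answer, which is exactly the limitation the dog/cat/human example points at.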
20:00 - Drinks
20:30 - Closing
20 March 2024 - 16:00 - 18:00
Ramira van der Meulen, University of Groningen
Theory of Mind is a skill humans use to predict the behaviour of, negotiate with, and even deceive others. While there is great value in knowing what the other knows, there is just as much value in knowing why they know – and knowing what they will do with that information. In this talk, we will journey through some of the literature on Theory of Mind and Common Ground from the perspectives of developmental psychology, cross-cultural psychology, agent modelling, and linguistics, and briefly evaluate how they fit into the puzzle that is modelling human understanding. We will touch on how upbringing, shared information, and a sense of joint intent all factor into human problem-solving, and discuss what we need our sometimes overly prudent systems to know before they start talking.
26 February 2024, 14:00-16:00
House of Connections, Grote Markt
In this discussion group, we will be delving into the use of AI in the military: in the recent conflicts in Ukraine and Palestine, we have witnessed the use of autonomous weapons and machine learning systems for tracking military targets. This sparks a debate about the ethical, technical, and legal issues that this use implies. We will be discussing it alongside speakers from multidisciplinary backgrounds.
Invited speakers
Jeroen van Bijsterveld, Assistant Professor @ AI Department, RUG
Taís Fernanda Blauth, PhD student @ Campus Fryslân, RUG
Dennis Jansen, PhD student @ Universiteit Utrecht
Schedule
14:00-14:30: reception and coffee
14:30-15:30: presentations, Q&A
15:30-16:00: discussion
Bart Verheij, University of Groningen
6 December 2023
Clearly, responsible AI systems should be capable of symbolic reasoning tasks, for which data-driven AI methods are typically not suitable. To the surprise of many, there is initial experimental evidence that some data-driven AI systems show symbolic reasoning behavior. This suggests that knowledge-based and data-driven methods in AI are aligning (finally). As yet, findings are experimental and are not backed by a theoretical basis. In my work, I have been using argumentation as a theoretical perspective on the alignment of knowledge, reasoning and data, in particular, by using the case model formalism as a semantics for rule-based arguments. In this talk, I discuss the status of this research aimed at the design of hybrid argumentation systems that can responsibly strengthen human and machine performance.
Michael D. Nunez, University of Amsterdam
21 September 2023
To understand and estimate cognitive computations in individuals, we need, among other methods, simple models of cognition that collapse across complex brain dynamics. These models seek to find and estimate cognitive explanations of observed data that are potentially simple to understand. I discuss my work on extending existing Cognitive Measurement Models of task behavior during simple decision making. Specifically, I talk about extending these methods to Neuro-Cognitive Measurement Models that explain both neural and behavioral data. Our recent work advances a new framework that assumes cognition produces both choice-response times and noisy EEG measures (specifically Event-Related Potentials, ERPs) on single experimental trials. The framework relies on simulation-based methods to extract Bayesian posterior distributions of cognitive parameters with Artificial Neural Networks. These methods also allow the estimation of cognitive parameters in individuals that could not previously have been estimated with behavioral data alone. I discuss the future of these methods and why I think they are necessary for the fields of Cognitive Neuroscience and AI.
Yoshinobu Kano, Shizuoka University
18 September 2023
Recent advances in Large Language Models (LLMs), such as ChatGPT, affect not only the NLP community but ordinary people as well. Which NLP problems can be solved by LLMs is still under investigation. Another question is to what extent LLMs are "similar" to humans -- they apparently differ in architecture and training data size. I introduce a couple of related NLP projects that I organize, such as automatic medical diagnosis, the AIWolf project (a dialog system for conversation games), Legal NLP (a legal bar exam solver), and an SNS analysis project for public opinion, and discuss the abilities and limitations of LLMs along with future directions.
Laura Bustamante, Washington University in St Louis
23 June 2023
Individuals differ considerably in their motivation and ability to tackle physical or cognitive challenges (e.g., professional athletes or scholars), and these differences are implicated in certain psychiatric symptoms (e.g., apathy). Although the subjective experience of effort is ubiquitous, there is still much to uncover regarding the mechanisms underlying effort-based decision making, including what drives individual differences in motivation for effort and how different domains of effort (e.g., cognitive and physical) relate to each other and to real-world behaviors. Here we present results from three studies using a novel individual differences measure of effort costs (i.e., avoidance).
The Effort Foraging Task embedded cognitive or physical effort into a patch foraging sequential decision task, to isolate and quantify the cost of both cognitive and physical effort using a Marginal Value Theorem computational model. Participants chose between harvesting a depleting patch, or traveling to a new patch that was costly in time and effort. Participants' exit thresholds (reflecting the reward they expected to receive by harvesting when they chose to travel to a new patch) were sensitive to cognitive and physical effort demands, allowing us to quantify the perceived effort cost in monetary terms. Unlike existing tasks, the Effort Foraging Task indirectly measures the influence of effort (versus directly asking participants to choose between low and high effort options). We observed a subset of participants who were effort seeking, which is not commonly seen in direct tasks. We present an overview of three studies to demonstrate the breadth of potential applications for this new task and some of the insights produced in its initial applications.
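As a rough illustration of the Marginal Value Theorem logic behind the task (with invented parameters, not those of the actual study), an optimal forager leaves a patch once the next harvest would pay less than the best achievable long-run reward rate; raising the effort of travel lowers that rate and shifts the exit threshold:

```python
import numpy as np

# Illustrative numbers only: each harvest takes `harvest_time`, rewards
# shrink by `decay` per harvest, and moving to a fresh patch costs
# `travel_time`.
initial_reward = 10.0
decay = 0.9
harvest_time = 1.0
travel_time = 6.0

def overall_rate(n_harvests):
    """Long-run reward rate if the forager always leaves after n harvests."""
    rewards = initial_reward * decay ** np.arange(n_harvests)
    return rewards.sum() / (n_harvests * harvest_time + travel_time)

# Marginal Value Theorem: the optimal policy exits when the next
# harvest's reward drops below the overall environment rate, so the exit
# threshold equals the best achievable long-run rate.
best_n = max(range(1, 50), key=overall_rate)
print(best_n, round(overall_rate(best_n), 2))  # → 9 4.08
```

In the task described above, the logic runs in reverse: observed exit thresholds reveal how costly participants perceive the (cognitive or physical) travel to be, which is what lets effort costs be expressed in monetary terms.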
In the first study, a large online experiment (N=537), we found that cognitive and physical effort costs were positively correlated (r=0.55), suggesting that these are perceived and processed in common terms, and were related to self-reported psychiatric symptoms (e.g., anxiety, cognitive function, and anhedonia). In the second, a clinical study of major depression (N=52 MDD, 27 comparison), we found distinct patterns of symptom relationships for the cognitive vs. physical domains. Greater anxiety symptoms of MDD were selectively associated with lower cognitive (but not physical) effort cost. This effect was not explained by cognitive effort task performance. Consistent with previous findings, greater behavioral apathy and anhedonia were selectively associated with increased physical (but not cognitive) effort costs. In the third study (N=47) we tested whether we could promote effort seeking via brain stimulation in our indirect task, consistent with previous findings from direct tasks. We administered transcranial direct current stimulation over the frontopolar cortex and found that under stimulation participants were more willing to exert effort (a 0.21-apple reduction in exit threshold, p<0.021) relative to sham. Stimulation did not affect cognitive effort task performance nor overall foraging threshold. These studies collectively support the utility of the Effort Foraging Task in measuring individual differences in effort costs and lay the foundation for numerous potential future applications.
Hylke Jellema, University of Groningen
Ludi van Leeuwen, University of Groningen
Marcello di Bello, Arizona State University
16 June 2023
Ludi van Leeuwen, University of Groningen
28 March 2023
Every day we reason about our uncertain world by using evidence. In most cases in our lives, the stakes are low. However, in the domain of law, the stakes can be high: a mistake in reasoning with evidence might mean that an innocent person goes to jail or a guilty person walks free. Bayesian Networks have been proposed as a way to formalize reasoning under uncertainty with evidence, as an explicit and normative model in the domain of law. Yet, existing approaches have many problems. In this talk, I will discuss the (mis)use of proposed methods for using Bayesian Networks in criminal law.
Mannes Poel, University of Twente
24 March 2023
In this talk, an overview will be presented of the research in Brain-Computer Interaction for non-medical and non-health applications at the CS department of the University of Twente. The overview will start with the research on BCI and games that takes HCI requirements into consideration, and will end with an overview of our current research on mental-state detection.
Russell Chan, University of Twente
24 March 2023
Event-related desynchronization and synchronization (ERD/S) are important time-frequency predictors for motor-sequence learning (MSL). Activation levels reflect changes in coherent activity over the cortex, and shifting regional activity may provide insight into how motor sequence learning reaches automation. We predicted that learning leads to different frequency band dominance during the motor preparation, motoric, and post-motor phases of MSL. We investigated 25 participants who performed five learning blocks of a 6-key Go/No-go version of the Discrete Sequence Performance task. EEG was recorded using a 32-channel actiCHamp amplifier. We extracted ERD/S values for all sequence trials in three separate bands: theta (4–8 Hz), alpha (8–12 Hz), and beta (12–29 Hz) in 100 ms time windows for the epochs of interest: preparation, motoric, and post-motor. Theta oscillatory activity was dominant during the preparation and motoric phases, whilst beta activity was dominant during the post-motor phase after training. This talk will go on to link oscillatory activity to the behavioural phenomenon of chunking and to how certain changes could be used to predict training alterations.
Zhenxing Zhang, University of Groningen
14 March 2023
In this talk, we will explore the fascinating world of generative adversarial networks (GANs), a type of neural network architecture that has gained immense popularity in recent years for its ability to generate high-quality pictures. I will start by introducing the basics of neural networks and deep learning. I will then delve into the details of GANs, explaining their architecture, loss functions, and training process. I will also discuss how GANs have been used in various fields, such as art, gaming, and healthcare, and showcase some impressive examples of GAN-generated images. By the end of this talk, you will have a solid understanding of GANs and their potential applications. This talk is suitable for beginners with no prior knowledge of neural networks or GANs.
Maarten van der Velde, SlimStampen
16 February 2023
An adaptive fact learning system makes the process of memorising information more efficient. Such a system integrates a computational cognitive model of human learning and memory with a digital learning environment, creating a learning experience personalised to the needs of the individual learner. The system's aim is to present the right items for rehearsal at the right moment, so that time and effort are spent productively. Adaptive learning systems are a prime example of applied cognitive science. As such, their development raises both theoretical and practical questions. How can we improve the scheduling of rehearsal opportunities within a 10-minute learning session? How do we account for the behavioural and temporal dynamics we observe in real-world learning in our models of human memory? And what role can adaptive learning systems play in educational practice?
Tom Lenaerts, Université Libre de Bruxelles
8 December 2022
The existence and characterization of a Theory of Mind (ToM) in upper primates has been under investigation for decades now. Yet how a ToM may evolve remains an open problem. We developed an Evolutionary Game Theoretical model in which a finite population of individuals use strategies that incorporate (or not) a ToM, modelled using level-k recursive reasoning, to infer a best response to the anticipated behaviour of others within the context of the centipede game. We find that strategies incorporating a ToM evolve and prevail under natural selection, provided individuals make cognitive errors and a temptation for higher future gains is in place. We furthermore find that such non-deterministic reasoning co-evolves with an optimism bias, favouring the selection of a new equilibrium configuration in the centipede game that had not been anticipated to date. Our work reveals not only a unique perspective on the evolution of bounded rationality but also a co-evolutionary link between the evolution of ToM and the emergence of self-deception.
Sabine Frittella, INSA Centre Val de Loire
15 November 2022
Belnap–Dunn logic is a four-valued logic introduced to reason with incomplete and/or inconsistent information. It relies on the idea that pieces of evidence supporting a statement and its negation can be independent. Non-standard probabilities were proposed in [1] to generalize the notion of probabilities over formulas of Belnap–Dunn logic. In [2], we continue this line of research and study the implications of using mass functions, belief functions, and plausibility functions to formalize reasoning with incomplete/contradictory evidence within the framework of Belnap–Dunn logic.
[1] D. Klein, O. Majer, and S. Rafiee Rad. Probabilities with gaps and gluts. Journal of Philosophical Logic, 50(5):1107–1141, October 2021.
[2] M. Bílková, S. Frittella, D. Kozhemiachenko, O. Majer, and S. Nazari. Reasoning with belief functions over Belnap–Dunn logic. Preprint, 2022. arXiv:2203.01060.
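As a small illustration of the four-valued setting (a standard textbook presentation, not specific to [1] or [2]), Belnap–Dunn truth values can be encoded as pairs recording evidence for and evidence against a statement, which makes the independence of support and refutation concrete:

```python
# Belnap–Dunn truth values as (evidence-for, evidence-against) pairs.
T, F = (1, 0), (0, 1)
B, N = (1, 1), (0, 0)   # Both (a glut) and Neither (a gap)

def neg(a):
    """Negation swaps the evidence for and against a statement."""
    return (a[1], a[0])

def conj(a, b):
    # Supported iff both conjuncts are supported;
    # refuted iff at least one conjunct is refuted.
    return (a[0] & b[0], a[1] | b[1])

def disj(a, b):
    # Dual of conjunction.
    return (a[0] | b[0], a[1] & b[1])

# Independent evidence for p and against p yields a glut, not explosion:
print(conj(B, T))  # (1, 1): still Both
print(neg(N))      # (0, 0): negating a gap leaves a gap
```

Probabilistic extensions such as those in [1] and [2] then assign degrees of belief to these two evidence dimensions rather than to a single classical truth value.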
Bilal Wehbe, German Research Center for AI
3 November 2022
Abstract: Machine learning plays a dominant role in the current perception of AI, and it is also indispensable in many domains of maritime robotics. However, AI has significantly more manifestations that are being used in various areas of unmanned underwater vehicles, be it in classical remotely operated vehicles (ROVs), gliders, crawlers up to fully autonomous underwater vehicles (AUVs). This talk will cover some of the recent ML advances in underwater robotics that are carried out at the DFKI - Robotics Innovation Center in Bremen. This will cover different domains from modeling hydrodynamics and control of AUVs, to underwater perception using sonars and 3D reconstruction.
Pınar Yolum, Utrecht University
19 October 2022
Abstract: Privacy on the Web is typically managed by giving consent to individual Websites for various aspects of data usage. This paradigm requires too much human effort and thus is impractical for Internet of Things (IoT) applications, where humans interact with many new devices on a daily basis. Ideally, handling of privacy has to be reasoned about by software privacy assistants, depending on the norms, context, and trust among entities. Thus, privacy assistants can help by making privacy decisions in different situations on behalf of their users. To realize this, we propose an agent-based model for a privacy assistant. The model identifies the contexts that a situation implies and computes the trustworthiness of these contexts. Contrary to traditional trust models that capture trust in an entity by observing a large number of interactions, our proposed model can assess trustworthiness even if the user has not interacted with the particular device before. Moreover, our model can decide which situations are inherently ambiguous and thus ask the human to make the decision. We show the applicability of our model using a real-life data set, as well as the adjustments needed to serve different types of users.
Prof Ken Satoh, National Institute for Informatics, Tokyo
29 August 2022
Abstract: Ken Satoh entered the law school of the University of Tokyo in 2006, when he encountered the "Japanese presupposed ultimate fact theory", which handles uncertainty in litigation to reach a decisive judgement. He immediately found an equivalence between this theory and non-monotonic reasoning, which he had studied for 30 years, and developed PROLEG. In this talk, we present an overview of the PROLEG system with some demonstrations and show some extensions (interactive PROLEG for arranging issues in litigation, and an application to private international law).
Andreea Sburlea, University of Groningen
June 2022
A Brain-Computer Interface (BCI) transforms mentally induced changes of brain signals into control commands and serves as an alternative human-machine interface. In this presentation I will talk about how we can leverage non-invasive electroencephalographic (EEG) signals to extract information about goal-directed movement planning and execution, and how we can decode movement trajectories using machine learning and artificial intelligence tools. Moreover, I will talk about trajectory adaptation and correction in the neural control of robotic assistive devices by decoding error potentials from the brain signals of able-bodied and spinal cord injured individuals. Finally, I will discuss kinesthetic feedback as an alternative input to close the BCI loop and deliver to the user somatosensory information about the motor command initiated in their brain.
Joost Joosten, University of Barcelona
May 2022
In this talk, I will first present the general activities of our research group within the partnership between the University of Barcelona, Formal Vindications S.L., and Guretruck S.L. We will speak about European Regulation 561 for transport and then generalise the setting to temporal quantitative regulations in general. The challenge here is to find the right balance between expressibility on the one hand and feasible model-checking problems on the other. In joint work with Moritz Müller, we present a particular class of stopwatch automata that, in a well-defined sense, offers such a balance. In particular, we exhibit an automaton for the above-mentioned Regulation 561. We will see how work with proof assistants can be combined with model checking to strive for ultimate security in programming certain computable laws. Throughout the talk we shall recurrently dwell on the impact of legal software on society.
Some preliminary results can be found on: http://formalvindications.com/work/