LLMs and Responsible AI
@ Chalmers
This workshop offers a deep dive into the rapidly evolving landscape of Large Language Models, moving beyond the hype to explore their technical foundations and societal implications. By bridging the gap between cutting-edge research and ethical application, we aim to uncover how these models intersect with various domains of science and technology.
12:00-12:30 : Ricardo Baeza-Yates — Human-AI Co-evolution
Human-AI co-evolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterizes our society, but is understudied in the artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI co-evolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users' choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often "unintended" systemic outcomes. This talk introduces human-AI co-evolution as the cornerstone for a new field of study at the intersection between AI and complexity science, focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways of capturing feedback-loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI, and society; (iii) provide real-world examples of different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualizing them at increasing levels of abstraction, i.e., scientific, legal, and socio-political.
12:30-13:00 : Anandi Hattiangadi — Artificial General Intelligence: A Philosopher's Manifesto
In this talk, I argue that progress in research on artificial general intelligence (AGI) is impossible without progress in philosophy. This is because, while the concept of AGI is that of a machine that is at least as intelligent as a human, classical accounts of the nature and requirements of human intelligence are flawed. To fix these problems, I argue, we must return to fundamental philosophical questions about what makes human intelligence distinctive and what it would take for a machine to think like a human. I argue that a capacity for language and abstract thought is necessary for AGI, and that existing AI systems are based on flawed philosophical models of these capacities. I sketch a novel "neoclassical" account of what it takes to understand a language, and explore the implications of this approach for the assessment of the intelligence of current systems and the future of AGI.
13:00-13:45 : Asad Sayeed — Attack of the meta-stochastic hyper-parrots
LLM-based AI systems doing something spooky and threateningly human-like have become a recurring feature of social media. However, a popular discourse emerged early on that they are merely "stochastic parrots" that can only produce variations on the enormous quantities of text they have already seen, and are thus not meaningfully human-like. Each claim is popularly seen to invalidate or disprove the other, even though there is no inherent contradiction. Both things can be true: LLMs produce creative and unexpected behaviours — especially when you hook them up to an environment where they can execute code — while still not really being human-like.
13:45-14:30 : Sandro Stucki — GenAI Security – Promises and Challenges
The past decade has seen a steep rise in the use of machine learning (ML), fueled by developments in deep learning and generative AI (GenAI). The rapid evolution and adoption of these techniques bring unique opportunities and challenges, not least for cybersecurity. In this talk, I will give a high-level overview of the novel threats and possible solutions for securing GenAI systems (security for AI), as well as the role of AI/ML in doing so (AI for security). The main focus will be on LLMs and AI agents — how they change the threat landscape and what to do about it.
14:30-15:00 : Fika break
15:00-15:45 : Ricardo Baeza-Yates — Geographical Biases in Large Language Models
Recent advancements in Large Language Models (LLMs) have made them a popular information-seeking tool among end users. However, the statistical training methods for LLMs have raised concerns about their representation of under-represented topics, potentially leading to biases that could influence real-world decisions and opportunities. These biases could have significant economic, social, and cultural impacts as LLMs become more prevalent, whether through direct interactions — such as when users engage with chatbots or automated assistants — or through their integration into the assessment, representation, and dissemination of knowledge. One important case is geographical bias, which surfaces in recommendations for relocation, tourism, or new businesses, and also appears in cultural stereotypes and even in the representation of maps.
15:45-16:30 : Anandi Hattiangadi — Do Large Language Models Understand Natural Language?
Large language models (LLMs) are remarkably fluent at processing natural language. However, I argue, they do not understand language. The argument draws on considerations raised by Kripke in his discussion of Wittgenstein's rule following considerations. I argue here that these considerations don't merely show that LLMs don't understand language, but more broadly, that all of the classical accounts of language cognition are fundamentally flawed. I present a new alternative, a "neoclassical" paradigm, that builds on a number of confluent strands of research from across the cognitive sciences. Though neoclassicism implies that LLMs don't understand language, it suggests an approach to the development of AI systems that do.
16:30-17:00 : Open Discussion
The program may still undergo small changes before its final version.
How do I find the room?
The room is EC in the EDIT building; directions can be found at this [link].
Do I need to register?
Registration is not compulsory. Non-registered participants will be admitted until room capacity is reached, but will not receive lunch.
I am not able to attend, can I join remotely?
We are considering a hybrid format for the colloquium part of the event. Links will be provided as we get closer to the event's date.
Ricardo Baeza-Yates
KTH / Universitat Pompeu Fabra / University of Chile
Anandi Hattiangadi
Stockholm University / Institute for Futures Studies
Asad Sayeed
Gothenburg University
Sandro Stucki
Chalmers