10:00
Ophelia Deroy (LMU, Germany)
Welcome Address
10:30 – 11:45
Tamer Amin (AUB, Lebanon)
Consolidating a Consensus View of Conceptual Change in Science Learning
Research on conceptual change in science learning has been accumulating for the last half-century. We have learned a great deal about how we come to understand scientific concepts from research in many different disciplines, including developmental and educational psychology, the cognitive and learning sciences, science education, educational neuroscience, and the history and philosophy of science. Our insights remain fragmented, however, as many theoretical perspectives have emerged from this diverse multidisciplinary body of research. I argue that some of the influential theoretical perspectives can be seen as complementary, and that a unified perspective can be articulated around which considerable consensus should be possible, paving the way for progress in both theory and practice.
Coffee Break
12:15 – 13:45
Sascha Poquet (TUM, Germany)
Capturing group learning, collaborative learning and complex problem solving with process data
Over the past few decades, educational and work environments have undergone significant changes. Many interpersonal interactions are now asynchronous or mediated by technology. Recent advances in genAI are further beginning to substitute easily scaled learner–machine interactions for some interpersonal processes. These technology-mediated developments are modifying the cognitive, socio-cognitive, socio-emotional, and meta-cognitive processes underpinning individual and group learning in collective tasks. In this talk, I will present examples from educational settings of how data such as clickstreams, mobile communication, and text, recorded by the very technology used during these processes, can be utilised to gain a better understanding of the social and cognitive processes relevant to learning and working in groups. In addition to showcasing potential ways of effectively utilising these data, the talk will highlight the lack of integration with broader theories in the cognitive sciences beyond the learning sciences. As computational measures become more sophisticated, there is a need to bridge these methodological innovations in data and metrics with theory building stemming from the fundamental domains that study how groups learn.
Lunch
15:00 – 16:15
Bana Bashour (AUB, Lebanon)
How we blame
This book presents a naturalistic account of moral responsibility that is neutral on the metaphysics of free will. It engages with empirical literature in experimental philosophy and psychology and draws on real-life case studies to illuminate the author’s theory of moral responsibility. The author argues that agency requires an understanding of moral responsibility attributions, which in turn requires that one understand one’s intentional states and those of others. Further, she argues that a justified attribution of moral responsibility involves justified attributions of intentional states and justified perceptions of norm violations. This claim is novel because when moral responsibility is indexed to a particular onlooker, the discussion becomes one about whether a blamer is justified in attributing moral responsibility to the blamed. Another distinctive feature of the author’s account is that it makes room for cultural variability in our justifications of moral responsibility; those in different cultures may have different norms or expectations of one another. The first part of the book argues for a theoretical account of agency and moral responsibility while distinguishing these from one’s theory of punishment: justified attributions are interpersonal, whereas theories of punishment are institutional and societal in nature. The second part of the book turns to the empirical psychology and experimental philosophy literature on the nature of moral responsibility.
Coffee Break
16:45 – 18:00
Ophelia Deroy (LMU, Germany)
Collaborative session: Many labs initiatives
10:00 – 11:15
Laura-Joy Boulos (Saint Joseph, Lebanon)
Between trauma and coherence: narrating the weight of chronic crises
Coffee Break
11:45 – 12:00
PhD Students (Saint Joseph, Lebanon)
Chronic Crisis and Mental Health
Lunch
14:30 – 15:45
Mehdi Khamassi (CNRS, France)
Learning and decision-making mechanisms in brains and robots: Towards more ethical decisions in humans and AI
Reinforcement learning (RL) theory constitutes a framework for an artificial agent to learn actions that maximize rewards in its environment. It has been successfully applied in neuroscience to account for animal neural and behavioral processes in simple laboratory tasks, such as Pavlovian and instrumental conditioning and single-step economic decision-making tasks. It moreover became very popular for its account of dopamine reward prediction error signals. However, more complex multi-step tasks, such as navigation and social interaction tasks, illustrate its computational limitations. More recent work opens ways to extend this framework towards the satisfaction of richer goals (epistemic goals, social goals, etc.) and the implementation of richer learning strategies to achieve these goals.
In parallel, research in engineering (robotics in particular) has emphasized the complementarity between different learning strategies when facing complex tasks, and has explored solutions for combining them. One central distinction is between model-based and model-free reinforcement learning strategies: in the former case, an agent learns a statistical model of the effects of its actions in the environment, and then uses this model to plan sequences of actions towards desired goals. In contrast, model-free strategies are relevant when the environment's statistics are too noisy to learn a good internal model. In this case, RL agents can instead learn local action values and adapt reactively in each state of the environment.
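The model-based/model-free distinction described above can be sketched in a few lines of Python. This is a minimal illustration using standard textbook forms (Q-learning and value iteration); the function names, toy model structure, and hyperparameters are assumptions for illustration, not the speaker's own formalism.

```python
# Model-free: Q-learning updates local action values from a sampled
# transition (s, a, r, s_next), with no internal model of the environment.
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Model-based: given a learned model of transitions and rewards, plan by
# value iteration. model[s][a] is a list of (next_state, reward, prob).
def value_iteration(model, gamma=0.9, n_iters=100):
    V = {s: 0.0 for s in model}
    for _ in range(n_iters):
        for s in model:
            V[s] = max(
                sum(p * (r + gamma * V[s2]) for (s2, r, p) in outcomes)
                for outcomes in model[s].values()
            )
    return V
```

The contrast is visible in the signatures: the model-free update consumes raw experience one transition at a time, while the model-based planner never touches the environment directly and instead sweeps over a learned model.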
I will present a novel extension of the RL framework that we recently proposed. It is based on a three-level motivational system (operational level, motivational level, norm level) to model decisional autonomy in open-ended learning agents (whether biological or artificial). Extending the motivational reinforcement learning formalism, it is intended to relate the norm level (rules/conventions/norms at the societal level, but also missions required of artificial agents by humans) to the motivational level (so as to modulate the agents' homeostatic, epistemic, social, and mission drives), which in turn determines the multidimensional reward function used by the RL agents at the operational level. I will finish the presentation by discussing the perspectives that this new framework opens. First, formalizing new levels of motivational autonomy (which we call "the autonomy ladder"), to help humans better understand the psychological mechanisms that these computational models predict modulate their own decisions towards the long-term satisfaction of ethical principles of life. Second, continuing to increase the learning possibilities of AI systems while ensuring that high-level norms and values are respected. Third, helping formalize the constraints that humans collectively wish to impose on the level of autonomy of AI systems.
15:45 – 17:00
Nada Chayaa (Arabic Council for Social Sciences)
Intrinsic neuronal properties as mechanisms of feedback, prediction, and individual variation in vocal learning
Coffee Break
17:30 – 18:30
Bahador Bahrami (LMU, Germany)
Crowd Cognition: Understanding How Groups Decide Together
Our research at the Crowd Cognition Group investigates the cognitive and neurobiological basis of collective decision-making. Drawing on studies across perception (e.g., vision, olfaction), judgement (numerical cognition, medical diagnosis, general knowledge), and forecasting (geopolitical events), our work identifies the conditions under which collaboration improves outcomes. Three lessons have emerged. First, communication is essential: exchanging perspectives allows groups to pool knowledge. Second, expressing uncertainty is critical: confidence, when calibrated, helps groups weigh contributions effectively, though misplaced overconfidence can derail performance. Third, combining local deliberation with global aggregation transforms liabilities such as herding and polarization into strengths, creating a remarkable boost in collective wisdom.
10:00 – 11:15
Arij Daou (AUB, Lebanon)
Intrinsic neuronal properties as mechanisms of feedback, prediction, and individual variation in vocal learning
Adaptive behavior depends on how the brain monitors feedback, predicts outcomes, and stabilizes learned skills. Vocal learning in songbirds offers a powerful model to investigate these processes. We show that neurons in the zebra finch premotor nucleus HVC exhibit consistent intrinsic properties within an individual, yet distinct patterns across individuals, effectively providing a physiological signature of each bird’s song. These intrinsic properties are not static: they refine as songs crystallize during development and degrade rapidly when auditory feedback is experimentally perturbed via delayed auditory feedback (DAF). This tight coupling between intrinsic properties and sensory experience suggests that learning and feedback monitoring are orchestrated not only by synaptic plasticity but also by intrinsic plasticity. Computational modeling of ion channel conductances further links these physiological properties to the encoding and maintenance of learned vocal patterns. By linking biophysical parameters to prediction, error correction, and individual behavioral identity, our findings advance a framework in which intrinsic plasticity is a critical substrate for cognitive functions such as skill acquisition, adaptive plasticity, and the preservation of individuality in learned behaviors.
Coffee Break
14:30 – 15:45
Özlem Yeter (Groningen University, Netherlands)
Cognitive effects of war: Comparative evidence from Syrian refugee children
During my master's project, we tested low-SES Syrian refugee children (aged 9) living in Turkey. We assessed their working memory (WM), inhibitory control (IC), shifting, and fluid intelligence, as well as their vocabulary abilities. Their scores were compared with those of two low-SES, age-matched control groups: Turkish monolinguals and Arabic–Turkish bilingual minority children living in the Antakya (Antioch) region of Turkey. In this study, Syrian children lagged behind both non-refugee groups on the fluid intelligence task. They also obtained lower WM scores than their bilingual controls. Adverse experiences and limited preschool education negatively affected Syrian children's cognitive development. However, no further cognitive difference between this war-affected group and the non-refugee children was observed. On language tests, Syrian children had a smaller Turkish vocabulary than both non-refugee control groups, but they outperformed their bilingual controls in Arabic. Although Syrian children showed a more balanced bilingual profile, their L1 (Arabic) skills were poorer than the control groups' Turkish skills. The overall results suggest that although Syrian children's WM, fluid intelligence, and L1 development were negatively impacted by forced displacement, they were able to develop Turkish vocabulary skills and perform on par with Turkish monolinguals on all assessed executive function measures (WM, IC, and shifting). This is the first evidence suggesting that bilingual status may actually have created a cognitively protective shield for disadvantaged Syrian children. The study also highlights the significance of early childhood education for cognitive development. At the end of my presentation, I will discuss the practical challenges of collecting data from vulnerable and minority groups.
11:45 – 13:00
Hans D. Müller (AUB, Lebanon)
AI Ethics Education in the Arab Region
Artificial Intelligence (AI) is rapidly transforming many aspects of our lives, from healthcare to finance and transportation to entertainment. With this profound influence comes the responsibility to ensure that AI systems are developed and used ethically. AI Ethics therefore emerges as a crucial area of study, requiring attention in university curricula across disciplines. Understanding the current landscape of AI Ethics education, specifically in the Arab region, is crucial for informed curriculum development. In this book chapter, we conducted a benchmarking study to identify existing courses, assess their content and delivery methods, and highlight areas for improvement. Based on our benchmark analysis, we propose the Community of Inquiry (CoI) theoretical framework as an educational framework for teaching AI Ethics. To assess its utility as an underlying educational framework for AI Ethics, we conducted a case study of one AI Ethics course following that framework. The case study involved a student survey to assess and gauge students' perceptions of the course and its impact on their careers; instructors' perspectives were also obtained to highlight successes and challenges.
Lunch