CERNAI
Center for Reasoning, Normativity, and AI
Temporary website. Please check cernai.unipv.it regularly.
CERNAI is a forthcoming research center dedicated to the study of reasoning, normativity, and artificial intelligence at the intersection of law, philosophy, and logic. Based at the University of Pavia, CERNAI investigates how human and machine agents reason from cases to principles, act under uncertainty, and justify decisions within normative systems.
Rooted in the foundational work of the GENERIC project on generic reasoning, the center brings together researchers in legal theory, formal logic, ethics, cognitive science, and AI to:
Advance formal and conceptual understanding of non-deductive, exception-tolerant reasoning;
Explore the epistemology and semantics of generalizations in law, science, and artificial agents;
Develop normative frameworks for trustworthy AI, grounded in human-like reasoning and legal accountability;
Support interdisciplinary training at the intersection of logic, law, and machine learning;
Provide an institutional home for research collaborations, visiting scholars, and doctoral projects.
Through seminars, fellowships, applied collaborations, and public engagement, CERNAI aims to shape how societies understand and govern reasoning in both human and artificial systems—ensuring their alignment with democratic values, fairness, and epistemic responsibility.
Current Research Directions
Foundations of Reasoning and Logic
Hyperintensional semantics, defeasible inference, normativity, deontic logic, generic reasoning.
AI and Generalization
Generalization in LLMs and AI agents, genericity-aware alignment, explainable AI reasoning.
Alignment
Reasons-based AI agents, law-following AI.
AI Agency and Agential Risk
Formal models of intentions in AI agents, formal models of agentic and existential risk.