Hi! I'm a philosophy undergraduate at the University of Barcelona, a Board Member of the Centre for Animal Ethics at Pompeu Fabra University, co-Editor-in-Chief of Animal Ethics Review, and a Digital Sentience Consortium Fellow.
My research focuses on foundational questions in well-being, philosophy of mind, and political philosophy, and explores how these intersect in the context of frontier AI systems and non-human animals.
I'm interested in figuring out how we ought to consider the interests of all welfare subjects in our decision-making, particularly when determining how to conduct safe AI development.
Feel free to contact me at adriarodriguezmoret@gmail.com.
You can also find me on PhilPeople, Google Scholar, Twitter, and Bluesky. Here is my CV.
Publications
AI Welfare Risks (2025)
Philosophical Studies
I argue that advanced near-future AI systems have a non-negligible chance of being welfare subjects under major theories of well-being. I further contend that AI safety and development efforts pose two AI welfare risks, and propose how leading AI companies could reduce them.
AI Alignment: The Case for Including Animals (2025)
Philosophy & Technology
With Peter Singer, Yip Fai Tse and Soenke Ziesche
We argue that frontier AI systems may harm animals and should be aligned with basic concern for animal welfare. We also propose low-cost policies for AI companies and public policy to ensure such protection.
An Inclusive Account of the Permissibility of Sex (2024)
Social Theory and Practice
I develop a theory of the permissibility of sex acts that explains our beliefs about the status of sex acts involving non-human animals, children, and humans with intellectual disabilities, without resorting to unjustified discrimination such as speciesism.
Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning (2023)
Journal of Artificial Intelligence and Consciousness
I argue that, all else being equal, we have strong moral reasons to align future AI systems with the interests of all sentient beings, rather than solely with human interests or preferences.
Papers under review
I argue that only sentient beings can be welfare subjects, because positing welfare goods and bads that need not be pleasantly or unpleasantly experienced in any way would lead to unacceptable implications about what is prudentially best for us in various circumstances.
Recorded Talks
"AI Welfare Risks" (philosophy-focused, 45 min), at the University Pompeu Fabra's Law & Philosophy Colloquium.
"AI Welfare Risks" (policy-focused, 30 min), at AIADM London 2025.
"Including Animal and AI Welfare in AI Alignment" (20 min), at Rethink Priorities' Strategic Animal Webinars.