Hi! I'm a philosophy researcher (and undergrad) at the University of Barcelona, a Longview Digital Sentience Career Transition Fellow, a Board Member of the Centre for Animal Ethics at Pompeu Fabra University, and co-Editor-in-Chief of Animal Ethics Review.
My research focuses on well-being, animal and AI welfare, AI alignment, and the future.
I'm especially interested in figuring out how we ought to consider the interests of all future sentient beings in our decision-making, particularly when determining how to conduct AI development.
Feel free to contact me at adriarodriguezmoret@gmail.com.
I am also on PhilPeople, Google Scholar, Twitter, and Bluesky. And here is my CV.
Publications
AI Alignment: The Case for Including Animals (2025)
Philosophy & Technology
With Peter Singer, Yip Fai Tse and Soenke Ziesche
We argue that frontier AI systems should be aligned with a basic level of concern for animal welfare and propose low-cost policies that AI companies and governmental bodies should consider implementing to ensure basic animal welfare protection.
AI Welfare Risks (2025)
Philosophical Studies
I argue that advanced near-future AI systems have a greater than negligible chance of being welfare subjects under all major theories of well-being, and that AI safety and development efforts pose two AI welfare risks; I then propose how leading AI companies can reduce them.
An Inclusive Account of the Permissibility of Sex (2024)
Social Theory and Practice
I develop a theory of the permissibility of sex which justifies our intuitions about the status of sex acts involving non-human animals, children, and humans with intellectual disabilities without resorting to any form of unjustified discrimination against these individuals (such as speciesism).
Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning (2023)
Journal of Artificial Intelligence and Consciousness
I argue that all else being equal, we have strong moral reasons to align the values of future AI systems to the interests of all sentient beings (including non-human animals and potentially sentient AIs) instead of only aligning them to human interests or preferences.
Papers under review
Paper arguing that only sentient beings can be welfare subjects, because positing welfare goods and bads that need not be pleasantly or unpleasantly experienced in any way would lead to unacceptable implications about what is prudentially best for us in various circumstances.
Recorded Talks
"AI Welfare Risks" (philosophy-focused, 45 min), at Pompeu Fabra University's Law & Philosophy Colloquium.
"AI Welfare Risks" (policy-focused, 30 min), at AIADM London 2025.
"Including Animal and AI Welfare in AI Alignment" (20 min), at Rethink Priorities' Strategic Animal Webinars.