Hi! I'm a philosophy researcher (and undergrad) at the University of Barcelona, a Board Member of the Centre for Animal Ethics at Pompeu Fabra University, and co-Editor-in-Chief of Animal Ethics Review.
My research focuses on well-being, non-human animals, AI systems, and the future.
I'm especially interested in figuring out how we ought to consider the interests of future sentient non-human beings in our decision-making, particularly when determining how to conduct AI development.
Feel free to contact me at adriarodriguezmoret@gmail.com.
I am also on PhilPeople, Google Scholar, Twitter, and Bluesky. And here is my CV.
Publications
AI Alignment: The Case for Including Animals (forthcoming)
Philosophy & Technology
With Peter Singer, Yip Fai Tse and Soenke Ziesche
We argue that frontier AI systems should be aligned with a basic level of concern for animal welfare and propose low-cost policies that AI companies and governmental bodies should consider implementing to ensure basic animal welfare protection.
AI Welfare Risks (2025)
Philosophical Studies
I argue that advanced near-future AI systems have a non-negligible chance of being welfare subjects under all major theories of well-being, that AI safety and development efforts pose two distinct AI welfare risks, and I propose how leading AI companies can reduce those risks.
An Inclusive Account of the Permissibility of Sex (2024)
Social Theory and Practice
I develop a theory of the permissibility of sex which justifies our intuitions about the status of sex acts involving non-human animals, children, and humans with intellectual disabilities without resorting to any form of unjustified discrimination against these individuals (such as speciesism).
Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning (2023)
Journal of Artificial Intelligence and Consciousness
I argue that all else being equal, we have strong moral reasons to align the values of future AI systems to the interests of all sentient beings (including non-human animals and potentially sentient AIs) instead of only aligning them to human interests or preferences.
Papers under review
Paper arguing that only sentient beings can be welfare subjects, because positing welfare goods and bads that need not be pleasantly or unpleasantly experienced in any way would lead to unacceptable implications about what is prudentially best for us in various circumstances.
Recorded Talks
A 45-minute philosophy-focused presentation of my paper "AI Welfare Risks" at Pompeu Fabra University's Law & Philosophy Colloquium.
A 20-minute presentation I gave at Rethink Priorities' Strategic Animal Webinars on what I think is potentially the best neglected, low-cost intervention to reduce future risks of harm to animals and digital minds: including animal and AI welfare in AI alignment.
A 30-minute policy-focused presentation of my paper "AI Welfare Risks" at AIADM London 2025, where I present four tentative AI welfare policies to reduce welfare risks.