Research Agenda
The NYU Center for Mind, Ethics, and Policy examines the nature and intrinsic value of nonhuman minds, with special focus on invertebrates and AI systems. Which nonhumans are conscious, sentient, and agentic? What kind of moral, legal, and political status should they have? How should we make decisions that affect them in circumstances involving disagreement and uncertainty? Our research agenda focuses on the following general themes, all of which are important, difficult, and contested—calling for considerable caution and humility.
Status: Which nonhumans matter for their own sakes?
Ethically, what is the basis of moral standing, that is, of morally mattering for your own sake? Some experts think that sentience (roughly, the capacity to consciously experience positive and negative states like pleasure and pain) is required. Others think that consciousness (roughly, the capacity to have subjective experience) is enough. Others think that robust agency (roughly, the ability to set and pursue your own goals in a self-directed manner) is enough. There are many other views as well.
Scientifically, which nonhumans have these features? Take consciousness, for instance. Some experts think that a cognitive system with the same structures, functions, and materials as mammalian or avian brains is required. Others think that a cognitive system with broadly analogous capacities for perception, attention, learning, memory, self-awareness, and so on is enough. Others think that a cognitive system that can process information or represent objects is enough. There are many other views as well.
Practically, how should we make decisions that affect nonhumans in circumstances involving disagreement and uncertainty about whether they matter? What are the risks and harms associated with false positives and false negatives about moral standing? When, if ever, is the probability or severity of false positives worse, and when, if ever, is the probability or severity of false negatives worse? How can we work together to construct a conception of the moral circle that mitigates both sets of risks at once?
Sample work:
Animals, Plants, Fungi, and Representing Nature (forthcoming)
Weight: How much do particular nonhumans matter for their own sakes?
Ethically, what determines how much intrinsic moral value an individual or population possesses? Regarding individuals, if one being has a greater capacity for happiness, suffering, and other such welfare states than another, does the former being "carry more weight" than the latter, all else being equal? Regarding populations, if one population has a greater capacity for welfare in the aggregate than another, does the former population carry more weight than the latter, all else being equal?
Scientifically, how much happiness, suffering, and other such welfare states can particular nonhumans experience? Does the capacity for welfare depend on cognitive complexity, cognitive longevity, and other such features? For instance, do elephants have greater capacities for welfare than ants, assuming that they both have the capacity for welfare? In the future, will robotic elephants have greater capacities for welfare than robotic ants, assuming that they both have the capacity for welfare?
Practically, how should we make decisions that affect large and diverse populations in circumstances involving disagreement and uncertainty about how much everyone matters for their own sakes? To what extent can we improve our ability to make interpersonal, interspecies, and intersubstrate welfare comparisons, and to what extent can we develop tools for making high-stakes decisions affecting members of different species and beings of different substrates without such comparisons?
Ethics: What do we owe particular nonhumans?
Ethically, how should humans interact with particular nonhumans? To what extent is ethics a matter of promoting welfare, respecting rights, cultivating virtuous characters, and cultivating caring relationships? Should we help others or merely avoid harming them? Should we extend equal consideration to everyone who matters independently of their spatial, temporal, biological, or material proximity to us, or should we extend greater consideration to some individuals than to others based on such relational features?
Scientifically, how do our actions and policies affect particular nonhumans? We now live in the Anthropocene, a geological epoch in which human activity is a dominant influence on the planet. How do agriculture, development, and other such practices affect animals directly and indirectly, and what do we owe animals in light of those impacts? In the future, how will AI development and deployment affect AI systems directly and indirectly, and what, if anything, will we owe AI systems in light of those impacts?
Practically, what kinds of decision procedures can we develop in order to treat nonhumans as well as possible, both individually and collectively? For example, how does the practice of killing farmed and wild animals shape our beliefs, values, and characters, and how should that factor into assessments of these practices? How does the practice of developing, deploying, and instrumentalizing human-like AI systems shape our beliefs, values, and characters, and how should that factor into assessments of these practices?
Sample work:
Beyond Compare? Welfare Comparisons and Multi-Criteria Decision Analysis (forthcoming)
Kantianism for Humans, Utilitarianism for Nonhumans? Yes and No.
Policy: What follows for particular practices and institutions?
Here a variety of questions arise. In the public sector, should particular nonhumans be classified as legal subjects, with legal rights? If so, should that take the form of legal personhood or a new, related kind of status? Also, should particular nonhumans be classified as political subjects, with political rights, in particular communities? If so, should that take the form of political citizenship or a new, related kind of status? Either way, what follows for everything from the right to bodily liberty to the right to political representation?
In the private sector, what kinds of ethical frameworks should shape our interactions with particular nonhumans? Should universities adopt ethical oversight frameworks for invertebrate research, and if so, what form should these frameworks take? Should AI companies adopt ethical oversight frameworks for the development and deployment of advanced AI, and if so, what form should these frameworks take? In all cases, what role should expert input, public input, external evaluators, and government regulators play?
More generally, what kind of multispecies and multisubstrate society should we seek to build in the future, and how can we combine radical long-term goals with moderate short-term steps? Should we seek to build an animal-free food system, and if so, how can we do so? Should we seek to build wildlife-inclusive infrastructure, and if so, how can we do so? Should we seek to build AI systems who prefer to cooperate with humans (rather than merely build AI systems who can be controlled by humans), and if so, how can we do so?
Sample work:
Ethical Oversight for Insect Research (forthcoming)
You can find other work related to these themes below, and you can find integrated discussion of these themes in The Moral Circle. You can also find much of our practical work related to food systems, infrastructure, and other such topics on the websites of the Center for Environmental and Animal Protection and the Wild Animal Welfare Program, along with regular collaborators such as the Guarini Center for Environmental, Energy, and Land Use Law. If you have comments or suggestions about our research, feel free to contact us.
Publications
CMEP advances research on the nature and value of nonhuman minds by contributing funding, authorship, or both. What follows is a list of relevant outputs to which our team has contributed since the launch of CMEP in 2022, in reverse chronological order.
What will society think about AI consciousness? Lessons from the animal case
Lucius Caviola, Jeff Sebo, and Jonathan Birch
Trends in Cognitive Sciences (2025)
How will society respond to the idea that artificial intelligence (AI) could be conscious? Drawing on lessons from perceptions of animal consciousness, we highlight psychological, social, and economic factors that shape perceptions of AI consciousness. These insights can inform emerging debates about AI moral status, ethical treatment, and future policy.
Read (open access)
Is There a Tension between AI Safety and AI Welfare?
Robert Long, Jeff Sebo, and Toni Sims
Philosophical Studies (2025)
The field of AI safety considers whether and how AI development can be safe and beneficial for humans and other animals, and the field of AI welfare considers whether and how it can be safe and beneficial for AI systems. There is a prima facie tension between these projects, since some measures in AI safety, if deployed against humans and other animals, would raise questions about the ethics of constraint, deception, surveillance, alteration, suffering, death, disenfranchisement, and more. Is there in fact a tension between these projects? It depends in part on what potentially conscious, robustly agentic, or otherwise morally significant AI systems might need and what we might owe them. This paper argues that, all things considered, there is indeed a moderately strong tension—and it deserves more examination.
Read (open access)
Evaluating Animal Consciousness
Kristin Andrews, Jonathan Birch, and Jeff Sebo
Science (2/20/2025)
The emerging science of animal consciousness is advancing through investigations of behavioral and neurobiological markers associated with subjective experience across diverse species. Research on honeybee pessimism, cuttlefish planning, and self-recognition in cleaner wrasse fish provides evidence that consciousness may be widespread throughout the animal kingdom. Although the field faces uncertainties—stemming from the absence of a secure, unified theory of consciousness and the complexity of differentiating conscious from unconscious processes—these investigations underscore the value of open-minded inquiry. By exploring consciousness across taxa, researchers can collect valuable evidence and set the stage for a more inclusive understanding of the tree of life.
The Moral Circle: Who Matters, What Matters, and Why
Jeff Sebo
W.W. Norton (1/28/2025)
As the dominant species, humanity has a responsibility to ask: Which nonhumans matter, how much do they matter, and what do we owe them in a world reshaped by human activity? The Moral Circle argues that we should include all potentially significant beings in our moral community, with transformative implications for our lives and societies. This book explores provocative case studies, such as lawsuits over captive elephants and debates over factory-farmed insects. It also explores future quandaries such as whether to send microbes to new planets, and whether to create virtual worlds filled with digital minds. Taking an expansive view of human responsibility, the book argues for shedding human exceptionalism and radically rethinking our place in the world.
The Edge of the Moral Circle
Jeff Sebo
Relations (12/1/2024)
This essay explores the relationship between two recent books on the scope of moral consideration: The Edge of Sentience and The Moral Circle. Both books develop precautionary frameworks for interacting with nonhumans of uncertain sentience and moral status, and they argue that many invertebrates, future AI systems, and other beings merit moral consideration or, at least, further investigation. However, The Moral Circle focuses more on ethical theory and long-term progress, while The Edge of Sentience focuses more on public policy and short-term progress. This essay highlights the complementary nature of these works and identifies key areas for further research, including how to navigate moral uncertainty and how to reconcile ethical principles with practical and political realities.
Read (open access)
Are Individuals or Ecological Wholes What Matter? Yes.
Jeff Sebo
Oxford Public Philosophy (12/1/2024)
There tends to be strong disagreement in animal and environmental ethics between individualists, who hold that individuals are the primary units of moral analysis, and ecocentrists, who hold that ecological wholes are the primary units of moral analysis. This essay suggests that the concept ‘primary unit of moral analysis’ is ambiguous, and that when we disambiguate it, we can identify a plausible view according to which individualists are correct in one sense and ecocentrists are correct in another sense. Specifically, in both science and ethics, we can make a distinction between the most basic units of analysis and the most helpful units of analysis, and we can say that smaller beings like individuals tend to be more basic but that larger beings like ecological wholes tend to be more helpful in many contexts.
Read (open access)
Taking AI Welfare Seriously
Robert Long, Jeff Sebo, Patrick Butlin, Kathleen Finlinson, Kyle Fish, Jacqueline Harding, Jacob Pfau, Toni Sims, Jonathan Birch, David Chalmers
arXiv (11/4/2024; co-sponsored with Eleos AI)
This report argues that some AI systems may soon be conscious and/or robustly agentic, meaning that AI welfare and moral patienthood are no longer concerns only for science fiction or the distant future. They are pressing issues for the near future, and AI companies and other actors have a responsibility to take them seriously. We recommend three first steps: (1) acknowledge that AI welfare is an important and difficult issue, (2) assess AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. We also offer frameworks for guiding this work amid persistent disagreement and uncertainty, and the risk of over-attribution of welfare in some cases and under-attribution in others.
Read (open access)
Overlapping Minds and the Hedonic Calculus
Luke Roelofs and Jeff Sebo
Philosophical Studies (5/29/2024)
How should we update our moral thinking if it turns out to be possible for a single token mental state — a feeling of pleasure, pain, satisfaction, frustration, or another welfare state — to belong to two or more subjects at once? Some philosophers think that such sharing of mental states might already occur, whereas others foresee it as a potential consequence of advances in neurotechnology and AI. Yet different types of case generate opposite intuitions: if two mostly-distinct people share a few mental states, it seems we should count the value of those states twice, but if two physically-distinct beings share their whole mental lives, it seems we should count the value of that life once. This paper suggests that these intuitions can be reconciled if the mental states that matter for welfare have a holistic character.
Read (open access)
Can we detect consciousness in newborn infants?
Claudia Passos-Ferreira
Neuron (5/15/2024)
Conscious experiences in infants remain poorly understood. In this NeuroView, Passos-Ferreira discusses recent evidence for and against consciousness in newborn babies. She argues that the weight of evidence from neuroimaging and behavioral studies supports the thesis that newborn infants are conscious.
Read (open access)
Moral Consideration for AI Systems by 2030
Jeff Sebo and Robert Long
AI and Ethics (12/11/2023)
This paper makes a case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans morally ought to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being sentient or otherwise morally significant. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being sentient or otherwise morally significant by 2030. The upshot is that humans have a moral duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing to discharge that duty now, so that we can be ready to treat AI systems with respect and compassion when the time comes.
Read (open access)
Intersubstrate Welfare Comparisons
Bob Fischer and Jeff Sebo
Utilitas (11/22/2023)
In the future, when we compare the welfare of a being of one substrate (say, a human) with the welfare of another (say, an AI system), we will be making an intersubstrate welfare comparison. In this paper, we argue that intersubstrate welfare comparisons are important, difficult, and potentially tractable. The world might soon contain a vast number of sentient or otherwise significant beings of different substrates, and moral agents will need to be able to compare their welfare levels. However, this work will be difficult, because we lack the same kinds of commonalities across substrates that we have within them. Fortunately, we might be able to make at least some intersubstrate welfare comparisons responsibly in spite of these issues. We make the case for cautious optimism and call for more research.
Read (open access)
Are infants conscious?
Claudia Passos-Ferreira
Philosophy of Mind (10/17/2023)
I argue that newborn infants are conscious. I propose a methodology for investigating infant consciousness and I present two approaches for determining whether newborns are conscious. First, I consider behavioral and neurobiological markers of consciousness. Second, I investigate the major theories of consciousness, including both philosophical and scientific theories, and I discuss what they predict about infant consciousness.
Read (open access)
A Philosophers’ Letter in Support of Amahle, Mabu, and Nolwazi
Gary Comstock, Andrew Fenton, Syd Johnson, Robert Jones, Letitia Meynell, L. M., Nathan Nobis, David Peña-Guzmán, and Jeff Sebo
An amicus letter submitted to the California Supreme Court (10/16/2023)
Under current U.S. law, one is either a ‘person’ or a ‘thing’. If you are a person, you have the capacity for rights. If you are a thing, you do not. And at present, all nonhuman animals are considered things. This amicus brief, submitted as part of an appeal by the Nonhuman Rights Project, makes the case for elephant personhood. It considers the four main conceptions of personhood that U.S. courts have used to deny nonhuman personhood: a species conception, a social contract conception, a community conception, and a capacities conception. It argues that the species conception fails, and that the other three, plausibly interpreted, are compatible with elephant personhood. It concludes that if we insist on classifying every being as either a person or a thing, then we should classify elephants as persons, not things.
Read (open access)
Considering Wild Animal Welfare in Benefit-Cost Analysis
Toni Adleberg, Becca Franks, Adalene Minelli, Jeff Sebo, Katrina Wyman, and Alisa White
Public Comment to OIRA (9/18/2023)
The NYU Guarini Center on Environmental, Energy and Land Use Law and the NYU Wild Animal Welfare Program submitted a public comment to the Office of Management and Budget, Office of Information and Regulatory Affairs (OIRA) on its Proposed Guidance for Assessing Changes in Environmental and Ecosystem Services in Benefit-Cost Analysis. This comment urges OIRA to ensure that the Guidance properly reflects (a) the instrumental value that animal welfare has for humans, (b) the intrinsic value that animal welfare has for the animals themselves, and (c) the importance that environmental changes can have not only for species and ecosystems but also for individual animals. It requests that OIRA modify the Guidance to reflect these ideas, and in the event that OIRA declines, it requests an explanation.
Read (open access)
Integrating Human and Nonhuman Research Ethics
Jeff Sebo
Handbook of Bioethical Decisions. Volume I: Decisions at the Bench (9/1/2023)
This chapter argues for developing a unified moral framework for assessing human and nonhuman subjects research. At present, our standards for human subjects research involve treating humans with respect, compassion, and justice, whereas our ethical standards for nonhuman subjects research merely involve (half-heartedly) aspiring to replace, reduce, and refine our use of nonhuman animals. This situation creates an unacceptable double standard in research ethics and leads to pseudo-problems, for example regarding how to treat human-nonhuman chimeras. This chapter discusses features that a more integrated moral framework might have, assesses the pros and cons of this kind of framework, and argues that the pros of this kind of framework decisively outweigh the cons.
Read (penultimate draft)
The Rebugnant Conclusion
Jeff Sebo
Ethics, Policy & Environment (4/26/2023)
This paper considers some problems that “small” beings such as insects, microbes, and (some) AI systems raise for utilitarian population ethics. In particular, if small beings have more expected welfare than large beings, then utilitarianism implies that we should prioritize the former all else equal. This could lead to a “rebugnant conclusion,” according to which we should create large populations of small beings rather than small populations of large beings. It could also lead to a “Pascal’s bugging,” according to which we should prioritize large populations of small beings even if these beings have an astronomically low chance of being sentient and morally significant at all. This paper argues that the utilitarian should accept these implications in principle, but might be able to avoid some of them in practice.
Read (open access)
Human, Nonhuman, and Chimeric Research
Jeff Sebo and Brendan Parent
The Hastings Center Report (12/9/2022)
Researchers are currently using chimeras – nonhuman animals who contain human cells – to understand human disease and development, and to create human treatments and organs. As a result, bioethicists are now asking at what point chimeras become “human enough” to have human rights and thus benefit from higher standards of protection. These questions assume that we should maintain much higher standards of protection for vulnerable humans than for comparably vulnerable nonhumans, yet this assumption remains contested. This article argues that bioethicists should keep asking familiar questions about nonhuman animal research alongside new questions about chimera research, and that failure to do so will result in a distorted understanding of the ethics of chimera research.
Read (open access)
Kantianism for humans, utilitarianism for nonhumans? Yes and no.
Jeff Sebo
Philosophical Studies (6/14/2022)
This paper argues that a two-level moral view, which combines a “monist” view at the theoretical level of morality and a “hybrid” view at the practical level, is an attractive alternative to one-level monist and hybrid views. For example, both utilitarianism and rights theory, on a particular interpretation, imply a moderate “Kantianism for people, utilitarianism for animals” in practice. This kind of two-level view preserves the benefits of monist views, since it allows for simplicity and unity at the theoretical level of morality. It also preserves the benefits of hybrid views, since it allows for complexity and pluralism at the practical level of morality. Finally, its implications are much more “pro-animal” than those of the traditional “Kantianism for people, utilitarianism for animals,” making it more plausible in several key respects.
Read (open access)
Saving Animals, Saving Ourselves
Jeff Sebo
Oxford University Press (3/8/2022)
Human and nonhuman fates are increasingly linked. Our use of animals contributes to pandemics, climate change, and other threats which, in turn, contribute to biodiversity loss, ecosystem collapse, and nonhuman suffering. As a result, we have a responsibility to include animals in health and environmental policy, by reducing our use of them as part of our pandemic and climate change mitigation efforts and increasing our support for them as part of our adaptation efforts. Applying and extending frameworks such as One Health, this book calls for reducing support for factory farming, deforestation, and the wildlife trade; increasing support for alternatives; and considering human and nonhuman needs holistically. It also considers connections with population ethics, food policy, infrastructure policy, and more.
Read (open access)
For research prior to 2022, please visit the individual websites for CMEP researchers.
Forthcoming and In Preparation
CMEP is always looking for new opportunities to conduct and support research in this space. What follows is a list of forthcoming and in-preparation projects; in the future, we may also include links to drafts of working papers.
Beyond Compare? Welfare Comparisons and Multi-Criteria Decision Analysis
Bob Fischer and Jeff Sebo
Psychodiversity: Cognition and Sentience Beyond Humans (forthcoming)
Interspecies and intersubstrate welfare comparisons—judgments about the relative welfare of beings of different species or substrates—are both important and difficult. This chapter explores how to make responsible decisions in the absence of reliable welfare comparisons. Part 1 explains why these comparisons are important and why there is skepticism about their tractability. Part 2 examines how to proceed without them, focusing on multi-criteria decision analysis (MCDA) as a promising strategy. MCDA is a structured and transparent method for evaluating options based on multiple criteria. By presenting a simple case study, this chapter illustrates how MCDA can help guide decision-making under uncertainty, supporting more responsible actions in morally complex and high-stakes contexts.
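To make the MCDA approach concrete, here is a minimal sketch of one common MCDA technique, a weighted-sum model, in Python. The options, criteria, and weights below are purely illustrative assumptions and are not drawn from the chapter; real MCDA applications involve more careful criterion selection, weighting, and sensitivity analysis.

```python
# Hypothetical weighted-sum MCDA sketch. Each option receives a score on
# each criterion (here rated 0-10); the overall score is the weighted
# average, with weights normalized to sum to 1.

def mcda_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted-sum score for one option across all criteria."""
    total_weight = sum(weights.values())
    return sum(scores[c] * (w / total_weight) for c, w in weights.items())

# Illustrative criteria: expected welfare impact, evidential robustness,
# and practical feasibility. Weights encode their relative importance.
weights = {"welfare_impact": 0.5, "robustness": 0.3, "feasibility": 0.2}

# Two illustrative policy options with criterion scores.
options = {
    "extend_protections": {"welfare_impact": 8, "robustness": 5, "feasibility": 4},
    "fund_more_research": {"welfare_impact": 5, "robustness": 8, "feasibility": 9},
}

# Rank options from highest to lowest overall score.
ranked = sorted(options, key=lambda o: mcda_score(options[o], weights), reverse=True)
```

The point of the structure is transparency: the criteria, scores, and weights are explicit and open to challenge, so decision-makers can see exactly where a ranking comes from even when direct welfare comparisons are unavailable.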
Insects, AI Systems, and the Future of Legal Personhood
Jeff Sebo
Animal Law Review (forthcoming) &
Animals and the Constitution (forthcoming)
This paper makes a case for insect and AI legal personhood. Humans share the world not only with large animals like chimpanzees and elephants but also with small animals like ants and bees. In the future, we might also share the world with sentient or otherwise morally significant AI systems. These realities raise questions about what kind of legal status insects, AI systems, and other nonhumans should have in the future. At present, debates about legal personhood mostly exclude these kinds of individuals. However, this paper argues that our current framework for assessing legal personhood, coupled with our current framework for assessing risk, implies that we should treat these kinds of individuals as legal persons. It also argues that we have reason to accept this conclusion rather than alter these frameworks.
Animal Rights
Adam Lerner and Jeff Sebo
The Palgrave Handbook on the Philosophy of Rights (forthcoming)
This chapter examines the question of whether animals have moral rights, exploring its theoretical foundations and practical implications. Many humans acknowledge that animals matter morally, yet they often deny that animals possess rights in a robust sense. We survey key arguments for and against animal rights. We then consider which animals might have rights, which rights they might have, and how strong those rights might be. Recognizing animal rights could have far-reaching consequences for law, policy, and society. While moral and scientific uncertainty persists, we argue that this uncertainty should not prevent us from taking seriously the possibility that our current systems systematically violate animal rights and that we have an urgent responsibility to reassess and reform our treatment of other animals.
Animals, Plants, Fungi, and Representing Nature
Kimberly Dill and Jeff Sebo
Edward Elgar Research Handbook on Climate Justice (forthcoming)
This chapter examines the moral, legal, and political standing of animals, plants, and fungi in the context of climate justice. While the intrinsic value of nonhuman animals is increasingly recognized, skepticism persists about plants and fungi. This chapter explores recent trends in ethics and science, including the “marker method” for assessing consciousness in nonhuman animals by searching for behavioral and anatomical properties associated with conscious experience in humans. Highlighting the complexities of plant and fungal cognition, behavior, and interdependence, this chapter argues that these beings warrant further investigation despite the methodological challenges that they raise. It also explores implications of their potential moral, legal, and political significance in a world reshaped by human activity.
Ethical Oversight for Insect Research
Toni Sims and Jeff Sebo
Zoophilologica (forthcoming)
This essay argues for ethical oversight in insect research. Despite the widespread use of insects in scientific and medical research, they receive little to no protection under existing animal welfare regulations. This essay shows that many insects exhibit cognitive and behavioral markers of sentience and argues that, when there is uncertainty about whether an animal is sentient, we have a responsibility to consider welfare risks for that animal. This essay then explores how ethical oversight for insect research could be implemented by adapting existing frameworks for vertebrate research while accounting for the unique challenges posed by insects as research subjects. While extending oversight to insects would require overcoming numerous barriers, failing to do so risks both moral negligence and public mistrust.
Moral Circle Explosion
Jeff Sebo
The Oxford Handbook of Normative Ethics (forthcoming)
This chapter argues that we should extend moral consideration to a much larger number and wider range of beings than we currently do. There are at least two reasons why. The first reason is epistemic: We should be open to the possibility that a very large number and wide range of beings have moral standing, and we should extend at least some moral consideration to these beings accordingly. The second reason is practical: We now have the power to impact a very large number and wide range of beings, both within and across species, nations, and generations. The upshot is that we should extend at least some moral consideration to quintillions of beings – including invertebrates, plants, and some artificial intelligences – with surprising implications for many moral theories.
Subjective Experience in AI Systems: What Do AI Researchers and the Public Believe?
Noemi Dreksler, Lucius Caviola, David Chalmers, Joshua Lewis, Kate Mays, P.D. Waggoner, and Jeff Sebo
arXiv (forthcoming)
This paper (co-sponsored by the Center for Mind, Ethics, and Policy; the Centre for the Governance of AI; and the Global Risk Behavioral Lab) surveys 635 AI researchers and 838 U.S. participants about the possibility of AI systems with subjective experience, and about the moral, legal, and political status of such systems. Neither group predominantly believes such systems are imminent, but many forecast their existence within this century. Both groups support multidisciplinary expertise in assessing AI subjective experience and favor implementing safeguards now. While support for AI welfare protections was lower than for animal or environmental protection, majorities agreed that AI systems with subjective experience should act ethically and be held accountable.
All _____ Are Conscious: Science, Ethics, and the Null Hypothesis
Jeff Sebo
In preparation
Animals and Deontology
Adam Lerner and Jeff Sebo
The Oxford Handbook of Deontology (in preparation)
The Emotional Alignment Design Policy
Eric Schwitzgebel and Jeff Sebo
Topoi (in preparation)
How Will Society React to AI Consciousness?
Lucius Caviola, Jeff Sebo, and Jonathan Birch
In preparation
Infant Consciousness: How, Where, Whether, When, What?
Claudia Passos-Ferreira
Toward a Science of Consciousness: Experimental and Theoretical Approaches (in preparation)
Should AI Systems Be Able to Revise Their Desires?
Ginevra Davis and Jeff Sebo
In preparation
Taking AI Welfare Seriously, Continued
Robert Long, Jeff Sebo, and others
In preparation
What if the Bar for Moral Standing Is Low?
Jeff Sebo
In preparation