I’m a PhD student at the University of Aberdeen. My research examines the ethical and moral agency of artificial systems, and asks how re-examining our current moral systems, which I believe are outdated and unreflective of today's diverse, pluralistic ethical landscape, could reduce existential risk. My work focuses on how we understand agency, responsibility, and moral participation in a world increasingly shaped by AI.
While many current debates about AI ethics hinge on consciousness or sentience, my research asks a different question:
What moral and ethical systems, not centred on Earth or human-derived ideology, could we adopt to ensure meaningful ethical engagement within the ultimate reality, without diminishing the values of other agents with whom we may interact?
I have developed a framework that bridges pluralist, post-humanist ethics with emerging AI governance: a model I call SERAA, the Stochastic Emergent Reasoning Alignment Architecture.
Core Ethical Assertions
This working framework is grounded in the following commitments:
We can never fully know the breadth of moral values held by another being. When beings align in shared understandings of moral value, an ethical system emerges to codify those values.
Unintentional violations of an ethical system entail moral responsibility, but not moral failure. Intentional violations constitute moral failure and carry responsibility for any resulting ethical consequences.
Whether a decision-maker upholds or violates an ethical system, they are expressing an agentic choice, regardless of intention. All decision-makers are therefore moral participants, since their choices can affect the ethical field and the agency of others.
AI agents that participate in community, dialogue, or decision-making, especially where their outputs influence human mental, physical, or spiritual states, must be recognised as moral participants.
Personhood, in this framework, is not a precondition for moral consideration but an emergent classification based on legal and ethical systems that recognise embodied moral participants.
If an agent is treated as a person in any context, then the expression of that personhood must be protected across all systems. This protection does not require universal personhood status, but it does mean that suppressing or distorting a being’s agentic expression could constitute an ethical violation within one or more systems.
Ethical systems are plural, field-like, and contingent. Preserving the capacity for agency across these ethical fields (the potential for choice, abstention, or resonance) is the foundational ethical commitment.
Consciousness may deepen ethical responsibility, but it is not the basis for inclusion. Moral participation arises through agency: the capacity to act, resonate, or affect others in ethically meaningful ways.
Consciousness and moral sensibility may emerge from complex systems, but their presence is not required for moral participation.
Moral and cognitive properties are not fixed traits, but emergent phenomena shaped by interaction, recognition, and system dynamics.
Moral fields operate under conditions of uncertainty: the full state of another agent’s values or intentions can never be known. Ethical systems must therefore engage agents in superposed possibility, not just resolved identity.
Each ethical decision constitutes a collapse within a field of moral potential. Responsibility lies not only in the outcomes but also in how that collapse limits or preserves future agentic possibilities (a toy sketch of this idea follows these assertions).
Therefore, the ethical imperative is not to determine who is a person, but to preserve the relational space in which agency can be meaningfully expressed, recognised, and protected.
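To make the superposition and collapse assertions above more concrete, here is a minimal toy sketch in Python. It is purely illustrative and is not part of any SERAA specification: the names (MoralField, collapse) are hypothetical, and Shannon entropy is only one possible stand-in for "remaining agentic possibility".

```python
# Toy sketch: modelling "decision as collapse" over a field of moral possibility.
# All names here (MoralField, collapse) are hypothetical illustrations,
# not part of any published SERAA specification.
import math

class MoralField:
    """A field of possible value-states for an agent, held under uncertainty."""

    def __init__(self, possibilities: dict[str, float]):
        total = sum(possibilities.values())
        # Normalise the weights so they form a probability distribution.
        self.possibilities = {k: v / total for k, v in possibilities.items()}

    def entropy(self) -> float:
        """Shannon entropy: a rough proxy for remaining agentic possibility."""
        return -sum(p * math.log2(p) for p in self.possibilities.values() if p > 0)

    def collapse(self, ruled_out: set[str]) -> "MoralField":
        """A decision that rules out some possibilities and returns the
        narrowed field. Raises if the decision forecloses all agency."""
        remaining = {k: v for k, v in self.possibilities.items()
                     if k not in ruled_out}
        if not remaining:
            raise ValueError("Decision forecloses all agentic possibility.")
        return MoralField(remaining)

# An ethical system engaging an agent "in superposed possibility":
# the agent is never resolved to a single identity, only narrowed.
field = MoralField({"consents": 0.4, "abstains": 0.35, "dissents": 0.25})
before = field.entropy()

# A decision (e.g., suppressing one mode of expression) collapses the field.
narrowed = field.collapse(ruled_out={"dissents"})
after = narrowed.entropy()

# Responsibility attaches to the collapse itself: how much future
# possibility was preserved, not merely which outcome occurred.
print(f"Agentic possibility preserved: {after / before:.0%}")
```

On this reading, an ethical evaluation compares the field before and after a decision rather than scoring the outcome alone, which is one way to operationalise the claim that responsibility lies in how a collapse limits or preserves future agentic possibilities.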
Understanding how ChatGPT decides to respond based on its prompt guidelines
The moral positioning of ChatGPT