AI Declaration:
Defending a Human-AI Symmetry Thesis
Despite growing interest in whether large language models (LLMs) such as ChatGPT and Claude can perform speech acts, the debate has fixated almost exclusively on assertion. Yet regardless of whether these systems assert, consequential questions remain about the other speech acts they can perform when embedded in institutions. In this paper, I argue that AI systems frequently perform declarations. To defend this claim, I offer the Human-AI Declaration Symmetry Thesis (HAIDST). According to the HAIDST, an AI speaker performs a declaration if and only if it can change the same social facts as a human speaker within the relevant institutional context. To examine why a critic might reject the HAIDST, I turn to intention-based and norm-governed accounts of declarations. On an intention-based account, declarations require intentions or some other robust mental state, so if AI lacks intentions, it cannot declare. On a norm-governed account, declaration is a constitutively norm-governed social practice, so if AI is not subject to these norms, it cannot participate in declarative practices. I respond to the former by showing that declarations do not require intentions, and to the latter by arguing that AI systems are in fact subject to the constitutive norms of declaration.
(AI)gential Replacement:
How AI Systems Challenge Theories of Group Membership
How do existing, non-sentient AI systems relate ontologically to the organizations in which they are increasingly embedded? Metaphysical accounts of group membership struggle to accommodate AI integration, even as these systems become more prominent in institutional contexts. On some views, all AI and machine-like entities count as group members; on others, none do. Both positions face serious objections. If every machine that functionally contributes to a group counts as a member, we risk a proliferation of group members, such that even unsophisticated artifacts count. If, however, non-conscious AI never counts, organizations that are increasingly operated by AI are rendered effectively empty. To make sense of how group membership is evolving as AI increasingly collaborates with humans, we need a more nuanced account that avoids both over- and under-inclusiveness. In this paper, I argue that non-conscious AI systems can qualify as group members if they occupy what I call “agential roles”: positions with conferred deontic powers to create or alter institutional facts that count as the group’s commitments or decisions. In short, what an AI decision-maker does within an organization can be functionally equivalent to what a human decision-maker does in the same role, even if the AI system lacks consciousness, knowledge, intentions, or moral capacities. Accordingly, I advance a disjunctive account of group membership that accommodates both persons in institutional roles and AI systems in agential roles.
As AI becomes increasingly embedded within organizations, longstanding assumptions about the moral properties of group agents require reexamination. Philosophers often attribute moral properties, such as blameworthiness, obligations, and rights, to institutional groups while denying that current AI systems can bear these same properties. Yet as AI systems come to occupy central roles in collective decision-making, a pressing question arises: how does their inclusion affect which moral properties groups can bear? This paper defends Group Moral Property Individualism, the view that a group’s moral properties depend on the moral properties of its members. Against emergentist accounts, I argue that group-level moral properties cannot arise if they are entirely absent at the level of the members. Extending this thesis to mixed human–AI collectives, I advance the Hybrid-Group Moral Property Dependence Thesis, according to which a hybrid group can bear a moral property only if morally capable members are suitably distributed across the group’s relevant roles and remain causally and normatively connected to its actions.