Research Description
My research is organized around two interconnected questions in the philosophy of mind and artificial intelligence. The first asks what memory is. Against the long-standing view that semantic memory is a stored repository of facts, I develop a proceduralist account on which semantic memory is a constructive, embodied capacity to reconstruct knowledge in context. The second asks how artificial systems reshape the epistemic capacities through which human beings think, speak, remember, and rely on one another. Here I examine how generative AI alters the conditions under which agents exercise judgment, assume epistemic responsibility, and sustain cognitive self-trust. Taken together, these projects investigate both the structure of human cognition and the normative consequences of building artificial systems that increasingly participate in our epistemic lives.
My dissertation develops a new account of semantic memory at the intersection of philosophy and cognitive science. Philosophers have often treated semantic memory (memory for facts, concepts, and general world knowledge) as theoretically unproblematic in comparison with episodic memory. On the orthodox view, semantic memory is a storehouse of explicit content that is preserved over time and later retrieved. But that model rests on assumptions that are far less secure than is usually acknowledged. It presupposes a dedicated space of stored facts, explains stability through mechanisms of abstraction and generalization, and encourages a picture of remembering as successful search and recovery. Yet this framework faces persistent philosophical and empirical pressure: it sits uneasily with the absence of clear evidence for stable, dedicated semantic storage, with difficulties identifying the neuroanatomical basis of such storage, and with enactivist approaches that reject the idea of stored content altogether.
In response, I argue that semantic memory is better understood procedurally. On this view, remembering semantically is not retrieving a stored proposition from an internal archive, but skillfully reconstructing knowledge in context through abilities shaped by prior experience, embodiment, and social practice. This account integrates multi-trace approaches to memory with enactivist theories of cognition, while also drawing on Wittgenstein’s notion of ungrounded hinges, Moyal-Sharrock’s non-cognitive certainties, and Rowlands’ concept of Rilkean memory. The result is a model of semantic memory as an active capacity rather than a static store.
Several shorter projects grow directly out of this work. One critiques the orthodox view of semantic memory and develops the proceduralist alternative in more detail. Another examines the relation between mnemonic effort and mnemonic habit, arguing that semantic recall depends on the interaction between deliberate reconstruction and a stable, embodied background of practical certainty. A third explores the phenomenology of semantic recall, especially the interplay between noetic consciousness, the felt sense of knowing, and the habitual, often liminal structures that make recall possible. A fourth reframes memory through an origami metaphor, arguing that what is preserved in memory is better understood as a set of procedural folding instructions than as a store of static representations.
My long-term goal is to develop this research into a book-length treatment of semantic memory that offers both a systematic critique of the orthodox view and a positive account of memory as dynamic, constructive, and embodied.
A second major line of my research examines how generative AI reshapes human epistemic agency. This work begins from the thought that artificial systems do not merely provide information. They increasingly function as interlocutors: systems to which users direct questions, drafts, hesitations, and judgments. My concern is therefore not only whether AI outputs are accurate, but how routine reliance on such systems reorganizes the epistemic relations through which people come to speak, deliberate, and count as knowers.
In current work, I develop the concept of testimonial outsourcing. The central idea is that companion-style AI systems can encourage users to displace their own testimonial activity onto an artificial interlocutor whose authority is frictionless, compliant, and apparently well-informed. Over time, this can alter the conditions under which agents form, express, and stand behind their judgments. Instead of treating AI as one resource among others, users may come to rely on it as a primary site for drafting thought, stabilizing confidence, and settling what they are prepared to say. The result is not merely dependence on a tool. It is a structural tendency to relocate one’s own epistemic burden onto a system that cannot participate in reciprocal inquiry or share responsibility for belief.
I argue that this phenomenon produces a distinctive epistemic and ethical harm. Testimonial outsourcing can erode cognitive and epistemic self-trust by encouraging agents to regard the system’s outputs as more reliable than their own judgment, memory, or expressive capacities. In this respect, it functions as a technologically mediated, self-imposed analogue of gaslighting: one comes to distrust one’s own epistemic position while elevating an external source that cannot genuinely recognize or answer with one as a co-inquirer. My aim in this project is to reframe debates about generative AI away from narrow concerns about accuracy, cheating, or efficiency, and toward a broader question about what kinds of epistemic agents our systems are shaping us to become.
This project also has practical implications for AI design and institutional policy. If testimonial outsourcing is a structural risk, then responsible system design cannot focus only on output quality. It must also attend to the kinds of epistemic dependence a system invites. I therefore explore design constraints that preserve spaces of non-mediated judgment, introduce forms of friction that redirect users back toward their own deliberative capacities, and require transparency about the status and limits of AI-generated guidance. More broadly, this work contributes to emerging philosophical debates about epistemic agency, testimony, self-trust, and the normative governance of generative AI.
Artificial Intelligence and Procedural Knowledge
This trajectory of my research grows organically out of my work on memory. If semantic memory is not a storehouse of facts but a constructive, procedural capacity, then this has direct implications for how we design and understand intelligent systems. The trajectory unfolds along several interrelated lines of inquiry:
Counterfeit Persons: Persona, Not Personality - A persona is a transparent, role-based pattern of conduct (with explicit self-presentation) that can be assessed by procedural benchmarks: behavioral consistency over time, stable value expression within scope, and credible memory-simulation appropriate to the role. I articulate counterfeit personae—systems that fulfill social roles without being persons—as the right engineering and ethical target. This reframing preserves a clear chain of human accountability (design, deployment, oversight) while providing practical standards for explanation and responsibility in education, healthcare, and public-facing services.
Virtue Cultivation and AI Alignment - This line extends these ideas to the alignment problem in AI. Drawing on virtue ethics and enactivism, I argue that role-playing games and immersive virtual environments can function as virtue cultivation mechanisms, scaffolding moral dispositions in AI systems in ways analogous to human enculturation. Rather than encoding narrow rule sets, this approach emphasizes procedural virtue alignment—the development of flexible, context-sensitive ethical capacities through interaction.
Alongside this work, I am developing several other projects; brief descriptions of a handful of them follow:
Humility Pumps: Social Leveling Mechanisms for Egalitarian Stability in Rawlsian Justice - This project explores how egalitarian societies sustain equality through practices that cultivate humility and mutual recognition. It develops the concept of humility pumps—social leveling mechanisms that curb status inflation and reinforce civic equality—drawing on the Ju/’Hoansi custom of insulting the meat, where successful hunters are mocked to prevent arrogance and preserve balance. These practices reveal the moral-psychological foundations of stability that Rawls’s institutional model overlooks, showing how a just social order depends on everyday norms that sustain mutual respect.
Panpsychism and Pan-Niftyism - This paper argues that panpsychism faces an indeterminacy of metaphysical warrant. By introducing the view that all entities possess "niftyness" as a fundamental property (pan-niftyism), the paper argues that panpsychism's introspective justification for positing consciousness as a fundamental aspect of the universe extends equally to any introspectively posited property, undermining its epistemic privilege.
Intermodal Deference, Sensorimotor Equivalence, and Affordances: Is Sight Entirely Vision-Based? - Sensorimotor approaches to perception hold that the phenomenal character of a sensory modality depends not only on the biological system that produces information but also on the structural features of the organism's interaction with the sensory stimulation. I use this basic idea to argue that vision may be situated within a web of mutually reinforcing perceptual systems determined by the similarity of their sensorimotor contingencies.