Selected Papers
AI Ethics
Self-Esteem and Technological Unemployment: Should We Halt AI to Protect Meaningful Work? Cordasco, C. L., & Véliz, C., J Bus Ethics (2025).
Business Ethics
The Ethics of Entrepreneurship: A Millian Approach, Cordasco, C. L., J Bus Ethics (2024).
Market Participation, Self-Respect, and Risk Tolerance, Cordasco, C. L., & Cowen, N., J Bus Ethics (2023).
Innovation Theory
An Institutional Taxonomy of Adoption of Innovation in the Classic Professions, Cordasco, C., Gherhes, C., Brooks, C., & Vorley, T., Technovation, 107, 102272 (2021).
Political Philosophy
Abstraction as Flexibility: Prudential Agreement under Evaluative Uncertainty, Accepted at Economics and Philosophy.

Revise & Resubmit
Is a More-Than-Minimal State the Meta-Utopia?, Revise & Resubmit at Philosophy and Public Affairs.
The Accuracy-Explainability Trade-off, the Right to Explanation, and Implications for Organisations, with Carissa Véliz, conditionally accepted at the Journal of Business Ethics.

Under Review
Taking Moral Residue Seriously: A Case for Imprecise Moral Credences. (Noûs)
Ken we keep them? Sunstein's Barbie Goods Reconsidered, with Gianluigi Giustiziero. (Journal of Business Ethics)
A Non-Tradeability Approach to Threshold Minimalism, with Dora Xu. (The Philosophical Review)
The Experience Machine and the Independence Criterion. (Philosophical Quarterly)
Epistemic Asymmetry and the Ethics of AI Governance: Why Evidence-Based Technology Policy Is Structurally Biased. (Journal of Business Ethics)
The Transitional Lens: Re-Professionalisation in the Age of AI, with Atif Sarwar.

Works in Progress

The Positive Duty to Aid and Sweatshop Jobs, with Billy Christmas.
Imprecise Moral Credences and Moral Change
Moral progress requires people to change how they see moral questions, not just to adjust their confidence in existing views. But how does such change work? Existing accounts focus on societies and institutions, not on individuals. I argue that when agents revise their moral outlook gradually and in response to reasons, a particular epistemic structure is at work: the rival view cannot be dismissed, which keeps it live and available for the accumulation of considerations that eventually tips the balance. Imprecise credences model this structure directly. The account explains the phenomenology of gradual moral change (why it feels like recognition rather than conversion) and provides the epistemological microfoundation that existing accounts lack.
Epistemically Irreversible Goods
Some practices deliver distinctive benefits only because, in ordinary circumstances, people can take them at face value: as sources of relief, as evidence, or as public signs of contribution. This paper calls a good epistemically irreversible when that face-value permission depends on background assumptions that are defeated by a salient update, so that repeating the old behaviour does not restore the good in its ordinary form. I distinguish individual defeat, where a responsible update undermines an attitude that cannot simply be sustained at will, from public defeat, where an update becomes common ground and changes what can be treated as warranted in shared reasoning. The practical lesson is a constraint on repair: recovery typically requires new, openly defensible grounds for face-value uptake, or redesigned practices that remain stable in the new informational environment.

When Accuracy Is a Public Value: Explainability, Discretion, and Adaptive Accountability in Algorithmic Administration
Public agencies increasingly rely on algorithmic systems to make or support decisions about benefits, enforcement, and service delivery. Calls for explainability are often framed as demands for individual-facing reasons that enable citizens to understand and contest outcomes. This paper argues for a domain-sensitive approach grounded in a neglected feature of administrative decision-making: in many settings, thick, enforceable explanation requires ex ante stabilisation of criteria and procedures, yet stabilisation can predictably reduce accuracy when agencies must learn under novelty, drift, and adversarial behaviour. In such settings, the resulting errors are not mere efficiency losses. They create administrative burden, arbitrary exclusion, and loss of legitimacy. The upshot is a trade-off that is too often treated as a problem of algorithms, even though it has a clear analogue in human administrative judgement: discretion often preserves accuracy precisely by allowing criteria to be revised as agencies discover what matters in practice. Drawing on work on accountable AI, algorithmisation as organisational practice, and administrative burden, the paper distinguishes domains where legality and citizen planning require thick, individual-facing explanation from domains where accuracy is itself a public value and accountability should be secured primarily through institutional mechanisms: auditable change-control, versioned decision pipelines, independent review, and contestation channels that do not require agencies to freeze evolving evaluative schemes. The paper offers a framework for matching explanation regimes to administrative task structure, including operational indicators for distinguishing task types, a mechanism for assigning and reviewing regime classifications, and criteria for assessing adversarial claims. It draws implications for benefits administration, regulatory enforcement, and public sector risk scoring.