Selected Papers
AI Ethics
Self-Esteem and Technological Unemployment: Should We Halt AI to Protect Meaningful Work?, Cordasco, C. L., & Véliz, C., J Bus Ethics (2025).
The Accuracy-Explainability Trade-off, the Right to Explanation, and Implications for Organisations, Cordasco, C. L., & Véliz, C., accepted at the Journal of Business Ethics.

Business Ethics
The Ethics of Entrepreneurship: A Millian Approach, Cordasco, C.L., J Bus Ethics (2024).
Market Participation, Self-Respect, and Risk Tolerance, Cordasco, C. L., & Cowen, N., J Bus Ethics (2023).
Innovation Theory
An Institutional Taxonomy of Adoption of Innovation in the Classic Professions, Cordasco, C., Gherhes, C., Brooks, C., & Vorley, T., Technovation, 107, 102272 (2021).
Political Philosophy
Abstraction as Flexibility: The Veil of Evaluative Uncertainty, accepted at Economics and Philosophy.

Revise & Resubmit
Is a More-Than-Minimal State the Meta-Utopia?, conditionally accepted at Philosophy and Public Affairs.
Under Review
Taking Moral Residue Seriously: A Case for Imprecise Moral Credences. (Noûs)
A Non-Tradeability Approach to Threshold Minimalism, with Dora Xu. (The Philosophical Review)
The Experience Machine and the Independence Criterion. (Philosophical Quarterly)
The Transitional Lens: Re-Professionalisation in the Age of AI, with Atif Sarwar. (Journal of Management Studies)
Ken we keep them? Sunstein's Barbie Goods Reconsidered, with Gianluigi Giustiziero. (Journal of Business Ethics)
The Duty to Aid as a Duty to Hire, with Billy Christmas. (Business Ethics Quarterly)


Works in Progress
What Organisations Cannot See: Epistemic Asymmetry and the Governance of Cognitive Offloading
Imprecise Moral Credences and Moral Change
Epistemically Irreversible Good
When Accuracy Is a Public Value: Explainability, Discretion, and Adaptive Accountability in Algorithmic Administration
Commitment under Evaluative Uncertainty

Book Project
Stubborn Uncertainties

In this book, I take seriously the possibility that our uncertainty about evaluative commitments is a permanent feature of the moral landscape, and trace the consequences of that condition across moral epistemology, political philosophy, and the governance of artificial intelligence. Drawing on imprecise probability theory, I argue that an honest representation of our moral evidence requires credal ranges rather than sharp credences, and that a rival evaluative framework may be dismissed only when one's entire evidentially licensed range treats it as negligible. Because most of our moral evidence does not warrant that degree of confidence, the range of frameworks that retain standing is wide, and the implications extend well beyond epistemology. At the individual level, premature evaluative settlement prevents the encounter with genuine resistance that self-formation requires. At the institutional level, rational agents under such uncertainty have reasons to prefer flexible arrangements that preserve a range of evaluative options. At the level of AI governance, systems aligned to fixed moral specifications foreclose the evaluative discovery on which moral progress depends. The governing claim throughout is that the inability to settle on evaluative commitments, far from constituting a deficiency in our moral reasoning, is the condition that makes moral progress structurally possible.