Selected Papers
AI Ethics
Self-Esteem and Technological Unemployment: Should We Halt AI to Protect Meaningful Work? Cordasco, C. L., Véliz, C., J Bus Ethics (2025).
The Accuracy-Explainability Trade-off, the Right to Explanation, and Implications for Organisations, with Carissa Véliz, accepted at the Journal of Business Ethics.
Business Ethics
The Ethics of Entrepreneurship: A Millian Approach, Cordasco, C.L., J Bus Ethics (2024).
Market Participation, Self-Respect, and Risk Tolerance, Cordasco, C.L., Cowen, N. J Bus Ethics (2024).
Innovation Theory
An Institutional Taxonomy of Adoption of Innovation in the Classic Professions, Cordasco, C., Gherhes, C., Brooks, C., & Vorley, T. (2021). Technovation, 107, 102272.
Political Philosophy
Is a More-Than-Minimal State the Meta-Utopia? Cordasco, C. L., Philosophy and Public Affairs (2026).
Abstraction as Flexibility: The Veil of Evaluative Uncertainty, accepted at Economics and Philosophy.
Under Review
A Non-Tradeability Approach to Threshold Minimalism, with Dora Xu. (The Philosophical Review)
The Experience Machine and the Independence Criterion. (Philosophical Studies)
The Transitional Lens: Re-Professionalisation in the Age of AI, with Atif Sarwar. (Journal of Management Studies)
Ken We Keep Them? Sunstein's Barbie Goods Reconsidered, with Gianluigi Giustiziero. (Journal of Business Ethics)
The Duty to Aid as a Duty to Hire, with Billy Christmas. (Business Ethics Quarterly)
Moral Residue and Acting Without Closure. (Ethics)
Works in Progress
Will AI Agents Kill the Firm? A Theory of Oversight Under Agentic Production
If AI agents can execute tasks that previously required teams, why would firms persist? I identify a mechanism that existing theories of the firm leave underexplored: the specification-monitoring correlation. When a principal directs an agent, the same professional training constrains both what the principal specifies and what the principal can evaluate, so that failures on unspecified dimensions are difficult to detect. Teams whose members have different professional training decorrelate these errors. A formal model shows that when AI capability grows asymmetrically—stronger on execution than on contextual judgement—teams become more valuable as agents improve, reversing the naive prediction that better agents make firms less necessary.
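The correlation mechanism can be made concrete with a toy numerical sketch. This is an illustration of the general idea, not the paper's formal model; the bivariate-normal assumption, the threshold, and the correlation values are hypothetical.

```python
# Toy sketch (not the paper's model): how correlation between what a
# principal specifies and what the principal can evaluate affects the
# chance that a failure on an unspecified dimension goes undetected.
import numpy as np

rng = np.random.default_rng(0)

def undetected_failure_rate(rho, n=100_000):
    """Draw (specification gap, monitoring gap) from a bivariate normal
    with correlation rho; a failure slips through when both gaps are
    large, i.e. the dimension is neither specified nor evaluable."""
    cov = [[1.0, rho], [rho, 1.0]]
    spec_gap, monitor_gap = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    threshold = 1.0  # hypothetical cut-off for "missed"
    return np.mean((spec_gap > threshold) & (monitor_gap > threshold))

# Same professional training for specifying and evaluating: gaps correlate.
print(undetected_failure_rate(rho=0.9))   # joint-miss rate near the marginal rate
# Team members with different training: gaps decorrelate.
print(undetected_failure_rate(rho=0.0))   # joint-miss rate near the product of marginals
```

With highly correlated gaps the joint-miss rate stays close to the marginal miss rate; with decorrelated gaps it falls toward the product of the marginals, which is the sense in which mixed teams decorrelate errors.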
What Organisations Cannot See: Epistemic Asymmetry and the Governance of Cognitive Offloading
Technologies that offload cognitive functions simultaneously remove a capacity that is functional within existing practice and lift a binding constraint on what the activity can become. Because the capacity has been characterised and measured, its loss is visible against established baselines. Because the constraint's removal opens a possibility space whose contents depend on practices not yet developed, the most consequential benefits fall beyond current assessment. I show that this asymmetry cannot be corrected by more evidence of the same kind and propose recoverability as the governing criterion: organisations should accept reversible costs while protecting against those that are not.
Commitment under Evaluative Uncertainty, with Alessandro Sontuoso
Institutions require decision-makers to commit to evaluative criteria before encountering the cases those criteria must govern. Commitment enables verification but restricts rules to depend only on pre-specified dimensions, and when the relevant dimensions are uncertain, the restricted class may exclude the optimal rule. We extend the preference-for-flexibility framework from menus of alternatives to menus of decision rules, where the restriction is a verifiability condition. In a linear-Gaussian environment, a single object—the conditional covariance between committed and uncommitted dimensions—governs the accuracy cost of commitment across action spaces and loss functions.
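As a minimal illustration (mine, not the paper's model), in a jointly Gaussian setting the object the abstract points to is the Schur complement of the committed block of the covariance matrix; the sketch below computes it for a hypothetical three-dimensional state, with the third dimension uncommitted.

```python
# Minimal sketch under a stylised linear-Gaussian assumption: the conditional
# covariance of the uncommitted dimensions given the committed ones,
# Sigma_uu - Sigma_uc Sigma_cc^{-1} Sigma_cu.
import numpy as np

def conditional_covariance(Sigma, committed_idx, uncommitted_idx):
    """Covariance of the uncommitted dimensions after conditioning on the
    committed ones, for a jointly Gaussian state with covariance Sigma."""
    S_cc = Sigma[np.ix_(committed_idx, committed_idx)]
    S_uu = Sigma[np.ix_(uncommitted_idx, uncommitted_idx)]
    S_uc = Sigma[np.ix_(uncommitted_idx, committed_idx)]
    return S_uu - S_uc @ np.linalg.solve(S_cc, S_uc.T)

# Hypothetical example: dimensions 0-1 are committed, dimension 2 is not.
Sigma = np.array([[1.0, 0.3, 0.6],
                  [0.3, 1.0, 0.2],
                  [0.6, 0.2, 1.0]])
print(conditional_covariance(Sigma, [0, 1], [2]))
```

On the abstract's reading, this residual covariance is what a rule restricted to the committed dimensions cannot track, and hence what governs the accuracy cost of commitment.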
Understanding Without Grounding? Pragmatic Structure, Linguistic Compression, and the Conditions of Competence in Language-Trained Systems
A familiar objection to the possibility of understanding in large language models holds that systems trained on language alone lack the non-linguistic grounding required for genuine competence. I argue that a significant class of failures attributed to the absence of sensory grounding are better explained by the pragmatic structure of the training signal. Language produced under cooperative conversational norms is systematically optimised to omit what speakers can cheaply recover from shared context. A system trained on such language inherits a communicative practice that presupposes a common ground it does not possess. This reframing does not settle whether any current system understands, but it shows that the evidential basis for the grounding objection is weaker than it appears.
The Stakeholder Dilemma: Person-Affecting Evaluation and the Problem of Future Generations
Stakeholder theory is widely regarded as the natural home for intergenerational concern in business ethics. I argue that this extension is structurally incoherent. Stakeholder theory's evaluative apparatus depends on identifying determinate constituencies whose interests can be specified and weighed. When the theory extends to non-overlapping future persons, it confronts the non-identity problem: corporate decisions of sufficient scope determine which future persons will exist, and person-affecting evaluation loses its determinate subject. Shareholder theories face no analogous dilemma because the tradeable-claim mechanism, indirect reciprocity, and the structural features of the corporate form handle the temporal dimension institutionally.
Book Project
Stubborn Uncertainties
In this book, I take seriously the possibility that our uncertainty about evaluative commitments is a permanent feature of the moral landscape, and trace the consequences of that condition across moral epistemology, political philosophy, and the governance of artificial intelligence. Drawing on imprecise probability theory, I argue that an honest representation of our moral evidence requires credal ranges rather than sharp credences, and that a rival evaluative framework may be dismissed only when one's entire evidentially licensed range treats it as negligible. Because most of our moral evidence does not warrant that degree of confidence, the range of frameworks that retain standing is wide, and the implications extend well beyond epistemology. At the individual level, premature evaluative settlement prevents the encounter with genuine resistance that self-formation requires. At the institutional level, rational agents under such uncertainty have reasons to prefer flexible arrangements that preserve a range of evaluative options. At the level of AI governance, systems aligned to fixed moral specifications foreclose the evaluative discovery on which moral progress depends. The governing claim throughout is that the inability to settle on evaluative commitments, far from constituting a deficiency in our moral reasoning, is the condition that makes moral progress structurally possible.
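The dismissal criterion stated above has a simple operational form. The sketch below is an illustration under an assumed negligibility threshold, not a result from the book.

```python
# Illustrative sketch: on the imprecise-probability reading, a rival
# framework may be set aside only if every credence in one's evidentially
# licensed range treats it as negligible.
NEGLIGIBLE = 0.01  # hypothetical negligibility threshold

def may_dismiss(credal_range, threshold=NEGLIGIBLE):
    """credal_range is (low, high): the interval of credences the evidence
    licenses for 'this framework is correct'. Dismissal requires the whole
    interval to sit below the threshold, not just its lower end."""
    low, high = credal_range
    return high < threshold

print(may_dismiss((0.001, 0.004)))  # True: the entire range is negligible
print(may_dismiss((0.001, 0.150)))  # False: part of the range takes the framework seriously
```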