Under Review
Is a More-Than-Minimal State the Meta-Utopia?, Revise & Resubmit at Philosophy & Public Affairs.
Ken we keep them? Sunstein's Barbie Goods Reconsidered, (with Gianluigi Giustiziero), under review at the Journal of Business Ethics.
The Accuracy-Explainability Trade-off, the Right to Explanation, and Implications for Organisations, (with Carissa Véliz), Revise & Resubmit at the Journal of Business Ethics.
Abstraction as Flexibility: Prudential Agreement under Evaluative Uncertainty, Revise & Resubmit at Economics and Philosophy.
Publications
Self-Esteem and Technological Unemployment: Should We Halt AI to Protect Meaningful Work?, Cordasco, C. L., & Véliz, C., Journal of Business Ethics (2025).
The Dark Side of AI in Professional Services, Trincado-Munoz, F. J., Cordasco, C. L., & Vorley, T., The Service Industries Journal, 1–20 (2024).
The Ethics of Entrepreneurship: A Millian Approach, Cordasco, C. L., Journal of Business Ethics (2023).
Market Participation, Self-Respect, and Risk Tolerance, Cordasco, C. L., & Cowen, N., Journal of Business Ethics (2023).
An Institutional Taxonomy of Adoption of Innovation in the Classic Professions, Cordasco, C., Gherhes, C., Brooks, C., & Vorley, T., Technovation, 107, 102272 (2021).
Works in Progress
A Non-Tradeability Criterion for Threshold-Minimalism
Abstract: I develop a non-tradeability criterion for threshold-minimalism. Threshold-minimalism privileges one requirement within a richer evaluative perspective X and proposes a threshold on it as marking where X's demands begin. The criterion, strict non-tradeability, states when such a threshold can coherently serve as an internal floor on an explicit comparison class D: X must never strictly prefer a threshold-failing option to a threshold-meeting one. The criterion thus determines, for a given X, D, and privileged requirement, a class of floor-compatible thresholds, and it identifies an upper boundary beyond which tightening reintroduces internal tradeability. I illustrate the framework using sufficientarianism, capability thresholds, and Rawls's decent-peoples criterion. When strict non-tradeability fails on the stated domain, the threshold cannot be defended as an internal floor there and must be defended on some other rationale.

Taking Moral Residue Seriously: A Case for Imprecise Moral Credences
Abstract: After hard moral choices, people sometimes feel a lingering unease even when they think they acted correctly. This paper argues that this “moral residue” comes in two different forms. One is tragic residue: the appropriate response when harm was unavoidable, so doing the right thing still meant that something bad happened. The other is critical residue: the appropriate response when the choice turned on competing moral principles, and the principle you acted against still seems like a serious contender rather than a mistake you can dismiss. I claim that critical residue is hard to explain if moral uncertainty is always represented by a single, precise probability. In many framing disputes, the evidence does not entitle an agent to fix one exact level of confidence in the rival principle, and what matters for whether the rival can be dismissed is whether the verdict is robust across a range of reasonable starting points. That makes the “epistemically regulating state” a set or range of credences, not a single point. Imprecise probabilities capture this structure directly, and they explain why residue can be fitting in some cases while looking like scrupulosity in others.
The Positive Duty to Aid and Sweatshop Jobs, (with Billy Christmas).
Abstract: Berkey (2019) argues that sweatshop labour, though mutually beneficial, is wrongfully exploitative because it represents only partial compliance with a pre-existing duty to aid. He models this on a case where an owner sells a single life-saving drug dose at an extortionate price. This paper accepts Berkey’s premise of a group-directed duty but rejects the specific application of the "drug-rescue" analogy to employment. We argue that a critical disanalogy exists: while a single drug dose is indivisible, the financial surplus used for wages is divisible. We contend that when a benefit is divisible, a duty owed to a disadvantaged group requires sharing that benefit across the widest possible set of members. Consequently, the duty to aid in the sweatshop context supports using surplus to hire additional workers at the equilibrium wage—widening access—rather than concentrating benefits by increasing wages for a select few. We support this claim through an analysis of job subcontracting and rent dissipation, showing that "fair wage" premiums often fail to reach the intended beneficiaries. Finally, we argue that the duty should be assessed dynamically: retaining profits is justifiable if used to fund investment that expands future employment, thereby discharging the duty to aid more effectively than immediate wage hikes.
Epistemically Irreversible Goods
Abstract: Some practices deliver distinctive benefits only because, in ordinary circumstances, people can treat them at face value: as sources of relief, as evidence, or as public signs of contribution. This paper calls a good epistemically irreversible when that face-value permission depends on background assumptions that are defeated by a salient update, so that repeating the old behaviour does not restore the good in its ordinary form. I distinguish individual defeat, where a responsible update undermines an attitude that cannot simply be sustained at will, from public defeat, where an update becomes common ground and changes what can be treated as warranted in shared reasoning. The practical lesson is a constraint on repair: recovery typically requires new, openly defensible grounds for face-value uptake, or redesigned practices that remain stable under the new informational environment.
From Demeaning to Defining: Identity Lag and Re-professionalisation in the Age of AI
Abstract: AI systems now perform core diagnostic, interpretive, and advisory tasks in law, medicine, and consulting, yet professionals' uptake often remains hesitant, partial, and fragile. Existing accounts emphasise trust, skills, incentives, boundary work, or sensemaking around AI as an epistemic technology. Building on institutional work on professions and identity, this paper instead explains under-adoption through a specific form of identity lag under paradigm-shifting technologies. I introduce the idea of a transitional lens: in early stages of a paradigm shift, professionals and their audiences evaluate AI use through inherited standards that equate "real" professionalism with unaided judgment. Visible reliance on AI in identity-defining tasks is therefore readily read as demeaning, even when it improves performance, generating distinctive patterns of concealment, ambivalence, and sensitivity to observability. I then develop a dynamic model of re-professionalisation that traces how AI moves from being coded as corner-cutting, to contested, to a defining element of expertise as evaluative standards shift and become common knowledge. The paper specifies a micro-level evaluative mechanism linking technological paradigm change to evolving standards of professionalism, and outlines organisational strategies for repositioning skilful AI use as competence and responsibility rather than inadequacy.
When Accuracy Is a Public Value: Explainability, Discretion, and Adaptive Accountability in Algorithmic Administration
Abstract: Public agencies increasingly rely on algorithmic systems to make or support decisions about benefits, enforcement, and service delivery. Calls for explainability are often framed as demands for individual-facing reasons that enable citizens to understand and contest outcomes. This paper argues for a domain-sensitive approach grounded in a neglected feature of administrative decision-making: in many settings, thick, enforceable explanation requires ex ante stabilisation of criteria and procedures, yet stabilisation can predictably reduce accuracy when agencies must learn under novelty, drift, and adversarial behaviour. In such settings, the resulting errors are not mere efficiency losses. They create administrative burden, arbitrary exclusion, and loss of legitimacy. The upshot is a trade-off that is too often treated as a problem of algorithms, even though it has a clear analogue in human administrative judgement: discretion often preserves accuracy precisely by allowing criteria to be revised as agencies discover what matters in practice. Drawing on work on accountable AI, algorithmisation as organisational practice, and administrative burden, the paper distinguishes domains where legality and citizen planning require thick, individual-facing explanation from domains where accuracy is itself a public value and accountability should be secured primarily through institutional mechanisms: auditable change-control, versioned decision pipelines, independent review, and contestation channels that do not require agencies to freeze evolving evaluative schemes. The paper offers a framework for matching explanation regimes to administrative task structure, including operational indicators for distinguishing task types, a mechanism for assigning and reviewing regime classifications, and criteria for assessing adversarial claims. It draws implications for benefits administration, regulatory enforcement, and public sector risk scoring.