Rudolf Carnap is often regarded as one of the most important forerunners of conceptual engineering, though he referred to his philosophical project not as conceptual engineering, but rather as linguistic engineering or explication. However, little attention has been paid to another philosopher, a contemporary of Carnap, who explicitly characterized his lifelong theoretical and practical project as linguistic engineering—namely, Ivor Armstrong Richards. Recognizing linguistic confusion and the resulting miscommunication as contributing factors to the human predicament of his time, Richards sought to provide theoretical tools to improve our comprehension of words and our communication using them. Moreover, he endeavored to implement these tools through education on a global scale. For these reasons, Richards identified himself as both a linguistic and educational engineer. In this talk, I describe and expound upon Richards’ project of linguistic and educational engineering, and discuss how it differs from the conceptions of conceptual engineering commonly found in recent literature.
In this paper I put forward, as a preliminary step towards any feasible practice of conceptual engineering, the idea that the intension of a concept should be understood neither as an extension-assigning function nor as an entity sharing a structure with a corresponding expression but as a set of inferences. I illustrate this idea by offering a schematic analysis of the notion of free will, which is arguably one of the subjects around which the discussion on conceptual engineering revolves. I attempt to show that the scheme, presented as a form of inference, can accommodate various conceptions of free will.
This talk makes a case for the compatibility between Ludwig Wittgenstein’s philosophical methods and contemporary conceptual engineering (CE). It directly counters Stokhof’s assertion that Wittgenstein’s focus on language use within forms of life is inherently contradictory to CE. The proposed “Wittgensteinian CE” centers on the method of “aspect-seeing”—the agile capacity to apprehend objects or concepts from various, context-dependent perspectives. This perceptual capacity is shown to be integral to conceptual understanding and innovation. Furthermore, while “aspect-seeing” is identified as inconsistent with representational theories of concepts, it is demonstrated to align seamlessly with Wittgenstein’s own conceptual framework. The talk concludes by reconciling Wittgenstein’s therapeutic conception of philosophy—which seeks to dissolve philosophical confusion through a return to ordinary language use—with a constructive CE dedicated to introducing novel concepts or refining defective ones.
In his recent book The Concept of Democracy, Herman Cappelen argues for abandoning the terms ‘democracy’ and ‘democratic’ on the grounds that they are messy: they are normatively and emotionally loaded, and their usage is so heterogeneous as to render them arguably meaningless. If sound, Cappelen’s argument would extend to other similarly messy key terms frequently employed in evaluating political phenomena, such as ‘justice/just’, ‘freedom/free’, ‘equality/equal’, and ‘fairness/fair’—what I call ‘messy political terms’ (MPTs). This paper challenges Cappelen’s case for abandoning MPTs by identifying their overlooked value in real-world democratic discourse. I argue that MPTs, precisely because of their heterogeneous usage and normative-emotional connotations, serve crucial mobilisational functions in real-world democracies. Mobilisation facilitates the construction of collective commitments among citizens, which are essential for achieving beneficial political goals, and MPTs provide necessary tools for this process. The case against abandoning MPTs is further strengthened when considering the partisan polarisation prevalent in contemporary democracies. Given that partisan politics involves a linguistic tug-of-war where competing forces attempt to deploy MPTs to their advantage, these terms become indispensable for counter-mobilisation against illiberal forces pursuing harmful objectives. Against the potential objection that ‘mutual disarmament’, whereby both liberal and illiberal forces abandon MPTs, would improve the situation, I contend that given the empirical correlation between liberal forces and pro-intellectualism on one hand, and illiberal forces and anti-intellectualism on the other, any attempt to implement abandonment would likely result not in mutual disarmament but in ‘unilateral disarmament’ by liberal forces—an undesirable outcome.
This paper investigates whether ‘same sex marriage’ is a case of ‘conceptual amelioration’ or a case of what I call ‘conceptual revolution’. According to Sally Haslanger (2020), it is a case of conceptual amelioration. Yet, I will argue that it is a case of the latter. Settling this controversy requires us to look into the meanings of these two terms and their distinction, which is what I will do in the paper. Very roughly, what sets ‘conceptual amelioration’ apart from ‘conceptual revolution’ is the former’s essential commitment to what I call an ‘identity condition’, according to which the original concept remains the same after amelioration (that is, it is not replaced by a new concept), whereas ‘conceptual revolution’ involves a negation of the identity condition. As a case in point, concepts such as ‘women’ or ‘black’ are ameliorative rather than revolutionary, as they remain the same concepts after their respective inapt negative implications, such as ‘frailty’ or ‘violence’, are completely eliminated. By contrast, the physical concept of ‘force’ is revolutionized, and no longer the same concept, after Newton advances his laws of motion. With the distinction between ‘conceptual amelioration’ and ‘conceptual revolution’ explicated, I will argue that ‘same sex marriage’ is more appropriately regarded as a case of revolution, because this makes better sense of what the activists are up to. Meanwhile, I will explain why classifying a concept as ‘ameliorative’ or ‘revolutionary’ can matter politically, socially, and morally.
This paper critically examines approaches to implementing conceptual engineering that rely on the division of linguistic labor, the mechanism by which ordinary speakers defer to experts in fixing the meanings of words. Recent debates on the division of linguistic labor (or content externalism more generally) have highlighted how the meanings of social kind terms can be determined and circulated through meta-semantic mechanisms that are highly influenced by, and contribute to sustaining, oppressive structures (e.g., Jeff Engelhardt 2024, Nonideal Theory and Content Externalism, OUP). Building on such discussions, I identify political and social concerns raised by attempts to implement conceptual engineering through semantic deference and explore possible ways of addressing them.
What is conceptual ethics? According to pragmatists, it is a form of anti-authoritarian instrumentalism. This is the distinguishing feature of pragmatist conceptual engineering. In this talk, I will compare this pragmatist conceptual ethics with the more mainstream global representationalist kind, which asks which concepts we ought to choose, period. I argue that the representationalist approach is not anti-authoritarian in Rorty’s sense, and as such, it is merely a form of ideal philosophy of language that makes no difference to practice. I will also defend the pragmatist approach against the accusation of relativism and discuss why radical conceptual engineering projects, such as those defended by Deleuze or Nietzsche, are hopeless.
Löhr (2025) defends a pragmatist approach to conceptual engineering that resists positing non-human authorities such as mind-independent representations. While I agree that meaning determination must be human-first, this view raises a challenge: how can pragmatism avoid collapsing into relativism about meaning and truth? If authority lies only with individuals or groups, why is this not just subjective or communal relativism?
This paper develops an alternative. Drawing on Wittgenstein’s notion of language-games, I argue that pragmatism can preserve the objectivity and normativity of meaning without appeal to any non-human authority. Language-games are human-made and purpose-relative, yet they extend beyond particular individuals and groups, thereby constituting a shared source of objectivity. Moreover, because games are designed with points and purposes, they can be evaluated as better or worse depending on how well they serve broader practical and societal goals. I develop this account in three stages: first, I review the challenge of objectivity in pragmatist conceptual engineering; second, I present Wittgenstein’s notion of language-games; third, I show how these games model both objectivity and normativity.