Abstract: Synthetic data, i.e., data that is algorithmically generated, has been considered a novel solution to the data scarcity issue and a ‘technical fix’ able to fill the gap in areas where real data is sensitive or biased. Different narratives about the nature of synthetic data as mirroring or replacing real data, as well as diverse evaluation metrics to measure the fidelity and utility of this data, have proliferated in the machine learning fairness community, in public policy research, privacy and data protection studies, and critical data scholarship. However, to date there is no consensus on how to define the ‘quality’ of synthetic data. Against this background, I demonstrate how the concept of synthetic data introduces an analogical perspective on data. This perspective is relational and regulative, and it extends the discussion on the quality of synthetic data to encompass questions regarding the purpose and trade-offs of synthetic data generation and use, the social practices and actors with different powers that underpin and configure it, and how to shape its direction in response to changing real-world circumstances and emerging human values. Building on this analysis, I argue that the generation and use of meaningful synthetic data requires promoting responsibility in complex AI and data innovation ecosystems and facilitating forms of data justice and responsiveness.
Abstract: This lecture explores how we can reconsider the relationship between technological development and society through the approach of speculative design. In the first half, I will introduce a series of my past works, highlighting their critical perspectives on traditional values and their methods of social intervention through fictional scenarios. The second half will center on my latest project, "PARALLEL TUMMY CLINIC," a speculative work themed around artificial wombs. This section will examine the ethical and social implications of such technology and its potential impact on individuals and communities. Specifically, I will discuss how we can move beyond common dystopian imaginaries to present more diverse and open possibilities for the future. The lecture will conclude by reassessing the role of critical creativity in social design, asking the fundamental question: "For whom is this technology?"
* This talk is CHAIN Academic Seminar #52. It will be delivered in Japanese with simultaneous Japanese-English interpretation.
Abstract: The distinction between “convivial” and “monopolistic” technologies, introduced in the 1970s by the philosopher Ivan Illich, was the foundation for a radical critique of contemporary technological society (Illich 1973). This key distinction was later adopted in André Gorz’s critique of technology and economic reason (Gorz 1988), a work of French critical phenomenology avant la lettre. This talk will focus on how the distinction between convivial and monopolistic (or non-convivial) technologies can support a critical phenomenology of technology. I will argue that Gorz attempts to do just this, but that his development of the “convivial vs non-convivial” distinction in terms of a broader account of “autonomy” vs “heteronomy” would benefit from a more phenomenologically grounded account of autonomy. I will pose (and try to address) the question of whether a more embodied account of autonomy, such as that developed within enactive approaches to cognition, would serve such an aim. A third step will be to ask if and how an enactively enriched notion of autonomy, when situated within the critique of technology and economic rationality, can contribute to the development of programmes for “concrete utopias”, an objective of Gorz’s critical phenomenology.
Gorz, A. (2004 [1988]). Métamorphoses du travail: Critique de la raison économique. Paris: Gallimard, coll. « Folio Essais ».
Illich, I. (1973). Tools for Conviviality. New York: Harper & Row.
Abstract: Efforts to prevent human extinction—often framed as “existential risk mitigation”—are increasingly shaping discussions in global policy, technology governance, and ethics. A common argument in favor of these efforts holds that human extinction would be uniquely bad because it would cut short humanity’s vast future potential. This potential is typically described in terms of flourishing civilizations, major scientific and cultural achievements, or human life expanding across the stars. However, this way of framing the value of the future implicitly relies on a specific ideal of what makes life valuable—an ideal rooted in human excellence, progress, and achievement. This raises an important question: To what extent can we prioritize these visions of the future without unjustifiably promoting a narrow conception of the good life? In political philosophy, the principle of liberal neutrality holds that governments should not impose one reasonable vision of the good over others. Applied to longtermist thinking, this principle highlights a potential tension between ambitious future-oriented policies and the need to respect present and future individuals’ diverse values, life choices, and cultural perspectives. This paper explores how we might navigate this tension. Can we pursue existential risk mitigation in a way that honors pluralism and autonomy? What would a democratically legitimate approach to long-term policy look like—one that recognizes the importance of future generations without assuming a one-size-fits-all future? By examining the ethical foundations of longtermist thinking through the lens of political liberalism, the paper opens space for a more inclusive, reflective, and just approach to safeguarding humanity’s future.
Abstract: In her recent book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, Shannon Vallor (2024) criticizes prevalent narratives warning of existential risks from AGI in order to identify a more insidious AI-induced existential threat: the erosion of human self-understanding. Drawing on existentialist philosophy, she argues that AI tempts us to forget our fundamental freedom and responsibility for shaping its path of development. The prevalence of AGI narratives itself, she suggests, is a symptom of this self-forgetting and of the resulting constraint on our imagination. Achieving a flourishing future with AI requires that we reclaim our agency and rewrite the “vital pre-technical program” that determines the ultimate purpose of technology. While she argues that this demands the revaluation of virtues like civil courage, restraint, and care, I argue for the importance of another, foundational virtue: hermeneutical reflection. This virtue is characterized as a cultivated, critical awareness of the implicit assumptions and background practices that shape our action and experience, which allows us to notice and articulate the problems embedded in our current systems. This reflective practice is what rekindles the forgotten motivation for the care, restraint, and civil courage that Vallor calls for. Accordingly, it is the essential first step in rewriting the vital pre-technical program from one of exploitation to one of care that truly serves human flourishing.
Abstract: Did technology really begin with machines and modern science? In this talk, I explore the deep roots of technology through three interwoven dimensions: myth, milieu, and technics. Myth, far from being a mere relic of pre-modern imagination, constitutes one of the earliest technological forms through which humans gave meaning to the world and shaped their responses to it. As suggested by the Global Mythology Hypothesis, shared narrative patterns across cultures point to universal motifs of creation, making, and transformation—deeply entwined with the origins of technics. Milieu, especially as understood in the Japanese concept of fūdo, refers to a lived and mediated environment—not simply “nature” but a meaningful ground where humans and non-humans co-shape each other. In such a context, technology is not an external tool to dominate nature but an embodied, responsive act of poetic world-making. Drawing on Yuk Hui’s theory of Technodiversity, this lecture challenges the dominant view of technology as a single, linear path of Western modernity. Instead, it argues for a pluralistic understanding of technics as culturally situated responses to the world, rooted in particular cosmologies, climates, and historical trajectories. By reweaving the threads of myth, milieu, and technics, I propose a way to rethink the act of making—not merely as production, but as an ontological mode of being-with and responding-to the world. This perspective may open a path toward a more ecologically grounded and culturally diverse future for technology.
Abstract: Instead of focusing on the specificities of the technology involved, I will discuss the future of AI society by considering the form of rule that reflects the ownership structures in the AI sector: oligarchy. I will carry out a conceptual analysis of contemporary oligarchy to discuss what, if anything, this form of oligarchy has to do with technology and how it may undermine democracy from within. I will suggest that, unless we find systematic counters to the grip of oligarchic power, we don’t have to be creative in imagining the future of AI society. That is because it may be its dystopian past: the fragmentation of society into hierarchically stratified estates.
Abstract: This talk offers a new theoretical option inspired by the enactivist framework that aims to account for agency in virtual worlds. It proposes that there is only one agent and one action to be considered in VR: the action of the player who is engaging with a complex technological interface. This view goes against the positions currently on offer in the literature (the virtual realists, who claim that there is one real biological action and another digital action, and the virtual fictionalists, who claim that there is one real biological action, while the virtual action is only imagined, illusory, or fictional). Our view builds on the enactivist account of agency, which claims that agents define their own individuality, are active sources of activity in their environments, and regulate this activity in relation to certain norms, all of which is achievable in virtual reality. There are important social and pragmatic consequences to consider from the enactive view of agency in VR, such as rethinking the fictionalization of abuse, rape, or violence in virtual worlds.
Abstract: Richard Saage suggests that utopias are concerned with envisioning alternative forms of government and politics (Saage, 2016). In this sense, utopias are collectivist in that they concern an alternative vision of social order and social collaboration. Many narratives of the future of AI seemingly lack this feature: they portray individuals almost as if detached from their societal context (Dickel & Schrape, 2017). They concern the well-being of these individuals without pondering each person’s preferences and behavior in light of the society they are living in. In my talk, I will introduce and discuss some AI futures that Saage might have had in mind, and I will critically consider the scope of Saage’s allegation. Assuming that Saage is correct in his analysis regarding at least some prevalent AI visions, several conclusions remain to be discussed: a) We might refrain from calling those futures utopias and exclude them from the canon of this tradition. b) The concept of utopia might have to be reconsidered so as to do justice to those narratives and drop its reliance on collectivism. c) We must retrace and rethink the distinction between individual and society. It might well be that in utopia we have disposed of the need to coordinate societal behavior by way of regulation and coercion. Appealing to both b) and c), I will discuss these three options contra Saage.
Dickel, S., & Schrape, J.-F. (2017). The Logic of Digital Utopianism. NanoEthics, 11(1), 47–58. https://doi.org/10.1007/s11569-017-0285-6
Saage, R. (2016). Is the classic concept of utopia ready for the future? In S. D. Chrostowska & J. D. Ingram (Eds.), Political Uses of Utopia: New Marxist, Anarchist, and Radical Democratic Perspectives (pp. 57–79). New York: Columbia University Press.
Abstract: Technology has become inseparable from the conditions of human society, urging us to reconsider the traditional binary distinction between persons and things (cf. Gunkel, 2023). In this talk, we explore a framework for integrating artificial agents generated by technology into our ethical network. The relational approach provides a promising conceptual tool for this purpose. In the field of robot ethics, relational approaches have been developed as critical alternatives to models that define moral boundaries based on human-centered properties such as consciousness or sentience (cf. Coeckelbergh, 2010; Gunkel, 2018). More recently, radical relational approaches have emerged, which view our moral considerations, ontological properties, and actions themselves as fundamentally shaped within relationships (Puzio, 2024; Shimizu, 2025). By examining the radical relational approach within the context of the technosocial environment, we propose a shift in how ethical coexistence with technological entities—and, more broadly, non-human entities—is understood: not as something determined top-down by humans, but as a dynamic process co-constituted within our shared environment. Starting from Puzio’s eco-relational approach, we aim to deepen this relational understanding of being by incorporating Watsuji Tetsurō’s ideas of fūdo and the ethics of betweenness (aidagara), which emphasize the interplay between individuals and their natural, social, and technological environments. Finally, we consider how the implications of the radical relational approach might be brought into practice. While it does not offer “ready-made” normative solutions, we suggest that faithfully describing the ethical world as messy, evolving, and embedded in an ever-changing social-environmental, relational context may help diversify practical possibilities—such as in the design of spaces for human–AI coexistence and the cultivation of environments for ethical practices mediated by technology.
Abstract: This presentation examines the problem of moral enhancement undermining autonomy. Moral enhancement is defined as improving moral capacities through various means so as to enable more ethical choices. Moral enhancement matters for everyone, as humans often fail to act morally, prioritizing self-interest or favoring in-groups; moreover, in complex moral dilemmas, individuals may struggle to determine the appropriate action. Given our inherent limitations in consistently making moral choices, moral enhancement thus holds significant importance for living more ethically. However, there are several criticisms of moral enhancement, and this presentation focuses on the criticism concerning the undermining of autonomy, specifically in the context of moral AI enhancement. Some philosophers have proposed conceptual methods for moral AI enhancement, and broadly, two types of models have emerged. The first, the Delphi model, has users simply accept AI's moral advice for decision-making, without engaging in reflective thought on moral issues. The second, the Socratic AI model, enables users to deepen their understanding of moral issues and make decisions through dialogue with AI. The Socratic AI model is generally preferred in discussions of moral AI enhancement because, in the Delphi model, users merely accept the AI's opinions without developing a deeper understanding of moral issues, which compromises the autonomous nature of their choices. This presentation will address why autonomous choice is considered important and the extent of its significance.
Abstract: Intelligence can be framed as the capacity to adopt perspectives that improve one’s odds of achieving one’s goals. Under this view, I explore how intelligence—whether AI, human, or hybrid—may soon evolve beyond current biased frameworks for AI benchmarking, competition, and alignment. Historically, our predictions about technological futures have often failed, whether they were doomer or utopian projections. In this talk, I propose simplifying the debate by focusing on four defining questions for humanity’s future: growth, plurality, agency, and substrate. I then organize future projections into a recursive decision tree, going over the possible outcomes, consequences, risks, challenges, probabilities, and margins of error for each branch thus created. On this vivid map, I emphasize alternative, less Hollywoodian visions grounded in translation and care across diverse intelligences, reimagining the future as open-ended ecologies of perspectives rather than single convergent paths. By offering such alternatives, my hope is that we can better prepare for unpredictable transformations in our society, technology, and the very structure of our cognition.