There are concepts that change your mind. I love it when that happens. I had this experience back in 2013, when I first encountered Wim van de Donk's model for ways of ordering society. He states that there are three main governing principles: the state, the market and the community. Let's take infrastructure as an example. At the moment there is of course a strong bias towards the market, with players like Meta and Alphabet. In a lot of countries, states are looking at sovereign solutions (especially since the actions of the second Trump administration). And the community has a global infrastructure in Wikimedia.
The point Van de Donk makes is that these domains require different kinds of handling: the state can draft laws (which the EU, for instance, is doing in very interesting ways around AI), the market is ruled by economics, and the community by its own values (which of course differ between groups).
Building on Van de Donk’s framework of state, market, and community, Pierre Bourdieu’s Field Theory allows us to explore these domains in more depth by examining the dynamics of power, capital, and strategy within various social fields. Bourdieu sees society as composed of various fields (social arenas) such as art, politics, or medicine. Each field has its own rules, power relations, and forms of capital (economic, cultural, social, symbolic). Actors compete for capital and position within fields; their strategies are shaped by their habitus (internalized dispositions).
In terms of AI, most tech companies and quite a few governments focus on economic values (cf. the UK "injecting AI into the veins of the country"). Communities and public partners, on the other hand, place more weight on social and cultural values. Thinking about the connections between fields can help us find better governance models for AI. In media, for instance, Paul Keller has introduced some interesting ideas about alternative, economically viable information infrastructures in his recent publication Beyond AI and Copyright.
Thinking about governance for AI implies thinking about new types of governance: no longer based on hierarchy and stakes held, but factoring in the messiness and complexity of a connected world. The first woman to receive a Nobel Prize in Economics was Elinor Ostrom. Her work fundamentally challenged traditional views of governance, particularly the "tragedy of the commons" theory, which suggested that individuals acting in their own self-interest would inevitably overuse and deplete shared resources. Ostrom argued against this, showing through empirical research that communities can effectively manage common resources without external control or privatization.
This thinking is echoed by Jeremy Lent in The Web of Meaning, where he describes a family gathering recognizable all over the world, in which an Uncle Bob explains the naivety of having ideals: "when it comes down to it, everyone's just interested in their own skin. It's a rat race". Uncle Bob's cynicism represents the belief that everyone is fundamentally self-serving, a deeply entrenched, reductionist view of human nature in modern Western culture. Lent challenges this worldview by arguing that it is not only limiting but also fundamentally flawed. He asserts that this view of life as a zero-sum competition is at odds with the interconnected nature of reality. Instead of a world made up of isolated, self-interested individuals, Lent suggests that we live in a web of meaning: an interconnected system where everything and everyone is related.
This view is gaining traction and is reflected in books such as Metazoa and Other Minds by Peter Godfrey-Smith, which explore the evolution of consciousness and the interconnectedness of life. His work and books like Entangled Life by Merlin Sheldrake focus on the natural world. Johann Hari's Lost Connections is about social connection, and Impact Networks by David Ehrlichman about connections between organizations. Many more roads lead to Rome than the simplistic one of self-centeredness.
Martin Luther King once said: "Power without love is reckless and abusive, and love without power is sentimental and anemic". So too with governance: we need to balance the need for power with the need for love, an idea Adam Kahane has worked out in Power and Love: A Theory and Practice of Social Change. Communities have already begun to push back against the market-driven development of AI, calling for more transparent, ethical AI systems. Initiatives like open-source AI projects and community-driven data policies (e.g., Wikipedia, Creative Commons) reflect a growing trend toward democratizing AI governance.
Because of these connections between people, organisations and information all over the world, any governance of AI should factor in more than only formal organisations, and more than just human actors. This brings us to another conceptual framework: Actor-Network Theory. It sees society as a network of both human and non-human actors that interact and influence each other, and could be a practical theory to help us navigate governance for AI. It states that technologies, ideas, institutions, and people all influence each other. Power and agency emerge through these networks of relations rather than residing solely in individuals or structures.
For instance, material networks (such as the infrastructures described in Deb Chachra's How Infrastructure Works) shape the accessibility of AI. The power dynamics within these networks, including data centers, broadband and computational resources, determine who can access and benefit from AI technologies, and thereby influence how AI is governed.
Actor-Network Theory allows us to see AI as more than just a human invention; it is an evolving network where technologies, algorithms, and infrastructure play significant roles in shaping outcomes. The ethical decisions embedded in these systems can only be understood by looking at the entire network of relations - human and non-human - that form AI’s ecosystem.
Based on these ideas, we can see that multi-actor governance is necessary for fair policy making in this field. AI governance requires balancing the state’s regulatory role, the market’s profit-driven objectives, and the community's ethical considerations. It requires thinking about the field you are applying AI to, as well as the non-human actors in play. Using these concepts, we can begin to design AI governance systems that reflect the complexity of the technology itself. As AI continues to evolve, so too must our approach to governance. By understanding the networks that shape AI, from global players to local communities, we can build governance systems that reflect the interconnectedness of our digital world.
Which brings us back to ideals: a legitimate approach to progress in the world. Just as Martin Luther King emphasized that power and love must coexist, each being dangerous without the other, so we need to balance the powers that be, through regulation, market influence and community values. Only in this way can AI truly benefit the whole of society.