Eileen Donahoe is Executive Director of the Global Digital Policy Incubator (GDPI) at Stanford University. GDPI is a global multi-stakeholder collaboration hub for the development of digital policies that reinforce human rights. She served as the first US Ambassador to the United Nations Human Rights Council in Geneva during the Obama administration, and later as Director of Global Affairs at Human Rights Watch.
In reflecting on how the global “Internet Governance” discussion should develop in the next decade, one top priority occurs to me: we need to articulate how to apply universal human rights principles more fully and advocate for the use of the existing human rights framework in governance of digitized, algorithmically-driven societies. We all know we are in the midst of a global battle for dominance in technology, particularly with respect to artificial intelligence. We also must recognize that we are in the midst of a geopolitical battle with respect to the norms and values that will guide regulation of technology and governance of AI-driven societies. Our shared priority should be to solidify global commitment to the existing human rights framework as the foundation for governance of digitized societies globally.
The work articulating how to apply the existing human rights framework to digitized societies will require both continuity with and creative adaptation of the existing doctrine and framework. It will also require cross-disciplinary, cross-sector, multi-stakeholder engagement, as well as multilateral reinforcement. We had a foundational moment in June 2012, when the UN Human Rights Council passed the first UN resolution on Internet Freedom by consensus. That resolution laid down the foundational principle that human rights must be protected online as in the offline realm. Efforts were soon made to apply existing human rights doctrine to the internet, but remarkable technological advancement has changed the “online” context dramatically.
In just a few years, with the digitization of society, the online/offline distinction has essentially collapsed, at least in the digitized half of the world. The internet has become the infrastructure of society, and machine decisions have, somewhat invisibly, infiltrated many realms of governance. We now collect so much data that many sectors of society have turned to algorithmic decision-making for the simple reason that the quantity of data collected is beyond human processing capacity. In effect, digitization of society has necessitated a move to machine decision-making so that all the data being collected can be processed and capitalized upon.
In the context of all this change, applying existing human rights doctrine in the digital realm has not been straightforward. Some features of our globalized, digitized ecosystem are inherently challenging to the existing framework. Most notably, the inherently trans-border mode of internet operation challenges an international order built on the concept of nation states defined by territorial boundaries. Second, the original human rights framework placed the primary obligation on states to protect and not violate the human rights of citizens and people within their territory and jurisdiction. Yet in digitized societies, extraterritorial reach is the default rather than the exception. In addition, we have seen a dramatic trend toward the privatization of governance, in which private sector global information platforms and social media companies function as quasi-sovereigns and have dramatic effects on the enjoyment of human rights by both users and the larger societies in which they operate. In this regard, the adoption of the UN Guiding Principles on Business and Human Rights (UNGPs) in 2011 was a significant normative development. The UNGPs articulated the responsibility of private sector companies to respect human rights, as well as the responsibility to develop due-diligence processes to assess the impact of their products and services on human rights. But many private sector technology companies remain unfamiliar with human rights, and too few engage in serious human rights impact assessments.
We are at a critical juncture when it comes to the governance of digital societies. While we need new policies and regulation for digital technologies, we do not need to reinvent the wheel or start with a blank sheet to develop a whole new set of principles. Many well-intentioned entities who are unfamiliar with existing human rights language are working to develop new ethical frameworks for AI. But we do not need new principles for digital society – we have that foundation in existing universal human rights. The important work that needs to be done is to articulate how to apply this existing human rights framework in AI-driven societies.
Several features of the existing human rights framework make it well-suited for this purpose. First, it starts with a human-centric approach and a rich vision of human dignity, which will become increasingly important in a machine-driven world. Second, it is universally applicable, has status under international law, and has been embedded in national constitutions and applied by governments around the world. Third, it is the product of global multilateral negotiation and multi-stakeholder engagement, so it enjoys a level of legitimacy and global recognition that would be very difficult to match. These are crucial advantages.
On a pragmatic level, it is not realistic to think we can get global agreement on a comprehensive set of new principles at this geopolitical moment, especially one with as rich a vision of human dignity as the existing human rights framework. The bottom line is that we let go of the existing human rights framework at our own peril. Our shared global multistakeholder project for the next decade must be to do the hard work of adapting the existing principles to digital reality. Through that exercise, we will contribute to the development of innovative new mechanisms for governance while providing continuity with enduring values.