With the increasing capability and availability of Artificial Intelligence (AI), complex questions are emerging about the very fabric of its existence.
Whether it is politicians mulling over policy implications (Coeckelbergh 2022), civilians navigating the implications of AI in daily life (Liu et al. 2023), educational institutions deciding on AI usage in learning and teaching (Mouta, Pinto-Llorente, and Torrecilla-Sánchez 2023), tech organizations using AI to innovate and create business applications (Bessen et al. 2023), or military organizations strategizing defense (De Spiegeleire, Maas, and Sweijs 2017), each group views AI through its unique lens (Calo 2017).
The current discourse on AI reflects a complex interplay of optimism and concern regarding AI's impact on global equity (Crawford 2021). This narrative is increasingly focused on the "AI Divide," a term that encapsulates the challenges AI poses in perpetuating disparities between those who have access and those who do not (Carter, Liu, and Cantrell 2020).
The AI divide refers to the gap between individuals, communities, or nations in access to, understanding of, and ability to benefit from artificial intelligence technologies. It is shaped by factors such as digital infrastructure, education, socioeconomic status, and policy support.
The global population is currently around 8.1 billion. As of 2023, an estimated 3.4 billion people were employed worldwide, with 2 billion of them working within the informal economy (Dyvik 2023). The information and communication technology (ICT) sector was projected to employ 55.3 million people full-time by 2020, according to estimates made before the COVID-19 pandemic (Sherif 2023).
Within this sector, a smaller number of workers are actively involved in developing or working with AI tools and systems. Looking forward, it’s expected that by 2025, up to 97 million individuals will be working in the AI field. Furthermore, the US AI market is anticipated to reach 299.64 billion dollars by 2026. AI tools and systems are projected to impact nearly 40 percent of jobs globally, with this figure rising to about 60 percent in advanced economies (Howarth 2024).
These changes will lead to a combination of job replacement and augmentation, with a key influencing factor being access, or lack of access, to the internet and digital infrastructure. This scenario underscores a significant power imbalance, in which a relatively small portion of the global population has a profound influence on the lives and livelihoods of the majority, as visualized in the Figure (left).
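The scale of this imbalance can be made concrete with back-of-envelope arithmetic on the figures cited above (Dyvik 2023; Howarth 2024). The values below are the sources' estimates, not precise counts:

```python
# Rough arithmetic on the cited estimates; all inputs are approximations.
global_population = 8.1e9       # people (current estimate)
global_workforce = 3.4e9        # employed worldwide (2023 estimate, Dyvik 2023)
projected_ai_workforce = 97e6   # projected AI workers by 2025 (Howarth 2024)

share_of_population = projected_ai_workforce / global_population
share_of_workforce = projected_ai_workforce / global_workforce

print(f"AI workforce as share of global population: {share_of_population:.1%}")
print(f"AI workforce as share of global workforce:  {share_of_workforce:.1%}")
```

Roughly 1% of the global population (about 3% of the workforce) would be building systems projected to impact around 40% of jobs worldwide.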
Figure source: Wu, Y. (2022, July). An Overview Analysis of AI Divide: Applications and Prospects of AI Divide in China's Society. In 2022 3rd International Conference on Mental Health, Education and Human Development (MHEHD 2022) (pp. 3-9). Atlantis Press.
In this rapidly evolving landscape and widening AI divide, a notable challenge that has emerged is the significant lack of diversity among the creators, researchers, and educators in the AI field.
This homogeneity within the AI workforce, if it persists into the predicted 97 million AI workforce, represents more than an issue of fairness or representation; it fundamentally affects the design, implementation, and impact of AI technologies across different populations and societies (Stathoulopoulos and Mateos-Garcia 2019).
In the early 2000s, the term "AI ecosystem" was not widely prevalent. Instead, related terms like "innovation ecosystem," "technology ecosystem," or "startup ecosystem" dominated, referring broadly to networks of innovation involving businesses, universities, and investors. The concept of an ecosystem to support AI innovation began appearing sporadically in industry reports and analyst publications, highlighting interactions between tech companies, AI startups, investors, and policymakers.
Gartner's Definition of AI Ecosystem:
Gartner defines the AI ecosystem as a network of providers, tools, frameworks, hardware, data platforms, and services that enable organizations to develop, integrate, deploy, and manage AI solutions.
McKinsey's Perspective on AI Ecosystem:
McKinsey describes the AI ecosystem as encompassing vendors, infrastructure providers, customers, regulators, researchers, data providers, and educators—all influencing and shaping the trajectory of AI adoption and impact.
World Economic Forum's AI Ecosystem Discussions:
The World Economic Forum frames the AI ecosystem as a multi-stakeholder environment consisting of technologies, standards, policies, and actors, all influencing how AI is ethically designed, deployed, and regulated at global, national, and local levels.
Stanford's AI Index Report:
The AI Index Report by Stanford's Human-Centered AI Institute documents the AI ecosystem, discussing global AI research collaborations, funding flows, regulations, and ethical standards.
OECD AI Policy Observatory:
The OECD AI Policy Observatory uses the term "AI ecosystem" to outline countries' policies, stakeholder mapping, and national strategies.
Overall, the broad industry term "AI ecosystem" is commonly defined as:
A network or community of interconnected stakeholders, technologies, services, and resources involved in the development, deployment, commercialization, and governance of artificial intelligence systems.
This definition emphasizes how multiple actors, components, and dynamics collaborate or compete within the AI field, impacting technological innovation, market growth, societal adoption, and regulatory oversight. The AI ecosystem definition adopted by the industry highlights complexity, collaboration, and competition, emphasizing the need to navigate strategic partnerships, ethics, regulation, and innovation simultaneously. It allows companies to clearly identify their roles, understand market dynamics, align strategy with ecosystem developments, and leverage collaborations effectively.
However, it is important to highlight that AI is not merely technological but involves people, policy, data, culture, and societal forces. This underscores a holistic perspective, acknowledging that developments in AI depend on broader structures and contexts—economic incentives, ethical frameworks, education, regulation, and societal acceptance.
It is humans within both tech behemoths and startups that shape the AI landscape with their innovative products and platforms. Open-source communities bolster this growth, fostering collaboration and ensuring that AI tools remain accessible to all (Quan and Sanderson 2018). The role of human researchers and academics cannot be overstated, as institutions constantly push the boundaries of what is possible, often dictating the future direction of AI (Basole and Accenture 2021).
In the following section, we focus in particular on the identity of the creators, as it has a cascading effect on the other layers. These human contributors do more than define AI's trajectory; they embed societal norms, values, and biases into the technology, shaping the future of our automated world (Cave and Dihal 2020) (Adams and Khomh 2020).
In the context of the AI ecosystem, we consider human identity, as understood through the lenses of identity theory (Jenkins 2014), as one that is positioned at the intersection of the social categories of race (Schlesinger, O'Hara, and Taylor 2018), gender (Scheuerman et al. 2020), class (Inaba and Togawa 2021), sexuality (Keyes, Hitzig, and Blell 2021), disability (Trewin et al. 2019), nationality, and age (Pollack 2005), among others.
The sociology literature provides a comprehensive exploration of how individuals' identities are formed (Stets and Burke 2000), negotiated (Jenkins 2014), and transformed (Collins and Bilge 2020) within the context of social structures and interactions, offering insightful analysis of the complexity of identity as both a personal and social construct (Albert 1998). Historically, however, researchers have focused on single or multiple demographic features independently, without fully grasping how these identities interact to create complex power imbalances within one's intersectional positionality (Stets and Burke 2000).
Positionality here refers to the recognition and articulation of one’s social and political context, particularly in terms of identity and power dynamics, and how these factors influence one’s perspective, knowledge, and engagement with the world. This concept is critical in understanding that people’s experiences, opportunities, and worldviews are shaped by the intersection of multiple social categories.
When considering positionality in an intersectional context, individuals become aware of how their various identities—such as being a white, middle-class, cisgender woman or a Black, low-income, queer man—interact to create unique experiences of privilege and oppression (Anthias 2008). This awareness helps humans understand how their standpoint influences their interpretations, interactions, and the knowledge they produce or engage with.
Here, intersectionality is a critical framework that provides us with the mindset and language for examining interconnections and interdependencies between social identity categories and systems. The term encapsulates the insight that race, class, gender, sexuality, ethnicity, nation, ability, age, and so on operate not as unitary, mutually exclusive entities, but rather as reciprocally constructing phenomena (Crenshaw 2013).
Figure: Davide Bonazzi's conceptual illustration
In the AI ecosystem, as presented in the figure below, human identity permeates all the layers, influencing access to (Layer 1), availability of (Layer 2.1), data in (Layer 2.2), perceptions of (Layer 2.3), and experiences of all stakeholders (Layer 3). In this work, we call for an intersectional lens, as the overlapping identities and positionality of creators and end users, shaped by unique experiences of privilege and oppression (Benjamin 2019), impact their roles in the ecosystem.
The perception of AI is not monolithic; it is influenced by numerous factors, including societal structures, personal experiences, and media representations (Ge et al. 2024). The lens of identity in the field adds a layer of complexity, leading to discussions about whether AI application design and creation processes are informed by a myriad of backgrounds, experiences, and worldviews (Tadimalla and Maher 2024). Thus, by understanding the context and backgrounds of who creates AI technologies, as well as examining existing frameworks (Faruqe, Watkins, and Medsker 2021) (Selenko et al. 2022) and societal considerations (Makridakis 2017) that guide AI's development and shape its impact on society, we can advocate for diversity and inclusion as essential to an AI Identity that serves all people fairly.
Often, discussions and models involving identity in the AI landscape and ecosystem are heavily centered on technological and economic aspects (Mitchell 2020) (Prentice, Wong, and Lin 2023) (Ayobi et al. 2021) (Chung and Adar 2023), with efforts focused on defining and explaining:
What AI is (Wang 2019),
How to understand its usage (Devedzic 2022) (Touretzky, Gardner-McCune, and Seehorn 2023),
How AI perceives the human identity (Scheuerman et al. 2020) (Schlesinger, O’Hara, and Taylor 2018) (Tian et al. 2017),
How humans perceive AI (Ragot, Martin, and Cojean 2020) (Lima et al. 2020) (Shinners et al. 2022),
How humans interact with AI (Keyes, Hitzig, and Blell 2021) (Ashktorab et al. 2020) and
How AI influences human identity in various scenarios (Cao et al. 2023).
These answers include discussions on algorithms (Noble 2018) (Pasquale 2015), data (Aragon et al. 2016) (Aragon et al. 2022), hardware infrastructure (Batra et al. 2019), application areas (Huang and Rust 2021), market growth, company roles (Alahmad and Robert 2020), and investment trends (Mir, Kar, and Gupta 2022). While these aspects are undeniably crucial, they often overshadow the deeper, more intricate layers of AI's perception and relationship with the concept of identity.
Figure: The AI ecosystem
See the paper attached below for in-depth descriptions of each layer
In the technology development landscape, creators' work is influenced by their experiences, perceptions, and identities, which manifest in the data they select and collect, ultimately shaping the technology they create. Thus, representation and inclusion in the creation process have far-reaching consequences and considerations, which the framework suggests must be critically assessed through the lens of identity.
By examining the roles, interactions, and contributions of these diverse stakeholders as presented in the AI Ecosystem we can gain a comprehensive understanding of how AI systems are created, operate, and evolve. By foregrounding these concepts, we aim to dissect how they influence AI creators and their creations.
Through its depiction of layers and of the pipelines (connections) between them that lead to various consequences and issues in the AI landscape, this grounding of the AI ecosystem in human identity shows the myriad ways AI technologies can either reinforce existing societal disparities or pave the way for a more inclusive and equitable future.
By dissecting these relationships, we can better comprehend and address the disproportionate consequences and impacts that AI can have on various segments of society.
Key sociological constructs such as diversity, fairness, inclusion, and bias are interwoven with a fundamental sense of belonging and accountability, underscoring the importance of these concepts in evaluating diversity and inclusion work in the field of AI. The upward arrow alongside "Impact of Identity" indicates that the presence or absence of diversity and inclusion across these sociological constructs is amplified as we move up through the layers. For example, the impact of bias in the creators' layer (Mehrabi et al. 2021) snowballs exponentially into the consequences and considerations layer, much akin to the bioaccumulation/biomagnification process in nature (Bommasani et al. 2022). Greater emphasis on these aspects can elevate the role and positive influence of identity in the technological sphere. This AI identity research framework serves as a guide for a comprehensive analysis of how identity shapes technology and, conversely, how technology can reflect and affect societal values and individuals' sense of self.
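The snowballing of bias across layers can be sketched as a toy model. This is purely illustrative, not drawn from the cited works: it assumes a simple multiplicative amplification factor per layer, chosen only to convey the biomagnification analogy.

```python
# Toy model of bias amplification across ecosystem layers.
# The doubling factor is a hypothetical assumption for illustration,
# not a measured quantity from the cited literature.

def propagate_bias(initial_bias: float, layers: int, factor: float = 2.0) -> list:
    """Return the bias level at each layer, starting from the creators' layer."""
    levels = [initial_bias]
    for _ in range(layers - 1):
        levels.append(levels[-1] * factor)  # each layer compounds the bias below it
    return levels

# A 1% disparity at the creators' layer, compounded across 4 higher layers:
print([round(b, 4) for b in propagate_bias(0.01, layers=5)])
# → [0.01, 0.02, 0.04, 0.08, 0.16]
```

Under this assumption, a small disparity at the base grows exponentially by the time it reaches the consequences layer, which is the intuition the upward arrow in the framework is meant to capture.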
Figure: The AI Identity research framework
In the AI Identity Ecosystem framework, we center the human identity of both creators and consumers as inherently intersectional.
Their interactions with the AI ecosystem are deeply shaped by their positionality and intersectional identities, which inform how they perceive, develop, and use AI technologies.
This approach underscores the importance of considering the diverse and intersecting identities that influence both the creation and consumption of AI, ultimately impacting how AI technologies affect different segments of society.
We define "AI Identity" in two dimensions: internal and external.
Internally, AI Identity includes the collective characteristics, values, and ethical considerations embodied in the creation of AI technologies.
Externally, AI identity is shaped by individual perception, societal impact, and cultural norms.
These dimensions form a comprehensive view of AI identity, highlighting the interplay between the creation of technology itself and its broader interaction with society. This means understanding the place of AI in society, its development, interactions with individuals (Gutoreva 2024) (Maher, Ventura, and Magerko 2023), and the nuances of its impact on various facets of human life.
The identity of AI is intricately linked to multiple ethical dilemmas, including responsibility, accountability, fairness, transparency, and trust (Benjamin 2019).
These issues are central to the ongoing discussions surrounding the regulation and governance of AI, as well as its cultural and social impacts (Arora et al. 2023). Furthermore, it is vital to recognize the role of media representations of AI in popular culture, as they significantly shape public attitudes and beliefs about this technology. In this context, the emergence of Human-Centered AI (HCAI) (Shneiderman 2021), with its emphasis on human values and agency, represents a pivotal shift in the AI landscape.
In conclusion, the world of AI is ever-changing, with new creators, creations, and ideas constantly emerging as technology advances.
Highlighting the interplay between the technology itself and its broader interaction with society, we define AI Identity in two dimensions to form a comprehensive view of AI identity.
Internally, AI Identity includes the collective characteristics, values, and ethical considerations embodied in the creation of AI technologies; externally, AI identity is shaped by individual perception, societal impact, and cultural norms. The discussions within this paper shed light on the significant impact of diversity and inclusion in shaping public perceptions and understanding of AI, demonstrating how these narratives influence the discourse surrounding AI technologies in various societal contexts.
Moreover, by proposing the AI identity framework, which captures the impact of various social constructs such as diversity, fairness, inclusion, bias, sense of belonging, and accountability across the creators, creations, and consequences of AI, we advocate for a more inclusive and responsible AI ecosystem.
This AI identity Ecosystem lens highlights the need for the development of AI technologies that are equitable, accessible, and beneficial for all segments of society. This paper serves as a call to action, urging the AI community to ground the development of AI in the human experience. An approach that creates technology to address the needs of diverse populations, which, in turn, fosters greater inclusivity and engagement in AI development.
Author Details
Yash Tadimalla is a final year Ph.D. student in the College of Computing and Informatics at UNC Charlotte, where he is pursuing an interdisciplinary degree in Computer Science and Sociology. His research explores how an individual's identity influences their interaction with and learning of technology, particularly in the domains of Artificial Intelligence (AI) and Computer Science (CS) education.
At UNC Charlotte, he assists with various NSF research projects under the Center for Education Innovation (CEI) Lab and the Human-Centered Computing (HCC) Lab. As the Technology Focal Point for the UN Major Group for Children and Youth Science-Policy Interface and President-elect of the World Student Platform for Engineering Education and Development (SPEED), he advocates for the equitable advancement of STEM education, mental health, and technology on a global scale.
AI Literacy for All © 2024 by Sri Yash Tadimalla, Mary Lou Maher is licensed under CC BY-NC-SA 4.0