In this episode of Unmasking the Machine: A Sociological Look at AI, a student panel explores the rapid AI boom and the myth that technology is neutral. Drawing on reporting from UC Santa Cruz and the BBC, the conversation examines competing narratives about artificial intelligence: one framing AI as something that can be responsibly shaped through research, governance, and interdisciplinary collaboration, and another portraying it as an economic revolution that will create winners, losers, and potential “carnage.” Through sociological concepts like the politics of technology, hidden labor, environmental costs, and the social construction of intelligence, the panel asks who benefits from AI, who is left out, and what power structures shape its development. As students on the verge of entering the workforce reflect on how AI is already changing education and careers, the episode considers a larger question: if AI feels inevitable, how can individuals and societies still shape its future?
Hosts:
Carey Faulkner, Associate Professor of Sociology
Kelly Miller, Senior Instructional Designer
Panel:
Ashanti Amastal, Class of 2028
Nathalie Hernandez, Class of 2027
Emily John, Class of 2028
References:
Arteaga, A. (2026, January 13). Shaping the future of artificial intelligence. UC Santa Cruz.
Islam, F. (2026, January 28). AI boom will produce victors and carnage, tech boss warns. BBC News.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Bareis, J., & Katzenbach, C. (2021). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855–881.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
Carey Faulkner (0:02 - 0:47)
Welcome to another episode of Unmasking the Machine, a sociological look at AI. The podcast where we take a deeper look at current headlines to examine the social forces behind the technologies. I'm Carey Faulkner, Associate Professor of Sociology at Franklin and Marshall College.
Kelly Miller (0:17 - 0:32)
And I'm Kelly Miller, Senior Instructional Designer. To help us explore today's topic, Error 404: Neutral Technology Not Found, we're joined by three students from Carey's Sociology of AI class. Hi. You guys, introduce yourselves.
Ashanti Amastal (0:34 - 0:39)
Hi, I'm Ashanti Amastal, the expert for our first article from UCSC on shaping the future of artificial intelligence.
Nathalie Hernandez (0:40 - 0:47)
And I'm Nathalie Hernandez, the expert for our second piece from BBC, "AI boom will produce victors and carnage, tech boss warns."
Emily John (0:47 - 1:03)
And I'm Emily John, the sociology expert for the group. My role today is to help us think sociologically about the current AI boom or the rapid expansion of artificial intelligence and how it will reshape things such as work, inequality, and power in society.
Carey Faulkner (1:03 - 1:07)
It sounds like we have a lot to dive into. So let's start the briefing.
Ashanti Amastal (1:07 - 3:56)
So thank you for having us.
Our group looked at two different articles about the current AI boom, but they framed that boom very differently. One focuses on how universities and experts are trying to shape AI more responsibly, while the other emphasizes disruption, risk, and the possibility of winners and losers. So I had article A, which is from UCSC.
The article presents AI as something that could still be shaped through research, regulation, and interdisciplinary collaboration. It highlights several experts studying the environmental costs of data centers, the hidden labor of AI, the need for more transparency, and the impacts AI has on education, whether it be reading, writing, or intellectual property. Overall, the article points out various forms of bias in writing tools and acknowledges various forms of harm, while still trying to maintain this image of AI as something more positive.
So the article is being pushed by various sociologists, academic experts, and researchers studying AI in different fields, in comparison to our next article, which mainly focuses on a tech CEO's business perspective; that's a little sneak peek of that. My article quotes various university researchers, and the idea is that understanding AI requires multifaceted, interdisciplinary collaboration.
Its key points include environmental impact, like how training AI requires massive amounts of energy; labor and jobs, the idea that AI is not eliminating entire occupations but automating tasks and changing responsibilities; surveillance and algorithmic management; bias, whether through scheduling or the evaluation of certain systems; AI and transparency in writing; AI in education, so how students are learning to read and write now; misinformation; privacy; and the primary focus on English, which obviously leaves out marginalized communities and non-English speakers.
So overall, the article takes a more cautious but optimistic tone about AI. It acknowledges real risks like environmental costs, jobs, and bias, but argues that these problems can be addressed with certain experts, universities, and policy makers to guide AI in its development. In other words, the article frames AI as something that society can still shape and regulate into something that will help our future.
So overall, the UCSC article focuses on governance, research, and responsible development, but the second article we look at frames the AI boom very differently, focusing more on economics.
Nathalie Hernandez (3:57 - 5:56)
So thank you for the sneak peek, Ashanti. The article you focused on from UC Santa Cruz focuses more on how researchers and policy makers are trying to shape the future of AI in more responsible ways.
It talks about things like the social impacts of AI and why governance and research are important for guiding how the technology develops. However, the article that I looked at from BBC takes a slightly different perspective and focuses more on the economic side of artificial intelligence as a whole. It discusses comments from Cisco's CEO, Chuck Robbins, who argues that AI could actually become something much bigger than the internet itself, which is a pretty big claim to make, in my opinion. But at the same time, he warns that there will be both winners and what he calls carnage, basically losers, as AI continues to grow.
What he means by that is that while some companies and industries will likely benefit a lot from artificial intelligence, others might struggle or fail to keep up with all of these technological advancements. The article also mentions that some experts think the AI market could become a bubble, meaning there's a lot of excitement and investment right now that might not last, with the long-term benefits probably not being seen during this excitement. Another major point is how AI could change or even eliminate certain jobs, specifically in areas like customer service, so because of this, workers are encouraged to learn how to use AI rather than ignore it.
It also briefly brings up some risks, like how AI could make cyber attacks or scams much more sophisticated, and this is kind of scary because it could lead to some job displacement in certain industries. Overall, though, the article mainly frames AI less as something that needs to be carefully governed and more of a powerful technological revolution that will transform the economy and might lead to both winners or losers, or, in Chuck Robbins' words, carnage.
Emily John (5:57 - 7:33)
What stands out to me sociologically is that both articles are talking about the AI boom, but in completely different social positions, so we have the UCSC article framing AI as something that can be responsibly used and shaped by experts, universities, and policy, but then we have the BBC article framing it as a massive economic transformation that will create both success and destruction, so already we can see that AI is not just a neutral technology. It's being interpreted through different institutional interests and different ideas on what matters most, so I want to pull some readings from class.
The first reading I want to talk about is the Winner reading, "Do Artifacts Have Politics?" Winner says technologies are not neutral tools; they shape power, social institutions, and access, and we can clearly see that here. In the BBC article, AI is described in terms of victors and carnage, as was stated earlier, and this makes the political consequences really obvious, because some people, companies, and even countries will benefit while others are displaced or left behind. In the UCSC article, the politics are a little less dramatic, but they're still there, and the question becomes: who gets to shape AI's future? Is it mostly universities, researchers, and institutions with resources? So in both cases, technology is tied to power.
Ashanti Amastal (7:35 - 7:56)
Yeah, in the article, they talked about the importance of transparency, moving away from the "black boxes" where people don't fully understand why AI is making the decisions it's making. So researchers are trying to find more ways to make AI explain itself, its process, and how it's giving information, and the power that comes with that, so being more accountable.
Nathalie Hernandez (7:57 - 9:08)
I feel like something that really jumps out at me from Winner's arguments is that technology isn't really neutral, and you can really see this in the BBC article, because Chuck Robbins, as you mentioned, talks about these winners and losers, victors and carnage, basically saying that artificial intelligence is going to shift power: some companies or workers or even countries are going to benefit a lot, while others will lose out entirely, just depending on when the boom happens, or how long it lasts, or literally anything that has to do with the market itself. It's not really just a technological issue; it's also about who has access to those resources, who controls the technology, and who ends up on the losing side, like you mentioned.
I feel like that's the politics of artificial intelligence really in action. It's really interesting how the article frames this in terms of markets, and from a sociological perspective we really see how it's about power and inequality in all of these topics that we're mentioning, and how technology is shaping the economy, social opportunities, or even global influence. That's why you can't really think of AI as just a neutral tool; you have to think of it holistically and what it's really affecting.
Emily John (9:09 - 10:01)
I also want to talk about the Crawford reading, and especially her chapters on labor and infrastructure. Crawford reminds us that AI is not just like the software on the cloud. It depends on real physical systems, energy use, and labor.
The UCSC article points to this when it discusses carbon-aware computing, electricity costs, and the strain of data centers. It also talks about the labor market impacts like surveillance and algorithmic management. The BBC article talks more about job loss and disruption, but not as much about the hidden labor or the material infrastructure behind AI.
Crawford here helps us see that the AI boom is not just about innovation, but also about the environmental costs, labor restructuring, and unequal burdens, which was explored more in Ashanti's article.
Ashanti Amastal (10:01 - 10:32)
In my article, it mentions the risks and the outcomes of AI in the labor market, but very much brushes them aside as a positive regardless. It mentions that AI will make work more automated, more efficient, but still provide jobs, not the idea that it was eliminating them. It also talks about certain environmental risks, like issues of water consumption or climate impacts, but brushes them aside completely.
It mentions them with barely any real acknowledgement, and then it's like, on to the next thing, AI's the answer.
Nathalie Hernandez (10:33 - 12:05)
I think similarly, my article also brushes over the hidden labor behind AI, like the data annotators and moderators we mentioned in class before, or the servers and the energy that actually make these systems work. Because artificial intelligence isn't just code in the cloud; it really depends on a ton of material and human labor that has to be put into these systems for them to even work or even be started up.
Another quote that really jumped out at me from my article was that Robbins "urged workers to embrace, not fear the technology." I feel like this really shows how Robbins is encouraging workers to learn how to use AI rather than resist it, which kind of assumes that adapting to the technology is now the responsibility of the workers. That's a shift in power, a shift in focus in how the labor market is going to be talked about, or how people are going to prepare for job interviews or job positions in the future; even though the economic changes are being driven by these large companies, the burden of adapting is still being shifted onto workers.
So I feel like that's something that I took away from it: to kind of put this trust in AI and know that it's going to be here long term. Like we've said before, that's kind of scary, but it's also kind of a cheat sheet in a way, because it's telling us that if we are good at artificial intelligence, or if we know how to use these tools that will likely be there for a long time in the job market, we'll be able to succeed in the job market that comes in the future.
Emily John (12:06 - 12:45)
And I also think that our class idea of intelligence being socially constructed is really helpful here, because both articles treat AI as a marker of progress, innovation, and even national strength, but sociology pushes us to ask who gets defined as smart, valuable, or future-oriented in this economy. And historically, definitions of intelligence have always been tied to hierarchy, and today AI is becoming one of the ways that status gets distributed, whether it's between workers and managers, between companies, or even between nations.
Nathalie Hernandez (12:46 - 14:09)
I think for this, definitely the idea that intelligence is socially constructed is really useful here, because I feel like the article that I talked about frames AI as a marker of progress or national strength, like you mentioned, talking about how certain countries or companies and leaders are presented as smarter or just more capable, just because they're leading in AI. And so I feel like sociology really pushes us to ask who decides what counts as intelligence or as innovation, and who benefits from being seen as smart. Like in some cases, people or places outside of these networks, because of their skills or location or access, can be left behind, and so artificial intelligence isn't just technological progress, but it's also another way that social and economic hierarchies get reinforced.
For example, the article that I talked about mentions the UK having some type of superpower status because of its use of AI. It mentions the US and China being the leading powers in artificial intelligence, but once we talk more about other countries that also might be moving up in that use, it's kind of important too to see which other countries might be leading because of the help of artificial intelligence.
Ashanti Amastal (14:10 - 14:40)
In the article, they talk about intelligence in the sense of language, so talking about marginalized groups and their dialects, like AAVE being excluded from training data or from LLMs, and how that impacts education or how people are interacting with one another.
And then AI is limiting people's critical thinking skills, and obviously people's intelligence, as you grow up with this embedding of AI, whether it be in education. Schools are more likely now to embed AI in how students are learning to read and write, almost taking away people's critical thinking skills.
Emily John (14:41 - 15:25)
I also want to touch on one more concept. I want to talk about heteromation, the idea that systems that look automated still depend heavily on human labor. I know we touched on it a little earlier, but I just wanted to emphasize it a little more, because both articles talk about AI as transformative, but neither really focuses on the hidden workers, moderators, and the infrastructure labor that makes these systems possible.
And I think that that absence makes AI look more autonomous than it actually is, and it gives AI this credit, this almost pedestal, that I don't really think it deserves. And so I feel like that's a very important concept just to keep in the back of our minds as well.
Nathalie Hernandez (15:26 - 16:38)
For the BBC article, it really does brush over it. I know we've talked a lot about the topics that our articles just seem to brush over, but it really makes AI seem like a more powerful, independent tool than it actually is, because it talks about how AI works in the job market and in other areas that don't seem like they'd involve human labor, since it's framed in more socially constructed terms. But I feel like that's really important, because it hides the inequality that's in those systems.
People who work behind the scenes often don't really see the benefits of what AI is creating. And while these companies and leaders are able to reap most of the benefits, the workers are not. I feel like it's also kind of disguising, or attempting to disguise, exploitation in a way, because it's framing AI as more of a superpower or a winner, as my article emphasizes, in this kind of global race to see who is able to benefit more from AI.
So I feel like it's definitely hiding, behind the excitement of innovation, the loss of autonomy or the de-skilling of work, for example.
Ashanti Amastal (16:39 - 17:12)
I think my article highlights AI as almost a solution. It raises issues and then frames AI, with regulation or policy, as the only solution, and it just emphasizes this.
AI is the only direction to go, basically. It mentions issues of environmental cost, job changes, education, and then how only AI can make it better. It's honestly insane to me how much AI is embedded in everything that we do, whether it be school, media, literally everything.
Kelly Miller (17:13 - 17:28)
So as current college students taking this class, you have a lot of extra readings that may or may not be mentioned in the podcast, but you guys are very close to entering the workforce. Of all these issues, which are the most concerning for you personally?
Nathalie Hernandez (17:29 - 18:26)
I feel like honestly this was kind of reassuring to my career path.
I want to do consulting, and I'm going to be working a lot with AI; even now, part of my job is to use AI, and we've started to report on it. I wrote in one of the reflexive memos, I think, that AI has actually been used more and more by companies, and I've seen it firsthand. So I feel like it's not really a worry, but more so giving us something to focus on.
My article really emphasizes if you know how to use artificial intelligence now you will probably be really set on the job market and you'll be seen as valuable just because you know how to use a tool that was not really here a couple years ago. So I feel like in that sense it's kind of reassuring. This tool is going to be here long term.
It might not be morally correct or it might not be ethically good for the environment or for society as a whole but I feel like that's one of the biggest things that I've taken away from this big boom.
Emily John (18:26 - 20:35)
I want to be a doctor, and AI surprisingly has really entered the healthcare field, and I was really shocked. Last March I shadowed an infectious disease doctor, and while she was taking me through her day, one of the things she does before she sees a patient is use this AI algorithm tool for scientists, doctors, and med students; it's connected to the National Institutes of Health, and they have access to all of it. She'll be like, okay, my patient is 93 years old, and these are the symptoms, and so on.
Can you give me papers that I can refer to to make some sort of definitive diagnosis, because I'm thinking of this? I was telling her I'm really surprised that you're using AI, because I didn't think healthcare was something for that. I think it's kind of scary to use AI on a living human being, because how can you trust a robot? This robot doesn't directly interact with these patients.
This robot doesn't talk to these patients so how do you trust it? But something that she left me with that I take it with me to school as well is that she's not letting AI do the thinking. She's doing the thinking because she told the AI machine thing this is the diagnosis I want to make.
Can you give me papers that will back this up? Can you give me? And then she'll look through it and she'll never just blindly accept it.
She'll always reject something. She'll be like I don't like this one. Look the AI used a simple version of this.
It was interesting seeing her almost arguing with a machine, and she was like, this is what you should take with you to your education, to your job, because at the end of the day AI is not going to disappear. This is going to be something that's integrated into all of our jobs at one point, and we're just going to have to learn how to deal with it and live with it. And so I think keeping your critical thinking and not letting the robot think for you is very important, and I know when I become a doctor, even if I have access to these things, I want to make all the decisions in my old noggin first before, you know, letting AI decide for me, because that's just really scary.
Ashanti Amastal (20:36 - 21:02)
I think for me, my job experience was definitely really different. These past two summers I've been working as a squash coach with kids, so unless I'm really, really lazy, that's when I use AI, either to make a lesson plan or to make games for me. But other than that, I feel like it's very different; my job is basically to make my kids happy and come back, so AI really doesn't have anything to do with that, because it doesn't really know my personal experiences unless I tell it.
Carey Faulkner (21:03 - 22:17)
So one thing that I have heard in both of the articles and I hear from you all as well is this idea of AI being inevitable.
The articles sort of had themes of inevitability, of transformation, of competition kind of built into them, and I can see those aspects in some of your narratives as well, talking about your futures. There is another article that we read that talked about socio-technical imaginaries; I can't remember the authors, but we'll put that in the show notes. I think a question that I would like to ask you is: if we're talking about AI as inevitable, what can or should people do?
Right? Like, if it's inevitable, how inevitable is it? In what ways is it inevitable?
How might people go about shaping this system that is supposedly inevitable, and in what ways can people participate in how it is and isn't transformative? Are there ways for change to happen?
Emily John (22:18 - 23:52)
It's funny that you asked that. The other day I was on TikTok, and there was a video where this guy was like, you know me when my ChatGPT free unlimited thing runs out, and so I need to bring back the old noggin. Basically he's saying that he'll rely on ChatGPT to the absolute max, but when he really, really needs to, he'll use his brain to do this essay or this assignment or whatever. It was something like that.
Let me tell you, I think it had over 250k likes, and all the comments were like, yes, oh my god, so real, this and that. I just find it really funny, because I think people are honestly treating it as a big joke that we're starting to rely on AI for our thinking, just as a normal part of our life. I hate to be a pessimist, I really do, but considering what I've been seeing online about people just not really taking it that seriously... even when I tell people about this class, they'll be like, what, you need an AI class for that? I just don't think people take it seriously enough for us to make a full change. But at the same time, there are still people in our class that we are able to have these conversations with, and we're able to think for ourselves, and I know we're not the only, what, 15 people out there. So I think that on one end there are a lot of people who don't take it seriously, but there are enough of us to where maybe, you know, we can take it seriously enough to protest against it.
Nathalie Hernandez (23:52 - 24:56)
I also think it's inevitable, but if we really want to get around it, I guess we would have to resort to our own self-discipline if we want to see the long-term effects of it. Like, if I were to use it to write an entire essay for me repeatedly and consistently throughout the semester, then I'll see at the final exam how I'm going to do, or just things like that, where people know that what you're doing will make you see the downside of AI. But I don't think that there are ways to put a stop to it, just because this is being run by really big tech corporations. It's being pushed out with a lot of big budgets behind artificial intelligence, and so I don't really think that someone's going to put a stop to it, or someone can, at least for now. So it's really, I think, something that we need to take into account for ourselves and kind of self-reflect on how often we're using it or the way that we're using it.
Ashanti Amastal (24:57 - 25:11)
I think the biggest thing is to honestly stay curious, like question everything, not just blindly accepting anything that comes from AI as true. Questioning where it's coming from, who's saying it, why is it being said, why am I even using AI? Question everything, basically.
Carey Faulkner (25:12 - 26:07)
I do think it's important to remember that these are useful viewpoints for individuals to take with them, definitely to ask questions. And I want to suggest asking other people questions too, because collective efforts are far more successful than individual efforts, individual self-discipline, in terms of making broader kinds of social change. If we look around the world, we can see that different countries are engaging in different forms of regulation, and that relates to how their populations are voting and the sort of collective opinions that they are expressing in relation to AI. So I would agree that I see some degree of inevitability happening here, but exactly how that plays out is open, and I want to encourage you all, as the young people, to feel a sense of ownership over the world around you and to engage in collective efforts to make the changes that you want to see. Please, young people, please.
Kelly Miller (26:08 - 26:14)
Wow, well said. So that brings us to the end of this episode of Unmasking the Machine. Thanks for tuning in.