In this episode of Unmasking the Machine, students dive into the growing gap between what AI can actually do and what companies claim it can do. Through discussions of AI washing, layoffs driven by expectation rather than reality, and the powerful narratives shaping corporate decision-making, the conversation reveals how AI is as much a social force as a technological one. Drawing on sociological insights, the group explores how hype, legitimacy, and fear of falling behind are influencing everything from executive strategy to worker insecurity. The result is a nuanced look at how AI is being used not just as a tool, but as a story, one that is actively reshaping labor, power, and the future of work.
Hosts:
Carey Faulkner, Associate Professor of Sociology
Kelly Miller, Senior Instructional Designer
Panel:
Marina Nikolic, Class of 2028
Carter Lawrence, Class of 2027
Leo Wang, Class of 2029
Amalie Kinney, Class of 2026
Companies Are Laying Off Workers Because of AI’s Potential—Not Its Performance. (2026, January 29). Harvard Business Review.
Poinski, M. (2026, February 2). Why “AI Washing” Is A Huge Risk For Your Company. Forbes.
Bareis, J., & Katzenbach, C. (2021). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881.
Cave, S. (2020, February). The problem with intelligence: Its value-laden history and the future of AI. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 29–35). Association for Computing Machinery.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Miceli, M., Posada, J., & Yang, T. (2022). Studying up machine learning data: Why talk about bias when we mean power? Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–14. https://doi.org/10.1145/3492853
Schwartz, R. D. (1989). Artificial intelligence as a sociological phenomenon. Canadian Journal of Sociology/Cahiers canadiens de sociologie, 179–202.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
Carey Faulkner (0:02 - 0:15)
Welcome to Unmasking the Machine, a sociological look at AI, the podcast where we take a deeper dive into current headlines to examine the social forces behind the screen. I'm Carey Faulkner, Associate Professor of Sociology at Franklin and Marshall College.
Kelly Miller (0:16 - 0:29)
I'm Kelly Miller, Senior Instructional Designer. For this episode, we are joined by four students in Carey's Sociology of AI class, who will be discussing Behind the Algorithm: Power, Labor, and the AI Narrative. Please introduce yourselves.
Marina Nikolic (0:33 - 0:38)
My name is Marina. I'm going to open with summarizing an article in Forbes.
Carter Lawrence (0:38 - 0:46)
Carter Lawrence, the expert for our second piece from the Harvard Business Review. And I'll be talking about how companies are laying off workers because of AI's potential.
Leo Wang (0:47 - 0:57)
And I am Leo Wang, one of the sociology experts for the group. My role today is to help us think sociologically about the legitimacy narrative of using AI to reorganize labor.
Amalie Kinney (0:58 - 1:08)
And I'm Amalie Kinney, the other sociology expert for the group. My role today is to help us think sociologically about how AI is socially constructed, symbolically powerful, and restructures rather than replaces labor.
Carey Faulkner (1:09 - 1:13)
All right. It sounds like you've got some good stuff to talk about today. Let's start the briefing.
Marina Nikolic (1:13 - 4:37)
Okay. I want to start by talking about a very recent article in Forbes, published in February 2026: Why AI Washing Is a Huge Risk for Your Company, by Megan Poinski.
In the age of AI, we've seen major shifts in corporate leadership and billionaires' business ventures, making waves throughout the entire economy. However, the risks associated with AI have been drastically underestimated in the corporate world. We talk about them as a public, but we don't talk about what it looks like within specific businesses.
AI washing in itself is a practice of exaggerating a company's AI capabilities to inflate its valuation and attract investment. AI washing like this can lead to litigation, investor complaints, and the loss of liability insurance, meaning that individual executives can be held personally liable for AI washing. The core of this article is a warning.
Many companies are currently exaggerating what their AI can actually do to drive up their stock prices and impress investors. Not only is there an issue with legal liability, but the article points out a shocking statistic from Russell Reynolds Associates. Globally, 234 CEOs left their jobs last year, which is 21% more than the eight-year average for CEO departures.
This means turnover for CEOs is at an all-time high. The statistic is representative not only of board impatience, but of impatience among investors as well. So short spurts of underperformance in a quarter that would have been forgivable in the past are now completely unacceptable to boards and investors, and constantly lead to changes in CEO positions.
A major takeaway from this is that the risks associated with AI washing are not confined to insider groups and corporate boards. Some of the biggest headlines in February had to do with the Federal Reserve, the cause of quote-unquote rollercoaster valuations, with everything dropping and climbing every single month. President Trump nominated Kevin Warsh, whom observers deem a fiscal hawk, to succeed Jerome Powell as chair of the Federal Reserve.
Following his announcement, precious metals saw a sharp decline with silver losing a third of its value immediately post-announcement. The underlying economic imbalance impacts the now common struggle with affordability for the average American, making the Federal Reserve's interest rate decision tricky. Another huge economic development has to do with the world's richest man and polarizing political figure, Elon Musk.
Once the world's biggest electric vehicle maker, Tesla reported its first-ever decline in annual revenue despite riding the AI wave. Tesla is no longer the world's biggest electric vehicle maker, a company in China is, and now Elon plans to turn Tesla into an AI and robotics company, canceling production of two of the most lucrative models in all of Tesla. Representing CEOs and boards disagreeing once again, this CEO decision did not sit well with investors, and Tesla stock fell over 5% immediately after the announced switch to AI and robotics.
To understand the risks on an insider level, Poinski interviews Jason Bechara, a financial practice leader at MSI insurance group and an executive risk specialist, about the financial risks of AI washing to both you personally and your company, and how to mitigate any damages. He warns that we have been existing in the quote-unquote hype era of AI, but now we're shifting into a time of consequence for how much AI inflation there is. When asked about the risks of AI washing, Bechara plainly states, there's definitely reputational risk for the business and the entity.
I would say that the risks more specific to AI washing are personal liabilities, and executive representation liabilities as well. They have a fiduciary responsibility to do what's in the best interest of the shareholders: the balance sheet, and protecting shareholder value.
So if their team sees misrepresentations, they themselves are personally liable.
Carter Lawrence (4:38 - 7:34)
All right, so I wanna bring up my article from the Harvard Business Review about how companies are laying off workers because of AI's potential. So most people hear about layoffs and think companies are cutting workers because AI is already doing their jobs. But the article argues that's usually not what's happening.
Companies are not laying people off because of AI. They are laying people off because they expect AI to be able to do it in the future. These layoffs are happening due to anticipation rather than actual performance.
The article explains that AI can help with tasks like writing, coding, and data analysis, but it can't fully replace human roles. In workplaces, AI is improving parts of jobs, not eliminating entire positions. Even so, many companies are restructuring, slowing down hiring, and cutting employees.
A survey of 1,006 executives found that 39% of organizations expected to make headcount cuts in anticipation of AI, 21% had already made low to moderate cuts, and 29% were hiring fewer people for the same reason. Only 2% reported making large layoffs because AI had actually been implemented in a way that truly replaced jobs. So the real-world impact of AI is still limited in a lot of cases, but the fear and expectation around what AI soon might be able to do is already shaping major business decisions.
The Harvard Business Review suggests that AI can sometimes become a convenient explanation for layoffs that may really be about something else. Citing AI can hide the fact that companies need to reduce costs, correct overhiring, or want to appear adaptive to look better for investors. In that sense, AI may not be the real cause.
It is a way for companies to pursue their goals without scrutiny. Leading CEOs from companies like Ford, Amazon, Salesforce, and JPMorgan Chase have all said that white-collar jobs in their companies may soon just disappear because of AI. Some companies have already acted aggressively on this.
Klarna reduced its workforce by 40%, invested heavily in AI, and later admitted that it created lower-quality service. Duolingo also faced backlash on social media after saying that AI would replace human contractors. The article recommends a careful approach.
Companies should focus on the uses of AI where results can actually be measured. They should test AI in areas like systems development, customer service, or programming. They should run controlled comparisons between work done with AI and work done without AI.
Companies should make changes through attrition rather than large, immediate layoffs. The article is trying to tell us that this is not a story about AI completely dominating the workforce and beginning to take over jobs right now. It's just a message about companies that are testing the waters of the idea of what AI might become for their company and the world.
Many companies are making these decisions based on the hype around AI and the fear of missing out on opportunity. Currently, there is no definitive proof that AI can do a better job than humans.
Amalie Kinney (7:36 - 11:47)
All right, thank you for those article analyses. Both really interesting pieces on how AI is affecting the workforce. And they actually connect really well with some sociological concepts that I wanna go over today.
So to begin my sociological deep dive, I want to talk about discussions of artificial intelligence, especially layoffs and AI hype, and how those things connect to sociological concepts about labor, power, and knowledge. Both of the articles discussed highlight something really important, and that is how AI is not just a technological issue, but it's also a social phenomenon. The HBR article argues that companies are laying off workers based on AI's future potential and not its actual current performance.
This means organizations are acting on expectations and assumptions rather than on actual evidence. This connects directly to Ronald David Schwartz's paper, Artificial Intelligence as a Sociological Phenomenon, where Schwartz argues that intelligence itself is not purely technological, but it's also socially situated and constructed through cultural and institutional practices. So in other words, AI doesn't just exist as a tool, but it exists as an idea shaped by society.
So when companies say AI is replacing jobs, they are participating in a shared belief system that defines what AI is and what it can do, even if that belief is premature. Additionally, the Forbes article introduces the concept of AI washing, where companies exaggerate or misrepresent their AI capabilities. And this is not just a marketing issue, but it's also sociological.
Stephen Cave's article, The Problem with Intelligence, helps explain why, because he argues that the concept of intelligence is value-laden and historically tied to power and hierarchy. When companies label technology as intelligent, they're not just describing it.
They are also assigning value, claiming authority, and justifying decisions, and this gives AI symbolic power. AI washing works because society already treats intelligence as something inherently valuable and superior, so companies can use AI language to attract investors, legitimize layoffs, and position themselves as innovative, even if the technology doesn't fully support those claims. And both articles suggest AI is replacing workers, but Kate Crawford's Atlas of AI challenges that assumption.
In the labor chapter, Crawford shows that AI often does not eliminate work, but instead it reorganizes and intensifies it. For example, in Amazon warehouses, workers are constantly being monitored, and their time is being tracked, and they're even being evaluated by algorithmic systems. As she describes, workers' time and movement are controlled down to the smallest detail, showing how efficiency, surveillance, and automation are all deeply intertwined.
This reflects a key sociological shift because workers are increasingly treated like machines, even as machines are described as intelligent. So instead of replacing labor, AI often increases control over workers, extracts more productivity, and restructures how labor is organized, which means layoffs justified by AI might actually reflect changing labor strategies, not true automation. I also wanted to bring Anderson's analytic autoethnography into the conversation because it reminds us that knowledge is always situated and interpreted through sociological experience.
This matters because of the way we talk about AI, the way companies understand AI, and because the way workers experience AI are all shaped by social position and perspective. Executives may see AI as opportunity, efficiency, and innovation, but workers, on the other hand, may experience it as surveillance, job insecurity, and even a feeling of increased pressure to perform and to meet unrealistic expectations of productivity and efficiency. So the meaning of AI is not fixed.
It depends on who is interpreting it and from where. So to kind of finish off, the HBR and Forbes articles show that AI is already shaping major decisions like layoffs, but sociology helps us understand why. AI is not just a technology, it is a socially constructed idea, a powerful symbol used to justify decisions, and a tool that restructures labor rather than simply replacing it.
So when companies say AI is the future, we should really be asking ourselves: is this based on reality or just belief? And who benefits from this narrative?
And then also, what is happening to the workers in the process? Because ultimately, AI is not just changing work, it's revealing how power, knowledge, and labor operate in our society. And with that, I'll turn it over to our second sociological deep dive expert, Leo.
Leo Wang (11:48 - 15:42)
Thank you, Amalie. What stands out to me is that these two articles are not really about two separate problems. They are about two stages of the same social process.
The Forbes article focused on AI washing, meaning companies exaggerating what AI can actually do. And the HBR article focused on companies already laying people off because of AI's potential rather than its proven performance. If you put these two together, the sociological question becomes: why are organizations making such serious decisions now if the technology is still being overstated or is not fully reliable?
My answer is that AI already has social power before it has full technical power. In other words, AI does not need to fully work in order to already shape organizational behavior. It only needs to be widely seen as important, valuable, and inevitable.
This is one reason sociology is so useful here, because it asks us to look not only at what technology does, but at how society gives technology meaning. That is where this Forbes article is especially useful. It shows that AI is not only being treated as a tool, it is also being treated as a signal.
When firms present themselves as AI-driven, they're not just describing the software. They're also trying to look innovative, future-ready, competitive, and attractive to investors. In sociology, we might call this legitimacy or being seen as credible, modern, and worthy of confidence.
So AI washing matters not only because it may be misleading, but because it shows that the organizational environment rewards AI claims even before those claims are fully substantiated. This connects well to the idea that AI can be talked into being. This is the title used by Bareis and Katzenbach.
And their point is that AI is not simply discovered as a neutral fact. It becomes socially real through narratives, promises, and shared expectations about the future. So when companies exaggerate AI, they're not only overstating a product, they're helping create the reality in which AI already appears transformative and necessary.
Then the HBR article shows the next step: once AI is accepted as the future, organizations begin making labor decisions based on expected potential rather than actual performance. That means layoffs are not always happening because AI can already do the job well. They can also happen because managers believe AI will soon be capable enough, or because they want to show that the company is preparing for that future.
So the firms begin reorganizing the present around the imagined future. This is why Langdon Winner becomes important. In Winner's article, Do Artifacts Have Politics?, Winner argues that technologies have politics.
For this discussion, that means we should not treat layoff as an automatic outcome of technical progress. AI did not decide to eliminate jobs. The manager at organizations made those choices.
But once those choices are framed in the language of AI, they can sound objective, rational, and even inevitable instead of political and contestable. We can go even further with Miceli, Posada, and Yang in their article, Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?
The deeper issue is not just whether AI claims are accurate; it is who gets to decide when the AI is ready, what counts as efficiency, and whose risks are treated as acceptable. Those are power questions, not purely technical ones. And Crawford helps us see that even AI capability is never a fully neutral measurement.
People decide what data counts, what gets measured, what gets ignored, and what gets presented as evidence. So even the claim that AI is ready or almost ready is already socially shaped.
Carter Lawrence (15:42 - 15:53)
So the first question is for you, Amalie. How does AI reinforce existing power structures in the workplace? And does it help or hurt employee-employer relationships?
Amalie Kinney (15:53 - 18:29)
Yes, that's a really good question. I'd say from a sociological perspective, one key idea to remember here is power and control over labor. So AI allows employers to monitor workers more closely than ever before through tracking productivity, time use, and even behavior in real time.
This strengthens what sociologists would connect to Weber's idea of rationalization and Foucault's idea of surveillance, where work becomes more efficient, but also more controlled. Workers lose autonomy while employers gain more precise oversight. So even if AI doesn't replace workers, it often intensifies control over them, reinforcing the power imbalance between employer and employee.
Another important concept is symbolic power and legitimation. AI can be used as a justification for decisions like layoffs, restructuring, or increased productivity demands, even when AI isn't fully capable of replacing jobs, which is something we've already discussed. But companies can still invoke this as a reason for change.
So it gives employers a kind of discursive power where they can frame decisions as inevitable or technologically necessary, which makes it harder for employees to push back. In this sense, AI doesn't just change work, it changes the narrative around work, often in ways that benefit those already in power. And I think in terms of employer-employee relationships, AI tends to strain trust rather than strengthen it.
Workers see layoffs happening based on AI's potential or when they're being monitored more closely, and then that makes them feel replaceable or devalued, which can lead to lower morale, increased anxiety, a sense of job insecurity. And at the same time, remaining employees are often expected to do more with fewer resources, especially if layoffs occur prematurely, which can create resentment toward both management and the technology itself. However, I think it's not entirely negative.
AI can improve relationships if it's used to support rather than replace workers, for example, by reducing repetitive tasks or helping employees build new skills. In those cases, AI can enhance productivity while also maintaining trust, but that does depend heavily on how organizations choose to implement it. So overall, AI tends to reinforce existing hierarchies by giving employers more control, more justification for their decisions, and more ability to reshape labor.
Whether it actually helps or hurts relationships ultimately depends on whether it is used as a tool for collaboration or just simply as a tool for control. And then I had a question for you, Carter. If companies are laying off workers based on expectations about AI rather than its actual performance, what does that say about how belief and hype shape economic decisions?
And then also, do you think it's ethical for companies to lay off workers based on technology that hasn't yet proven it can replace them?
Carter Lawrence (18:30 - 19:41)
Thank you for your question. Based on my article, if companies are letting workers go based on what they think AI can do rather than what it has actually done, that shows how much people believe in AI and how much they think it will change things. Companies don't always react to what has happened; sometimes they react to what they think will happen, what investors want to hear, or what makes them look like they are ahead of the game.
So companies are willing to lay it all on the line by adopting AI before it has fully proven itself. These business choices have been driven by fear, pressure, and competition, not by facts. And for the other part of your question, whether it's ethical: I don't think it's ethical for companies to let workers go when AI has not proven itself yet.
If AI has not actually shown that it can do the job well, then letting people go is a decision based on guessing rather than real evidence. That puts workers at risk for something that may not happen and unfairly treats them as if they do not matter. It also hurts trust between employees and employers.
A better approach would be for companies to test AI and its potential before firing people, and they should use it to help workers. Companies should use AI to support workers, not replace them.
Leo Wang (19:41 - 19:54)
I have a question for you, Marina. In the article that you talked about, AI washing sounds obviously risky, both legally and financially. Could you tell us more about what you think about why so many companies are still willing to do it?
Marina Nikolic (19:55 - 21:46)
I think a lot of it comes down to the delay between right now, when we could get a bump in our stock price, and maybe five years from now, when there's gonna be legal and financial culpability. So there's an immediacy to the reward of AI washing, and then maybe a huge delay before a potential negative legal outcome. I feel like it's kind of survival of the trendiest, where everyone feels the need to hop on the AI wagon because they don't wanna be the one left behind.
And so I think investors and boards and CEOs are more afraid of becoming outdated, or becoming like a legacy brand, than they are afraid that executives are gonna be legally culpable. Maybe those executives fear being rendered obsolete by missing out on AI more, which is difficult to think about because I think of it kind of like cheating: if everyone cheats, how am I gonna keep up with everyone else if I don't too? I would say there are a couple other reasons.
I think that's the main one though. One of them is just the immediate bump in stock price. Investors wanna see that, boards wanna see that.
And then Amalie was talking about this discursive shield, where you get to blame the algorithm for difficult human choices, like layoffs or something like that, where you can say, it's just about the numbers, it's just the logical thing to do. And then also I think that a lot of it just is AI washing.
A lot of these companies just have sophisticated algorithms that you can't actually tell apart from true AI. So they just wanna say that they're doing it too. Yeah.
And then I have a question for you, Leo. Your analysis suggests that AI doesn't even need to work to be effective, right? So you build the hype before you build the actual product, right?
It just needs to be believed. If companies are using AI washing for legitimacy and using AI potential to justify layoffs, has AI become less of a tool and more of a storytelling device used by those in power to reshape the workplace without having to prove the tech actually works?
Leo Wang (21:48 - 22:46)
This is exactly what I'm saying. This is a shift in corporate dynamics brought on by AI. AI has transitioned from a technical development to a legitimacy narrative in recent years.
It exerts social power regardless of actual functionality, at least to the level it is exaggerated. By practicing AI washing, as in the Forbes article, organizations are not just marketing a product.
They are talking AI into being in the name of potential capital. This narrative then subtly convinces society of the inevitability of AI in management, or the inevitability of automation. It suggests that controversial human choices, like the layoffs in the HBR article, were not really choices at all.
It's just the algorithm. Here, the guise of objective progress forgives painful decision-making. In this sense, the idea of AI is currently more influential than the tech itself.
Carey Faulkner (22:47 - 24:10)
So I'm thinking about these two articles. We've got Forbes and we've got Harvard Business Review, and they are written for corporate leaders. So it's interesting to me because they both express a degree of caution.
We see that people are being fired and we see that people are over-exaggerating what AI is capable of doing. So there's just something interesting to me about how much caution is talked about here. And even in the Forbes article, this discussion of actual future accountability and even techniques to try to avoid that accountability, there was a part about how companies could use a third-party company to test their claims about AI.
So that later they could say, it's not our fault, sue them. I wasn't expecting so much discussion of caution. Given the choices that people are making, given the hype that exists around these technologies, we actually have a space where folks are writing about the negative consequences that come from firing a bunch of people and then being like, whoops, we were wrong.
Or your investors can sort of sue you if you are over-promising, so you really should not do that. And so I found it really interesting to see these stories as ones that actually seem to be resisting the hype.
So what do you think about that? How do we make sense of that hype resistance in these articles when there is so much of it?
Marina Nikolic (24:10 - 25:09)
I think of it as, who's the target audience, at least for the Forbes article, because what are we trying to do? I know that Megan Poinski writes for an outlet that has historically been written for executives and corporations, right? It's Forbes.
So when you read it as a warning, yes, the entire public can read it, and it can give you an insight into what's happening behind the scenes. But they spent a solid five paragraphs talking about potential mitigation strategies individual CEOs can take, right? And they're talking about legal culpability and everything.
And Carter's article too. What we're talking about is what you can do as an executive in any tech company that's using AI. And what I think they're doing by warning is simultaneously telling a random person in the public, you know, we don't actually care about this. We're trying to avoid culpability. We're trying to cut costs.
This is what we're trying to do. And yet you're also telling individual CEOs, here's what you're gonna need to do. This is what needs to happen.
Carter Lawrence (25:10 - 25:24)
I think my article does, in the beginning, kind of caution you, but towards the end they kind of switch it up, giving you advice on how you can implement AI so it doesn't cause people to be laid off.
It's a tool to help your employees, not something to replace them.
Amalie Kinney (25:25 - 26:19)
Yeah, I thought it was really interesting sort of in both articles, how from a social standpoint, they weren't really sympathizing with the workers who had been laid off. It was more like, okay, so this has happened in the past and it's going to continue happening. So how do you mitigate the fact that now you've fired these workers and now your work isn't getting done because the AI is premature.
It's not able to step up to the plate yet. And it seemed like that was more the framework, the narrative it was trying to capture. Like, yeah, it's kind of a bummer that these people were laid off en masse in some instances, but at the end of the day, how do we lay off people more equitably?
They used de-escalating language to talk about it. But at the end of the day, I still felt like it was very efficiency- and productivity-focused, and not opposed to using AI instead of workers. It was just, how do we do that and make it better?
Kelly Miller (26:20 - 26:39)
You all decided to come to a liberal arts college. How are you trying to find your values around this topic as you enter the workforce or go on to grad school, whatever your goals are?
Because the workforce is tight right now too, right? Jobs aren't as easy to come by and all that kind of stuff. So where are you finding your values?
Marina Nikolic (26:40 - 27:14)
For me at least, I'm just thinking a lot about AI washing on its own. I'm trying to distance myself from how much AI is being talked up and how much it can supposedly replace me, and ground myself in realities, which is that companies like OpenAI and Tesla are all losing money on an annual cycle.
And no matter how much you're told that all of these different majors could be rendered obsolete in five or ten years, I try to take the approach of turning that anxiety down, even if it's not entirely realistic. It's not good enough yet.
It's not good enough to replace me and other humans yet.
Kelly Miller (27:14 - 27:37)
I thought about this when you guys were talking about the Harvard Business Review and Forbes: these are aimed at CEOs. I tell this to faculty when we read journal articles on AI or whatever: AI sells right now too. So I'm thinking about it as media. Putting AI in a headline, they know it's clickbait, right?
Good or bad too. So I think that's what I always remember when I see the next AI headline is they're trying to get a reaction out of us.
Amalie Kinney (27:38 - 28:23)
I'm a graduating senior, which is so scary. But I've thought a bit about that. And I've talked to different professors and mentors in my life who I've gone to for the career advice.
So I'm a sociology major, but I'm also a psychology major. And something that's been brought up a bit in the field of psychology is that there are like AI mental health experts out there now. And they're very cost-effective.
Yeah. Air quotes around that. Mental health experts.
But they're very cost-effective and people are going to them because therapy is really expensive, especially with the healthcare system that we have in the US. So it's a very prevalent concern and other seniors that I've spoken to, it's very much top of their mind how they fit into this new world that is going to be at least partially governed by AI.
Carter Lawrence (28:24 - 28:46)
Yeah. Again, it's not going to take my job, because I want to go into construction management. AI is used a ton there for mapping out construction sites and just messaging subcontractors and stuff.
And from what I've seen, I feel like it can't really do what I do, but there are probably endless possibilities for AI. So it's definitely coming, and it's kind of a scary thought.
Leo Wang (28:48 - 29:23)
Yeah. Since I'm just a freshman, I don't really have much experience moving toward the workforce. I feel like the reason I'm taking sociology is to get a clear sense of what I am doing right now.
Looking into this topic actually helped me distance myself from the AI hype, or the AI myth. It's just like a madness of the crowd. So I feel like I just want a clear sense of what I'm doing, in order to actually know what I'm going to do in the future.
Carey Faulkner (29:24 - 29:37)
Wow, that was really, really nice of you to tie things up for us. Well, we are out of time for today, but we are so grateful to you all. Thank you so much.
This was another great podcast with our students.
Kelly Miller (29:38 - 29:43)
That's all for this episode of Unmasking the Machine, a sociological look at AI. Thank you for listening.