In this episode of Unmasking the Machine, the conversation turns to AI in the art industry, exploring how algorithms are reshaping music, film, and creative ownership. Through discussions of AI music tools like Suno and a landmark lawsuit between major studios and image-generating platforms, students unpack the tension between accessibility and exploitation. Is AI opening doors for new creators, or reinforcing existing inequalities in who gets paid and recognized? Drawing on sociological perspectives, the episode examines authorship, authenticity, and the enduring question of what makes art “human.” From copyright battles to AI-generated songs, this episode reveals how creativity, control, and cultural value are being renegotiated in the age of algorithms.
Hosts:
Carey Faulkner, Associate Professor of Sociology
Kelly Miller, Senior Instructional Designer
Panel:
Harrison Brown, Class of 2029
Asad Syed, Class of 2027
Lucy Anstett, Class of 2026
Kornhaber, S. (2025, December 22). AI Is Democratizing Music. Unfortunately. The Atlantic.
Hunter, T., & Oremus, W. (2025, June 12). How Disney’s AI lawsuit could shift the future of entertainment. The Washington Post.
Bareis, J., & Katzenbach, C. (2021). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. The MIT Press.
Pugh, A. J. (2024). The last human job: The work of connecting in a disconnected world. Princeton University Press.
Ruhil. (2025). The great forgetting: When AI decides what we do not need to know.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
Woolgar, S. (1985). Why not a Sociology of Machines? The Case of Sociology and Artificial Intelligence. Sociology, 19(4), 557-572. https://doi.org/10.1177/0038038585019004005
Carey Faulkner (0:01 - 0:16)
Welcome to Unmasking the Machine: A Sociological Look at AI, the podcast where we take a deeper dive into current headlines to examine the social forces behind the screen. I'm Carey Faulkner, Associate Professor of Sociology at Franklin and Marshall College.
Kelly Miller (0:16 - 0:30)
And I'm Kelly Miller, Senior Instructional Designer. Guiding our conversation today are three students in Carey's Sociology of AI class who will be discussing Art and the Algorithm: AI in the Art Industry. Please introduce yourselves.
Harrison Brown (0:33 - 0:38)
I'm Harrison Brown, and I will be the expert for our first article from The Atlantic on the use of AI in media production.
Asad Syed (0:39 - 0:46)
And I'm Asad Syed, the expert for our second piece from The Washington Post on how Disney's AI lawsuit could shift the future of entertainment.
Lucy Anstett (0:46 - 0:57)
And I'm Lucy Anstett, the Sociology Expert for the group. My role today is to help us think sociologically about AI and the arts, the narrative surrounding AI, and what people really want from their artists.
Carey Faulkner (0:57 - 1:00)
All right, it sounds like we have a lot to dive into, so let's start the briefing.
Harrison Brown (1:01 - 2:57)
Our first article is called AI Is Democratizing Music. Unfortunately, written by Spencer Kornhaber. He focuses on Suno, which is the most popular platform for AI music creation. Kornhaber offers perspectives both for and against AI in music.
He touches on the pushback from radio stations: iHeartRadio, for example, has started the Guaranteed Human Initiative, pledging that the company won't employ AI personalities or play songs that have purely synthetic lead vocals. They claim that 90 percent of consumers want their media to come from real humans. Another perspective he presents is from a Suno employee named Rosie Nguyen.
She said that as a little girl in 2006 she aspired to be a singer, but her parents were too poor to pay for instruments, lessons, or studio time. She said, quote, a dream I had became just a memory, until now, end quote. She wrote that Suno, which can turn a lyric or a hummed melody into a fully written song in an instant, enables music creation for everybody, including kids like her who grew up without that access.
So Kornhaber gives an example of a current success story with Suno, discussing Telisha Nikki Jones, a 31-year-old Mississippi entrepreneur who created the AI singer Xania Monet. Jones was reportedly offered a $3 million record contract after the songs found streaming success. She used Suno to convert autobiographical poetry into R&B.
Kornhaber states that AI is helping even established musicians to work less, or at least to work faster. The country music producer Jacob Durrett said in the story that Suno affords him a productivity boost more than a creative boost. So these technologies prove to be similar to better-known AI models like ChatGPT in the way that people use them to do their work more efficiently and save time.
So this may suggest that AI and music will not replace musicians, but just aid them and help them be more efficient.
Asad Syed (2:58 - 4:35)
And so our second piece is the article from The Washington Post that explains a major lawsuit where Disney and Universal are suing the AI company Midjourney for copyright infringement. The studios claim that Midjourney trained its image-generating AI on their copyrighted content, like characters from Star Wars, Marvel, and other franchises, without their permission. And because of this, users can generate images that closely resemble or directly copy those characters.
And the studios argue that this turns Midjourney into what they call a virtual vending machine of unauthorized copies, allowing the company to profit off of work it didn't create. The article also frames this lawsuit as a turning point because it is the first time that major Hollywood studios have taken legal action against an AI company. It reflects a broader conflict between creative industries and tech companies over who owns data and who gets paid when AI uses that data.
And while Disney and Universal are not trying to stop AI completely, they want companies like Midjourney to either pay for using their content or put systems in place to prevent obvious copying. On the other side, AI companies argue that their systems need access to large amounts of data to function and that training on existing content may fall under fair use. They claim that AI is creating new images rather than directly copying.
And the article also highlights that AI is quickly improving and could soon transform industries like film and media. The lawsuit is not just about one company. It could set a precedent for how AI is regulated in the future, especially around issues of ownership, compensation and creative control.
Lucy Anstett (4:36 - 6:14)
OK, so even though these articles are talking about two different industries, animation and film, and then music, respectively, I think there's a lot of overlap. The first thing that stood out to me was that both articles seem to highlight AI as a potential means of accessibility and democratization of the arts. It seems like AI companies and programs like Midjourney and Suno are intentionally marketing themselves to be seen as tools that help artists rather than something that's actually hurting the arts.
And specifically in Asad's article, The Washington Post discusses how GenAI could offer tools for fan artists creating fan art for their favorite media. So sort of putting tools in the hands of the fans of this content. And then the Atlantic article highlights how AI companies market themselves as offering music for everyone, or an avenue for making people's creative dreams come true.
So this really made me curious, thinking more critically: do we think this technology is really democratizing the arts? I was thinking about Langdon Winner's Do Artifacts Have Politics? and sort of the false idea that AI is a neutral system that can be used for whatever.
In Winner's piece, they talk about how a lot of times specific design choices privilege certain users or communities and end up actually limiting access. And I think we can see that here with artists. Winner also highlights how technology just isn't neutral and often has intended or unintended political consequences.
And so when progress is pursued for progress's sake, are we actually strengthening pre-existing inequalities without realizing it, and are these inequalities and consequences really unintended? So for you guys, in the articles, do we think that GenAI is a tool for accessibility, or is it exacerbating inequality in the arts?
Harrison Brown (6:14 - 6:45)
I can start with the Atlantic article. I think it definitely could be a tool for both. I do think it'll give more opportunities to a broader demographic of people.
I mean, it has made it easier for previously established artists, so they might have a little advantage there. But in the case of Rosie Nguyen, she had limited resources growing up, and instead of having to buy expensive instruments or go get lessons or stuff like that, she could do it on a phone and create music that sounds like it was made with expensive instruments, just on a phone with technology.
Asad Syed (6:45 - 7:29)
I think for the Washington Post article, I touched on it a little bit in my summary, but when it becomes a matter of compensating companies or artists to use their content for GenAI and content creation, it becomes a game of who can pay for it. And so I guess to an extent that does exacerbate the inequalities in the arts, because if it's pay-to-play, then the wealth is going to stay in the hands of established artists and tech companies as they profit off of the work of smaller individual artists. And I think being an artist, especially a musician, is already really difficult, just given the industry that it is.
Trying to make it, quote-unquote, is difficult. So I feel like individual artists are feeling the effects of this and losing work, since it's being reproduced more cheaply and much more efficiently.
Lucy Anstett (7:31 - 8:57)
Yeah, and it does remind me a little bit of a concept we've talked about before which is techno-chauvinism. That was coined by Meredith Broussard, and that's sort of the mindset that tech and tech progress is the solution for all of our social problems, and I see that especially when using the example of Rosie Nguyen, that if somebody is having different social impacts that are keeping them from achieving their dreams, the idea that technology will be the immediate source of progress for that. To me, it does seem like AI companies are being quite strategic, especially in their lawsuits with Disney about how they frame their technology and its impact on society.
I'm also reminded of Talking AI into Being by Bareis and Katzenbach and their discussion of the narratives surrounding AI and the sort of AI boom we're seeing. These authors discuss how AI is framed in three broad ways or categories. The first is that it's something inevitable.
They describe it as an inevitable technological pathway, something that we're sort of hurtling towards that cannot and should not be stopped. Also, it's framed as a part of a natural progress or something that's necessary. And then finally, part of a race that the United States in particular is attempting to win against other nations.
So if it becomes like a battleground or a race, something that we want, a competitive factor we need to obtain if we want to be winners in the game of AI development. So these authors are really highlighting how these different narratives can be used in their favor. So what narratives do you think are at play in these discussions of AI in the art industry?
Asad Syed (8:57 - 10:19)
Yeah, I'll take it. So I think my article kind of frames the Gen AI content and then the kind of like original, like let's say Disney in this case, as like good versus evil. Disney is like the original made a long time ago that we've been seeing forever.
Like you see before a Disney movie, the animation of Mickey on a sailboat, that's been around for as long as we can remember. And so I guess like the idea that content being created by AI is inherently bad because it's not coming from a person or like an artist. It really is coming based upon the ideas of other content that was made before it.
So it's not necessarily really original in that sense because it's really difficult to say that anything that AI will produce will be inherently original because it's all trained on past data. But to the point that Lucy was making about AI being inherently inevitable and also just a part of positive progress is another angle that we can take and that you have to just kind of consider because, again, as we've talked about in class, there's a lot of uncertainty with AI. While uncertainty has a negative connotation, there's also a positive side to that.
We don't know what it's capable of. And I'm sure there's a market and an audience for Gen AI content and Gen AI music and Gen AI art. So it is going to be inherently inevitable and to a certain degree going to be positive.
Lucy Anstett (10:20 - 11:30)
Yeah, I think a lot of AI companies are framing it as if anything standing in the way of progress is bad and progress is always a positive thing. So that's interesting. But something that Asad was talking about also reminded me of the idea of what people really want and expect from their art and their artists.
The Atlantic article starts with a phrase that says human beings may have sung before they spoke, which to me reads as sort of a direct connection between humanity and artistic creation and music. And I was also struck by what Harrison mentioned, the iHeartRadio's guaranteed human slogan, which seems to make an assumption about what people want from their music, assuming that that will draw people towards iHeartRadio. So I was thinking about Woolgar's piece, Why Not a Sociology of Machines?
The Case of Sociology and Artificial Intelligence. So in that piece, Woolgar breaks down the differences between technology and humanity, the cognitive and the social, and sort of pushes us towards this question, is there something special about human behavior that cannot be replicated by AI? And I think art is so often associated with sort of natural human emotion and feeling, which Woolgar argues that AI will always lack.
So what do the articles sort of tell us about that idea?
Harrison Brown (11:30 - 12:41)
In the Atlantic article, Kornhaber mentions that Tom Poleman of iHeartRadio stated that 70% of people use AI as a tool but still have a desire for human-made music, along with the statistic that 90% of people want their media to come from real humans. I'd say personally, I saw something on TikTok, and it was A Bar Song by Shaboozey, but it was made into an Irish folk song, and it actually sounded good, like something you'd want to listen to. So I thought that was cool, that the music, I mean, it doesn't really sound like AI.
I think with a lot of stuff like videos you see, you can kind of tell they are AI, but software such as Suno does a good job with that. Still, it was weird listening to it knowing it wasn't a real person. I'd say personally, I'd still want the music I listen to to be written and sung by humans. Something we talked about was that AI definitely takes away some of the personal meaning and connection of the lyrics.
It's like Suno could create rhythmic lyrics or lyrics that model other songs, but it lacks the humanness and emotions that are in the lyrics and the listeners can take away. You can debate if people want music to just sound good or actually have significance, and I think AI could take away the connection that people have with the lyrics and the music.
Asad Syed (12:41 - 14:03)
I mentioned one quote about how they referred to Midjourney as a virtual vending machine, but another one in the article was that it's a bottomless pit of plagiarism. It's the idea that without fresh data, AI is not able to produce new art, right? And so these models become less useful, and AI needs artists to keep producing new work in order to learn from it.
But I also think that Harrison's point about the lack of humanness and emotions, I think for our generation, music is something that's really important and significant in our lives. I listen all day long, and a lot of the music that I listen to, there's stories behind it. There's experiences behind it.
Those things can't be replicated, right? AI can't do that. It can make the words flow really nicely together and make the story sound very nice, with a plot twist or whatever it may be, but it's never going to come from true experience.
They're never going to have lived that experience, whereas artists put the pain, the passion, all those things into the lyrics, into their words, into their writing, and so there's a level of significance that comes with that. There's so many examples of AI just taking away the humanness and the emotion and really the significance of art.
Carey Faulkner (14:03 - 15:05)
So what you're doing right now is actually what Woolgar is talking about: you are discursively constructing that line between human and machine. I don't disagree with you, but I want to make that really explicit. By the way we talk about what music really is, and all these human connections that you are drawing to it, you are saying, here is where the line is. Sort of like, the machines can replicate the videos, videos that are getting more and more difficult for us to distinguish just with our eyes from video that somebody has actually recorded with, I'm going to say, filmed, I know I shouldn't say filmed. But the ways in which we are talking about and constructing these lines, that's sociologically meaningful.
These are discussions that take us back to the question of what art is, and they're definitely important for us to engage in. I think people can and will be drawing these kinds of lines, and that is what you're doing there, right? Like you were saying, this is how I see the line, and that line could get drawn in different ways as we go.
Lucy Anstett (15:05 - 15:45)
Yeah, I think that building off of the idea of where they draw the line, some of these companies and even the users of the AI might be telling on themselves a little bit when they say like, oh, this can unleash a new wave of creativity and it could be such a huge step for the arts, but then so many of the artists are like, oh, it helps me work faster and work more, and we're going to say it's okay if we get compensation. So to me, it seems like they're hiding behind the guise of like, oh, we're doing this for creativity. We're drawing the line of like, it's going to help us create art, but really to me, it does seem to be about making money and making it faster and keeping money in the hands of those who can pay the artists.
So that sort of seems like a little bit of a mistruth.
Kelly Miller (15:46 - 16:19)
I mean, another example of that, which is way before your time, but I think Carey can relate to this with me: this has been happening forever, right? When the printing press was invented, and, before your time, when I was in college and you ripped music from Napster and LimeWire. Like, yes, I agree.
It's taking revenue away from the artists that created it, but at the same time, what it was doing is we were then sharing this music with everybody and people found out about people they never would have found out about, which was actually giving them a bigger audience than they may have had. Like, this really isn't a new problem to me.
Carey Faulkner (16:19 - 16:38)
I think there's sort of a new take on the problems that we've been having over time, but certainly with the idea that like, anybody can be like, boop, I pushed a button and out spits this whole new thing that was not actually involving an instrument or human beings drawing or whatever else it is has certainly taken us to a different level.
Kelly Miller (16:38 - 17:02)
I know a big topic right now, too, is being transparent about your AI use in these mediums. I've personally heard iHeartRadio say guaranteed human, and I've noticed that; it struck me as being unique and different. If you post on Instagram, you're supposed to toggle that little bar that says whether this was made with AI. They're having voluntary disclosure with this kind of stuff.
So I don't even know what my question is, but I think the intentions there are good, but...
Carey Faulkner (17:02 - 17:45)
All right, I'm going to build off this, because one of the things that I'm thinking about is this mark of distinction. You know, you hear the term AI slop. There's this idea that AI is inferior, that the stuff we're generating is lesser. Like, Harrison, you were surprised, you were like, hey, that song was actually kind of fun, right? We've talked in a previous episode about how, if I gave you a, I don't know why I'm saying Neruda, a Neruda poem, you might be like, oh, that's really sweet.
But if I give you an AI-generated poem, you might be like, what a jerk. So I think that we have these issues of, like, distinction playing a part here as well in terms of, like, what taste is, like, what sort of good taste is. And of course, that's all built into people's ideas about art.
But right now, I think we're...
Kelly Miller (17:45 - 18:12)
Well, it's still the newness of it. Like, I do think that'll fade. Like, it'll just become a natural part of things.
But at this point, with disclosure in either advertising or social media posting or the music that you listen to, I still don't even know what my question is. But are people gonna do it? I feel like it's almost like when faculty say, don't use AI, and you still do; you're supposed to disclose it, but you didn't. Like, so what?
Asad Syed (18:13 - 19:00)
Yeah. I definitely think we're gonna see examples of that in the future of people producing things without disclosing that it's AI for, you know, whether it be to paint a certain narrative or to influence something. I mean, if you see it, you hear it, like, you can believe it.
And so I think that distinction that we're talking about is really important just for the simple sake of the truth. There's just no substance to it for me. I'm sure I'd enjoy listening to it.
It'd sound really good, catchy, whatever it be. But I don't know. I feel like, at least for myself, like, music has a deeper meaning than that for me.
And I think there's just so much significance and importance to that, that losing that and kind of, like, just letting that kind of drift away into this kind of new generation of music would be a tragedy, honestly.
Harrison Brown (19:00 - 19:37)
Yeah, I think that's interesting. But I was thinking, we watch fiction movies and read fiction books that tell stories, even though they didn't actually happen. And so I think if they don't disclose that it's AI, like, if I listened to that song and took it for the truth, I could still take something away from it.
And I think it's scary and kind of weird. But if you don't know it's AI and you're listening to it and trying to take the truth out of it, I think we actually might be able to relate to those stories. Like, people cry over movies that didn't happen.
I know it's different. Like, you can see the people and you kind of build a connection for a couple hours with them.
Kelly Miller (19:39 - 19:41)
That's a really good, that was a really good point.
Lucy Anstett (19:41 - 20:13)
This is going back to our question about maybe the motivation for disclosing or not disclosing. But I'm thinking about like the class implications, because if we're using AI and a lot of tech companies are using AI to put more money in their pockets and make production faster and more efficient for themselves, that would be one thing. But it's also at the same time being marketed towards people with less resources as sort of that guise of accessibility.
So I'm wondering how like class politics and feelings of class will play into whether or not people want to admit their usage.
Carey Faulkner (20:14 - 21:14)
That's an interesting point. I was kind of thinking, sort of drawing a little bit of a parallel to Allison Pugh's point about the inequity that might come in relation to connective labor when it's sort of better than nothing, right? So in this respect, you know, your AI art or music production is sort of better than nothing, but it's not good.
Like, are we going to have these moments of only folks who have more economic resources will have access to real human or sort of bespoke performance? So I'm trying to think what's the distinction. And in the music case, I see that you can see a live performance of humans, but you can't see a live performance of AI.
So what are other kinds of ways in which there might be these kinds of distinctions drawn out? So I was thinking about that distinction, live performance versus not, but then on the animation side, we don't have that. When we went to that moment where it went from being hand-drawn animation to computer-generated animation, is that sort of what we're thinking about?
But the next level of that, right, where it's sort of like it took away the art to make it computer-generated.
Lucy Anstett (21:15 - 21:53)
I do think artists have been drawing the line in their own communities for a long time. Because you think about visual arts: people say, oh, you used a projector? Well, that's like cheating, that kind of thing. And then, not to counterpoint seeing AI perform live, but in Japan there's an artist, in quotes, Hatsune Miku, who's a projection.
And I know people have been flocking to see Hatsune Miku before, and I'm like, that's not a real person. But then you have to think about the artist that created her. So in that fact, some people did have to put in creative labor, yeah, to create that hologram.
So I feel like we have been contending with these questions for a long time. And in that way, it comforts me to know that this isn't a completely new problem.
Kelly Miller (21:53 - 21:56)
Yeah, that was my point I was making. It's an old problem, new medium.
Asad Syed (21:56 - 22:26)
I guess what she's making me think about now is just the point you made about the artist and their hologram. Do people know who the creator of that hologram is? I'm trying to think of a book series where people love the author.
There's characters in the book that people love, but then people also love the author because they've created really great works. Is this person taking some glory for their artist, or is it like this person has always wanted to do something like this, but doesn't necessarily feel as though they can because maybe they can't sing or they don't like the attention, but they want to have a role. It's just, it can get really complicated.
Carey Faulkner (22:26 - 22:56)
So I do want to ask you all one other question, because I was thinking about some of the articles that we've read about data. And I was thinking about ghosts in the data and also that Ruhil piece about forgetting. So I was wondering if you could speak at all to thinking about how the data that these systems are trained on might have consequences for the jig that you heard from the Shaboozey song and other kinds of things.
Lucy Anstett (22:56 - 23:37)
Yeah, I think especially when thinking about systems being trained on like Disney, a lot of times the media that we see, it does have bias in it. Like what is the most popular media? So I think that does have consequences on what type of art AI is able to make.
And I think a lot of AI art, the women all tend to look a certain way. And so I think that says something about the bias in the data used to train the AI model. So I definitely think there is a case of bias happening within the data that is going to alter the type of art.
And then that sort of goes back to the thing of, is it capable of creating anything new or is it just like echoing bias?
Harrison Brown (23:37 - 23:59)
Yeah, I think at least for music, in the example of Telisha Jones, she created the singer Xania Monet. That's not a real person, obviously, but she would be able to regulate the lyrics. I mean, there's still a person behind it.
Like when he was talking about authors and books, there still is a real human behind that in some ways. I think that might help out, at least in music and how it goes into the data.
Asad Syed (24:04 - 24:36)
It's trained upon the data, but it's also limited to the data, right? There's a sense of creativity with AI in the sense that it can be creative within its realms or within what it knows. I think you can make that argument for ourselves as well.
Like I only know so much, I can only create so much, but at the same time, I don't think that AI has the same capacity of creativity and thinking I do because I have free will. AI only knows what it knows or what it's given, and I guess I do too.
Carey Faulkner (24:36 - 25:10)
Right, so the data. I mean, any individual human, sure, we are limited, but when we think about how AI is trained, certainly on vast quantities of data, the information that we have about the data that any large language model or other AI tool is being trained on suggests that it is at least oriented towards more Western or Global Northern kinds of perspectives, right? The ability to create music or create new videos and things seems to have one perspective baked into it rather than access to many more perspectives.
Asad Syed (25:10 - 25:22)
And I also think we've given AI the totality of the internet, and it's like, again, this level of significance towards reading a book and obtaining that information or reading an article and obtaining that information, kind of teaching yourself or like, I guess.
Kelly Miller (25:23 - 25:48)
It doesn't always have to be new. Like, a story last week was about Val Kilmer, an actor who died over a year ago at this point. His estate just signed on to keep his acting going after his passing through AI. So now you have established people who are part of our creative history that can go on forever.
I know Matthew McConaughey is another one who's very into keeping things going through AI.
Lucy Anstett (25:49 - 26:14)
And I guess as like an overall point, thinking about Woolgar sociologically, I think this encourages us to try to have a sociological perspective, especially when thinking about things that are like kind of emotionally loaded for a lot of people. And I think it can even benefit like CEOs or people who are interested in making money if they know what people want from their art, maybe that can help them guide their AI usage as they roll out these new programs.
Carey Faulkner (26:14 - 26:24)
That was a really fantastic note for us to end on. Oh, that pun was originally not intended, but now it's intended. So, we've covered a lot of ground.
Kelly, take us away.
Kelly Miller (26:24 - 26:32)
All right, that was awesome. So thank you all for this latest episode of Unmasking the Machine: A Sociological Look at AI. Thank you for listening.