October 3, 2025
If you know me, you know that curating is something I have aspired to do for a while now. Lately, as I apply for jobs and begin applying to graduate school, that aspiration has become more real. One thing that had not even crossed my mind prior to my job hunt is how curators use AI when planning and designing their galleries. As representatives of real artists, you would think most curators would be staunchly anti-AI, right? Wrong.
Curation has never been a neutral field. Choosing which works are put in shows, galleries, and public spaces determines what is cemented in history and in people’s minds for the rest of their lives. It influences the sales market, pop culture, and even political atmospheres. Today, as museums and galleries experiment with artificial intelligence to tag collections, make acquisitions, and even design exhibitions, we face a new ethical shift. What happens when the curator is not solely human? What happens when the curator is using, even relying solely on, AI?
According to Priscila Arantes, author of the article “Museums in Dispute: Artificial Intelligence, Digital Culture, and Critical Curation,” AI has begun to be used in museums to enrich visitor experiences, with digital “assistants” that provide real-time information on artworks and their historical contexts. AI has also been used for preventative curation that allows for “structural analysis and scenario simulations” (8), like a model of St. Peter’s Basilica used for its preservation and for remote access to the monument. There is even an AI platform called “Pen” that allows museum-goers to create their own collection based on what they find interesting in a museum’s holdings. This is great news for those without the means to travel. It is also great news for curators and preservationists who are trying to enrich the public with important artwork.
However, there is a darker side to AI usage in curation. The article “AI as Curator: From Algorithmic Mediation to Agentic Autonomy” by Ahmed Imed Benamara, an expert in ICT and digital culture, details how the AI museums use is programmed to make curated exhibitions (9), and I have yet to find a completely agentic program. Benamara explains that the AI is powered by datasets that already exist, like the online, digitized collections of museums such as MoMA. This can severely limit the artwork that can be chosen for exhibitions, as museums like MoMA have historically, even up until the past decade, favored white, mostly male artists in their collections. There is an illusion of objectivity when something is curated by a machine, but we must remember that this is simply not the case.
Journalist Erin Dickey reviewed Duke University’s Act As If You Are a Curator: An AI-Generated Exhibition, displayed in spring 2024. In this experiment, students gave ChatGPT a prompt to “create an exhibition purported to explore ‘themes of utopia, dystopia, the subconscious, and dreams through a diverse range of works of art’” (“Act As If You Are a Curator: An AI-Generated Exhibition” 1). While the AI did pick some evocative and little-known work, Dickey proposes that “when we take a collection of objects and are told that they have X, Y, and Z in common with one another, it is very likely the case that we will see X, Y, and Z in them… experiments like this can… capitalize on our hallucinatory functions while obscuring [our] own internal patternmaking mechanisms” (5). In a sense, when art should be expanding our minds to make our own connections between works, relying on a computer to do this takes away from human perception; artwork is made perceptible instead of perception occurring, which is arguably the opposite of how curation should work.
However, Nasher curatorial assistant Juliane Miao has a different perspective, noting that AI may pick up on patterns that humans overlook (Dickey 5). I lean toward disagreeing. An algorithm cannot replicate human experience, and if you put enough of a variety of people in front of a collection, someone is bound to make the same connection a machine would. It is simply a matter of who sees the work, as it is with any curated body of art.
Curation has always been about storytelling, and AI has proven efficient and helpful in the world of curating. The question, then, is not whether AI will enter the gallery: it already has. The question is whether we will curate responsibly, treating AI not as an oracle of objectivity but as a collaborator whose biases and debts must be acknowledged.
Works Cited
“Act As If You Are a Curator: An AI-Generated Exhibition.” Panorama: Journal of the Association of Historians of American Art, vol. 10, no. 1, Spring 2024, https://doi.org/10.24926/24716839.18990.
Arantes, Priscila. “Museums in Dispute: Artificial Intelligence, Digital Culture, and Critical Curation.” Arts, vol. 14, no. 3, 2025, article 65, https://doi.org/10.3390/arts14030065.
Benamara, Ahmed Imed. “AI as Curator: From Algorithmic Mediation to Agentic Autonomy.” 2025.
July 5, 2025
In the world of AI-generated art, one of the first and loudest voices pushing back is Matthew Butterick. Mr. Butterick is a lawyer, graphic designer, and co-plaintiff in a major lawsuit against AI companies accused of scraping artists’ work without permission. I spoke with Mr. Butterick over Zoom in early May about the legal and ethical mess surrounding AI art tools, and his insights were sharp, urgent, and clear: this isn’t just about copyright. It’s about consent, and the blatant lack thereof within the AI industry.
Butterick didn’t hold back while discussing platforms like DeviantArt, which have faced backlash for allowing AI training on user-uploaded images; he specifically addressed images of artwork by artists who are unaware that AI art platforms are scraping their work.
“DeviantArt pleads ignorance,” he said. “Artists aren’t giving consent for their art to be used by these platforms, which includes withholding consent. There are moral implications for that.”
In other words, just because a company can scrape something from the internet doesn’t mean that it should. For Butterick, the heart of the issue is simple: artists don’t have a say in whether their work is used in AI-generated "artwork".
Generative AI companies argue that their training practices fall under “fair use,” a doctrine meant to protect transformative or educational uses of copyrighted material, but Butterick sees this as a shaky defense.
“AI is claiming fair use,” he said, “which is probably the biggest legal issue.”
The problem is that copyright law was designed to protect human creators. It assumes people need incentives, like recognition, compensation, and control over their work, to create. Machines don’t need any of that. And yet, they’re operating under the same legal umbrella.
“Machines don’t need the incentives that copyright intends to create,” Butterick pointed out. “Artists do.”
The U.S. Copyright Office released a report in January 2025 addressing the copyrightability of generative AI output. The Office reported that existing principles of copyright law can apply to AI, but concluded that AI-generated material can only be protected by copyright when a human author has contributed sufficient expressive elements. This is not the case for artists who are unaware that their work is being scraped by AI.
I brought up a comparison while talking with Butterick: Napster. In the early 2000s, it disrupted the music industry by making it easy to share, and technically steal, songs. Musicians were unsurprisingly upset, and lawsuits were filed. Eventually, platforms like Spotify and Apple Music emerged, which now give artists a cut, around 70 percent, of the revenue.
“It’s a similar situation,” Butterick said. “Now we have Spotify. But AI art generators aren’t doing that.”
The tech has changed. The problem hasn’t.
There are ethical alternatives. Companies have the option to train their models on public domain content or on sites like Unsplash that offer royalty-free images, but they are choosing not to.
“It comes down to choices,” Butterick said. “They have the option to train something on free domains. They’re choosing not to.”
It’s a deliberate decision to prioritize convenience over consent and profits over people.
Despite being a lawyer leading a legal charge, Butterick is realistic in saying that lawsuits alone won’t solve this.
“Litigation isn’t going to solve everything,” he said.
The law can create boundaries and help hold companies accountable, but long-term change requires a cultural shift. There must be broader recognition by the general public that artists deserve control over their work in digital spaces, just as they do in physical ones.
The AI art debate isn’t just about technology or law; it’s about respect. Until artists are given real choices, consent, and compensation, the fight isn’t even close to over.
April 17, 2025