The past two years have marked a significant moment for visual forms of generative AI. In July 2022, text-to-image model Midjourney entered an open beta; a few months later in August 2022, open-source model Stable Diffusion was released; and the very next month, OpenAI’s text-to-image model DALL-E 2 was made available to the public without a waitlist. Major tech companies have also released their own proprietary visual generative AI tools geared toward individuals, marketers, and enterprise users.
Since then, there have been a plethora of viral trends, commercial applications, political applications, and even court cases relating to these remarkable, and controversial, AI models. There has been both excitement and concern about the possibilities they engender. There have also been important discussions about who is excluded and harmed by these models, as well as how to protect against or remediate those harms.
While academics have begun to engage with these issues from a critical perspective, much of the research on generative AI thus far has taken an applied perspective or has focused on large language models like ChatGPT. This workshop looks to address that gap by bringing together scholars who use critical approaches to workshop papers that explore the many implications of visual generative AI.
Hosted and funded by the Data, AI, and Algorithms in Society hub of the University of Sheffield Digital Society Network, this workshop seeks scholars who are doing critical research related to visual forms of generative AI. This includes, but is by no means limited to, work on the following topics:
Visual AI and identity
Commercial applications of visual AI
The inequalities/biases of visual AI
The harms of visual AI
Visual AI and user behaviour
Visual AI and play/humour
The legal implications of visual AI
Visual AI imaginaries
Visual AI and mis/disinformation
Theoretical approaches to visual AI
Visual AI as/and art
Visual AI and labour
Economies of visual AI
Visual AI and geopolitics/colonialism & colonial logics
Workshop Format
We are aiming for a workshop that will feature approximately 35 to 50 people at all career stages. If accepted to the workshop, participants will be expected to provide a short, work-in-progress paper of approximately 2,000-3,000 words to share with a group of 5-7 other participants. Participants will be grouped thematically.
While participants will read the papers of each group member, every paper will have a Key Respondent who will be responsible for leading discussion on that piece. The goal is that each participant will come away from this workshop with constructive feedback that will help develop the work-in-progress paper into a full book chapter or journal article.
How to Apply
Participants will need to submit a 500-word abstract (not including references) via the Google Form on the Apply page by 11:59 pm on Monday, January 22, 2024.