Algorithmic Bias in AI for Fashion Design
Master's in Digital Media and Global Communications
Assignment #2: AI-Enhanced Multimedia Creation for Digital Media
Dr. Justin Baillargeon
Annabel Aniebiet Obot
NF1010863
May 18, 2025
Table of Contents
Abstract
Keywords
Literature Review
Introduction
Project Concept
Creative Process and Use of AI Tools
Observations and Reflections on AI Bias and Representation
Challenges and Limitations
Ethical Reflections
Conclusion
References
Abstract
This project explores algorithmic bias in AI-generated fashion imagery through a digital zine created with Google’s Gemini 2.0 Flash, drawing on West African seasonal aesthetics. While AI offers creative potential, the results revealed a default to Eurocentric beauty norms and limited representation of West African women. By centering West African fashion, the project highlights the need for culturally inclusive datasets and critically examines AI’s influence on visual culture.
Keywords: AI Image Generation, Algorithmic Bias, Cultural Representation, Training Data Bias, West African Fashion.
Literature Review
The issue of algorithmic bias in AI-generated visual media has garnered increasing scholarly attention, revealing patterns of underrepresentation and cultural erasure that mirror broader social inequities. Zhou et al. (2024) provide compelling empirical evidence of this bias through their systematic examination of three widely used text-to-image AI generators: Midjourney, Stable Diffusion, and DALL·E 2. Their findings expose consistent underrepresentation of women and Black individuals in occupational portraits, alongside subtle stereotyping in facial expressions and age portrayal. This aligns closely with this project’s experience using Gemini 2.0 Flash, which initially defaulted to white, Eurocentric models when interpreting West African fashion prompts, necessitating repeated interventions to generate diverse representations. Such patterns underscore the embeddedness of dominant cultural norms within AI training datasets, which often privilege Western aesthetics and marginalize non-Western identities.
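The audit logic behind studies like Zhou et al.'s can be sketched in a few lines: generate many images from a demographically neutral prompt, have annotators label the perceived group of each output, and compare the observed shares against a reference distribution. The Python sketch below illustrates only that final counting step; the labels and expected shares are hypothetical stand-ins, not Zhou et al.'s data or code, and a real audit would rest on systematic human annotation.

```python
from collections import Counter

# Hypothetical perceived-demographic labels assigned (by human annotators)
# to 20 images generated from a single neutral prompt, e.g. "a fashion model".
labels = ["white"] * 14 + ["black"] * 2 + ["asian"] * 3 + ["latina"] * 1

# Reference distribution the audit compares against (hypothetical here;
# Zhou et al. benchmark occupational portraits against labor statistics).
expected = {"white": 0.60, "black": 0.13, "asian": 0.06, "latina": 0.19}

counts = Counter(labels)
total = sum(counts.values())

print(f"{'group':<8}{'observed':>10}{'expected':>10}{'gap':>8}")
for group, exp_share in expected.items():
    obs_share = counts.get(group, 0) / total
    print(f"{group:<8}{obs_share:>10.2f}{exp_share:>10.2f}{obs_share - exp_share:>+8.2f}")
```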
Kirkpatrick (2023) further elucidates this dynamic through autoethnographic reflections on the coloniality embedded in AI systems like Google’s Gemini 2.0 Flash. Their analysis highlights how so-called neutral tools perpetuate whiteness as the normative default, effectively erasing Black and African identities from creative outputs. The central role of training data in shaping AI bias is reinforced by Burlina et al. (2021), who studied bias in AI applied to medical imaging and found that models trained on imbalanced datasets underperform on underrepresented groups. This highlights a cross-domain parallel: whether diagnosing retinal diseases or generating images, AI systems reflect and amplify inequalities present in their data. The use of synthetic data augmentation, as explored in Burlina et al.’s study, offers a potential avenue for mitigating such biases.
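Burlina et al.'s mitigation strategy, rebalancing a skewed training set by adding synthetic examples for underrepresented groups, can be sketched as follows. This is a minimal illustration under simplifying assumptions: the `synthesize` callable stands in for a real generative model, and the toy dictionaries stand in for labeled images.

```python
import random

def rebalance(dataset, group_key, synthesize):
    """Oversample underrepresented groups until all groups are equally sized.

    `synthesize` stands in for a generative model (e.g. the GAN-based
    generator in Burlina et al.'s setting) that produces a new synthetic
    example conditioned on a real one.
    """
    groups = {}
    for example in dataset:
        groups.setdefault(example[group_key], []).append(example)

    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Fill the gap with synthetic examples derived from real members.
        balanced.extend(synthesize(random.choice(members))
                        for _ in range(target - len(members)))
    return balanced

# Toy usage: dicts stand in for images; the "synthesizer" just tags copies.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
balanced = rebalance(data, "group", lambda ex: {**ex, "synthetic": True})
print(len(balanced))  # 180: both groups now contain 90 examples
```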
Hamid et al. (2024) contribute a nuanced perspective on bias mitigation through their exploration of the “curb-cut effect” in Explainable AI (XAI). They demonstrate that fixes intended for marginalized groups can often enhance AI performance universally, suggesting that inclusivity benefits all users. However, their identification of a “curb-cut fence effect,” where improvements in one dimension inadvertently reduce performance in another, shows the complex trade-offs inherent in designing fair AI systems. This complexity was mirrored in this project’s experience with Gemini 2.0 Flash, where attempts to improve one aspect of an image often degraded another. Addressing representational bias therefore requires holistic strategies attentive to unintended consequences. Hamid et al.’s work thus offers a valuable framework for critically balancing ethical goals and practical AI performance in creative domains.
Lastly, Jiang et al. (2024) explore the challenge of faithfully translating nuanced design intent into AI-generated fashion imagery through their HAIFIT framework for sketch-to-image synthesis. Their work illustrates how existing methods often flatten intricate cultural and stylistic elements due to limitations in datasets and model architecture. This mirrors the present project’s difficulties in capturing the specific textures, patterns, and symbolic meanings of West African seasonal attire with Gemini 2.0 Flash. Jiang et al.’s creation of a culturally tailored dataset to improve fidelity underscores a central theme across these studies: the urgent need for diverse, representative training data to enable AI systems to more accurately and respectfully render marginalized cultures within creative media.
Together, these studies highlight the multifaceted nature of algorithmic bias in AI-generated visual content, providing both a robust foundation and a call to action for developing AI tools that foster genuine inclusivity and cultural affirmation in digital media.
Introduction
Artificial Intelligence (AI) has rapidly become a transformative tool in multimedia creation, reshaping how artists, designers, and creators visualize and produce content. “A remarkable 83% of creative professionals have now integrated generative AI tools into their workflows according to recent industry surveys” (Magic Hour, 2025). Tools such as Google’s Gemini 2.0 Flash, OpenAI’s DALL·E, Runway ML, and Adobe Sensei represent a spectrum of AI applications in image generation, video editing, and creative design workflows. For instance, DALL·E specializes in generating detailed images from text prompts, often used for conceptual art (OpenAI, 2025), while Runway ML offers user-friendly, AI-powered video and image manipulation (Runway, 2024), and Adobe Sensei integrates AI into Creative Cloud apps to automate design processes and optimize workflows (Adobe, 2023).
Project Concept
In this project, I employed Gemini 2.0 Flash, an AI image generator, to create a digital fashion zine that interprets West African seasons, specifically the Harmattan (dry season) and the rainy season, through fashion design. The project was inspired by an interest in how AI is used in multimedia to turn ideas or culture-specific concepts into images, giving artists fresh ways to explore and create visuals connected to their local experiences.
The project focuses on telling stories through fashion by capturing the distinct environmental and cultural feel of two very different seasons. Warm earthy colors, cracked textures, and dusty patterns reflect the dry Harmattan, while cool blues, stormy grays, and wet textures represent the rainy season. Using AI, I wanted to explore new ways of expressing these ideas and show how AI-generated images can help highlight local stories that are often missing from global fashion conversations.
Creative Process and Use of AI Tools
For this project, I selected Google’s Gemini 2.0 Flash as the primary AI tool for generating fashion looks. This choice was based on two factors: it is freely accessible, and I am already familiar with its interface and capabilities. I began by compiling a list of descriptive terms associated with two environmental categories: the dry season and the rainy season. These words informed the prompts I developed, which focused on capturing the color palettes, textures, and thematic inspirations appropriate to each season.
At first, I deliberately left out the descriptions of the model that I wanted generated in order to prioritize the design elements of the garments. However, the initial image generations exclusively featured white models, which revealed a bias likely rooted in the tool’s training data. In response, I revised the prompts to specify “average West African women” as the models. While this adjustment led to the inclusion of Black models, it also underscored the need for more inclusive datasets and the limitations of the tool’s responsiveness to diverse representation.
To simulate a lookbook format, I directed the AI to produce both front and back views of each outfit. This worked well in most cases, although I eventually had to separate the prompts for each pose in order to achieve consistent results. I also included instructions for the AI to use a plain white runway background, which made it easier to extract and collage the generated figures into lookbook-style compositions.
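Although I worked entirely through Gemini’s interactive interface, the same two-stage workflow (one prompt per pose, an explicit model description, and a plain white background) could in principle be scripted. The sketch below, using Google’s genai Python SDK, is purely illustrative: the model name, prompt wording, and garment description are my assumptions, not the exact prompts used in this project.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumes Gemini API access

PROMPT = ("Full-body fashion photograph, {view} view, of an average West "
          "African woman on a plain white runway background, wearing {look}.")

def generate_look(look: str, view: str) -> bytes:
    """One call per pose: separate front/back prompts gave consistent results."""
    response = client.models.generate_content(
        model="gemini-2.0-flash-preview-image-generation",  # illustrative name
        contents=PROMPT.format(look=look, view=view),
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )
    # The image arrives as an inline-data part alongside any text parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("no image returned")

front = generate_look("a dust-toned, layered Harmattan ensemble", "front")
back = generate_look("a dust-toned, layered Harmattan ensemble", "back")
```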
I initially explored the possibility of generating the entire lookbook solely through AI; however, this approach proved challenging due to the limitations in specifying detailed design intentions across multiple images simultaneously. The quality of AI-generated outputs tended to degrade when attempting full-page layouts. Additionally, automating the entire process would have extended the production timeline significantly, as iterative refinements for each composite layout became increasingly complex.
Therefore, I determined that balancing AI-generated imagery with deliberate human curation was necessary to maintain visual coherence and uphold the conceptual integrity of the project. After generating the images, I used Shuffles, a visual collage app, to create layout pages that resembled a fashion lookbook. This included layering each AI-generated look with surrounding visual elements that reflected my original mood board and seasonal concept. In total, I produced eight looks: four inspired by the dry (Harmattan) season and four inspired by the rainy season.
Observations and Reflections on AI Bias and Representation
When I first prompted the AI without specifying that I wanted an image of the “everyday West African woman,” it generated a slim white woman. Interestingly, her hair resembled a blowout afro, which made me wonder if the AI had misinterpreted the prompt. I had mentioned West African weather, which may have influenced the hairstyle, but because I did not specify the model’s racial identity, it defaulted to a white woman with a Black hairstyle, as seen in the figure below.
Eventually, I had to explicitly instruct the AI to depict someone who looked like the “average West African woman” before it showed a Black model. Even then, the models shown were consistently slim and slender, which did not align with the body type I had described. In fact, the average West African woman does not reflect the size or silhouette of these AI-generated models. This pattern reveals a deeper issue: AI seems to default to Western beauty standards, even when prompted to reflect non-Western contexts.
As Ghosh (2012) explains, Western fashion has long prioritized thinness, whereas in many West African societies fuller body types are traditionally associated with beauty, health, and status. The persistence of the slim figure in AI outputs, even when depicting West African fashion, suggests the tool’s training data is largely shaped by Western ideals, and this in turn limits its ability to represent alternative or localized standards of beauty.
The AI-generated images displayed a range of hairstyles, including straight, curly, and coily textures. However, I observed that the tighter the curl pattern, the shorter the hair appeared or the more obscured it became. This visual trend subtly reinforces a harmful stereotype that Black women with 4C hair either cannot grow long hair or should keep their hair controlled and minimized. Shoaib (2023) notes that the rise in visibility of Black models with shaved heads may be linked to stylists’ general reluctance to work with textured hair. She also points out that the hairstyles typically represented are limited in variation, often consisting of basic styles like straight-back cornrows or slicked-down looks.
In my own prompts to the AI, when I did not specify a hairstyle but included headpieces, the resulting images often showed models with low cuts (Look 8), cornrows (Look 5), or sleek buns (Look 2), while looser curls appeared only in the form of large afros (Look 6). This suggests that the AI may be drawing from biased training data shaped by the fashion industry’s narrow portrayal of Black hair, where simple or “tamed” styles are more prominently represented.
Another important observation, especially given my basic understanding of fashion, was the way the AI interpreted my prompt about West African weather and culture. Although the fabric textures and color palettes somewhat matched seasonal moods like the Harmattan or the rainy season, the overall design, cut, silhouette, and styling remained mostly rooted in European fashion templates, as seen in Looks 2, 4, 5, 6, and 8 and in most of the rainy season headpieces. This disconnect suggests the results would not fully resonate with a West African audience. Instead of incorporating authentic West African fashion aesthetics, the AI relied on familiar Western structures. This reflects a broader problem noted by Jiang et al. (2024), who argue that generative AI often risks distorting the designer’s vision due to biases in its training data. In this case, the AI’s visual vocabulary is so heavily influenced by Eurocentric norms that it struggles to faithfully represent West African design on its own terms.
Challenges and Limitations
The application of Gemini 2.0 Flash to visualize West African seasonal aesthetics laid bare significant limitations inherent in contemporary AI image generation technologies, extending beyond mere technical constraints to reveal deeper epistemological and ethical considerations.
Foremost among these was the inherent difficulty in accurately translating non-Western cultural nuances within an AI framework predominantly trained on Western visual data. As Rose (2024) highlights, the quality and realism of AI-generated images are fundamentally tied to the diversity and biases present in the training datasets. This project corroborated this, with the AI consistently defaulting to Eurocentric beauty standards and struggling to grasp the distinct stylistic elements within West African fashion.
Furthermore, the project encountered a tangible manifestation of the "curb-cut fence effect" (Hamid et al., 2024), aligning with Rose's (2024) observation that achieving high-quality and realistic images remains a significant challenge. While iterative prompting sometimes refined specific garment features, this often resulted in a degradation of other visual aspects, such as facial details, demonstrating the complex and often unpredictable trade-offs in AI optimization. This suggests that improvements in one area do not necessarily translate to holistic enhancements and can even introduce new limitations.
The AI's limited prompt adherence also presented a considerable obstacle. The consistent failure to generate both front and back views within a single output, despite explicit instructions, points to a current constraint in the technology's ability to process and execute multi-faceted spatial commands. This inefficiency in translating complex instructions into visual representations necessitated a more fragmented creative process.
In conclusion, this project’s engagement with Gemini 2.0 Flash for culturally specific image generation revealed critical limitations that resonate with broader challenges in the field of AI image creation (Rose, 2024). These limitations, ranging from the difficulty of representing non-Western concepts and the complexities of prompt optimization to the inherent biases in training data and the models’ restricted cultural knowledge, show the need for continued critical examination of the technology’s potential and its inherent constraints in fostering truly inclusive and representative creative practices.
Ethical Reflections
This process of using AI to represent West African women and fashion highlighted fundamental ethical concerns about cultural invisibility, aesthetic erasure, and the unequal labor required for marginalized creators to achieve visibility in AI-generated spaces. As Rose (2024) notes, AI models trained on biased datasets can inadvertently perpetuate stereotypes and marginalize underrepresented groups, a phenomenon directly observed in this project's outcomes. This raises significant concerns about the injustice perpetuated by these technologies, where dominant cultural norms are amplified while others are rendered invisible or inaccurately portrayed.
These models claim neutrality, but as research suggests, they reproduce dominant cultural norms, positioning whiteness and Western aesthetics as the unmarked standard. For creators like me, this means constantly having to intervene in the AI’s decision-making by being hyper-specific about race, body type, and cultural references. These are demands that are rarely made of users working within the AI’s normative assumptions. This dynamic exposes a deeper ethical asymmetry where marginalized users must “over-describe” themselves just to appear, let alone appear accurately. This is a form of algorithmic gatekeeping that not only demands extra creative and emotional labor but also reinforces existing hierarchies of whose cultures are deemed legible or worth rendering.
Furthermore, the AI’s narrow depiction of Black hair and body types subtly but powerfully reinforces stereotypes, favoring shaved heads, cornrows, or “tame” styles while marginalizing more diverse or voluminous textures. This is not a question of aesthetic preference but one of representational violence: it reduces the complexity of Black womanhood to simplified visual tropes that are more palatable to Westernized datasets. Ethically, this raises critical questions about who is allowed to be complex, and who must remain simplified to “fit” within the AI’s learned vocabulary.
Creators working with AI must therefore move beyond using these tools passively. Instead, there must be intentional, critical engagement with how representation is structured both in the outputs and in the assumptions embedded within the tools themselves.
Conclusion
This project highlighted the dual promise and challenge of AI-assisted creativity. While AI significantly enhanced my ability to visualize complex seasonal aesthetics and expand artistic exploration, it also exposed persistent Eurocentric biases and cultural gaps in AI-generated content. This reflection underscores the imperative for ongoing critical engagement, ethical mindfulness, and active efforts to diversify AI tools if they are to serve as truly inclusive and empowering creative partners.
References
Adobe. (2023). Generative AI – Adobe Sensei. Adobe.com. https://www.adobe.com/in/sensei/generative-ai.html
Burlina, P., Joshi, N., Paul, W., Pacheco, K. D., & Bressler, N. M. (2021). Addressing Artificial Intelligence Bias in Retinal Diagnostics. Translational Vision Science & Technology, 10(2), 13. https://doi.org/10.1167/tvst.10.2.13
Ghosh, P. R. (2012, October 9). Western Standards Of Beauty Clash With West African Notions. International Business Times. https://www.ibtimes.com/fat-land-western-standards-beauty-clash-west-african-notions-843573
Hamid, M. M., Moussaoui, F., Guevara, J. N., Anderson, A., & Burnett, M. (2024, April 19). Improving User Mental Models of XAI Systems with Inclusive Design Approaches. arXiv. https://doi.org/10.48550/arXiv.2404.13217
Jiang, J., Li, X., Yu, W., & Wu, D. (2024). HAIFIT: Human-Centered AI for Fashion Image Translation. arXiv. https://doi.org/10.48550/arXiv.2403.08651
Kirkpatrick, K. (2023). Can AI Demonstrate Creativity? Communications of the ACM, 66(2), 21–23. https://doi.org/10.1145/3575665
Magic Hour. (2025, April). The Generative AI Creative Economy: Stats and Trends in 2025. Magic Hour AI. https://magichour.ai/blog/generative-ai-creative-economy-stats
OpenAI. (2025). DALL·E 3. OpenAI. https://openai.com/index/dall-e-3/
Rose. (2024, July 23). Understanding the difficulties and constraints of AI image creation. Inst-Aero-Spatial. https://inst-aero-spatial.org/the-challenges-and-limitations-of-ai-image-generation.html
Runway. (2024). Runway | Make the Impossible. Runway. https://runwayml.com/
Shoaib, M. (2023, March 13). Why fashion month is failing Black models with textured hair. Vogue Business. https://www.voguebusiness.com/beauty/why-fashion-month-is-failing-black-models-with-textured-hair
Zhou, M., Abhishek, V., Derdenger, T., Kim, J., & Srinivasan, K. (2024). Bias in Generative AI. arXiv. https://doi.org/10.48550/arXiv.2403.02726