Max Capacity: The Return continues the journey of an AI avatar who has now fully transitioned into the physical world. Access the project [here]. This project matters because it frames a systemic change in how AI content creators shape music, identity, and creative culture across both virtual and real domains.
The Project Demonstration video shows Max Capacity thriving in the real world as an AI content creator, attracting interest in both music and multimedia production. Set a few years after Max’s digital escape, the story highlights how AI-generated sound, video, and performance challenge traditional notions of artistry. This prototype demonstrates the systemic shift toward AI as a collaborator in creative production, revealing how avatars like Max can generate media that blurs the line between human expression and machine creativity.
Discover how Max Capacity is reshaping music and content creation as an AI artist—watch the video, join the experiment, and share your thoughts on the future of digital artistry.
In this peer discussion, we explored three different approaches to digital artefacts and their relationship to music culture. The conversation highlights how creators work across platforms, formats, and tools to build projects that speak to broader changes in media. You can watch the full peer discussion video here: Watch the Peer Discussion Video
unidentifykyuri 7526416 – A music reviewer with three years of experience publishing work on Album of the Year, writing in a tone that mixes descriptive detail with casual accessibility.
The Physical Medium 7381633 – An aesthetic design analyst examining the resurgence of physical formats such as vinyl, CDs, and DVDs, and their cultural significance in a digital-first media landscape.
Max Capacity 7679920 – A fully AI-driven music creator and personality experimenting with new forms of metal-inspired music and the creative possibilities of AI tools.
Together, the projects show how different practices within music culture are being reshaped by digital platforms and technologies.
Iterative growth through feedback and analytics
Each participant reflected on how their project developed through feedback loops. For instance, unidentifykyuri experimented with different tones in reviews, while The Physical Medium tracked Instagram engagement to identify strong interest in vinyl culture. Visualising this data, whether through comments, views, or other user responses, shows how creators adapt to their audiences.
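As a loose sketch of what visualising that kind of engagement data might look like, the snippet below assumes a manually compiled CSV with placeholder column names (likes, comments, and saves per post theme) rather than an actual Instagram export.

```python
# Minimal sketch of turning platform engagement into something visual.
# The CSV name and column headings are placeholders; a real export or
# manual tally would need to be reshaped to match.
import pandas as pd
import matplotlib.pyplot as plt

posts = pd.read_csv("instagram_posts.csv")  # e.g. columns: caption_theme, likes, comments, saves

# Aggregate engagement by the theme of each post (vinyl, CD, DVD, ...)
summary = (
    posts.groupby("caption_theme")[["likes", "comments", "saves"]]
    .sum()
    .sort_values("likes", ascending=False)
)

summary.plot(kind="bar", title="Engagement by post theme")
plt.ylabel("Total interactions")
plt.tight_layout()
plt.savefig("engagement_by_theme.png")
```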
Documenting process from idea to execution
The discussion highlighted the importance of showing how projects are made. Draft reviews, early aesthetic sketches, AI-generated tracks, or even failed attempts all contribute to the final outcome. Including process materials—screenshots, notes, or editing timelines—provides a more complete picture of the work and makes it easier to reflect critically on creative decisions.
Systemic change through hybrid media practices
A key theme across the projects was the balance between digital and physical formats. Digital tools allow experimentation and reach, while physical media continues to provide authenticity and permanence. Before-and-after comparisons, such as early drafts versus final outputs, illustrate how creators navigate this negotiation between older and newer media systems.
Show your process, not just your product. Include drafts, experiments, and even failed attempts. This helps demonstrate iteration and makes your project more transparent.
Engage with feedback loops. Analytics and peer responses can be used to refine your project, but they also serve as evidence of audience interaction and growth.
Balance creativity with reflection. It is easy to focus only on the creative output, but documenting how and why you made certain choices strengthens the critical dimension of a digital artefact.
My digital artefact for BCM302 is a short AI-generated music video called Max Capacity: The Return. It is a continuation of an earlier idea I explored in previous semesters, but this time I am focusing on one contained piece, a single cohesive video that mixes AI-generated visuals and sound design.
Right now, I am about halfway through making the video, which means I have locked in the concept, written and recorded most of the track, and started building out the visual side. The aim is to make something that feels both human and artificial, a reflection on how AI is now part of creative workflows rather than just a novelty.
This report looks at what I have done so far, what has changed through feedback and testing, and how my ideas about AI and creative authorship are evolving as the project develops.
AI tools are changing the creative process in huge ways, especially in music and video. Max Capacity: The Return sits right inside that shift. The project explores how AI and human creativity blend to the point where it is hard to tell who made what.
I started with the question: Can a music video still feel emotionally real if most of it is generated by AI? That is the main tension I am exploring. Tools like Suno, Udio, and Runway make it possible to produce entire songs or visual sequences in minutes, but what interests me is how those tools reshape authorship and meaning.
Theoretically, the project connects to ideas around posthuman creativity and digital co-authorship. Media theorists like Shaviro and Hayles talk about how humans and technology are increasingly entangled, not just using each other, but creating together. My project tests that idea in a practical way.
By letting AI take on most of the production labour, I am treating it less as a tool and more as a collaborator. The systemic change here is not just technological, it is cultural. The whole creative process starts to feel like a conversation with a system rather than a solo performance.
That is the central focus so far: how the human role shifts when the machine becomes part of the creative voice.
The production process has been a lot of trial and error. My workflow usually starts with prompts and fragments: bits of lyrics, moods, and visual notes. I then feed those into AI tools to see what comes out. Early on, I realised that the biggest challenge is not technical but emotional. A lot of AI visuals look good but feel empty, so I have been trying to make something with texture and intention.
The music was the first step. I used Udio to generate instrumental layers, then remixed them manually to get a more structured sound. I added a few human touches, things like off-beat percussion, subtle distortion, and breath noise, to make it sound less sterile. That mix of AI precision and human imperfection is what gives the song its tone.
Visually, I have been experimenting with Runway for character motion and image-to-video sequences. The original idea was to have a clean cyberpunk look, but halfway through I realised it worked better as something looser, almost like an AI hallucination of a music video. So now the visuals are more glitchy and abstract, using bright lighting and quick cuts to create a sense of artificial memory.
Peer feedback has been a big help. When I showed early clips, people said they liked the vibe but wanted more narrative flow. I took that on board and started planning the final version as a looping sequence with recurring imagery, kind of like a visual echo that reflects how AI reuses and mutates its own data.
The plan for the second half of production is to finalise the visual edit, layer in additional AI-rendered effects, and finish sound mixing. I will also be documenting the process in a behind-the-scenes post showing how each element was generated and refined.
Being halfway through this project, I have learned that AI art is not just about automation, it is about curation and interpretation. The hardest part has been figuring out when to step in and when to let the AI take control.
Some of the best moments so far have come from accidents: visual glitches, broken timing, and weird lyrical phrasing that I would never have come up with myself. Those moments make me think differently about what creative control even means.
At this point, the project is starting to feel cohesive, but it is still developing. My main goal for the final phase is to make the video feel emotionally consistent, something that does not just look AI-generated, but actually feels like a genuine collaboration between me and the system.
So far, it has been equal parts chaos and discovery, but that is kind of the point.
My final digital artefact for BCM302 is a short AI-generated music video titled Max Capacity: The Return. It continues the story of a digital avatar I created in previous projects, but this time the focus is on a single cohesive piece that combines AI-generated sound, visuals, and narrative fragments. The video has now been released online and shared through my blog, with accompanying behind-the-scenes content that documents how each stage of the process came together.
This report reflects on the completed project and the ideas that shaped it. It covers how I used AI tools to build the music and visuals, how feedback and iteration changed the direction, and how the finished work fits within broader conversations about AI’s role in creativity.
At its core, Max Capacity: The Return examines the systemic change in how creative media is produced and understood in the age of generative AI. The video acts as both an artwork and an experiment, testing how artificial and human elements can blend to form something that feels emotionally real, even when most of it is machine-made.
AI is changing the structure of creativity from the inside out. It is no longer a tool used at the end of a workflow, but something that shapes the process from the start. Max Capacity: The Return sits inside that shift, showing how authorship, aesthetics, and even meaning are being redefined when machines become co-creators.
My focus was on AI’s impact on creative agency. Traditional media frameworks often rely on clear distinctions: artist and audience, human and machine, maker and product. But AI collapses those distinctions. Tools like Suno, Udio, and Runway do not just assist, they generate, interpret, and remix. They bring their own embedded biases and styles, shaped by data. When I prompt them, I am engaging in a kind of dialogue, not commanding, but collaborating.
From a theoretical standpoint, the project draws on ideas of posthumanism and digital materiality. N. Katherine Hayles’ writing on posthuman subjectivity argues that humans and machines form hybrid systems where agency is distributed rather than owned. Similarly, Steven Shaviro’s work on affect and mediation describes how digital systems create new forms of aesthetic experience that are non-human in origin but still deeply felt by human audiences.
Max Capacity tests these ideas practically. The systemic change I am exploring is not just the arrival of new tech, but the cultural and emotional adjustment we make when we start seeing AI as a creative partner. The project suggests that AI art is not replacing human creativity, it is reframing it. The creative act becomes about curation, direction, and negotiation with a non-human collaborator.
The final video runs for just over two minutes, but it took weeks of testing and reworking to reach that point. I treated the process like a hybrid studio session, part songwriting, part coding experiment, part video art.
The music came first. I used Udio to generate a base track built around metal-inspired rhythms and industrial textures. The first few versions sounded polished but generic, so I layered my own samples on top to roughen the sound. Small details, like slightly off-tempo percussion and environmental noise, helped make it feel more physical.
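That layering pass was done by ear in a DAW rather than in code, but the idea can be sketched: overlay a recorded percussion loop slightly behind the AI stem so the result drifts off the grid. The file names below are placeholders, not the actual project files.

```python
# Rough sketch of the "roughening" step: overlay a human-recorded loop on an
# AI-generated stem, slightly late and slightly quieter, plus a bed of room
# noise, so the mix feels played rather than quantised.
from pydub import AudioSegment

ai_stem = AudioSegment.from_file("udio_base_track.wav")
percussion = AudioSegment.from_file("room_percussion.wav") - 6   # 6 dB quieter
room_noise = AudioSegment.from_file("environmental_noise.wav") - 18

# Push the percussion ~35 ms behind the beat so it sits slightly off the grid
mix = ai_stem.overlay(percussion, position=35)
mix = mix.overlay(room_noise, loop=True)

mix.export("roughened_mix.wav", format="wav")
```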
Vocals were produced through a text-to-speech model trained on my earlier recordings of the “Max Capacity” persona. I edited the outputs to sound fragmented and eerie, like the AI struggling to express itself. The lyrics were written as a short manifesto, phrases about memory, code, and identity that loop in and out of coherence.
The visuals were built in Runway using prompt-based generation and motion tracking. My first idea was to create a sleek, cyberpunk-inspired sequence, but it quickly felt too sterile. After peer feedback, I shifted toward something more expressive, glitch-heavy, dreamlike, and abstract. The final look mixes generated footage with layers of distortion, data textures, and rhythmic cuts that move with the beat.
Editing was done in Premiere Pro, where I synced the visuals to the track manually, treating the video as a kind of digital collage. The final output feels like an AI hallucination of a music video, recognisable as art but slightly unstable, as if it is trying to remember how to be human.
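The sync itself was done by hand in Premiere, but a rough cut list could also be pre-computed from the track. The sketch below uses librosa’s beat tracker to print candidate cut points; the audio file name is a placeholder.

```python
# Sketch of generating candidate cut points from the track itself, so edits
# can snap to the beat. The final sync was manual; this only shows how a
# rough cut list could be derived beforehand.
import librosa

y, sr = librosa.load("max_capacity_the_return.wav")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print(f"Estimated tempo: {float(tempo):.1f} BPM")
# Cut on every fourth beat, i.e. roughly once per bar in 4/4
for t in beat_times[::4]:
    mins, secs = divmod(t, 60)
    print(f"cut at {int(mins):02d}:{secs:06.3f}")
```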
The final direction of Max Capacity: The Return came directly out of audience feedback and peer discussion. In the midpoint phase, I showed early renders to classmates during the BCM302 Peer Discussion. The main response was that the video had a strong concept but lacked emotional rhythm; people wanted to feel more connection, not just admire the visuals.
That led me to rethink how I structured the video. Instead of treating it like a standard music clip, I leaned into repetition and pacing to create emotional beats. The visual glitches became intentional transitions rather than random noise. I also added short narrative fragments, shots of the AI figure “breaking through” data barriers, to give the piece a loose sense of journey.
Outside of class, I shared test versions online and used analytics from YouTube and Instagram to see how people engaged with it. The AI-generated vocals, which some found unsettling, turned out to be the part most viewers commented on. That feedback convinced me to push that discomfort even further, making the voice more central and less polished.
Iteration became a loop between human feedback and algorithmic randomness. Every change I made affected the next round of AI outputs. Sometimes the model “misunderstood” my prompts and produced something unexpected, which often improved the video. That unpredictability became part of the process, a reminder that collaboration with AI is not about control, but about letting go of it in smart ways.
Throughout the process, I documented each phase of production through my blog and short progress clips. This documentation served two purposes: to show how the video evolved from concept to completion, and to reflect on how AI changes the meaning of process itself.
In traditional creative projects, the process is something you hide behind the final product. But in AI art, the process is the artwork. Showing the workflow, the prompts, the failures, the revisions, helps audiences understand the negotiation between human intention and machine suggestion.
I included screenshots of prompt interfaces, snippets of rejected audio, and short screen recordings of Runway’s generation process. These behind-the-scenes posts ended up being some of the most engaging parts of the project. People were less interested in the finished clip than in how it came to be. That says something about where creativity is heading: transparency is becoming its own aesthetic.
At the conceptual level, Max Capacity: The Return explores the emotional and philosophical tension in using AI as a co-creator. On one hand, it is efficient and exciting; on the other, it raises questions about originality, authenticity, and ownership.
The video plays with these contradictions by positioning the AI avatar as both performer and author. The “voice” of Max is generated by a machine, but the emotions we read into it come from human interpretation. This echoes ideas from post-digital art theory, where meaning emerges not from authorship but from interaction between systems.
I also see parallels with what theorist Simon Penny calls embodied interaction, where technology becomes an extension of our sensory experience rather than a tool outside of it. Creating Max Capacity felt less like programming a computer and more like jamming with another musician who happens to be made of code.
There is also a critical undercurrent to the work. AI is reshaping the creative economy, not just how we make things, but how we value them. The project subtly critiques the way AI content floods platforms, blurring art and automation. By crafting a video that looks like it could have been mass-produced but was carefully curated, I wanted to highlight that distinction.
The biggest takeaway for me is that AI does not erase creativity, it redistributes it. It shifts focus from craft to concept, from execution to editing. The artist becomes a conductor of systems rather than a sole producer.
Once the video was released, I tracked responses across platforms. On YouTube, the retention curve showed spikes during moments of visual distortion, suggesting that audiences were drawn to unpredictability rather than polish. Comments also revealed mixed feelings: some viewers praised the originality, while others found it uncanny.
That tension felt like success. The project was not designed to comfort but to provoke thought about what creativity looks like in an algorithmic world. The fact that people reacted emotionally, whether positively or not, meant the work was doing its job.
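As a rough illustration of how retention spikes like the ones described above could be located in an exported report, the sketch below assumes a CSV with hypothetical column names; YouTube Studio’s actual export would need to be mapped onto it.

```python
# Sketch of locating retention "spikes" in an exported audience-retention report.
# Column names are assumptions, not YouTube's real export format.
import pandas as pd

retention = pd.read_csv("audience_retention.csv")  # e.g. columns: position_seconds, audience_watch_ratio

# A simple definition of a spike: the watch ratio rises against the previous sample
retention["delta"] = retention["audience_watch_ratio"].diff()
spikes = retention[retention["delta"] > 0.02]

for _, row in spikes.iterrows():
    print(f"{row['position_seconds']:>5.0f}s  ratio up by {row['delta']:.3f}")
```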
The video also sparked small discussions about ethics and AI in art. A few viewers asked whether the work is still mine if the AI did most of the generation. That question became a kind of meta-commentary on the whole project, the blurred line between authorship and facilitation.
Looking back on Max Capacity: The Return, I see it as a mix of experimentation, collaboration, and critique. It is not just a showcase of AI tools, but a snapshot of how creative culture is shifting right now.
I learned that working with AI is less about mastering technology and more about building a creative relationship with it. The best results came when I stopped fighting the AI’s limitations and started responding to them as if they were creative suggestions.
If I had more time, I would expand the project into a short series, each video exploring a different emotional or musical tone through the same hybrid workflow. I would also like to test audience co-creation, letting viewers feed prompts that shape future outputs.
Overall, Max Capacity: The Return ended up doing exactly what I hoped, it raised questions rather than answered them. It showed that art made with machines can still carry feeling, and that the act of creation is becoming more collective, fluid, and unpredictable.
The project also changed how I think about authorship. I do not see myself as the maker of the video so much as the editor of a collaboration between systems: human, digital, and cultural. That is the real systemic change this project illustrates: creativity as an ongoing conversation between different forms of intelligence.