Image by Hamcus
My project began with a Milanote page while watching the first lecture. The first idea was to create a motion-captured avatar who would send artefacts from his world. After creating the notes for Max, I began looking into how to animate the first draft of the avatar using AI to create the script (ChatGPT), the character and background design (DALL-E), and the character animation (Elai). The general idea behind Max's character was inspired by a few YouTube projects, including Xanadu and AI Angel. AI Angel inspired Max to be not just a digital character but a fully sentient Metaverse avatar. I have always been a huge fan of Project Xanadu; the depth to which one man delves into a large-scale story, and his incredible motion-captured animation, blew my mind. I thought that if one man could make a fully immersive 15-minute video by himself in just over 10 weeks, I could achieve something similar with a team of other experts.
With all of my work, the first draft was prepared, so I shared it (along with my notes) on the Discord to try to find some teammates and hear some feedback.
During our week 2 tutorial, I talked with my new teammates, and as we talked, Max Capacity's story changed. The team and I brainstormed Max's character design and new story motivations. After the first draft was completed, we decided Max would be non-binary, mainly to appeal to a wider audience and to fit the theme of an online avatar in a new world. Max would now be an avatar attempting to escape their universe and hop right into ours. With this new story in mind, we developed the tangible aspect of the project further, with new ideas of Max sending through items to test their machine before eventually sending themselves through. Max was becoming a fully fledged character, especially thanks to the new AI animation software and inspirations we discovered in week 2.
(The second draft)
Project Pitch: Max Capacity
Since week 5 I've been hard at work on Max Capacity's first video upload, slowly attempting to further develop Max's character design based on Drewe's base drawing.
Runway ML has been an MVP in AI development ever since it helped create Stable Diffusion, one of the most advanced AI image-generation tools to come out of this new wave of tech. Before week 5, Runway released a teaser for its newest development, a tool called Gen 1, which uses a process they call video-to-video. This release blew me away and I immediately signed up for the limited beta release.
(Gen 1 takes a text or image prompt and applies that prompt's style over the original footage provided by the user.)
A week or two later, with no news about Gen 1's release date, I moved on to a tool I had already researched for our project development: EbSynth. I knew Gen 1 would have been perfect for our project, but I believed usable AI video wasn't yet possible. That's what I thought, at least, until I checked Runway ML's website one more time...
(EbSynth takes a frame the user has drawn over and uses AI to propagate that style to every other frame in the shot.)
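For anyone curious what that workflow looks like in practice, here is a rough sketch of the frame-splitting and reassembly steps that EbSynth-style tools rely on. This is just an illustration, not the exact process I used: the file names, folder names, and frame rate are placeholder assumptions, and EbSynth itself does the style propagation inside its own app, not in code.

```python
import subprocess
from pathlib import Path

SRC = "test_footage.mp4"   # hypothetical filename for the raw test clip
FRAMES = Path("frames")    # EbSynth works on numbered image sequences
OUT = "stylised.mp4"
FPS = 30                   # assumed frame rate of the source footage

FRAMES.mkdir(exist_ok=True)

# 1. Split the clip into numbered PNG frames (one of these becomes the keyframe to paint over).
subprocess.run(["ffmpeg", "-i", SRC, str(FRAMES / "%04d.png")], check=True)

# 2. ...paint over a keyframe and let EbSynth propagate that style to the other frames...

# 3. Reassemble the stylised frames (assumed to be saved in "stylised_frames/") back into a video.
subprocess.run([
    "ffmpeg", "-framerate", str(FPS),
    "-i", "stylised_frames/%04d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    OUT,
], check=True)
```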
When I checked Runway ML's website, I noticed that Gen 1 was now available as a paid service. After some thought, I paid the fee and began experimenting.
I had filmed test footage of myself moving around and waving my hand for EbSynth. Once EbSynth worked and I changed direction, I decided to use the same footage for Runway. I began this new process using just a small selection of Runway's different AI tools.
Runway has a background-removal tool that stripped the background from my video within minutes. I thought footage with the background removed would work better with the video-to-video tool, Gen 1. This turned out to be correct, and the AI built Max's world around the images I provided as style sources.
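Runway's background remover is a web tool, so there's no code behind my version of this step, but for anyone wanting to try the same idea locally, a rough open-source stand-in could look something like this using the rembg library. The folder names are placeholders, and this is a sketch of the general technique rather than what Runway actually does.

```python
# pip install rembg pillow
from pathlib import Path

from PIL import Image
from rembg import remove

FRAMES_IN = Path("frames")         # hypothetical folder of original video frames
FRAMES_OUT = Path("frames_nobg")   # background-removed frames, ready for video-to-video styling
FRAMES_OUT.mkdir(exist_ok=True)

for frame in sorted(FRAMES_IN.glob("*.png")):
    img = Image.open(frame)
    cutout = remove(img)           # returns the frame with the background made transparent
    cutout.save(FRAMES_OUT / frame.name)
```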
I then used Drewe's character design for Max Capacity, which he drew in tandem with my ideas about Max. This tangible drawing was used heavily to develop the video for our first upload.
This is a frame of the video Runway created when I combined the background-removed version of my test footage with the drawing of Max Capacity. In the BTS video you'll see the AI took liberties with multiple aspects, like Max looking like a piece of paper themselves. Partly because of this, Max's movements lacked a human feeling, so I went back to the drawing board.
There are many more parts to this fully developed video, which you'll see in the full BTS video coming soon. That video won't be released on Max Capacity's social media, however, to preserve the canon of this AI-generated story.
I remember being unsure whether AI video tools would be viable within this project's timeframe; I thought these kinds of tools were not yet developed. But then Gen 1 was released, and I was blown away by the speed at which AI research is moving. A month ago, Runway ML announced Gen 2, an incredible development in AI video tools, and I believe Gen 2 will also become available on the Runway website before this project ends.
Overall, I am beyond excited for future developments of Max Capacity. Just like this project's video, all future videos will be mostly written using ChatGPT, with Runway ML's Gen 1 used for the video development itself.
During these concluding weeks, I dedicated myself tirelessly to our team's final animation, pouring my effort into every intricate detail. The culmination of our collective endeavor is embodied in the captivating final video for this term, where Max Capacity achieves a significant milestone by successfully transmitting their inaugural digital object, aptly named "the gauntlet of transference." It marks a pivotal moment in Max's journey, and I am thrilled to have played a part in bringing it to life. Furthermore, I took the initiative to curate Max's other enthralling stories, meticulously documenting them on their personal website, a platform that I personally crafted. This website serves as a testament to Max's evolution and serves as a repository for their remarkable adventures and experiences.
The logbook updates for Max were crafted through a collaborative effort between Chat GPT and myself. These entries delve into three captivating B stories. The first story revolves around Max's initial failed experiment involving a leather jacket, cleverly linking to the abandoned leather jacket idea we had only recently discarded. The second story presents Max's (fictional) guide on the process of creating their own merchandise, providing valuable insights that I have outlined below. Lastly, there is a final update detailing Max's delightful experiences as they embrace life in the real world, relishing their time as a student at UOW. Moving forward, I have exciting plans to produce additional video content that showcases how Max accomplished this remarkable personal project once the term concludes.
Moreover, I proudly introduced our inaugural merchandise item, the "Digi Dweller Tunic," as part of Max's expanding brand. The design was a personal endeavor, utilizing AI image alteration software from Runway ML, which allowed me to explore creative possibilities. The design itself boasts an '80s sci-fi movie poster aesthetic, carefully selected after considering the kind of content that potential fans of Max Capacity might enjoy. To refine the design, I employed Adobe Illustrator to fine-tune the texts and arrangements, ensuring a cohesive and visually pleasing result. I thoroughly enjoyed the process of experimentation, particularly in leveraging the printing technique multiple times on the shirt, resulting in a captivating glitch effect. Notably, the arms and back text pieces showcase prominent examples of this aesthetic transformation. For a closer look at these intricate details, I will be sharing an Instagram post later in the blog.
Top row: the Runway ML design process, from stock footage, to the animation for the video, and finally into the poster design.
Bottom row: the original design with hard-to-read text in a first-draft mockup, and the new design on a shirt I printed and heat-pressed myself.
In addition, I took proactive measures to enhance the visibility of our first video by utilizing the Instagram advertising tool. With a targeted approach, I focused on local viewers within a specific demographic. This strategic advertisement campaign, spanning over six days and costing me $10 Australian, is projected by Meta to reach over 1,000 accounts in the Wollongong area. These accounts possess interests that align perfectly with our target audience, ensuring optimal exposure and engagement.
With all of my new responsibilities in this group project, I've pivoted to a different but still entertaining approach to Max's content. As the work started to pile up, I opted for a smaller but still compelling second video element, and Max's blogs also fill out some of the story beats. I plan on possibly continuing Max's story as new technology emerges; every few weeks a new tool is released and a deeper possibility is discovered.
In a perfect world I would be presenting two new videos for this final project. The next few videos I had planned would include Max's final escape into the real world; sadly, time and technology constraints held me back. I wanted to use another incredible AI tool called Wonder Studio, which automatically swaps live actors for any 3D character created by the user or available online. I would have used it to bring Max's final blog post to life, showing a realistic 3D model of Max enjoying life as a student at UOW. However, Wonder Dynamics has this software locked behind a submission system. I am on the waiting list; I just believe they put creative companies above students in terms of importance. If I do get access to the software, I will immediately create another video, as my original goal was to make motion-tracked/CG-animated videos, and a final video in a different style from Max's Metaverse style would be an interesting and compelling visual.
The final inspiration for Max Capacity originated from the trailer for a game called Marathon, which captivated me with its visual style and mesmerising sound design. I was such a fan of these elements that I incorporated them into my final Max Capacity video, enhancing its overall impact. The combination of this "graphic realism" style and my own creative vision resulted in a truly immersive video.
Finally, after 13 weeks of work, I am very happy with my group project and love the work we created. However, I feel an even higher-quality project could have been achieved with more time and resources, so I am happy to continue Max's story independently after this term. Overall, this has been my favourite class of uni so far, as I get to explore all of the creative endeavours I want.