KNB227 CGI: Technologies
By Andrew McLellan
Assignment 2 - Abstracted MetaHuman & Music Video
Environment
The first thing we did as a group was plan the music video.
We needed to decide which of the following environments to pursue:
My Forest Clearing environment:
Connor's Graffiti Alleyway environment:
Charlie's Japanese Temple environment:
Jackie's Collapsed City environment:
Kayne's Neon Dark Room environment:
Parker's Apocalypse Ruins environment:
I set up a vote in the Discord to see which of our proposed environments we wanted to pursue.
The results were a majority vote for Jackie's Collapsed City.
Motion Data Capture
During week 6 we undertook some live capture in the virtual production studio.
This process involved first getting our dancer into the spandex suit and placing tracking dots in specific locations all over their body.
From there we set up a character for our dancer and ran a calibration test, asking our dancer to perform a range of motion so that the software knows how the character can move and can solve properly.
Once the calibration was complete we chose a song to start with and had our dancer stand in an A-pose.
Then all that was left to do was press record and have our dancer work their magic.
We ran through 6-7 different songs with 2-3 takes each. There were a couple of times where we needed to re-position the tracking dots when they moved around due to the dancing.
This session provided us with plenty of data to choose from for our music video.
Exporting Motion Data
The next group task required us to choose which motion data we were going to use for our video. Charlie, one of the dancers, was doing some really cool aerial flips and spins which we all agreed would look really cool with all the particle effects flying around.
So it was an easy choice for data to use.
I downloaded the necessary files from the Teams group and imported them into Shogun.
I then set up the data management to help organise takes. Though I already knew the take I wanted to use, I thought it would be good practice to set it up anyway.
I selected all the branches of the dancer and then exported the take as both FBX and BVH files, making sure to add a 90-degree offset on the Z axis so that it was oriented correctly in MotionBuilder.
Mapping & Retargeting
In MotionBuilder, I repeated the steps from Assignment 1, importing my MetaHuman and mapping the joints.
I could have just opened a scene I made for Assignment 1 and started there, but I wanted more practice running through the whole pipeline.
Once the skeleton was mapped I zeroed out all the motion data and my MetaHuman so that they were a near-perfect match.
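For anyone who would rather script the joint mapping than drag joints into slots by hand, below is a rough sketch of how the same slot-to-joint assignment could be done with MotionBuilder's pyfbsdk Python API. The joint names and the character name are assumptions; a Shogun export will use whatever naming your capture setup produced, so the dictionary would need to match your data.

```python
# Rough sketch of the MotionBuilder joint-mapping step using pyfbsdk.
# Joint names below are placeholders -- adjust to match the mocap skeleton.
from pyfbsdk import FBCharacter, FBFindModelByLabelName

# Map MotionBuilder character slots to joints in the imported mocap skeleton.
SLOT_TO_JOINT = {
    "HipsLink": "Hips",
    "SpineLink": "Spine",
    "HeadLink": "Head",
    "LeftUpLegLink": "LeftUpLeg",
    "LeftLegLink": "LeftLeg",
    "LeftFootLink": "LeftFoot",
    "RightUpLegLink": "RightUpLeg",
    "RightLegLink": "RightLeg",
    "RightFootLink": "RightFoot",
    "LeftArmLink": "LeftArm",
    "LeftForeArmLink": "LeftForeArm",
    "LeftHandLink": "LeftHand",
    "RightArmLink": "RightArm",
    "RightForeArmLink": "RightForeArm",
    "RightHandLink": "RightHand",
}

character = FBCharacter("MocapCharacter")
for slot_name, joint_name in SLOT_TO_JOINT.items():
    joint = FBFindModelByLabelName(joint_name)
    if joint is None:
        print("Missing joint: " + joint_name)
        continue
    slot = character.PropertyList.Find(slot_name)
    slot.append(joint)  # assign the joint to the character's mapping slot

# Characterize once every required slot is filled (skeleton should be in T-pose).
if not character.SetCharacterizeOn(True):
    print("Characterize failed: " + character.GetCharacterizeError())
```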
When trying to link the motion data to my MetaHuman I ran into an issue that cost me a good 4-5 hours: I couldn't get the Character Settings window to show up no matter what I tried. Thinking it was its own window, I went through all the window settings trying to make it appear, and when that didn't work I searched online and asked in the MoCap Discord. Thankfully someone in the Discord had hit the same problem in Assignment 1 and told me to try double-clicking on the character.
Lo and behold, the Character Settings window appeared, and I felt like a massive idiot.
After linking the characters I scrubbed through the data and made sure there were no extreme distortions or collapsing of the mesh throughout the dance routine.
It seemed like the data was pretty solid, as I could see next to no errors in it after watching it 10 or so times in slow motion. There were only a few moments where the hands clipped through each other, but that is an easy fix in Unreal.
Exporting
The exporting phase proved to be another time sink. No matter what I tried, the data wouldn't import correctly: I would either get import errors and the import would cancel, or the data would import but leave me with just a stationary skeleton.
What was weird, though, was that the animation take was still there and the frame count even matched between MotionBuilder and Unreal, yet there was no motion on the MetaHuman.
This issue stumped me for a long time; I spent around 10 hours trying to figure out what was happening. I tried re-exporting and importing the data countless times using a bunch of different settings, and I also tried re-targeting to the FBX version of the motion file, but I kept running into the same problem. I asked my peers in the Discord again and tried a few of their solutions, but to no avail.
Thankfully I was able to solve the issue, and once again it was me overlooking a very simple step: during the retargeting process, I forgot to actually plot the animation onto the skeleton...
Because this step comes at the end of a different tutorial video than the exporting one, when I rewatched the export tutorial to refresh myself on the process I never revisited the earlier video that contained the plotting step.
After I plotted the animation it imported into Unreal with no problems and I could continue my project.
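In hindsight, the plotting step could even be scripted so it can't be forgotten. Below is a minimal pyfbsdk sketch of plotting the current character's animation onto its skeleton and exporting, assuming a characterized MetaHuman is the current character; the plot options and export path are placeholders, not the exact settings from the tutorial.

```python
# Minimal sketch of the plotting step: bake the retargeted motion onto the
# skeleton so the keys actually exist on the joints in the exported FBX.
# Without this step the export contains a take with the right frame count
# but a stationary skeleton.
from pyfbsdk import (FBApplication, FBCharacterPlotWhere, FBPlotOptions,
                     FBTime, FBRotationFilter)

character = FBApplication().CurrentCharacter  # the characterized MetaHuman

options = FBPlotOptions()
options.PlotAllTakes = False                  # just the current take
options.PlotOnFrame = True
options.PlotPeriod = FBTime(0, 0, 0, 1)       # one key per frame
options.UseConstantKeyReducer = False         # keep every key; clean up later
options.RotationFilterToApply = FBRotationFilter.kFBRotationFilterUnroll

# Bake the animation from the input character onto the skeleton.
character.PlotAnimation(FBCharacterPlotWhere.kFBCharacterPlotOnSkeleton, options)

# Then export the scene as FBX for Unreal (path is a placeholder).
FBApplication().FileExport("D:/mocap/dance_take_plotted.fbx")
```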
Animation Clean-up
The way our team split up the animation for the music video was to divide the total number of frames in the take by the number of members in the group and then randomly assign each of us an order. The plan was for each of us to clean up the animation and keyframe camera moves for our own section and then stitch it all together at the end, so the dancer would switch MetaHumans but the motion would stay fluid.
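For clarity, here is a small Python sketch of that frame-splitting arithmetic. The total frame count and the member order shown are just illustrative placeholders, chosen so that each of the five members gets a 315-frame slice like mine.

```python
# Split a take into contiguous, near-equal frame ranges, one per member.
def split_frames(total_frames, members):
    base, remainder = divmod(total_frames, len(members))
    ranges, start = {}, 0
    for i, member in enumerate(members):
        length = base + (1 if i < remainder else 0)   # spread leftover frames
        ranges[member] = (start, start + length - 1)  # inclusive frame range
        start += length
    return ranges

# Placeholder total and order: 1575 frames across 5 members = 315 frames each.
print(split_frames(1575, ["Connor", "Parker", "Andrew", "Jackie", "Chris"]))
# {'Connor': (0, 314), 'Parker': (315, 629), 'Andrew': (630, 944), ...}
```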
I got pretty lucky with my section, only having a single area in my 315 frames that required clean up.
There is a part where Charlie puts his hand on his hip and kicks his leg out; when retargeted to the MetaHuman, the hand clips into the hip as in the image above.
Using an additive keyframe layer on the animation take, I moved and rotated the hand so that it sits on the MetaHuman's hip properly, as you can see above.
Particle Simulation & Abstraction
I first decided to remove the clothes and shoes from my MetaHuman as they took focus away from the particles I was going to create.
I followed along with the tutorial and inserted my own Quixel assets to create a basic falling-leaf effect.
After thinking about it for a while I realised that I no longer liked my original idea. Though the natural green leaves looked great in my scene, we had changed the environment to the blue-themed collapsed city, and I no longer felt this style of leaf was very interesting or fit the setting, so I decided to tweak the concept into something else.
I still liked the leaf theme but wanted something more than just plain greenery. So I jumped into Maya and modeled a simple leaf, then textured it with different emissive colours in Substance Painter.
After creating these leaves, importing them into Unreal, and setting up the materials, I built a new particle system with them. The idea was to have the leaves fly off the body of the dancer and then spin and float gently to the ground, basically trying to emulate a real-life leaf.
This was me experimenting with gravity curves. I wanted the leaves to sort of float upwards after detaching from the body and then begin to drift down; in this gif I set the upward gravity way too high, but it looked kind of neat, like a vertical conveyor belt.
This effect was getting closer to what I wanted, but it had a couple of problems. For starters, the rainbow leaves did not achieve the look I thought they would, and instead looked a bit like colour vomit. Secondly, the leaves spin forever on the floor, which distracted from the dancer.
As a little bonus, when I was flying the camera around I went through the floor, and this is what the spinning leaves looked like. Very cool effect.
When experimenting with rotation I was struck by the similarity to the Australian Rosewood seedpod, otherwise known as the Helicopter Leaf. I immediately wanted to swap my leaf model for one of these because I basically had the particle system set up perfectly to slot them in, and I knew it would be a drastically better effect.
Here's the model I made for them. I made sure to move the pivot point to the center of the seed so that it would rotate from the correct position.
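If you'd rather not eyeball the pivot move, the same thing can be done with a couple of lines of Maya Python; the object name and the seed-centre coordinates below are placeholders for whatever your model uses.

```python
# Re-centre the pivot on the seed so particle rotation happens around the
# seed rather than the wing. "seedPod_geo" and the coordinates are placeholders.
import maya.cmds as cmds

seed_centre = (0.0, 0.0, 0.0)  # world-space point at the centre of the seed
cmds.xform("seedPod_geo", worldSpace=True, pivots=seed_centre)

# Alternatively, snap the pivot to the centre of the object's bounding box:
cmds.xform("seedPod_geo", centerPivots=True)
```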
This is the first attempt at adding them; they were a bit large and messy, but I already liked the way they spun and fell more than the first leaf.
The next problem I wanted to solve was colour. Having seen what the rainbow vomit looked like, I decided I wanted a colour-changing material that would keep all the leaves the same colour and cycle through the rainbow in unison.
In my naivety I believed this would be a pretty simple effect to achieve, since I assumed a lot of people would have tried it. I was very wrong. It took me a couple of hours of searching just to find a tutorial on dynamic materials that did what I wanted.
After a long time spent trying to understand how the node-based material system worked, I managed to make two separate colour-changing materials.
The first one used the object's world rotation/position data over time to add value to the R, G, and B channels. While this wouldn't sync the colour changes, I was happy with it and wanted to see how it would look on my MetaHuman. This is when I ran into another problem: mesh objects spawned from a particle system don't have their own world space; they use the origin of the object the emitter is attached to, and when MetaHumans are animated using mocap data their origin never actually moves. Because this material relied on the object's origin moving around, all my leaves just stayed the same colour the whole time. Very sad news for me; this method was not going to work.
After some more research, I managed to find another way to make a colour-changing material, only this one didn't change on its own. It was set up so that you could click a variable on the object and move the colour wheel value around to change the object's colour in real time. While this worked fine at first, I couldn't figure out how to drive the colour value through nodes in the particle system. I probably could have solved it eventually, but I had already spent a very long time on just this one aspect and was running out of time to complete the rest of the project, so it was time to move on.
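For anyone attempting something similar, the sketch below shows the general mechanism I was fighting with, expressed as Unreal editor Python rather than my actual node graph: a material with an exposed vector parameter can be swapped for a dynamic instance and have its colour pushed in from script. The asset path, actor label, and parameter name are all placeholders, not names from my project.

```python
# Hedged sketch: drive an exposed colour parameter through a dynamic material
# instance. "/Game/Materials/M_Leaf", "LeafPreview" and "LeafColour" are
# placeholders for illustration only.
import unreal

leaf_material = unreal.EditorAssetLibrary.load_asset("/Game/Materials/M_Leaf")

# Grab the first actor in the level labelled "LeafPreview".
actors = unreal.EditorLevelLibrary.get_all_level_actors()
preview = next(a for a in actors if a.get_actor_label() == "LeafPreview")
mesh_comp = preview.get_component_by_class(unreal.StaticMeshComponent)

# Create a dynamic instance and push a new colour into the exposed parameter.
mid = mesh_comp.create_dynamic_material_instance(0, leaf_material)
mid.set_vector_parameter_value("LeafColour", unreal.LinearColor(0.1, 0.9, 0.4, 1.0))
```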
Because I couldn't create a colour-changing material, I went back to the neon rainbow idea but toned it down to four colours that complemented each other and could fade into one another. This proved to work very well and gave me the feeling I was looking for with the colour changing: a neon party mixed with nature.
The last thing I did was change the texture of the MetaHuman itself. After running the simulation a few times, seeing the bright skin under the leaves took away from the experience; it felt separate rather than a cohesive piece. So I exported the skeletal mesh and painted over it in Substance Painter with a wood texture and glowing green underneath to contrast with the rest of the neon colours. This instantly made my project a lot nicer to look at. Sadly, after a lot of messing around I couldn't get the MetaHuman head to import properly into Substance; I think the face has far too many joints, but even trying to export just the mesh threw up a bunch of errors that I didn't understand. I worked around it by increasing the number of particles for the head section so that they hide it, and you can't even tell it's skin in the final video.
This was my final animation take with the particle system and new textures. The last few things I changed were the spawn orientation of the leaves, giving them a random angle but keeping them mostly flat as they looked way cooler spinning horizontally, and then tweaking the gravity a bit more to time the upward float with the exact moment they detach from the body.
All that's left to do from here is the camera keyframes and render with some music.
Camera Animation
Below is the setup used for the camera. I added an empty actor to my MetaHuman's head and then parented a basic actor that was in the scene to it. From there I set the camera to Tracking and selected my FOCUS object. This kept my MetaHuman in focus no matter where it or the camera was.
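The same tracking setup can be sketched with Unreal's editor Python instead of the details panel. The actor labels below are placeholders, and the property names are my best guess at the Python equivalents of the Lookat Tracking settings on a Cine Camera actor, so treat this as an assumption rather than the exact steps I took.

```python
# Hedged sketch: point a cine camera's look-at tracking at a focus actor.
# "DanceCam" and "FOCUS" are placeholder actor labels; property names are
# assumed from the Lookat Tracking section of the camera's details panel.
import unreal

actors = unreal.EditorLevelLibrary.get_all_level_actors()
camera = next(a for a in actors if a.get_actor_label() == "DanceCam")
focus = next(a for a in actors if a.get_actor_label() == "FOCUS")

tracking = camera.get_editor_property("lookat_tracking_settings")
tracking.enable_look_at_tracking = True   # keep the subject framed
tracking.actor_to_track = focus           # the actor parented to the head
camera.set_editor_property("lookat_tracking_settings", tracking)
```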
This final step of the project caused me a lot of trouble. It started off with no problems: I moved my Sequence into Jackie's environment, set up my camera, and animated some keyframes that matched the energy of the dancer. It took me around an hour and I was ready to hit the render button.
Just before I pressed it I saved my work, and all my keyframes became majorly offset from where I had them and all my tracking broke. I had no idea how or why this happened, and I couldn't even undo it. I tried to reposition my MetaHuman so that the camera orbited it correctly, but after an hour of trying I just couldn't get it exact enough, so it looked terrible.
I gave up and remade the camera moves in Jackie's level, thinking it was an importing problem when I brought the Sequence over from my level. I finished the whole process again, went to save before rendering... and the same thing happened.
This was extremely frustrating; I had no idea what was going on, and my work was being erased just for trying to save it. It made no sense at all. I tried asking people and searching online, but there was nothing about this issue anywhere.
Running out of time and options, I tried something else. I thought that maybe Perforce was messing with the scene when I tried to save it, since it was Jackie's scene and maybe I didn't have permission to save it.
I made a local copy of our project and began to animate in there. This time I didn't make the whole thing before saving: I did two keyframes and saved, and once again it offset all my keyframes in every axis.
I was getting very frustrated at this point, not knowing how this kept happening. I even tried rendering without saving, but pressing the render button offset everything again.
I noticed that all the objects being offset were ones from my imported Sequence, so I tried re-making the socket, focus, and camera in the scene without them being spawned from the Sequence. I keyframed some animation, saved it, and thank god it worked; that was the last possible thing I could think of.
My guess is that because the actors in the Sequence were spawned in at runtime, they were reset to the location they were originally made at in my level, whereas the ground of Jackie's city sits at about -2440 on the Y axis. The weird thing was that the MetaHuman was unaffected by the offset even though it was also spawned by the Sequence, which was annoying because if it had moved with everything else, all I would have had to do was move the ground up.
So with a solution found (though probably not the correct one), I once again keyframed my camera movements and saved. The only thing that still moved was the focus object, which became unpaired from the MetaHuman socket since the MetaHuman was still spawned in, so after every save I needed to re-pair them.
What I thought would be a one-hour task turned into fifteen, but I finally had my camera animation done and could render everything out.
I took up the task of collecting the team's renders and stitching them together in post. Originally we wanted each person's camera to line up with the next person's on the last keyframe so that the video would be one seamless transition, but with how late everyone in the group left it, and all the problems they were having, it was going to be impossible to organise. So in the end I just did some simple transitions that seemed to fit each MetaHuman.
We also changed the song that was used for the data capture as it didn't fit the scene or our MetaHumans. I went through the list and found one at the same tempo that matched the vibe a lot better. It was a bit short, so I went into Audacity and chopped it up so it looped perfectly.
I rendered the video out of my editing software and with that, the project was DONE!
Final Render
Credits:
Team 8:
Connor Fitzpatrick || n11289350
Parker Laansma || n11244593
Andrew McLellan || n11218274
Jackie Nguyen || n11133023
Christopher Bean || n11278919
Dancer:
Charlie K
Song:
HAWS by Konstatin Simonov
Project Analysis
Having spent the last few months learning the pipeline and skills necessary for capturing human movement and abstracting it with computer graphics simulations, I have come to understand that this is a dynamic process that can be applied in a multitude of different ways. This art form is about finding a balance between the captured human form and the influence of the computer simulations; too much of either can cloud the final product, subtracting from its desired effect. Having just completed a project on the matter, there are some key considerations I would pass on to anyone undertaking a similar project.
Having a clear understanding of what you want your final project to evoke in viewers is key to making a cohesive piece of art. You don't need every detail mapped out, but a solid framework of what your environment will look like, what abstraction you will be using, and what movement you want to capture will play a huge role in amplifying your final product. These three components all influence each other in both directions of the pipeline. For example, knowing what abstraction you will use influences the kind of motion you would want from your performer, and knowing the motion influences the abstraction. It's about finding the balance between these steps and experimenting throughout development.
I would have loved to be able to go back to the virtual production studio after setting up my abstraction and have the dancers perform while being able to see the simulations. Having them see it in real time and alter their motion to better show off the simulations would have made for a better piece. Of course, this isn't really possible during a university semester when everyone has clashing timetables. I think in a perfect setup your performer could be altering their motion while you alter your abstraction and environment, all simultaneously, to really home in on the experience you wish to create. I believe a lot of the magic would be created in a space where all three major components are working off each other at the same time.
Along with these conceptual considerations there are a few technical considerations I would pass on to others following in my footsteps.
Starting with capture: it's very important to place tracking dots in the correct positions and monitor their locations between recordings, as they tend to move around when dancers perform large motions. Doing so ensures quality data is captured and will save hours in post-capture clean-up. Along with this, make sure your performer has at least one A-pose and T-pose recorded. Again, this will save hours in the retargeting process; it basically boils down to this: the better your captured data, the easier the rest of the process will be.
To save time with mapping and retargeting, use the A- and T-poses you recorded to quickly match up joints and rotations between your MetaHuman and motion data. It also really helps to have a MetaHuman that closely matches the body type of your performer, particularly in the shoulder region. I made the mistake of using a female MetaHuman in Assignment 1 when my performer was male, and you could clearly see the shoulder area bulging out as it tried to match my motion data.
When in Unreal, having your environment set up before making your abstraction can help a lot with cohesiveness. If you make your abstraction first and then drop it into your environment later, you may find that it doesn't fit the scene at all and you will need to remake it. I ran into this problem in Assignment 2: since we voted to use another group member's environment, the whole theme changed and my particle system no longer made sense. As mentioned earlier, the three major components benefit from being made in tandem, or at the very least from having a plan for each one.
To summarize, the main point I would push across to those undertaking a similar project is to have an idea for the whole project from the start so that the vision stays uniform, but be prepared to undertake a lot of experimentation when trying to find the balance between the influence of human motion and the influence of the computer-generated simulations, as well as the general look and feel of the visual components. Your first idea will likely not be your best, and that's absolutely fine. If you take the time to flesh out your ideas and build upon them with a close relationship between all the major aspects, you are bound to create an interesting and compelling work.