Izaac St Pierre ~ n11498188
For the purpose of Assessment 1, we are required to go through the production pipeline of motion capture, from capturing the data in real-time to using the data in a virtual space. Firstly, we will be learning about the technologies used and how we will capture the data in the motion capture studio. Then, we will be learning how to process the animation data to be usable and production worthy. Lastly, we are required to demonstrate that we were able to experiment with the animated mesh to formulate two proposals for our final film in Assessment 2: one a scene in Unreal Engine that a virtual character could dance in, and the other an idea for how I could abstract the dancer.
I will keep this journal updated as we progress, documenting each step in the motion capture process. My objective with this journal is to share my learning experiences and the challenges I encounter.
In the first studio session we were given a rundown of the motion capture system we will be using and how its different parts work together. Motion capture works by having an array of cameras that detect infrared light. The cameras all work in unison, sending the captured light data in real-time to a computer, which then calculates the marker positions within the space using data from all of the cameras together.
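To give a rough sense of the maths behind combining multiple camera views, here is a small Python sketch (my own illustration, not the studio software's actual algorithm) of how two cameras' sightlines toward the same marker can be intersected into a single 3D position. It uses a least-squares "midpoint" between two rays; real systems solve the same kind of problem across many cameras at once.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate a marker's 3D position from two camera rays.

    o1, o2: camera origins; d1, d2: direction vectors pointing from each
    camera toward the marker. Returns the point halfway between the
    closest points on the two rays (their intersection, if they meet).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Solve for ray parameters t1, t2 minimising |(o1+t1*d1) - (o2+t2*d2)|
    a12 = d1 @ d2
    denom = 1.0 - a12 * a12  # approaches 0 when the rays are parallel
    t1 = ((d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - (d2 @ b)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With noisy real-world detections the two rays rarely intersect exactly, which is why the midpoint (rather than a true intersection) is used, and why more cameras give a more stable estimate.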
The placement of motion capture dots is of particular importance; without correct placement, the software cannot correctly sync to the character rig. The dots are placed at key landmarks corresponding to the joints of a 3D character's skeletal rig.
In the second part of the studio class, six people in our class suited up so that we could get some hands-on experience. Firstly, we were put into groups and given the task of placing the motion capture dots on our fellow classmates. My team did a really good job and our person was ready for capture almost immediately.
Today we had a look at a facial capture system and got to experience it working in real time. Facial capture, as its name implies, captures the movement of an actor's face, and is almost crucial when trying to animate realistic and believable facial movements.
To make facial capture work, the actor wears a helmet with a camera attached, so that they can turn and move their head in any direction while the camera remains fixed and centred on their face. The capture process works by detecting reference points on the actor's face. While this can be achieved by having a computer use AI detection, high-quality productions use black markers placed at specific points on the face, much like full-body motion capture.
During our third week in the Game Development course, we focused on retargeting a prerecorded motion capture dance animation using Autodesk's MotionBuilder software. This was a key aspect of our academic journey, as it introduced us to character animation, a vital part of Game Development and Visual Effects.
After retargeting, I was met with a problem where some of the mesh vertices were not following the rest of the mesh; it seemed as though those particular vertices were not weighted to the skeleton properly. I thought I might have to go through the whole process again to fix it, but thankfully one of my classmates had the same issue and reported that it should be fine once imported into Unreal.
After we had the mesh targeted to the new skeleton, we were shown the process of cleaning up the animations. Unfortunately, motion capture is not a perfect fix-all solution, and animations still need to be tweaked so the meshes do not twist unnaturally, intersect, overlap, etc. Following my tutor's instructions, I went through the process of fixing up the bone rotations to address some of the problems, like incorrect feet rotations and the mesh collapsing at the shoulders. I have been reassured, however, that we will be doing a lot of post-animation tweaks in the next assessment, so I didn't go too far with this. After resolving these issues, I exported my rig ready for Unreal Engine, only to realise I had forgotten to turn the animations back on in the export options. So, back into MotionBuilder, and I quickly baked the animations again.
Vertex issues
Collapsed shoulder mesh
Fixed shoulder mesh
The following set of images shows my rigged and functional MetaHuman skeleton and mesh moving to a motion captured dance in MotionBuilder.
This week in our studio session we discussed an emerging technique: in-camera visual effects. In-camera VFX is a term used to describe capturing footage while the actors play out their scenes in front of a large screen with a high-quality CGI scene playing behind them. With correct positioning, perspective, and lighting, this effect can really sell the scene as believable, and it is currently being used by large studios in full-scale film productions. It is fascinating to me that this technology has reached a point where computer graphics can look so realistic that they are presented as real footage in cinema.
This week I set out to try some different effects on my character mesh. Firstly, I played around with Unreal Engine's amazing new Niagara particle system. I had actually played around with Niagara a little in week 2, where I applied a Niagara particle effect from the content examples to a different mesh, which you can see in the following video.
When I imported my rigged character this week, the first thing I did was apply the particle effect I had made previously to it. The entire time we have been doing this project I have been trying to imagine ways in which I can abstract the character; one of the first effects I thought of was a fiery effect, having pieces of their skin/mesh slowly disintegrate and float away. So when I first saw the mesh reproduction examples provided by Unreal, I got really excited. The effect uses a mesh reproduction system which, as you may have guessed, aims to reproduce a mesh with particles. It does this by sampling the positions of either the vertices or the triangular faces of the mesh.
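To illustrate the face-sampling idea in isolation, here is a small Python sketch (my own illustration of the general technique, not Unreal's actual Niagara implementation) that spawns particle positions uniformly over a triangle mesh's surface by picking faces weighted by area, then choosing a uniform barycentric point on each:

```python
import numpy as np

def sample_surface(vertices, faces, n, rng=None):
    """Spawn n particle positions uniformly over a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) integer vertex indices.
    """
    if rng is None:
        rng = np.random.default_rng()
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Pick faces with probability proportional to their area,
    # so particle density is even across the surface
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform point inside each chosen triangle (square-root trick
    # avoids clustering near one vertex)
    r1 = np.sqrt(rng.random(n))[:, None]
    r2 = rng.random(n)[:, None]
    return (1 - r1) * v0[idx] + r1 * (1 - r2) * v1[idx] + r1 * r2 * v2[idx]
```

Sampling faces rather than vertices is what keeps the particle cloud evenly dense regardless of how the mesh's triangles are distributed, which matches what I observed in the Unreal example.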
While this effect looks pretty amazing, I think the result is not what I want to go for. Since the reproduced mesh is made up of small particles, something just doesn't look quite right with it. It could be that it is a little uncanny, which is something we were asked to avoid.
I have a few different ideas for character abstraction. My first idea was to create a character that has tree roots or branches coming out from under its clothes and travelling along the limbs; by the end of the performance, the roots would completely consume the character. I think this would be achievable with Blender's Geometry Nodes system. However, I am unfamiliar with it, so it might not be the best option. An alternative could be the character slowly forming, showing veins expanding inside the body from the heart until they reach every part of the body, creating a full human shape. These might be pipe dreams for my skill set just yet, but I will save that one in my memory for later.
Another idea I had was to make a smoky character similar to The Enchantress from the movie Suicide Squad. I could make the character out of smoke that reacts to objects passing through it before reforming into the human shape as it continues dancing. This should be fairly easy to achieve, and I'm certain I could pull it off using Unreal's Niagara particle system.
What I would really like to achieve for my final project is to have the character look as though it is made of water but still conform to the human form; as the dancer moves around, the water sloshes and warps out of shape before merging back into the original form. I think I could pull it off given enough time. At this point I'm not exactly sure how best to reach the desired effect, although I do have some ideas.
My first idea was to use ray marching, a GPU shader technique that can smoothly blend different shapes together, which could be used to give the illusion of water merging. I'm not sure how feasible this is, and I would need to do a little more testing. I do know that ray marching uses signed distance fields, which are easiest to define for simple primitive shapes, so if I were to go down this path I think I would have to build the effect from a bunch of spheres, which might not look so great. I did experiment with this and got a ray marching shader working; however, I couldn't find a valid way of applying it to my character.
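For reference, the core of the technique can be sketched in a few lines. This is an illustrative Python version (the real thing would live in a GPU shader): a sphere signed distance function, the polynomial smooth-minimum that makes nearby shapes merge like droplets, and a basic sphere-tracing loop that steps a ray through the scene.

```python
import numpy as np

def sd_sphere(p, centre, radius):
    """Signed distance from point p to a sphere's surface."""
    return np.linalg.norm(p - centre) - radius

def smooth_min(a, b, k=0.3):
    """Polynomial smooth minimum of two distances. Blends two SDFs
    so close shapes appear to merge, like water droplets."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def ray_march(origin, direction, scene_sdf, max_steps=128, eps=1e-4):
    """Sphere tracing: advance along the ray by the scene distance
    until we are within eps of a surface (hit) or give up (miss)."""
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(origin + t * direction)
        if d < eps:
            return t  # distance along the ray to the hit point
        t += d
        if t > 100.0:
            break
    return None
```

A scene of two overlapping spheres combined with `smooth_min` already shows the merging behaviour; the open question for me is still how to drive the sphere positions from the animated skeleton.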
I contemplated using a feature built into Unreal Engine called Morph Targets, although I haven't utilized them yet. Morph Targets, as I understand, allow for blending one mesh shape into another. This got me thinking, how could I create the necessary animated mesh for this blending process? My proposed approach involves running the animated dance mesh through a fluid simulation in a different 3D program, such as Maya or Blender, and then incorporating it into the animations. However, this raised a valid question for me: if I were to take this route, would the use of morph targets still be necessary, or could I simply bake the fluid simulation into a mesh and pass it into Unreal Engine with a water material?
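As I currently understand morph targets, the blending itself is simple: each target is a full copy of the mesh with the same vertex order, and a weight scales each vertex's offset from the base shape. A minimal Python sketch of that idea (my own illustration, not Unreal's API):

```python
import numpy as np

def apply_morph_targets(base, targets, weights):
    """Blend a base mesh toward one or more morph targets.

    base: (V, 3) vertex positions; targets: list of (V, 3) target shapes
    sharing the base's vertex order; weights: one blend weight per target.
    Each target contributes its per-vertex delta from the base, scaled
    by its weight, so weight 0 is the base and weight 1 is the target.
    """
    result = base.astype(float).copy()
    for target, w in zip(targets, weights):
        result += w * (target - base)
    return result
```

Seen this way, the fluid-simulation question comes down to whether I need the blending at all: if the simulated mesh is already baked per frame, morph targets would only help for transitioning between the dry and watery states.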
At this point, I am reasonably confident that I can achieve the desired outcome, so long as I can successfully create a realistic fluid simulation in another program and bake it into a mesh. In the coming weeks I intend to test working with Morph Targets in Unreal Engine and baking fluid simulations into a mesh in Maya.
Below is a quick look at the material I have created in Unreal to try to showcase the effect I would like to achieve. I don't think it is quite there yet, but I still think it looks nice. I will note, it does not look nice when the character is moving around to the animations. If I am going to go with this effect I will need more control over where the deformations occur.
I have a few ideas for an environment; however, I didn't have the time to make them all, so I just made one. Below I have shared some reference images of scenes I was considering.
A street-side food market, inspired by the Old Town Market stage in the game Street Fighter VI
A grungy alleyway, illuminated by a faint glow of street lights and neon signs
An abstract scene with a collection of random geometric shapes and patterns floating around
For my scene I went with something that would complement my watery character: a dark and misty swamp/forest area with light streaming in. I thought it would be a good idea because it could say something about my character; perhaps they are some sort of elemental water sprite dancing in the swamp.
I have gathered some images to try to express the type of scene I am picturing: a swampy area with light piercing through the canopy onto the spot where the dance is performed, possibly with the character standing in a small amount of water. Typically, swamps are considered quite dark and scary places, though they don't have to be; under the right lighting conditions they can be quite beautiful and majestic. Also, I am happy to bend the environment guidelines and turn it into a forest area.
Below is a video of the final stage I got my MetaHuman to before this assessment comes to a close. The video showcases my swamp environment and a MetaHuman character with the water material I have created. I have thoroughly enjoyed this unit thus far and I'm excited to work on abstracting my character in Assessment 2. Thanks for reading.