Assessment 2
Music Video Creation
Izaac St Pierre ~ n11498188
For Assessment 2 of CGI Technologies, we are required to create a music video featuring an abstracted 3D character driven by motion capture. This phase has three key steps. Firstly, we will need to capture the data in the motion capture studio. Secondly, we need to clean up the data for production value. Lastly, we need to abstract the character and create the final music video.
To start this assessment, we were required to head into the motion capture studio and capture some dance takes for the final music video. The dance performers were other QUT students who take dance classes, so the performances were very good and we got a few good takes from three different dancers. Not a lot needed to be done in this session beyond running through the process of applying motion-tracking markers to the dancers and using Vicon software to record the takes.
This week I also touched base with my group members who will be making the final video with me. We had to decide on an environment to go with from among the different ones we had all created. My swamp scene does not blend too well with the other group members' ideas for abstraction, and I would like a scene that fits everyone's final character, so that it feels more authentic.
I proposed to my group that we go with one of their scenes, Lochlan's art gallery, simply because its simple design fits almost any style. I further suggested that we could each expand upon the scene in our own unique way. For instance, Lochlan himself is doing a stone sculpture character, so he could start with a basic room. Another person is doing a fiery character, so they could do charred walls and fire/embers. I could make the room look swampy or have water rushing in, and so on. While everyone agrees this is the plan we should go with, we need to hold off on these scenery ideas until the very end, and only if we have time. There is no point in one of us creating an amazing scene if the rest of us cannot; it would look out of place.
This week I have realised the amount of work I am going to have to put into creating the final abstraction, on top of the other assessments I have due at the same time. At this point, I feel as though I have put myself in deep water with my abstraction choice. I know I can do it, but will I have enough time? Then again, there is still a fair amount of time before it is due. This week is more about confirming to myself that I can do it, or deciding whether I need to move on to another style.
From the last assessment, I knew that simply deforming the mesh didn't look good when the character moved around. So, thinking of new ways I could reach the desired effect, my only other solution as of now is to use particles that follow the character.
I followed the tutorial mentioned in my last blog post to set up a test scene using Bifrost fluid particles that follow a moving character mesh. It worked okay but had some issues. The mesh moved faster than the particles could calculate per frame, and it required a lot of particles to get a high-quality recreation of the mesh. Using a low particle count made the result look imprecise and 'blobby.' I spent a good amount of time experimenting, creating custom Bifrost graphs, and trying different ways to optimize particles for a balance between quality and performance.
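To make the idea concrete, here is a minimal pure-Python sketch of the "particles follow a mesh" approach I was testing, assuming the mesh is reduced to a set of sample points. The function name and parameters are my own illustration, not anything from Bifrost:

```python
def step_particles(particles, velocities, targets, dt=0.04,
                   attraction=8.0, damping=0.9):
    """Advance particles one frame, pulling each toward its nearest
    target point (a sample taken from the character mesh)."""
    for i, (px, py, pz) in enumerate(particles):
        # Find the nearest mesh sample point for this particle.
        tx, ty, tz = min(
            targets,
            key=lambda t: (t[0]-px)**2 + (t[1]-py)**2 + (t[2]-pz)**2)
        vx, vy, vz = velocities[i]
        # Accelerate toward the target, then damp to limit overshoot.
        vx = (vx + (tx - px) * attraction * dt) * damping
        vy = (vy + (ty - py) * attraction * dt) * damping
        vz = (vz + (tz - pz) * attraction * dt) * damping
        velocities[i] = (vx, vy, vz)
        particles[i] = (px + vx * dt, py + vy * dt, pz + vz * dt)
```

Bifrost does this inside its graph along with real fluid forces; the sketch only captures the attraction-and-damping idea, and hints at why a fast-moving mesh can outrun the particles within a frame.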
After an extensive amount of time changing settings and testing different ways I could use Bifrost, I decided it would be a good idea to look at other ways of reaching the same effect; I didn't think it would be wise to put all my eggs in one basket.
So, firstly, I researched simulating fluid within the 3D software Blender. While Blender provides a simple way to simulate realistic-looking fluids, it ultimately does not offer the same amount of variation as Bifrost and is only applicable in certain scenarios; I was not able to get the fluid to follow a mesh in a convincing way at all.
While I was experimenting with Blender, I realised I wasn't sure if I could even output the simulation mesh to Unreal Engine and have it animate. After a quick search I found that you can export simulation caches to Alembic files, and that these files can be imported into Unreal Engine. So, I decided to test this first to make sure I wasn't wasting my time. Luckily, I was able to get it working pretty quickly; the following video shows a test fluid simulated in Blender, exported to an Alembic file, and imported into Unreal Engine.
A first look at Niagara Fluids plugin
Animated Alembic file working in Unreal Engine
This week I got to work cleaning up the data from the motion capture session. The process of retargeting didn't take too long, but I had some problems which needed to be addressed. The first was with the posture of the spine. I believe this happened because I had straightened out the spine for both models; after restarting from scratch, I left the spine in its original shape while straightening out everything else, which seemed to work a lot better.
Posture after retargeting with a straight spine.
Posture after retargeting with the default curvature of the spine.
The next thing I noticed was that the toes were not bending. I wasn't sure how to approach this because the MetaHuman skeleton has five toes, but the Vicon skeleton only has one. It took me a minute to realise that the image in the definition panel had a different position for the 'toe ball bone' than on the skeleton. Although it looks a little strange in the panel and on the rig, it works how it should to bend all the toes.
Again I began fixing the animation data, only to realise I had also made a mistake with the fingers. I had not extended the fingers all the way to the end on the MetaHuman skeleton, which caused the IK targets to be placed at the proximal knuckle, meaning I wouldn't get any bending in the distal knuckle.
After dealing with some issues and setbacks, the final task was fixing odd animations and mesh collapses. Surprisingly, my latest retargeting attempts produced better animations right away. However, a challenge arose when the character intermittently lifted off the ground, likely due to data-capture issues. To resolve this, I animated the mesh object on the y-axis rather than the control rig, ensuring better alignment with the ground plane.
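As a simplified illustration of that grounding fix: it amounts to shifting the mesh's vertical keys so the lowest value rests on the ground plane. This is my own sketch of the idea, not a Maya call:

```python
def ground_offset(y_keys):
    """Shift vertical translation keys so the lowest one sits at y=0.

    y_keys: list of (frame, y) pairs for the mesh's y-axis translation.
    Returns a new list with every key offset by the same amount, so the
    motion is preserved but the take no longer floats above the floor.
    """
    lowest = min(y for _, y in y_keys)
    return [(frame, y - lowest) for frame, y in y_keys]
```

A uniform offset like this keeps the relative motion intact; per-frame corrections would be needed if the lift varied across the take.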
Notably, the model without clothes exposed issues in certain areas like the groin and wrists, where collapsing was more pronounced. Unfortunately, I couldn't entirely prevent the excessive amount of collapsing in these areas, instead working to minimize bending in these regions.
For a more authentic look, I used floor contacts for the feet and hands. In hindsight, this approach introduced more issues, and if I were to redo it, I'd avoid this step, as the original motion capture animations were already quite good.
Additionally, attempts to add rotation limits to joints for preventing collapsing initially looked promising but ultimately disrupted the natural look of many base animations. Consequently, I opted to forgo this strategy.
The extreme case of wrist and groin collapsing
Fluid simulation is a complex computational technique used to simulate the behaviour of fluids like water, smoke, or fire within a virtual environment. It's achieved by using many particles and applying mathematical equations to these particles to model their interactions, such as viscosity, pressure, and velocity. These equations are solved iteratively, creating a series of frames that simulate the fluid's movement over time. The resulting frames are then used to give the illusion of realistic fluid dynamics.
Turning particles into a mesh typically involves a process called "surface reconstruction". To create a mesh from these particles, you need to connect them in a way that forms a continuous surface. One common technique is to use the "Marching Cubes" algorithm, which analyzes the density of particles in a grid or 3D space and determines where surfaces should be. It then generates triangles that connect the particles in a way that approximates the fluid's surface. The resolution and quality of the mesh depend on factors like particle and 3D grid (Voxel) density.
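A toy version of the first stage of that pipeline, accumulating particles into a voxel density field that a marching-cubes pass would then triangulate, might look like this. The helpers are my own illustration, not Bifrost's API:

```python
def density_grid(particles, grid_size, cell):
    """Accumulate particle counts into a cubic voxel grid: the density
    field that marching cubes would later turn into triangles."""
    grid = [[[0 for _ in range(grid_size)]
             for _ in range(grid_size)]
            for _ in range(grid_size)]
    for x, y, z in particles:
        i, j, k = int(x // cell), int(y // cell), int(z // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size and 0 <= k < grid_size:
            grid[i][j][k] += 1
    return grid

def inside(grid, threshold=1):
    """Voxels whose density meets the threshold count as 'inside' the
    fluid; the surface would be built along the inside/outside boundary."""
    return {(i, j, k)
            for i, plane in enumerate(grid)
            for j, row in enumerate(plane)
            for k, d in enumerate(row)
            if d >= threshold}
```

This also shows where the quality trade-off lives: a smaller `cell` (denser voxels) resolves finer detail but needs many more particles to avoid holes in the density field.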
Diagnostic Voxel preview
I jumped right into running simulations; I knew it was not going to be easy. For the most part, things seemed to be going well, and I was able to apply my previous test runs with Bifrost fluids to the new character model.
I was able to push the quality of the mesh to a decently high poly count, giving me a fairly good representation of the original model. I did have problems when trying to go much higher than I already was. Meshes began to show cavities for no reason, and large portions of particles would just not spawn in.
While I was able to get the particles to follow the mesh fairly well, there are some weird problems to deal with. The following image shows a long streak of particles that have, for unknown reasons, decided to converge and follow each other. Surface tension, viscosity, and friction have all been set to zero, and there are no other options I can see to remedy this.
As you can see in the following pictures, the particles in the fingers have converged into one thin line. The particles will never expand away from each other, and if they moved around long enough they would converge to a single point. I don't think this is how the particles should behave; they are given free movement outside of being attracted to the mesh, so I feel they should be pushing away from each other. However, there is no option to change the distance between fluid particles in Bifrost. I am pretty sure this is an oversight in the way Bifrost works and not something that can be fixed on my part.
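For what it's worth, the behaviour I was hoping for amounts to a simple pairwise repulsion pass. The following is my own sketch of that missing control, not anything exposed by Bifrost:

```python
import math

def apply_repulsion(particles, min_dist=0.1, strength=0.5):
    """Push particle pairs apart whenever they are closer than min_dist,
    so the swarm keeps its spacing instead of collapsing to a line."""
    n = len(particles)
    for a in range(n):
        for b in range(a + 1, n):
            ax, ay, az = particles[a]
            bx, by, bz = particles[b]
            dx, dy, dz = bx - ax, by - ay, bz - az
            dist = math.sqrt(dx*dx + dy*dy + dz*dz) or 1e-9
            if dist < min_dist:
                # Split the correction evenly along the pair's axis.
                push = strength * (min_dist - dist) / dist
                particles[a] = (ax - dx*push*0.5,
                                ay - dy*push*0.5,
                                az - dz*push*0.5)
                particles[b] = (bx + dx*push*0.5,
                                by + dy*push*0.5,
                                bz + dz*push*0.5)
    return particles
```

The O(n²) pair loop is only workable for a toy example; a production solver would use a spatial hash, which is presumably why this is a solver-level feature rather than a user setting.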
I thought it might be a good time to try out other similar abstractions I could use as a backup in case the original didn't work out. I liked the idea of having fluid slosh around inside a glass character. Setting this up was pretty simple; I only had to give the mesh collisions and make it a shell. For the most part this worked fairly well; however, I ran into problems when the character really started moving and limbs were being swung around at high speed. No matter what I did, no matter how thick the collision mesh was, I could not get the particles to stay inside the shell. I figured it could be that the mesh has very tight spaces where too much pressure was building up, which would force the liquid to push through. So, I tried to create a lower-poly mesh and smooth out these areas; that didn't work. I tried creating a really thick outer mesh; that didn't work. Then I thought I would just add a kill volume outside the mesh so any particles that did escape would instantly disappear. The problem with that is that the particles were being ejected faster than they could be replaced, which broke the illusion of fluid moving around inside.
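The kill-volume idea itself reduces to a containment test run every frame; anything outside the "safe" region is deleted. A hedged sketch with an axis-aligned box standing in for the real volume:

```python
def kill_volume(particles, bounds_min, bounds_max):
    """Keep only particles inside the axis-aligned safe box; anything
    that escapes the shell is culled, as a kill volume would do."""
    return [p for p in particles
            if all(lo <= c <= hi
                   for c, lo, hi in zip(p, bounds_min, bounds_max))]
```

This is exactly why it broke the illusion: culling is instant, but emitting replacement particles takes time, so fast leaks drain the shell faster than it can refill.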
Eventually, I went back to the original idea of having the fluid follow the mesh. I decided to tweak some values and let the simulation run overnight, hoping it would hold together well enough. I woke up to find that Maya had closed, with no error message and no simulation data recorded. So I tried again, simulating three hundred frames at a time and checking that each part was working. To my surprise, everything worked well: the simulation data was recording and the fluid was holding its shape. I was also able to take what was already recorded and convert it to an Alembic file with no problems.
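Conceptually, the chunked workflow is just a loop with a checkpoint after every chunk, so a crash costs at most one chunk of work rather than a whole overnight run. A minimal sketch, with `step` and `save_cache` standing in for whatever the software actually runs per frame and per cache write:

```python
def simulate_in_chunks(step, total_frames, chunk=300, save_cache=None):
    """Run step(frame) for every frame, writing a cache after each chunk.

    step:       callable invoked once per frame number.
    save_cache: optional callable invoked with the last completed frame,
                e.g. something that writes an Alembic/Bifrost cache.
    """
    frame = 0
    while frame < total_frames:
        end = min(frame + chunk, total_frames)
        for f in range(frame, end):
            step(f)
        if save_cache:
            save_cache(end)
        frame = end
    return frame
```

The final partial chunk is still cached, so nothing after the last full checkpoint is lost either.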
So I continued simulating in chunks, but at around 1300 frames I started to run into some serious problems that I knew were only going to get worse over time, and I had another 4300 frames to go. Bifrost has a weird problem where all the particles end up converging into a single point, and there doesn't seem to be a way to solve it.
At this point, I've invested well over 100 hours in attempting to simulate particles using Bifrost. However, the further into the simulation I get, the more complications arise, resulting in a gradual collapse of the particles upon themselves. Despite implementing various fixes, such as introducing new particles and adjusting motion field values mid-simulation, none proved satisfactory. After careful consideration, I concluded that rectifying these issues would demand an excessive amount of time. Consequently, I've opted to pursue the final aesthetic using Niagara fluids.
This week saw me grappling with numerous approaches, investing considerable effort and time in the endeavor. I explored extensive methods to address the challenges, including closing the mesh, re-rigging a new skeleton, and subsequently retargeting and animating. Additionally, I experimented with a volumetric approach, creating a dense mesh using voxels, only to find it incapable of containing the fluid effectively. I also experimented with transient particles, spawning and dying within a frame or two, but the resultant mesh failed to produce the desired effect.
While it's disheartening to discard the substantial volume of simulation data, the endeavor has not been in vain. I appreciate the valuable learning experience Bifrost has provided. Despite the challenges, it remains a potent tool for VFX, and I anticipate leveraging its capabilities in future projects.
Now that I have a better understanding of how fluid simulations work, setting up Niagara fluids was a quick process. It took me less than a day to create a fluid simulation that follows the mesh as desired.
Niagara achieves its fluid surface look differently from other fluid simulation methods. Instead of creating a mesh, it uses signed distance fields rendered with ray marching in a shader. The shader efficiently determines distances between shapes and uses them to rasterize pixels on the screen; in this case, it calculates distances to the particles and uses a threshold to decide when to draw the fluid. Interestingly, this is exactly what I had envisioned and wrote about in my first blog post for Assessment 1. The effect is so efficient that I can scrub through the animation in real time without caching the fluid.
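As I understand it, the core of that technique can be sketched in a few lines: a signed distance function over the particle set, plus a sphere-tracing loop that steps each ray forward by the current distance. This is a CPU-side Python illustration of the idea, not Niagara's actual shader code:

```python
import math

def particle_sdf(p, particles, radius):
    """Signed distance from point p to the fluid surface: distance to
    the nearest particle centre, minus the particle radius."""
    return min(math.dist(p, c) for c in particles) - radius

def ray_march(origin, direction, particles, radius,
              max_steps=64, hit_eps=1e-3, max_dist=100.0):
    """Sphere-trace along a (normalised) ray: each step advances by the
    SDF value, which by construction can never overshoot the surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        d = particle_sdf(p, particles, radius)
        if d < hit_eps:
            return t          # hit the fluid surface at distance t
        t += d
        if t > max_dist:
            break
    return None               # ray missed the fluid
```

Because no mesh is ever built, there is nothing to cache, which matches the real-time scrubbing; the trade-off is that standard mesh-based lighting has nothing to shade.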
However, there are some drawbacks to using Niagara Fluids. Firstly, the dance animation covers a significant area, so I need to extend the simulation bounds; to maintain detail, I then have to increase the voxel count, which impacts performance. Secondly, there's a limit to the number of voxels Unreal can handle before crashing. In my simulation, I've pushed the bounds just beyond what I need and increased the voxel count to the limit before it crashes. This means the details I have now are as fine as I can get them; I would have liked finer details like ears and fingers, but it's not feasible at this point. Lastly, since the effect isn't actually a mesh, there's no way to achieve correct light shading, at least from my understanding. This results in no shadowing or accurate light bouncing, giving the character a somewhat flat, pasted-in appearance.
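The bounds-versus-voxel-count trade-off is easy to quantify: for a fixed voxel budget, growing the simulation bounds forces larger voxels and therefore coarser detail. A rough back-of-envelope helper of my own, not an Unreal setting:

```python
def voxel_size_for_budget(bounds, max_voxels):
    """Smallest uniform voxel edge length that keeps a grid covering the
    given (x, y, z) extents within a total voxel budget.

    Each voxel occupies edge**3, so edge must satisfy
    volume / edge**3 <= max_voxels.
    """
    sx, sy, sz = bounds
    volume = sx * sy * sz
    return (volume / max_voxels) ** (1.0 / 3.0)
```

Doubling one axis of the bounds at the same budget makes every voxel about 26% larger, which is why extending the simulation area for the dance cost so much fine detail.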
As the final week of submission approached, I endeavoured to collaborate with my team to ensure cohesion in the final product. While I hesitate to criticize, it's evident that my segment of the music video differs significantly from the rest, and I think I should address that here. While I acknowledge my additional experience, I expected a better collective effort from my team.
The suggestions and questions I tried to communicate would go unanswered for days, if at all. All the suggestions for how we should put the final video and environment together were my own, and they were disregarded in the end. I offered solutions for how we could build upon the scene as stated in the criteria; one out of five agreed and said it was a great idea, while the rest ignored the question. One person rendered their final take in the original scene without making any changes, and without telling anyone. I told the group I wanted to make a new room with the same design, and a few of the other group members said they were going to do the same. So I spent half a day creating a nice new environment.
As the deadline neared, I took on the task of compiling the final video, since everyone seemed to be avoiding it. I collected the takes, only to find they lacked synchronization and any new scene content; no one had made a new room as suggested. Forced to improvise, I crudely looped the audio. Faced with the choice of re-rendering my take or discarding my scene, I was disappointed by the lack of commitment from the team. Managing a project for a team that didn't invest effort was disheartening. Apologies for the rant.
Motion Capture is an incredibly innovative technology that demands a deep understanding of its intricate workings. Integrating this technology into the realm of artistic abstraction is like merging two different universes. In this final post, as I conclude this extraordinary journey through the intricacies of motion capture, I wish to share my transformed perspective with you. It is my hope that by doing so, I can provide some insight into what might appear to be a daunting task. Over the course of this experience, I encountered numerous challenges and setbacks, but, in the end, I was able to emerge victorious.
The first part of these assessments took us through the fundamentals of motion capture, what is its purpose, and how it works. Capturing motions is not just about donning a suit and recording movements; it is a comprehensive process that touches upon every aspect of animation. The nuances of skeletal motions, the intricate interplay of joints, and the subtleties of body language are all factors that should be considered. To truly capture the fluid essence of motion, one must dive deep into these intricacies.
The subsequent phase in this assessment centred on mastering the art of retargeting, a pivotal step bridging the gap between captured data and virtual characters. In the retargeting process, our goal is to ensure that one character's skeleton replicates the movements of another, particularly the skeleton from the captured data. This phase presented formidable challenges. Even minor discrepancies, such as misalignment or incorrect bone rotations, may initially go unnoticed but can manifest as significant issues later on.
Compounding this challenge is the diversity among character models. Each character boasts unique attributes, including differences in size, bone count, and overall shape. This inherent variety can itself present a challenge. It requires experience and a discerning eye to determine the most effective approach to make the retargeting process seamless and successful.
Through my experience, I discovered this to be where the most attention to detail needs to be focused. If I were to do it over again, knowing what I know now, I would take the time to test every twist and every bend before I jumped into animating. This is the part where, if something is wrong, it is not always easy to undo. Solving problems here will save you frustration later on.
In the world of motion capture, animation becomes a simpler but no less vital process. We are not so much "animating" in the traditional sense, but fine-tuning and perfecting what real people have already acted out for us. This is where the true beauty of motion capture emerges.
Instead of starting from scratch, we're entrusted with preserving the authenticity of human movement. Think of it as caring for the real-life performances and ensuring they remain faithful to the original. This straightforward process creates a unique connection between performance and technology, where we celebrate the genuine expressions of the performers in the digital realm.
Our journey in motion capture, from data capture to retargeting and animation fixing, is a fascinating transformation. We're not just animators; we're like digital magicians, bridging the real world with the digital one. This transformation goes beyond animation; it's a transfer of real-life into the immersive digital world.
Retargeting is where we make this transfer happen. We take the fluidity of skeletal motion and translate it into the language of digital characters. This process is a mix of art and science, where we carefully infuse the subtleties of human movement into our digital creations.
Fixing animations is like adding the final touches. It's the phase where we refine the raw data, making sure our digital characters move with both technical precision and a human touch. Through these phases, we're not just dealing with technology; we are the custodians of real-life performances. It's a journey of digital rebirth, where the real world's authenticity finds a new home in the digital one.
The process of abstracting a digital model can vary in complexity, depending on your chosen approach. It's a multifaceted procedure that benefits from careful consideration and perhaps warrants its own detailed breakdown into various possible paths. The key takeaway here is that the level of difficulty depends on the complexity of the abstraction you intend to pursue.
In the realm of realistic fluid simulation, where you aim to push the boundaries of what fluids typically do, be prepared for a challenging journey. Even for someone like me, with prior knowledge of simulations and techniques for creating unique effects, this commitment proved more problematic than I initially expected.
The combination of realism and abstraction, while fascinating, can indeed pose its share of difficulties. It's a delicate balance to strike, especially when attempting to fuse the tangible aspects of realism with the imaginative realm of abstraction. Looking back on my choices, I realise that I might have chosen one of the most complex abstractions to explore. However, in the pursuit of artistic growth, it's often the most challenging paths that lead to the most rewarding outcomes.
As I reflect on this insightful journey through motion capture, I'm left with a changed perspective. Motion capture is not just a technological process; it's an art in itself. It's a marvel of technology, where precision, adaptability, and an artistic sensibility can converge to bring characters to life.
In conclusion, the integration of motion capture into a real-time animation pipeline is an intricate undertaking, but it's one filled with rewards. The challenges I encountered were not obstacles but stepping stones, each teaching me invaluable lessons. It's a realm where technology meets artistry, and with patience, precision, adaptability, and a keen artistic eye, we can create animations that transcend the boundaries of mere technicality and become truly immersive and engaging.
This exploration has expanded my horizons into the world of VFX, and I hope that by sharing my experiences and core considerations, I can offer guidance and inspiration to others embarking on similar journeys. Motion capture is a fascinating world where technology meets creativity, and with the right approach, we can breathe life into characters in ways that were once unimaginable.
I hope you enjoyed reading about my experience through learning about motion capture and CGI technologies, and I thank you for reading. I will leave you with the rendered music video from my group and myself.