MARK AUMAN | N10752340
The first assessment of this project revolves around developing a proposal for the music project that will later be developed as part of a production team. This proposal will include a concept for an abstract metahuman form and the environment it will inhabit. A proof of concept animation of the abstract metahuman within the proposed environment will also be developed to showcase its feasibility for the final project.
As part of this unit, we undertake 6-hour intensive studio sessions to gain a deep understanding of motion capture systems and workflows. In the first session, we were introduced to the QUT Virtual Production Studio space and the basic pipeline for capturing motion capture data, and briefly covered transferring that data into Unreal Engine, something which will be detailed further in the next session.
In the first part of the session, we were shown how the space works. Surrounding the performance area is an array of motion capture cameras used to capture performances within it. The motion capture system being used was optical, and I was quite surprised to learn how dependent the system is on being able to 'see' the markers on the mocap suits, as I had assumed the cameras were only needed for framing reference rather than spatial tracking. Upon doing further research into motion capture, I found that optical motion capture is one of the two main forms of motion capture, with the other being inertial motion capture. Inertial motion capture makes use of inertial sensors (gyroscopes and accelerometers) placed on the mocap suit to turn the movement of the performer into animation data, which was more in line with my initial understanding of how motion capture works. Due to this, I was quite surprised to learn that optical motion capture is the system more frequently used by high-budget productions, while inertial motion capture is newer, cheaper, and more often used by lower-budget productions.
After being shown how to calibrate the volume, cameras and subjects, we broke off into teams and helped some students into their mocap suits by placing retroreflective markers on them. I was quite fascinated by just how many markers we placed on each suit, and was even more impressed that the motion capture software was able to tell which markers we were missing with pretty much pinpoint accuracy; our team in particular had missed just the outer ankle markers. This made me understand why optical motion capture is the preferred method for high-budget productions, as the high spatial-depth accuracy of the cameras allows the system to capture performances with incredible precision. With calibration done, we began capturing some motion capture performances.
In the first capture session, the performers were asked to pretend to be various animals, which was quite bizarre to watch in real time.
Once the capture was finished, however, we could then replay the result in the motion capture software, and it was fascinating to see just how accurate the capture was, even before being cleaned up.
Later on in the session, I was able to direct one of the motion capture performances we did. The premise of this performance was that the performers were all people in a club: one person would be a DJ, three would be dancing, one would be a drunk patron and another would be the bartender. While everyone else is having a good time, the drunk patron would try to get a drink from the bartender but would be cut off by them. They would then walk off and end up vomiting on one of the dancers, aggravating them and causing a fight. As everyone freaks out amongst the commotion, the bartender pulls out a gun and fires it to scare everyone off. It was through this performance that I understood the importance of providing direction and using props during motion capture performances, as without them, the performers don't have much to go off other than their imaginations.
After capturing the performance, we watched it back. One of the performers asked if I had any notes, and I mentioned that it took some time for everyone to react to the bartender pulling out the gun, so I suggested that either the bartender or someone else do an action or say something to draw attention to it, so that everyone in the scene reacts faster.
We did one more take of the scene, and this time the performer playing the DJ brought more attention to the bartender pulling out the gun, and as a result the performers' reactions were more in sync. We were happy with this take and so we moved on to the next activity.
After all the teams had captured their performances, we were briefly shown the pipeline for taking motion capture data from the capture software into Unreal Engine. The performance is first captured in Vicon Shogun Live and processed in Shogun Post, then taken into Autodesk MotionBuilder to be cleaned up and retargeted, and finally loaded into Unreal Engine, where the motion capture data is applied to metahumans as animations.
Through this session I gained a better understanding of the optical motion capture system and the workflow involved in capturing performance data. At the end of the session, I asked some questions to clarify the assignment, and found that its focus is to demonstrate our understanding of the motion capture, data clean-up and retargeting workflow more so than the proposed design of the abstract metahuman form and environment (these are more important for the second assignment). Due to this, the design of the abstract metahuman form should be kept simple: the recommendation was to stick to material changes, make use of the Niagara particle system, potentially parent Quixel assets to the body, and avoid anything that requires changing the base mesh or rig of the metahuman. Using this as a basis, I started brainstorming ideas for what my abstract metahuman form could be.
I did some brainstorming to figure out what sort of abstract metahuman forms I would be interested in developing. The four initial ideas I had were a hive-like body that emits bees, a magma body composed of lava, rocks and fire, a stone-like ruins body with pillars poking out of it, and a nature body composed of roots, branches, bark, vegetation and fireflies flying around them. After doing quick sketches of each, I felt the ruins and magma bodies were a bit too generic. While the hive body intrigued me, I felt the bee particles might be hard to accomplish, and more crucially that there wasn't much experimentation to be had with variations in the appearance of the hive-like skin. In contrast, I felt the nature body concept would allow for more experimentation with the look of the body, using different branches, vegetation and bark with varying placements to create different looks until I find one I am happy with.
With an initial idea for my abstract metahuman decided upon, I started looking around for references that I could base my concept art around. I searched for reference images of people and humanoid figures made of trees, bark, roots, vines, moss and any other vegetation I could think of, and put together a mood board containing these references.
These references provide a good foundation for the concept of my abstract metahuman. In particular, I like the image in the centre of the humanoid lying down with bark/wood-like skin, as I feel it would be quite feasible to recreate and would act as a good base for additional details and features. For those additional features, these images inspired me to add roots, clumps of moss, and even some sparsely placed flowers all over the body to add visual interest and help my metahuman feel more abstract; I feel the two images on the far right of the mood board and the face made out of roots at the top best visualise what I am going for. There is a lot of experimentation to be had with this concept, so from here I will do some rough thumbnails and concept art to get an idea of what I will aim to create in Unreal. When I get to that stage, I will have more freedom to experiment with the specific placement of plant matter on the body to find what is most visually interesting and appealing.
I proceeded to sketch out various abstract metahuman forms based on the initial concept of a nature-based body. I started off with the roots/vine body base I had initially thought of (1), and did a variation with some branches acting as thorns and antlers (2). Using my mood board as a reference, I tried seeing how a bark-based design (3) would look. It was a bit too simple for me, so I did a variation with roots growing over the body (4), but after evaluating the design, I felt making the bark feel layered might be hard without changing the mesh, which is something I wanted to avoid. Overall, from this first batch of designs, Design 2 appealed to me the most.
In the next design, I used Design 2 as a base and added some patches of moss to break up the base body texture (5). I liked how this looked and experimented with it further (6 and 7), with these designs adding a dirt body base and patches of dirt to experiment with in later designs. I also experimented with adding flowers (8 onwards), as I felt they could be used to add elements that stand out on the body; along with that, I introduced a grass body base which I quite liked. By Design 9, however, I was feeling the designs were getting quite messy due to the added elements clashing with the root body base. I would later revisit this design philosophy in Design 13, but again felt the overall direction was too 'busy'.
In Designs 10 and 11 I tried designs similar to previous ones (6 and 8) but replaced the root body base with a grass body base, and felt the designs had more visual clarity, indicating that perhaps a full root body base is too visually busy despite my attachment to it as my initial idea. With Design 12 I did an even simpler design where the humanoid had a 'shirt' made of roots and shorts made of moss. While an interesting design, I don't think it fits the nature theme well enough. As mentioned before, I experimented with a root body base covered in numerous elements (13), but it only reinforced that I needed a more simplistic design.
With my final thumbnail design (14), I stuck with a grass body and experimented with adding wooden plates over certain parts of the body. I quite like the simplicity of the grass body with these wooden plates placed on top. I wasn't fully happy with the design; however, it did give me some good insights into what I am looking for in my final concept. The base body needs to be simple, so as much as I like the root body base, I feel a grass body base will be more suitable and feasible, although I may do some testing in Unreal to see what fits best. I gravitated towards the designs that featured branch antlers, so I felt the final concept should also feature them. The wooden plates provided some clear definition to the figure, but I felt they weren't organic enough; replacing them with pieces of wood that are more irregular should make the design feel more natural, and should be easier to source in Quixel Bridge. Lastly, I liked the addition of flowers throughout some of the designs, so I feel the final concept should include them to help break up the repetitiveness of the body base.
Taking what I had learned from the thumbnail sketches, I created a rough concept sketch for the design of my abstract metahuman.
I have named my abstract metahuman concept the 'Woodsman', in reference to the fact that the form is mostly composed of wooden objects. The base of the body is primarily this sort of dirt-and-root skin. Located on the forearms, upper arms, thighs and shins are pieces of wood, similar to the wooden plates from my thumbnail designs, which I felt added some visual interest. Flowers also feature throughout the torso and stand out against the primarily brown body. A large flower has been placed on the forehead, with the branch antlers on either side of it, creating a visually interesting headdress. Lastly, fireflies surround the antlers and the hands of the Woodsman, which I plan to implement through the Niagara particle system, the intention being that as the metahuman dances around, the fireflies create visually interesting trails that enhance the performance.
To help visualise the final design, I did a coloured rendition of the sketch. Here it is easier to distinguish the dirt/root-looking skin from the wood pieces located on various parts of the body, and it brings some visual clarity to what decorative elements such as the flowers and fireflies will look like in the final design.
Previously, I had said that I thought the grass body base would be best to use; however, when experimenting in Unreal and applying a grass material to a metahuman head, I found that I didn't quite like it. I felt it would be too repetitive and generic, even if I added elements like flowers and pieces of wood to the design.
I did find this roots-like material, however, and felt it had enough variation to be visually interesting even before adding other elements, and therefore could act as a good base. Because of that, it influenced the way I drew the concept sketch for the Woodsman abstract metahuman.
Speaking of metahumans, I've spent some time messing around in the Metahuman Creator and created my own metahuman.
I was then able to take that metahuman into Unreal, which I could potentially use to test animation data on, or even use to set up my abstract metahuman; although it's more likely I'll need to start off with a base metahuman mesh that I can more easily apply materials to.
In Studio 1 Session 2, we focused on learning how to take our motion capture data and do live retargeting so it can be streamed into Unreal Engine.
To do this, motion capture data from Shogun was streamed into MotionBuilder 2022. In MotionBuilder, the Vicon rig data was aligned to roughly match up with the metahuman rig data, so that the motion capture data could be applied to the metahuman accurately. Then, through LiveLink, the data from MotionBuilder was streamed into Unreal so that the motion capture data could be applied to metahumans in real time.
Some technical issues occurred throughout the session, but these were most likely caused by network issues, as data was being streamed between multiple devices, rather than by any actual setup issues.
With this setup, we were able to record motion capture performances directly in Unreal. Something that I found fascinating and useful is that recordings in Unreal are not camera-based but based on objects that you select to record. I feel this gives a lot more freedom and versatility with the motion capture performances you take, as you can apply them in different scenarios and environments easily.
In addition to the introduction to live retargeting, we also played around with facial capture. The setup we used mounted an iPhone on a head rig attached to one of the mocap performers; with the LiveLink app running on the phone, the facial data it captured was streamed to the linked metahumans in Unreal.
For my environment concept, I felt a nature-based environment would be suitable. I looked for reference images of clearings in the middle of forests, and was especially drawn to images that featured objects such as rocks and logs placed around the centre of the image, along with images that included some fog or volumetric lighting; all elements I will look to include in my own environment.
Using the references I gathered, I put together a concept sketch for my environment. The environment is circular, surrounded by a ring of tall trees, and the further you get from the centre, the more the fog sets in, providing depth to the scene. The ground will primarily be grass, with little tufts poking out here and there. At the centre of the environment will be a roughly circular patch of dirt that the abstract metahuman will dance within. Surrounding this area are some tall stones, placed to give the camera potential spots to pass behind so a transition can take place between the current abstract form and the next, something which will be important for assessment 2. Lastly, I played around with simulating some lighting, and felt that having a sort of spotlight to imply light breaking through the trees and onto the metahuman would help focus attention there.
Through this abstract metahuman and the environment, I want to create a naturally beautiful scene, but also one that is somewhat eerie, giving an almost cult-like atmosphere with the metahuman in the centre of these strategically placed stones, and the fog making the environment feel mysterious and secluded.
At this point I have concepts for what my abstract metahuman form and environment will look like, and the two studio sessions have covered how to capture motion capture performances and, briefly, how to retarget that data into Unreal. Before making the proof of concept versions of my metahuman and environment, I thought it best to take some time to learn the pipeline for taking captured motion data, retargeting it to a metahuman, cleaning it up and importing it into Unreal for real-time performance.
To help with this process, I exported the skeletal meshes of the different components that make up the metahuman of myself I had made earlier.
Taking those into MotionBuilder, I spent some time assigning the joints of the metahuman rig to the slots shown in the Character Controls menu so that the rig could be recognised for mapping/retargeting purposes.
Next I imported some motion capture data from Vicon. As I am still learning the motion capture data pipeline, I made use of the buddy take provided on Canvas so I could follow and compare my results to the resources provided.
With both the metahuman and Vicon mocap rigs in MotionBuilder, I proceeded to adjust the two so that their poses were aligned. It was recommended to put both into a T-pose, which was quite easy as it just required zeroing out the rotations of certain joints, or setting rotations to 90 or -90 depending on the orientation of the rig.
When I was happy with how closely matched the two rigs were, I set the metahuman rig to base its movement off the Vicon mocap data, allowing it to follow the captured motion.
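For future reference, this characterise-and-drive step can also be scripted with MotionBuilder's Python API (pyfbsdk). The sketch below is a minimal, simplified version of what I did by hand in Character Controls; the character and joint names ('root', 'Hips') are placeholders rather than my actual scene contents, and only the Hips slot mapping is shown.

```python
# Minimal pyfbsdk sketch: characterise two rigs and drive one from the other.
from pyfbsdk import *

def characterize(char_name, hips_joint_name):
    """Create a character, map its Hips slot, and characterise it as a biped."""
    char = FBCharacter(char_name)
    hips = FBFindModelByLabelName(hips_joint_name)
    # Each Character Controls slot is a list property named '<Slot>Link'.
    char.PropertyList.Find('HipsLink').append(hips)
    # ...the remaining slots (Spine, Head, arms, legs, fingers) map the same way...
    char.SetCharacterizeOn(True)  # the scripted equivalent of locking the characterisation
    return char

metahuman = characterize('MetahumanChar', 'root')  # placeholder joint name
vicon = characterize('ViconChar', 'Hips')          # placeholder joint name

# Set the metahuman character to base its movement off the Vicon character.
metahuman.InputCharacter = vicon
metahuman.InputType = FBCharacterInputType.kFBCharacterInputCharacter
metahuman.ActiveInput = True
```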
Reviewing the retargeted mocap data, the metahuman followed the animation quite well despite some issues with the feet and the arms periodically clipping through the torso, although these would be fixed later. There was a more concerning issue, though.
It's a bit clearer to see in this video, but for some reason some of the mesh around the head did not translate with the rest of the head, resulting in distorted vertices left floating in space.
Upon closer inspection, it seems some vertices in the face mesh were trying to stick to the origin of the scene. I initially thought there was something wrong with the way I had set up and retargeted the rigs, so I redid the whole process, making sure I followed the tutorials thoroughly and had all the correct settings, but got the exact same result. I concluded that this was most likely an error in MotionBuilder and shouldn't be too much of an issue once I imported the mocap animation into Unreal, as there the animation would only affect the rig and not the mesh itself.
The last thing I needed to do in MotionBuilder was some cleanup, as some parts of the body, such as the feet, were oriented strangely.
As someone who doesn't have much animation experience, I was quite fascinated by the process of being able to add animation layers on top of the base animation in order to make adjustments to it and 'clean up' the data.
With the mocap data retargeted and cleaned up, I exported the metahuman back into Unreal.
By creating a level sequence, adding the metahuman to the track, and then applying the animations imported from MotionBuilder, I was able to achieve the result shown above. It can be observed that the floating vertices issue I was experiencing in MotionBuilder does not occur in Unreal, seemingly confirming my earlier speculation that the issue was confined to MotionBuilder.
I wondered if the fix had simply been a result of the final data cleanup in the schematic window, but upon reviewing the final version of the retargeting files after that cleanup, the displaced mesh vertices were still there. Nonetheless, I should keep an eye on such issues in the future and take note of any effect they have on the results of my work.
I also learned how to do live facial capture and apply it to the metahuman's face, which I recorded so I could combine the facial capture with the motion capture performance.
By then adding the facial capture animation as a track to the face of the metahuman, I was able to combine it with the motion capture performance to create what is shown above.
Now that I had learned the basic workflow for applying motion capture data to a metahuman in Unreal, it was time to start developing my abstract metahuman.
Earlier on I had set up a metahuman head from a video recording I took, resulting in it having a facial structure pretty similar to mine. I thought it would be cool to try and generate a full metahuman from this and use it as a base for my abstract metahuman.
When I went to download the full metahuman, I felt that its facial structure was different, even taking into consideration the different lighting setup. What I'm guessing is happening is that the head generated in Unreal from my recording is similar to a metahuman head, but not completely the same, as I would later find the properties of the two to be slightly different. The standalone head likely has more liberty in recreating the subject's face, whereas a head generated in MetaHuman Creator has to fit within the constraints of the property sliders used to define a metahuman's features. So when the geometry data of the standalone head was uploaded to MetaHuman Creator to generate the full metahuman, the more accurate likeness of my actual appearance was lost because it needed to fit those constraints.
Regardless, I was able to download the full base-texture metahuman. I confirmed that the heads were indeed different (although they still had similarities, supporting my previous theory), and moved on to developing it to look like my abstract metahuman concept.
The first issue I ran into is that the generated metahuman came with flip flops, while my concept had the metahuman barefoot.
I couldn't find a way to generate a metahuman in Metahuman Creator with just bare feet, but luckily it was an easy fix, as I just had to go into the metahuman's blueprint and delete the footwear.
Next, I applied the base root material to the metahuman. First to the body, then to the head.
Interestingly, when zooming away from the metahuman, the head lost the texture. This was due to the fact that the head mesh uses different materials at different levels of detail (LODs).
By applying the same root texture to the different LOD material slots, I ensured the root texture would appear at any distance.
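This fix can also be scripted with Unreal's editor Python API, which would be handy if the material ever changes and every LOD slot needs updating again. The sketch below is one way to do it, editing the mesh asset's material slots directly; the asset paths are placeholders, and it assumes the LODs pull their materials from slots on the skeletal mesh asset rather than from component-level overrides.

```python
import unreal

# Placeholder asset paths; substitute the real head mesh and root material.
HEAD_MESH = '/Game/MetaHumans/Woodsman/Face/head_mesh'
ROOT_MAT = '/Game/Materials/M_Roots'

head = unreal.EditorAssetLibrary.load_asset(HEAD_MESH)
roots = unreal.EditorAssetLibrary.load_asset(ROOT_MAT)

# A skeletal mesh holds one entry per material slot, and different LODs
# reference different slots, so write the root material into every slot.
new_slots = []
for slot in head.get_editor_property('materials'):
    new_slots.append(unreal.SkeletalMaterial(
        material_interface=roots,
        material_slot_name=slot.get_editor_property('material_slot_name')))
head.set_editor_property('materials', new_slots)
unreal.EditorAssetLibrary.save_loaded_asset(head)
```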
Overall I am quite happy with how easily the base root texture applied. The ground part of the texture provides a nice clean base, while the roots wrap around the body quite nicely.
I then decided to try and put the antlers on the model, and found this branch model in Quixel which would represent them nicely.
Placing them on the head, they seemed to fit quite nicely.
I ran into an issue, however, in that despite them being placed and parented to the head of the metahuman, they did not move with it, resulting in them floating in space during the animation sequence.
After looking around a bit, I discovered that I needed to specify a socket of the metahuman rig that the object needed to be parented to.
After properly assigning the sockets of the antler branches and moving them into place, they attached properly to the head and moved with it.
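For the record, the same socket attachment can be done from the editor's Python console. Below is a rough sketch assuming the metahuman and the branch are already actors in the level; the actor labels are placeholders, and note that grabbing the first skeletal mesh component on a metahuman may return the face rather than the body, so this is a simplification.

```python
import unreal

# Look up actors by their World Outliner labels (placeholder names).
actors = unreal.EditorLevelLibrary.get_all_level_actors()
by_label = {a.get_actor_label(): a for a in actors}
metahuman = by_label['BP_Woodsman']
antler = by_label['SM_AntlerBranch']

# The metahuman body skeleton exposes its joints as sockets; 'head' is the
# socket the antlers need to follow.
body = metahuman.get_component_by_class(
    unreal.SkeletalMeshComponent.static_class())
antler.attach_to_component(
    body, 'head',
    unreal.AttachmentRule.KEEP_WORLD,  # keep the placement set by hand
    unreal.AttachmentRule.KEEP_WORLD,
    unreal.AttachmentRule.KEEP_WORLD,
    False)                             # don't weld simulated bodies
```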
I then proceeded to add some tree bark models I found on Quixel to the upper arms and forearms, making sure to socket them to the correct joints of the metahuman.
The addition of these 'wooden plates' looked quite nice and I felt added some dimension to the base root texture.
My original concept art also had some bark on the hands, but due to the setup of the rig joints, I couldn't get the piece of wood to sit nicely on the back of the hand without it lifting off at points, as the joint of the hand is closer to the wrist than to the actual middle of the hand, so I ended up removing the hand wooden plates.
I proceeded to apply similar tree bark models to the thighs and shins of the metahuman to complete the wooden plate look from the concept art.
Overall, I think the wood plates help to add some more visual interest to the metahuman in a simple but thematically appropriate way.
The other main decoration on the model are the flowers that sprout on the head and parts of the body. I found these pale reddish-purple flower bunches to be the most similar to the flowers depicted in the concept art and used them on the metahuman.
I was a bit concerned about how I would parent the flowers to different parts of the torso, but found it to be surprisingly easy, as they only needed to be parented to the closest spine socket. It was here it clicked for me that, much like how the rig influences different parts of the mesh based on its weighting, the sockets influence the objects parented to them in a similar way, acting as pivot points; as long as objects are parented to a pivot point that makes sense, they follow the movement of that socket quite nicely.
Overall, I quite liked the addition of the flowers, however, I felt the design could be unified a bit more. The wooden plates and the flowers currently have no relation to each other, and so I went about trying to fix that.
To achieve this, I added smaller instances of the flower bunches to the wooden plates, feeling that doing so would make the individual elements feel more related to each other, and less like pieces that have just been placed haphazardly on the metahuman.
With the addition of these tinier flower bunches, I felt the design of the abstract metahuman had more of a theme throughout it.
With the major physical decorations added to the metahuman, the next step was to add the fireflies, which would fly around the antlers and hands of the metahuman. I found this package on the Unreal Marketplace which contained a firefly particle system. Not knowing how to use Niagara, I thought it would be good to see if I could get this working instead.
I imported the package and placed some firefly particles on the hands of the metahuman, although it didn't really produce the effect I wanted.
In the video above, you can see that the fireflies are initially bunched up around the hands as intended, but as the metahuman moves its hands, the fireflies spread out and rotate in strange ways. This was most likely due to the particle systems being socketed to the hands, causing the entire systems to move and rotate whenever the hands did.
After tinkering around in the settings of the particle system, I found a setting to simulate the emission of particles in local space rather than world space. In the video above, you can see its effects. While the fireflies now follow the hands of the metahuman, they do so in a way that I felt was unnatural and visually uninteresting. My intended vision was for the fireflies to create interesting trails as the metahuman dances around, and this particle system was getting me nowhere.
I decided it would be best to watch some Niagara tutorials to get a better understanding of how I could modify this firefly particle system to do what I wanted. While watching the tutorials, I noticed the setup of their Niagara blueprints was different to the one I was using above, leading me to believe the particle system I had downloaded was not actually using Niagara. I had assumed it was, but nowhere on the asset page did it say the system used Niagara, and I had neglected to consider that the package had been developed with its own custom system.
I then found out how to create my own Niagara particle system blueprint, and instantly found that it allowed me more freedom to customise the behaviour of the particle system. I spent some time experimenting with the spawn rate, spawn radius and lifetime of the particles, getting it to a point where it looked similar to the third-party firefly effect.
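Once the Niagara system exists as an asset, attaching an instance to each hand can also be outlined in Python. The sketch below is assumption-heavy: the asset path and actor label are placeholders, and whether the particles trail through world space or ride rigidly with the hand is still governed by the emitter's Local Space setting inside the Niagara system itself, not by this call.

```python
import unreal

FIREFLIES = '/Game/FX/NS_Fireflies'  # placeholder path to the Niagara system
system = unreal.EditorAssetLibrary.load_asset(FIREFLIES)

actors = unreal.EditorLevelLibrary.get_all_level_actors()
woodsman = next(a for a in actors if a.get_actor_label() == 'BP_Woodsman')
body = woodsman.get_component_by_class(
    unreal.SkeletalMeshComponent.static_class())

# Attach one firefly emitter per hand socket (metahuman bodies use the
# standard mannequin bone names, so 'hand_l'/'hand_r' exist as sockets).
for socket in ('hand_l', 'hand_r'):
    unreal.NiagaraFunctionLibrary.spawn_system_attached(
        system, body, socket,
        unreal.Vector(0.0, 0.0, 0.0), unreal.Rotator(0.0, 0.0, 0.0),
        unreal.AttachLocation.SNAP_TO_TARGET,  # sit right on the socket
        True)                                  # auto-destroy when finished
```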
Eventually I got the result you can see above. Wanting to see how it would look on the antlers, I applied the effect to them too.
With this done, I tested the animation and got the result shown above (the direct lighting has been lowered to highlight the firefly effect). Here it can be seen that as the metahuman dances, the fireflies create these interesting trail patterns, which I feel adds visual interest and enhances the performance of the metahuman. Overall, I am quite happy that my custom solution for the fireflies effect works as intended, and that the overall aesthetic of the Woodsman metahuman looks quite nice.
This abstract metahuman is still a proposal for the final project, however, so the overall look and effects applied to it are not final and can be iterated upon to further enhance its look and performance. So far, though, it provides a good proof of concept for how materials, Quixel assets and particle effect systems can be used to construct my final abstract metahuman.
In Studio 1 Session 3, the focus shifted slightly from motion capture to in-camera visual effects. In-camera VFX involves the use of one or more large walls of LED panels displaying a 3D environment, with a physical camera mapped to the space, so that when the camera moves around filming performers and props, it gives the illusion that they are really within that environment. I'm quite fascinated by this filming technique, not only because it makes use of game engine technology, but also because of the issues it fixes with more traditional techniques such as filming in real-life locations or on green screens. Filming in real life, while the most accurate, can be largely dependent on the lighting and weather conditions of the location being favourable, increasing the cost and time to film if they're not. Green screens provide a good solution for controlling the environment, but at the cost of performers not being able to visualise the space while filming, and they rely heavily on post-production work to realise the vision of the scene. In addition, green screens often 'spill', scattering green light onto subjects in the scene. In-camera VFX combines the best of both worlds, giving the production team control over the environment while allowing the performers to better visualise the setting they are in, and keeping the lighting of the scene accurate.
I was vaguely familiar with this technology beforehand due to The Mandalorian, but I will admit I was a bit skeptical about how good it would be. While watching some clips of the technology, and even during The Mandalorian, I felt that at points there was a loss of depth, but I suspected this had more to do with the limitation that the actors can't really interact with the background environment and vice versa, which made me subconsciously feel a divide between the performers and the environment. When the output of the camera was shown, however, I was surprised by how much it looked like the car was really in the outback environment being displayed. It was explained that several factors influence the believability of this system, ranging from fundamentals like making sure the camera is aligned properly and the system is running at a good frame rate, to subtle details such as making the car move up and down slightly to simulate driving, or wind blowing in the faces of the performers.
I always wondered how exactly the physical camera was linked to the Unreal system, and was fascinated to find out that the physical camera is mapped to a virtual camera in an Unreal scene, and that the distance between the physical camera and the volume screen has to be calibrated/measured to match the distance between the virtual camera and the plane that displays the view of the world (handled by Unreal's nDisplay system).
The thing that fascinated me the most, however, is the fact you can move objects and make adjustments to the scene in real time. I played around with the system a bit, moving around some trees and road signs, as well as trying different camera shots and lighting setups, all of which was updated in real-time. This in my opinion is the greatest feature of this system, as it would allow production teams to be able to make changes on the fly to tailor the environment to their liking, significantly reducing the reliance and amount of work done in post-production.
With the proof of concept of my abstract metahuman complete, I now need to create a proof of concept of my environment to complement it.
I started off with an empty plane, placing my own metahuman at the centre for scale reference.
Referring back to the concept art, the element that populates the environment the most are the trees, so finding assets to represent those was a good place to start. I initially started by looking in Quixel and found these trunk assets that I thought would be a good starting point.
While the base of the trunk was a good one-to-one match with the concept art, I was concerned that the model abruptly cut off where the tree would branch out and grow leaves. I felt if I were to use just the trunk assets from Quixel, I would be limiting the potential shots for my take, as filming in an upwards direction would reveal this unnatural geometry. The trunks would also not cast shadows containing big blobs of leaves, which I feel would be expected in a forest scene.
Therefore I started looking for some full tree assets I could use in my environment. I found this European Hornbeam Tree asset pack on the Unreal Marketplace and felt its trees would work well.
I brought the asset pack into the project to test the tree assets, was happy with the models provided, and began exploring more of the pack to figure out how I would arrange the trees to produce the environment I wanted.
I selected these six tree assets to construct the bulk of my environment. The three trees on the left would form the inner circle as they were smaller and had less leaf spread, which would allow them to better define the circle that the main environment is composed of. To fill out the scene and provide the illusion of the full forest, I planned to use the three bigger trees on the right to surround the inner circle of smaller trees.
I started by marking out a circle around my metahuman using one of the smaller trees.
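Placing the ring by hand worked, but since the environment is circular, this is also an easy thing to script. Below is a small sketch of the idea using Unreal's editor Python; the asset path, radius and count are assumptions to be tuned by eye.

```python
import math
import random
import unreal

TREE = '/Game/Hornbeam/SM_Hornbeam_Small'  # placeholder tree asset path
tree_asset = unreal.EditorAssetLibrary.load_asset(TREE)

CENTRE = unreal.Vector(0.0, 0.0, 0.0)  # the metahuman stands here
RADIUS = 1500.0                        # ring radius in cm (assumption)
COUNT = 12                             # trees in the inner ring

for i in range(COUNT):
    angle = (2.0 * math.pi / COUNT) * i
    loc = unreal.Vector(CENTRE.x + RADIUS * math.cos(angle),
                        CENTRE.y + RADIUS * math.sin(angle),
                        0.0)
    # Give each tree a random yaw so the ring doesn't look copy-pasted.
    rot = unreal.Rotator(0.0, 0.0, random.uniform(0.0, 360.0))
    unreal.EditorLevelLibrary.spawn_actor_from_object(tree_asset, loc, rot)
```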
I then took the viewport camera and placed it at the maximum distance I could without clipping into one of the trees, to ensure I had enough space to move the camera around the environment, taking into consideration the potential shots I will do later in the project.
To ensure that the inner circle did not feel too repetitive, I replaced some of these initial trees with some other types of trees and added in some smaller trees in between the original circle.
Already a forest environment is starting to form, although the edges of the plane can still be clearly seen, which is why an outer ring of large trees is needed.
I started surrounding that initial tree circle with the larger trees, which from a birds-eye view, makes the centre of the environment look a lot like a forest clearing.
Even with these large trees added, however, I ran into an issue: there were still large gaps between the trees allowing a view of the plane beyond. This was because the trees had taller, more slender trunks than I had expected, with the leaf-bearing parts starting higher up.
I was able to find a solution in an element I had disregarded earlier. The branchless trunks I had originally considered had much thicker bases, so by scaling them up, I could place them behind the other trees to fill the gaps.
With the leaves of the inner circle of trees blocking the awkward cut-off at the top of the trunk, it now looks like a natural part of the environment.
I got a few trunk models and placed them around the outer circle of trees, which did fill some of the gaps, but it was still far too open for my liking. I didn't want to place any more of these trees, for fear that it would look too repetitive (as well as for what it would do to my computer). Therefore, I needed to find another way of obstructing the view between the trees.
My original concept art detailed that there would be fog in the environment, growing thicker the farther out from the centre you go. I set about trying to achieve this, and after playing around with the scene's exponential height fog and enabling volumetric fog, I achieved the following effect. A lot of the view of the plane beyond the environment was now obstructed.
Some tweaks and calibration were needed, however, as the volumetric fog would at points obstruct closer parts of the scene.
After modifying the starting distance of the volumetric fog and increasing its density, I was able to get the desired effect, where the clearing in the middle of the forest stays clear, but the fog gets denser the farther into the forest you go.
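For reproducibility, the same fog setup can be applied from the editor's Python console. The property names in the sketch below are my best guesses read off the details panel, so treat them as assumptions to verify with get_editor_property if they error.

```python
import unreal

# Find the ExponentialHeightFog actor already placed in the level.
fog_actor = next(a for a in unreal.EditorLevelLibrary.get_all_level_actors()
                 if isinstance(a, unreal.ExponentialHeightFog))
fog = fog_actor.get_component_by_class(
    unreal.ExponentialHeightFogComponent.static_class())

# Assumed property names (check against the details panel):
fog.set_editor_property('volumetric_fog', True)    # enable volumetric fog
fog.set_editor_property('fog_density', 0.05)       # overall thickness
fog.set_editor_property('start_distance', 1500.0)  # keep the clearing clear
```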
I added a grass texture to the floor, and looking at the scene from an upwards angle, you can see the fog lends a very mystical atmosphere to the scene.
I was curious to see what my abstract metahuman would look like in the scene, so I brought it in and was quite happy, although I felt the lighting was quite flat.
I experimented with the colour of the lighting and found this blue hue, which affected the lower parts of the scene, to be quite nice; it made the scene feel more moody, although a side effect was that the plane beyond the forest was visible again.
I experimented with the lighting some more and found this dark green hue that paired well with the overall cult-like aesthetic I was going for. I also changed the direction of the light to be directly overhead to add to the ominous atmosphere. Thanks to the fog, I was able to add more trees to darken patches of the scene where the plane was visible in the previous photo.
The fog looked especially nice when looking at the treeline above, as it sort of seeped through the leaves, further enhancing the mood of the scene.
The original concept art also featured a dirt patch in the centre where the Woodsman stood. I was able to find a Quixel asset I could use to represent this. The colour is a bit lighter than I would want it to be, but I liked the natural shape of it and still felt it was a good representation of what I was going for. I also added some small plants around the edges of the forest to provide some more variety to the ground plane.
The last elements to be added were the stones that surround the Woodsman. I needed something tall and slender and struggled at first, with the only suitable rock I could find in Quixel being the one above.
I scaled the rock to be slightly taller and thinner, and then placed six of them in a circular arrangement around the Woodsman.
This completed my creation of the environment, at least to a proof of concept stage. More work could be done on the texturing and decoration of the ground plane to fine-tune it, and the circular formation could be composed of more varied rocks, but the current iteration provides a good visual for the intended vision.
I feel this proof of concept still gets the mood of the scene across, and this is largely due to the lighting and the fog. The fog obstructing the view of what lies beyond the forest gives the scene a mysterious vibe and makes the clearing feel somewhat abnormal, which is further enhanced by the almost precisely placed stone formation around the Woodsman. I did change the lighting slightly at this point, from directly overhead to a slight angle, as I felt it provided more interesting light shafts and better-looking shadows.
Lining up the camera to roughly match the original concept art, I feel I have created a real-time environment that closely matches it, although the trees are a bit more spread out, which I feel is a given in order to make the scene feel more realistic, as well as to ensure I can easily get shots.
In Week 5, we formed groups for the second assessment, and in our first group meeting, while discussing what sort of movements we wanted to capture in our assessment 2 capture session, it was brought up that we need to consider the space in which we are capturing data and how that relates to the environments we are creating. I realised I hadn't really considered the full capture space, and the circular rock formation might limit the performance space, so I slightly increased the radius of the rock circle to allow for more room. I may find that I need to increase this radius further if the performance we capture extends beyond the current size, but that's what's great about real-time environments: we can edit them on the fly quite easily to better fit the performance.
Data Selection
The next step was to clean up and retarget some data from one of the data capture sessions we had done. Essentially, I am taking what I learned in the first lab session about the motion capture data workflow and showing that I can apply it to another set of motion capture data. The first step was to select one of the motion capture takes we did to use as data for cleanup and retargeting. At the time of doing this, the Week 3 capture data folder was missing the exported FBX files, so I only looked at the Week 2 capture data. I wanted a performance that showcased some dancing or reasonably vigorous movements, as going through the cleanup and retargeting workflow on this data would give me good practice and experience for when I need to clean up and retarget the dance performance for assessment 2. After looking through the takes, I eventually settled on Take 14 of Week 2.
Exporting The Skeletal Meshes
This take had the performers doing quite a few dance moves that I felt would be good to clean up and retarget. Going from left to right, I felt performers 1 and 5 had quite clean performances with a range of movements that would be good to work with, but ultimately I selected performer 1's performance, as I felt they had more varied movements overall that would present more interesting cases for data cleanup and retargeting.
While I could just use my base metahuman for this process again, I felt it would be good to try and do the process on my Woodsman abstract metahuman to see what sort of issues crop up with it rather than find out later down the line. The first step was to export the skeletal meshes that comprise the Woodsman.
Setting up the Metahuman Rig in Motion Builder
Interestingly, unlike last time, there were no legs, torso or feet skeletal meshes to export. The feet were easily explained: I had deleted the footwear earlier, as I did not want any on the Woodsman. As for the torso and legs skeletal meshes, I think they're absent because the base of this metahuman did not have any clothes, and therefore there are no meshes for those parts to export. This somewhat simplifies the process, as I only need to export the body and face meshes.
I then imported the exported skeletal meshes into a new scene in Motion Builder.
Here's another view of the skeletal mesh with the joints of the rig showing.
Next I started the process of characterising the rig so that Motion Builder could understand the structure of the metahuman.
I find it quite fascinating that areas such as the spine and the neck have slots for a large number of bones; for the metahuman rig, however, only two are needed for the neck, for example, and the last joint is used to control the head. The extra slots are most likely there to allow for more complex movement with higher-fidelity rigs, but it's still interesting that the system can work with fewer joints.
The shoulder is another area that has multiple segments; however, I only assigned the longer bone to the first slot, as that is what was done in the tutorial. At first I felt I should assign the highlighted bone above to the second slot, but after studying it for a while, I felt it was more of a hinge-like joint than an actual bone, which is why it wasn't assigned when I previously did this process. If there are any strange distortions with the shoulder, I'll try this process again and make sure I assign this bone.
Another thing of note is that after assigning the hand and finger bones, I got errors saying the arms weren't parallel and the right hand was missing bones. This also occurred during my previous attempt at this process, but it is interesting nonetheless that such an issue exists.
This is an easy fix, however, as I can just go ahead and manually assign the finger bones of the right hand so that they are properly set up.
Interestingly, when I went to characterise the toe bones, the same issue didn't occur and the symmetry worked as intended.
Reselecting Motion Capture Data
I then imported the capture data from Take 14, but it was at this point I realised that this take did not start and end with an A- or T-pose, which would make lining up the rigs and the subsequent data cleanup and retargeting difficult, so I quickly found a new take I could use.
After looking over the takes again, I decided to use Take 12, which did have the performers go into an A-pose at the start and end of the take. In particular, I quite liked the dance performance of the performer in the back; while there was another performer who danced more vigorously, they were knocked out by another actor and didn't have much more movement after that, so I selected the performer in the back as they kept moving for longer.
Preparing The Motion Capture Data
When I imported the take into my MotionBuilder project, I ran into an issue: for some reason the data had altered the position of the metahuman skeletal mesh, which moved along with the performance at points. If I recall correctly, one of the performers was using a gun prop which had markers on it as well, so what I think must have happened is that when importing the animation, MotionBuilder didn't know what to do with the extra markers and instead assigned them to the metahuman mesh.
At first I didn't know what to do, but I eventually got into the schematic view, where I could see the node trees that make up each of the performers. I figured that if I deleted the ones for the gun prop, I could delink the metahuman from the animation data, and realised I would need to do this step regardless, as I needed to remove the trees of the other performers and keep only the one I needed, which I isolated as the Lachlan rig.
I was able to delete all the other performers and get Lachlan's mocap rig on its own, but there was still a lot of unrelated marker data, and the metahuman still moved with the gun prop markers.
There turned out to be a big tree of unlabelled markers that I also needed to delete in order to get rid of the unwanted markers.
The extra markers were gone, leaving only Lachlan's rig, but for some reason the metahuman rig was still off centre and still had the gun prop animation applied to it.
After some research and experimenting, I found I needed to select the root object of the metahuman, which contains the data for the bones that make up its rig, go to the Key Controls panel, select Animation and clear all of the animation properties.
After doing that, zeroing out the position and rotation of the metahuman's root, and reorienting the Lachlan rig, the metahuman rig was clear of animation, leaving just the Lachlan rig's animation.
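That manual 'clear all animation' step can also be expressed in pyfbsdk, which makes it clearer what is actually happening: every keyframe curve on the root's channels gets emptied, then the transform is zeroed. A rough sketch, with 'root' as a placeholder joint name:

```python
from pyfbsdk import *

# Placeholder name for the metahuman's root joint.
root = FBFindModelByLabelName('root')

# Empty every keyframe curve on the root's translation and rotation channels,
# the scripted equivalent of Key Controls > Animation > clear.
for prop in (root.Translation, root.Rotation):
    node = prop.GetAnimationNode()
    if node:
        for channel in node.Nodes:  # the X, Y, Z sub-channels
            if channel.FCurve:
                channel.FCurve.EditClear()

# Then zero out the transform so the rig sits back at the origin.
root.Translation = FBVector3d(0, 0, 0)
root.Rotation = FBVector3d(0, 0, 0)
```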
Something to note is that the Lachlan rig is off-centre, with the origin point of the mesh being quite off. I'm not sure if this will have any effect on the retargeting process, so for now I will proceed with the rest of the retargeting process.
When I went to characterise the Lachlan rig, the neck and head bones weren't configured properly, so I quickly assigned them manually.
The arms of the Lachlan rig were also not symmetrical, but this is fine, as the next step is to orient both rigs into a T-pose, which will bring them into alignment.
Preparing The Rigs For Retargeting
The rotations I needed to apply were slightly different from those for the buddy take, most likely due to the initial orientation of the two rigs being different, so I took some time to figure them out.
After adding a slight bend in the arms and zeroing out the legs, the Lachlan rig had been set up in a t-pose stance.
Orienting the metahuman rig was quicker, as it was pretty much the same as I had done for the other metahuman rig in the buddy take process.
I then lined up the two rigs to see how well the rotational planes matched and how close in pose they were. The metahuman was slightly taller, although this may be due to the Lachlan rig's heel joints being flatter on the ground; other than that, the two were quite a good match.
When I went to retarget the motion capture data to the metahuman, however, I got this strange effect where the arms of the metahuman went up and the head was bent oddly.
I couldn't figure out where I had gone wrong with the retarget, so I decided to check whether the rigs had been set up correctly. It turned out that the Vicon rig hadn't been fully characterised properly, with the spine being the biggest offender. The top spine bone had for some reason been assigned to the bottom node of the character's spine, with the rest of the spine bones left unassigned.
The fingers weren't set up correctly either.
After fixing those issues, however, realigning and retargeting the rigs only seemed to make things worse. I redid the process twice, making sure I followed the exact same steps as with the buddy take, but no matter what I did, the same result occurred. I felt there were a lot of different factors that could have created this scenario. For one, I suspect the prop motion data applying an animation to the metahuman rig interfered with things, even more so when I cleared the animation, as that may have had unintended effects on the rig's setup. That may then have caused issues with the Vicon rig characterisation, which would explain why the bone assignment was so inaccurate. I felt that for the moment I couldn't solve the issue, and decided to try some other motion capture data.
I started fresh and imported motion capture data from Take 2 of the Wednesday session in Week 2. In this take, the performers were asked to act out the movements of different types of animals. Most notably, because this take didn't have props, no stray mocap data would be applied to the metahuman rig, so theoretically there was less chance of the retargeting being messed up.
The performer I chose from Take 2 was once again Lachlan, which was a coincidence, but a good one, as I could test whether there was an issue with the process I was following, or whether it was perhaps an issue with the way Lachlan had been dotted up, and if so, use a different performer.
Lining up the data, it looked somewhat promising.
But once again, when I went to retarget, I got similar results.
At this point I wondered if there was just something wrong with the Lachlan rig, so I set up another performer, this time Will, and started mapping it. Notably, there were no errors in Will's characterisation process that I had to fix, which was a good initial sign.
However, once again, the arms failed to retarget properly. The feet were also a bit of an issue, but that was less major and could be fixed with data cleanup. The arms were far more noticeable, and I would rather get them retargeting correctly first before applying data cleanup, which I feel is better suited to more subtle fixes.
After a few more hours of experimenting, I finally got the metahuman retargeted to the motion capture data. I rewatched the lab videos to make sure I was doing the process right. In the video, it was highlighted that both characterisations should be locked before proceeding to retarget. I checked that I had done that, but decided to unlock and then relock the characterisations to make sure. After unlocking the Vicon data characterisation, Will's rig snapped back to its initial A-pose. I realised I had been adjusting the characterisation while it was locked, so my rotation changes weren't actually being applied to the characterised rig. So I redid all the alignments, making sure the characterisations were unlocked, and relocked them once they were ready to retarget; the arms then finally retargeted properly.
Here is a video showcasing the raw retarget of the data. As can be seen, the data already looks quite good, ignoring the vertex mesh bug in MotionBuilder that I've discussed before. Now that I knew what was causing the retargeting issues, I could go back and retarget the other data I was trying to use; however, the Lachlan rig data also had some strange issues with the head that I don't think can be solved by unlocking and locking the characterisations, so I am happy with the current data. I also feel this performance goes well with the theme of my Woodsman abstract metahuman, and that the movement will provide good opportunities for data cleanup due to the bounce poses that occur frequently throughout it.
The next step was to clean up the animation data. As I was preparing to do that, however, I instantly noticed that the feet of the metahuman were raised when they were supposed to be flat on the ground.
Now that I knew I could safely unlock my rigs to edit them, I unlocked them to compare their feet. The Vicon rig had been adjusted to have the feet and toe bones flat on the ground, whereas the metahuman is set up with its foot bones slightly raised above the flat toe bones.
I reconfigured the Vicon data so that the foot bones more closely resembled the metahuman rig.
This resulted in the two rigs lining up a lot more closely.
Which also looked great at the start of the take.
And the desired effect of having the feet start flat on the ground was achieved, with the legs of both rigs matching up really nicely.
Due to the difference in the arm and spine alignment, however, those segments of the body are slightly off. This is fine, though, as it just tells me where I should look when cleaning up the data.
Cleaning Up The Data
After plotting the character, I started to examine the animation. The most common pose throughout the animation was this bunny-like stance. For the most part this pose looked quite clean despite the difference in the arms since they are just hanging out in front of the body.
The legs, however, had some issues. In the first image above, the lower thigh near the knee is quite deformed, and later on there is quite a lot of intersection between the calves and the thighs. Since these bounce poses are quite frequent, I will need to fix these deformations so the animation doesn't look as strange.
To start off, I locked the base animation so that I would not interfere with it and then added a new animation layer called 'Bounce Adjustments'. I felt that raising the hip node would be key to fixing up these leg deformation issues, so I added a keyframe to it on the first frame of animation so I would have a reference point.
At the first point where I felt the bounce pose caused the most deformation, I selected the hip node.
I then raised the node so that the thighs and calves didn't intersect as much, the idea being that keyframing this adjustment as early as possible saves me from having to adjust it again further down the track. A sketch of this layer-and-keyframe workflow is shown below.
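For reference, here is a small sketch of that layer-and-keyframe step in pyfbsdk. The layer name matches mine, but the joint label, frame number and offset are illustrative; I did the actual adjustment interactively in the viewport.

```python
from pyfbsdk import (FBSystem, FBPlayerControl, FBTime,
                     FBFindModelByLabelName, FBVector3d)

take = FBSystem().CurrentTake

# Create the non-destructive layer on top of the locked base animation
# and make it the current layer so new keys land on it.
take.CreateNewLayer()
layer_index = take.GetLayerCount() - 1
take.GetLayer(layer_index).Name = 'Bounce Adjustments'
take.SetCurrentLayer(layer_index)

# Jump to the frame where the bounce pose deforms the legs the most.
FBPlayerControl().Goto(FBTime(0, 0, 0, 74))  # illustrative frame

# Raise the hip node slightly and key its translation on the new layer.
hips = FBFindModelByLabelName('Woodsman:Hips')  # placeholder label
t = hips.Translation.Data
hips.Translation = FBVector3d(t[0], t[1] + 3.0, t[2])  # illustrative offset
hips.Translation.Key()
```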
The other issue was that the legs at certain bends looked quite deformed around the rear knee.
To fix this, I moved the hip node back a bit so that the deformation in the rear knee wasn't noticeable, and keyframed it.
I was worried that these adjustments, by causing the metahuman rig to not line up as well with the vicon data, would make the animation look worse. I scrubbed to a few frames after the latest adjustment, however, and found I was still happy with the state of the animation. Overall, the essence of the bounce pose was maintained, while the adjustments ensured that parts of the model like the legs did not deform as much.
For other deformations, such as the case above, moving the hips didn't do much; in this particular instance, only the right leg was a problem.
The solution I implemented was keyframing the right leg and moving it slightly to the right so that the knee wasn't as deformed. This looked quite natural, as the metahuman was turning to the left in the animation, making it look like the right leg was providing support as it turned.
The issue was that when it went back to travelling in a straight line, the right foot was now too close to the left one.
To fix that, I adjusted the position of the right leg to match the vicon data more closely and keyframed it, putting the metahuman back into a more natural pose.
The adjustments so far had the side effect of making the metahuman rig lean towards the right when the vicon data showed it should be straight on.
To fix this, I dragged the hip node back to centre it between the legs and keyframed it, resulting in a much more natural-looking pose.
From there I didn't have as many issues with the bounce poses. However, when I reached the end of the animation, the metahuman was up in the air; I realised I had neglected to set a keyframe at the hip node for the animation's end pose.
So on the last frame I dragged the hip node down so that the feet rested on the ground plane like the vicon data, and then keyframed the position, resulting in the metahuman rig ending on the ground rather than in the air.
When I went to review my cleanup so far, I realised I wasn't very happy with it. My adjustments to the bounce poses had made the legs snappier and less natural in their movement; what I had gained in the realism of the mesh was a loss in the fluidity and realism of the animation.
Rather than scrap all the work I had just done, I decided to reduce the influence of the Bounce Adjustments animation layer, which was currently at 100%. The whole point of animation layers is to allow for non-destructive edits to the base animation while letting those edits have different levels of influence. The current influence of the Bounce Adjustments layer was far too strong, so I reduced it to 30%, with the above video showing the result. The output is a lot smoother, while still retaining some of the deformation fixes present in the original 100% influence version.
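Dialing a layer's influence down like this is essentially a one-liner; a sketch, assuming the layer was created as above (FBAnimationLayer.Weight is a 0-100 percentage):

```python
from pyfbsdk import FBSystem

take = FBSystem().CurrentTake
for i in range(take.GetLayerCount()):
    layer = take.GetLayer(i)
    if layer.Name == 'Bounce Adjustments':
        layer.Weight = 30.0  # was 100; keeps the fixes but restores fluidity
```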
There are still some issues with the animation that need to be cleaned up. It can be seen in the previous video that around frame 205, the right leg jitters after the metahuman jumps up. To fix this, I removed the keyframe at 211, which was the one that moved the hip slightly back, as I felt that keyframe was forcing the animation to snap very quickly into the specified position, causing the unnatural jitter in the legs.
The next thing I needed to fix was some jitters in the knee at this point where the performer is pretending to clean themselves with their hand.
I was able to apply some keyframes to the knee to stop it jerking as much from one side to the other, but the twitching was still fairly constant and affected more than just the knee. It became apparent that keyframe adjustments alone wouldn't be enough to clean up this issue.
After doing some research, I learned about filters, in particular the Butterworth filter, which can be used to filter high-frequency noise out of your animation curves and smooth out jittery, twitchy movement. I applied this to the whole animation to give it a general smoothing, and then experimented with applying a few more Butterworth filters to the crouching section discussed before, in order to stop the knees jittering around as much.
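For reference, here is a hedged sketch of applying the Butterworth filter through pyfbsdk rather than the UI. The joint label is a placeholder, and the exact name of the cut-off property is an assumption on my part, so it would need checking against the filter's PropertyList.

```python
from pyfbsdk import FBFilterManager, FBFindModelByLabelName

# Placeholder: the joint whose curves need smoothing.
knee = FBFindModelByLabelName('Woodsman:LeftLeg')

butterworth = FBFilterManager().CreateFilter('Butterworth')

# Assumed property name -- a lower cut-off means stronger smoothing.
cutoff = butterworth.PropertyList.Find('Cut-off Frequency (Hz)')
if cutoff:
    cutoff.Data = 4.0  # illustrative value

# Apply recursively across the joint's rotation curves.
rotation_node = knee.Rotation.GetAnimationNode()
if rotation_node:
    butterworth.Apply(rotation_node, True)
```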
The result of that Butterworth filtering is shown in the video above. Overall, the filtering has made the knee movements much smoother. There is still a slight jitter between two of the frames that, for some reason, wouldn't go away no matter how I filtered it. It doesn't look as bad as the jittering before, however, and could be read as the metahuman flinching as it grooms itself with its hand.
The last thing I needed to adjust was making sure the hands didn't clip (as much) into the head when the metahuman pats itself. I also needed to consider that the Woodsman metahuman has antlers on the sides of its head, so raising the height of the arm should leave the antlers enough space to fit in the gap.
I created a new animation layer called Hand Adjustments, and whenever the hand reached the top of the head, I raised it so that it was just touching. I did raise it higher at one point so the hand wasn't clipping the head at all, but it looked strange floating in the air, so I let it clip a little. The frames go by so quickly that unless you are looking very closely, you won't notice the fingers clipping the head.
After all that, the data has been cleaned up. Overall, I think I was able to smooth out a lot of the jitteriness of the animation and get rid of the majority of the awkward deformations. A side effect of this process is that the result looks a little different to the performer's raw data, which, as I stated before, I believe was an attempt to mimic animal movements. I don't think this is necessarily a bad thing; the adjustments and cleanup make the overall animation feel more like a strange animal-humanoid creature bouncing around in a very eerie way, which fits the concept of the Woodsman Abstract Metahuman. There is still a little jitteriness at the end of the animation, but that is because the performer stepped outside of the volume space, so their place markers could not be fully tracked; once they returned, the animation smoothed out again. This is fine, as when I take the animation into Unreal, I can cut it down to whatever length I want, and I don't think I'll want the end of the animation where the performer returns to an A-pose anyway.
As is evident from the previous section, there were quite a few issues I ran into while importing, retargeting and cleaning up motion data. This concerned me, as I felt a lot of these issues could have been solved properly, but due to my inexperience I resorted to the workarounds I implemented. Therefore, during the Week 7 lab session, I discussed these issues with the unit coordinator Paul in order to learn how to solve them in the future.
I didn't end up using Take 14 because it didn't have an A-pose at the start or end of it. However, Paul informed me that if you have access to another take with the same performer, you can take the A-pose from that take, retarget using that performer, and then apply the animation of the same performer from the other take to the metahuman. This made a lot of sense when I thought about it: if the rig and solver data are exactly the same, there really isn't any reason you can't use two different takes of the same performer for different purposes in the retargeting and cleanup process. This could prove important for assignment 2, where I might find I really like a certain take and want to use it as the animation for my metahuman, but find that another take is better for retargeting.
Paul also informed me that you could alternatively adjust the data in the Vicon software itself, though that is outside the scope of this unit.
I also asked Paul about the issue I ran into when importing motion capture data that had props in it. He said this occurs when the metahuman is not given its own namespace: Motion Builder goes through the hierarchy of the project, and since the prop place markers are all called 'root', the motion capture data of those props gets applied to the root node of the metahuman. The solution is either to give the metahuman and its nodes their own namespace so Motion Builder doesn't mistake the prop markers for the metahuman's root (and to choose 'create' instead of 'merge' when importing the motion capture data), or alternatively to import the data first, clean it down to just the data you want, and then merge the metahuman rig into Motion Builder to retarget the data to it and perform cleanup.
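As a sketch of the first option, merging a file in under its own namespace might look like the following in pyfbsdk. The file path and namespace are placeholders, and my reading is that FBFbxOptions.NamespaceList applies a namespace to everything being merged, so treat that as an assumption to verify.

```python
from pyfbsdk import FBApplication, FBFbxOptions

# Load/merge options; True = options for loading.
options = FBFbxOptions(True)
options.NamespaceList = 'Woodsman'  # metahuman nodes merge in as Woodsman:*

# Merge the metahuman rig under its own namespace so the prop markers'
# 'root' nodes can no longer collide with the metahuman's root.
FBApplication().FileMerge('D:/metahumans/woodsman_rig.fbx', False, options)
```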
This does not invalidate the methods I employed to retarget and clean up my data, but it does give me a wider array of options for data retargeting and cleanup in assignment 2.
With the animation cleaned up, it was time to take it into Unreal. I created a copy of my environment scene containing my Woodsman metahuman and created a level sequence where I could apply the animation as a track to the metahuman rig, just like I did with the buddy take.
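For assignment 2 it may be worth scripting this step; here is a hedged sketch using Unreal's Python API, based on Epic's sequencer examples. The asset paths are placeholders, and in some engine versions the animation may need to be set through set_editor_property on the section's params instead.

```python
import unreal

# Placeholder assets: the level sequence and the cleaned-up animation.
sequence = unreal.load_asset('/Game/Cinematics/WoodsmanSequence')
animation = unreal.load_asset('/Game/Animations/WoodsmanBounce')

# Possess the metahuman actor currently selected in the level.
actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
binding = sequence.add_possessable(actor)

# Add a skeletal animation track and point its section at the animation.
track = binding.add_track(unreal.MovieSceneSkeletalAnimationTrack)
section = track.add_section()
section.params.animation = animation

# set_range is also how the take gets trimmed down later: only the
# frames of the performance I want are kept.
section.set_range(0, 300)  # illustrative frame range
```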
I was a little concerned that the arm would clip the antlers when the Woodsman patted its head; while I did account for this during cleanup, I had no visual confirmation that the adjustment was enough. Fortunately, when I reviewed the animation, the arm did not clip the antlers, as I had hoped. For the second assignment (and for future motion capture projects where the subject has external appendages), I should consider importing the antler meshes during cleanup so I have a clear idea of the bounds around the metahuman's head, allowing more accurate cleanup of the data.
Overall, however, I think the animation was applied well, with the cleanup helping to make the animation look smoother.
As I mentioned before, however, some of the data at the end gets jittery due to the performer stepping outside the bounds of the capture space, and I didn't want to use the beginning or end of the take anyway, as they contain the performer in an A-pose. Therefore, I cut the animation down to only the parts of the performance I wanted, as shown above.
The last thing I wanted to do was make some further adjustments to the environment and lighting. While this is just a proof of concept, I felt that conveying the overall mood of the scene through the environment and lighting now would help establish the creative direction and further development of the scene for assignment 2. In the current scene setup, I feel the lighting and the materials of the dirt patch and the grass are too bright. I want the scene to evoke a more eerie and mysterious vibe; those elements don't really contribute to that atmosphere, and instead make the scene look like a generic 3D environment.
I therefore made edits to the lighting, fog and materials of the environment, as shown above. To start with, I changed the materials of the grass and dirt patch, as I felt these were the largest contributors to the generic look of the scene, choosing darker materials that I felt matched each other better than my previous selections. Next, I changed the lighting. I had earlier angled it slightly away from a purely top-down light so that it produced better light shafts; upon review, however, I felt this came at the expense of the overall lighting and fog quality. I also thought its brightness was too intense for the intended mood of the scene, so I adjusted both the intensity of the light and the angle it was facing. With the lighting changed, I also reworked the fog, as the previous setup felt unnaturally dense and I wanted a more gradual falloff. The lighting changes helped make the fog less dense, and after adjusting some distance settings on the exponential height fog object, I was able to get a more natural-looking fog that retains nice light shafts.
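The fog changes in particular came down to a few properties on the exponential height fog component; here is a sketch via Unreal's Python API, with illustrative values (the snake_case names mirror the details panel: Fog Density, Fog Height Falloff, Start Distance):

```python
import unreal

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if isinstance(actor, unreal.ExponentialHeightFog):
        fog = actor.get_component_by_class(unreal.ExponentialHeightFogComponent)
        fog.set_editor_property('fog_density', 0.02)        # thinner overall fog
        fog.set_editor_property('fog_height_falloff', 0.2)  # more gradual falloff
        fog.set_editor_property('start_distance', 500.0)    # push fog back from camera
```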
With the proofs of concept for the abstract metahuman and environment constructed, and motion capture data applied to demonstrate the metahuman's suitability for performance animation, here is a proof of concept showcase.
Overall, I think the proof of concept abstract metahuman and environment I have developed are quite good. I have dubbed the abstract metahuman the Woodsman, with the intent of portraying a nature-based humanoid form composed of roots, antler branches, wood plates and flower bunches. I felt the nature of this abstraction created quite an eerie figure, and to enhance that, I created a forest environment to match, with the lighting and fog further contributing to the eerie and mysterious feeling I aim to convey with this character. I applied motion capture data that I felt would further convey this mood, while also evaluating how suitable the character is for motion capture data application, and overall I am satisfied with the result.
I do have to keep in mind that this is just part 1 of 2 of this project, and that the second part will involve combining my environment and metahuman with four others to create a music video. Investing a lot of time into refining and adding further elements to the environment and abstract metahuman would therefore not be worth it at this point. Instead, it would be best to discuss with my team, during the second part of the project, our different concepts and our ideas for transitioning between our scenes, and to use that discussion to inform the refinement of the abstract metahuman and environment. To set myself up for assignment 2, I have developed the project to the point where I have a good idea of how to modify a base metahuman through materials, quixel assets and niagara particle systems, have used that knowledge to create an abstract metahuman I can further refine and modify, and have created an environment base that I can easily extend to be more cohesive with the rest of my group's concepts.