Assignment 2: Creative Project
MARK AUMAN | N10752340
The second assessment of this unit revolves around developing a music video in a production team, building off the metahumans and environments we developed for the project proposals in assessment 1. This music video will feature a metahuman dancing and transitioning between various abstract forms developed by the production team, in an environment that will also be constructed by the team. This production blog page will act as an individual project journal, detailing my analysis and findings throughout the development of the project and my exploration of the pipelines and workflows involved.
A good portion of the second assignment will require us to work in a group of around 4-5 people. We formed these groups near the end of the Week 5 Studio 1 session, where I formed a group with Will Hughes, Texas-Pete Barnes, Conor Steele, and Tyler Fellman.
As a group we discussed the different concepts we had for each of our abstract metahumans. Mine was a man made out of wood and flowers, Will's would take inspiration from the scramble suit in A Scanner Darkly, Tex was thinking of a very flowy, glowing neon forest concept, Conor wanted to do a David Bowie Starman-inspired space-themed one, and Tyler wanted to do an insect hybrid person. We realised each of our ideas operated at a different level of abstraction and came up with a potential order for our segments in the final music video, going from the least abstract idea to the most abstract:
Will -> Tyler -> Mark -> Tex -> Conor
We also discussed ways of transitioning between our segments of the video, such as smash cuts and briefly obstructing the camera's view to allow time for a transition. We also discussed how to maintain continuity throughout the video given our different concepts: we could use the same base environment, but change the lighting and add objects related to each metahuman's concept as the video progressed, helping illustrate the increasing abstraction over time.
We later had a team meeting to discuss how things would operate at our assessment motion capture session, which included the roles each team member would take on, the movements we wanted to capture, and the song we would select for the performance. There was a high chance we wouldn't get a dancer assigned to our capture time slot, so Will volunteered to get suited up and perform the dance we would capture. I would operate the motion capture software, Tyler would direct the performance, Tex would take photos and videos of the session, while Conor would be in charge of dotting up Will, making sure no dots got lost, and helping with props throughout the performance capture session.
For the motion capture itself, we decided we wanted movements where the limbs were outstretched, long wave-like flowy movements that would provide good motion visuals, some finger movements to give a good indication of where data would need to be cleaned up, and a good amount of movement around the whole available space to ensure we could make good use of the environment.
Finally, after looking at the song selection available, the team selected Modern Deep by Thomas Bernal as we felt it had the overall best fit to all of our metahuman concepts, and was of a sufficient length that we wouldn't need to do any looping of the track.
On the day of the motion capture session, we were informed by the unit coordinator Paul that we had no dancer assigned to us, which was fine as we had Will as our backup. We were also told that if we didn't get good capture data, there was plenty of data from the Tuesday sessions that could be used, which was a nice safety net.
We first got Will suited up into a mocap suit.
We then went through the motion calibration setup. We didn't run into many issues, except that for some reason the left thumb marker wasn't being detected even though we had attached one, so it was likely not reflecting light properly. We ended up replacing it with a different ring placement marker, which solved the issue.
The above video showcases that setup process as well as a performance warm-up. We discussed a general outline of the movements we wanted for each of our segments. Will's segment would be the most human-like, Tyler's would transition into more jabby, insect-like movements, mine would be slow, lumbering, tree-like movements, Tex's would feature more fluid movement, and finally Conor's segment would have more star-like, explosive movements. Using that outline for our first take, Will did a warm-up performance to get a general idea of how to do each of the movements, and afterwards we discussed notes as a group for improving the performance.
We did multiple takes, improving the performance based on the feedback each team member gave Will on their individual segments. For example, I knew my metahuman had particle effects located on its antlers and hands, so I advised Will that in my segment he should move his head and hands in large swinging movements to emphasise those particle trails, while retaining the overall movement style of my segment. I liked take 8 in particular, as I felt the movements were varied enough that each segment felt unique while the performance as a whole stayed consistent.
I was operating the motion capture software, so I was able to have a good look at the captured data and review whether anything seemed messed up. There were some moments near the end where I felt the explosive movements of Conor's segment were resulting in the capture system not properly tracking the arms, so after the capture we went back to review the footage and found it to be fine. It was most likely because I was also watching the motion capture being streamed to the particle metahuman displayed on the volume screen at the back, which would have had a slight amount of latency.
In the end we were able to get six takes worth of data. The first was a warm-up, so it may not be as usable, and the last take was done to a different song called Hill Billy Jive for fun, but that data could still end up being used. Overall, the process went quite smoothly and we should be able to work with this data to create our music video.
There was a week's break between the capture session in Week 6 and the next session of work on this project, as Week 7 was dedicated to finalising our project proposals. Work resumed during the Week 8 lab session, where the focus was on getting our team's Perforce server set up, as well as exporting the motion capture takes we wanted to make use of.
Being a games student, I was already quite familiar with version control systems. So far my games projects have primarily used GitHub, whereas this project would use Perforce, which I wasn't familiar with. Given how large Unreal projects are, GitHub would not have been ideal for this project, and from what I can gather, Perforce is better integrated for use with Unreal Engine.
I quickly found that Perforce isn't that much different from GitHub. If anything, on the surface the main difference is terminology: the online repository is referred to as a depot, and the equivalent of a local clone of the repository is a workspace. One difference of note is that the P4V client doesn't automatically list all changes that need to be submitted the way GitHub does; instead, you need to add them semi-manually into a commit, which Perforce calls a changelist. This can be alleviated by selecting the root folder of the depot/workspace and adding all changes contained within it, which gives you some freedom in how you work on and update files.
I did find it strange, however, that we were able to view the submitted changelists of other teams in their own separate depots, which was because all the KNB227 team depots were hosted on the same Perforce server. I was concerned this might pose a problem down the line, making it harder to resolve conflicts and revert to previous versions of the project, as we would need to sift through a bunch of unrelated changelists. This turned out to be an easy fix, as I later found you can open a history window that shows just the changelists specific to the depot you are working in.
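Out of curiosity, I sketched what this day-to-day flow would look like through Perforce's Python API (P4Python), mostly to map the Git concepts I knew onto Perforce commands. This is only a sketch: the server address, user and workspace names are placeholders, and the depot path is an assumed stand-in for our team's actual depot.

```python
# Sketch of the basic Perforce workflow via P4Python (pip install p4python).
# Server address, user, workspace and depot path are all placeholders.
from P4 import P4

p4 = P4()
p4.port = "ssl:perforce.example.edu:1666"   # hypothetical server address
p4.user = "n10752340"
p4.client = "mark_knb227_workspace"         # the workspace ("local clone")

try:
    p4.connect()
    # Roughly the equivalent of 'git pull': sync the workspace to the depot head.
    p4.run_sync("//KNB227_TeamDepot/...")
    # List only the recent changelists ("commits") for our team's depot path,
    # rather than every changelist on the shared class server.
    for change in p4.run_changes("-m", "10", "//KNB227_TeamDepot/..."):
        print(change["change"], change["desc"])
finally:
    p4.disconnect()
```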
After setting up my workspace and linking revision control in Unreal Engine, I quite liked how integrated and easy it was to see which files had been changed or added and needed to be submitted to the depot, which alleviated my earlier concern.
I spent some time testing how changes in the project are managed by Perforce and Unreal, and quite liked the features provided to assist in the process. For example, files that have been added or changed but are yet to be saved are marked with an orange question mark.
Other symbols exist that provide the user an indication of the status of a file in the project, which I greatly appreciate and should allow for the team to work in tandem without interfering with files another person is working on.
The other task during the workshop was to export the motion capture take data we captured in Week 6. This process was done in Shogun Post, software only available on the QUT lab computers, so it needed to be completed onsite. I wasn't able to capture images of the process; however, it involved the team importing our take data from Week 6, reviewing which takes we liked, and then exporting them out.
When importing the data, due to the nature of optical motion capture, there will be points where data loss occurs. In the examples shown by the unit coordinator Paul, the motion capture data he used had a data loss of around 25%, with red lines shown in the timeline at the bottom of the Shogun Post editor at the points where the loss occurred (in the above image the timeline is all yellow, indicating no data loss in the capture being reviewed). Fortunately for us, the data we captured had a data loss of only around 0.4-0.6%. Regardless, you can process the raw data to smooth out any data loss that occurred, and with the low level we had, this did not take long. After processing the data, we did notice the takes were slightly smoother.
We then reviewed the takes to find which ones we liked. Out of our takes, labelled takes 6-10, we found takes 7, 8 and 9 to be the best. As mentioned previously, take 6 was a practice run, while take 10 was a fun take done to another song, Hill Billy Jive, so they weren't as suitable. We therefore prioritised exporting takes 7-9. However, we ran into issues exporting the takes in FBX format, as the exported file would open in MotionBuilder with only the rig and none of the actual motion. This issue seemed to affect everyone in the class trying to export data, and the solution found was to instead export the take data in BVH format. The only difference with this format is that when brought into MotionBuilder, the data is not oriented correctly and needs to be set up manually.
Shown above are videos of the exported raw take data. While other groups' takes may use professional or more experienced dancers, we felt confident using the takes we captured as a team. We had met beforehand to discuss our abstract metahuman concepts, movements and a song that would fit them, and we expanded on this throughout the capture session, discussing adjustments and refining the movements to better fit our respective concepts. The movements we captured are therefore tailored to our concepts and the selected song, and we have a more intimate understanding of how to retarget, clean up and manipulate the data to make a fully realised and cohesive music video.
With our Perforce server set up, when I got home I proceeded to set up an empty Unreal project to act as a base project for our group to work in, with separate levels for each of our team members.
To test that everything was working correctly, I got Tex to make some changes to the level and push them. They added a cube to their level, pushed it, and I was able to get the updates on my end, so we were able to verify that our version control system was working.
I then tested transferring my metahuman and environment from my assessment 1 project to the shared group project so I could become familiar with the process. My main concern was that doing this improperly would result in unnecessary bloat in the project, and with potential repository size limits, I wanted to avoid that. After a bit of research, I found that under Asset Actions there is an option to migrate assets and their dependencies to another project. This was good, as it would ensure only the required assets were imported into the shared group project, eliminating any extra bloat.
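Before migrating, I wanted a rough sense of what the migration would actually drag along. The editor's Python API can list a package's dependencies, which gives a preview of the dependency chain Migrate will follow. Below is a minimal sketch, assuming a hypothetical asset path for my metahuman blueprint.

```python
# Run in the Unreal Editor's Python console. The package path is a
# hypothetical placeholder for my actual metahuman blueprint.
import unreal

asset_registry = unreal.AssetRegistryHelpers.get_asset_registry()
options = unreal.AssetRegistryDependencyOptions(
    include_soft_package_references=True,
    include_hard_package_references=True)

package = "/Game/MetaHumans/Woodsman/BP_Woodsman"  # placeholder path
deps = asset_registry.get_dependencies(package, options) or []
print(f"{package} pulls in {len(deps)} packages:")
for dep in deps:
    print("  ", dep)
```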
Using the migration process, I was able to get my metahuman and proof-of-concept environment into the project. What was great was that the original project containing them was 24 GB in size, yet with only the required assets imported, the shared project totalled just 10 GB. I then pushed the files so we could test that the rest of the group would be able to access them.
My teammate Will notified me that while he was able to open the level, none of the assets in the level were showing, and therefore something had gone wrong in the process of pushing my files to the server.
Another image sent by Will showed one of the assets having no active mesh.
His output log also showed that several files had failed to load. I was concerned the issue had been caused by the Quixel assets I used not being stored locally in the project but in a Megascans library folder on my computer, as I had found some files stored there. However, I had assumed that folder was just for easier loading of assets that had already been downloaded previously.
This wasn't the case, however, as another image from Will showed that he only had a few folders in his content browser, with many of the folders on my end not present on his.
I then checked my P4V client and began browsing our depot's files. Usually when a file is available on the depot it appears as a page icon with a green dot, such as the asset files shown above; however, several files in my workspace appeared as just a white page with no green dot, meaning they were still only local to my machine. For some reason, the files I had migrated over from my assessment 1 project were only partially pushed, which is why Will was able to open the level but none of the assets were loading. Therefore, all I had to do was push all the missed files.
I could've sworn I pushed all the files, but for some reason they didn't all go through. I did that first push through the revision control menu in Unreal, while the second, complete push was done within the P4V client, so perhaps Unreal did not detect all the changed files, or I did something wrong, both of which were likely given the volume of files pushed. For future pushes I can push from P4V directly to make sure all files are properly added.
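For reference, the Perforce operation that catches stragglers like this is reconcile, which compares the workspace on disk against the depot and opens anything new, modified or deleted for submit. A minimal P4Python sketch, with the depot path and description assumed:

```python
# Reconcile the whole workspace so nothing local gets left behind, then
# submit everything in the default changelist. Depot path is a placeholder.
from P4 import P4

p4 = P4()
p4.connect()

# -a: open new files for add, -e: changed files for edit, -d: missing for delete
opened = p4.run_reconcile("-a", "-e", "-d", "//KNB227_TeamDepot/...")
print(f"Opened {len(opened)} files for submit")

p4.run_submit("-d", "Migrated woodsman metahuman and environment assets")
p4.disconnect()
```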
With no classes in Week 9, our group met online to discuss the project. In the first half I recapped much of what I covered in the previous section, going over pushing and pulling to the Perforce server and the setup of the Unreal project, as well as the issues I ran into, to help the rest of the team avoid similar problems. The bulk of the meeting was then spent discussing the project itself, with us first determining the environment for our project. Since the focus of the project is more on the application of motion capture data, we agreed it would be best to use one team member's environment as a base and then modify it to fit our individual segments. We therefore shared our environments and discussed the suitability of each.
Will's environment was quite urban compared to the rest of the team's, and therefore there were concerns about whether it would fit with the more natural concepts such as Tyler's insect metahuman and my woodsman metahuman.
Conor's environment was in a similar boat, as it was tailored to a more space-themed metahuman; however, we discussed that its HDRI skybox could potentially be applied in their particular instance of the environment to better tailor the look of their segment to their metahuman.
Tex's was a more natural environment that also featured some man-made elements, a good meld that would allow it to fit the team's different concepts. Tex brought up, however, that the environment, with its bright lighting and geometry, was tailored to emphasise the specific form of their metahuman and the motion capture performance applied to it, and that it may not work as well with the other metahumans and performances.
The final two choices were therefore mine and Tyler's environments. My concern with my environment was that there wouldn't be much of a link to the less natural metahuman concepts such as Will's and Conor's without some major work being done to it, so I was hesitant to use mine as a base.
Tyler's environment, on the other hand, featured both natural and urban elements. We felt this made it a good base environment for the whole group, from which we could create separate instances of the level, or nest it in our own level scenes, and then make further modifications to tailor it to our specific concepts. For example, Tex discussed bringing in the lighting and water from their scene and combining it with this one to help enhance the performance of their metahuman in their particular segment.
The only concern was the size of the environment, as Tyler mentioned his assessment project was around 200+ GB. We therefore tested migrating just the level and his metahuman to the group project to see how much space only the required content would take up, and it ended up adding only 4-5 GB to the project size, which the team was content with. With that, we finalised Tyler's environment as the base environment for the group project.
From there we delegated the tasks we needed to get done. For the moment, the focus is for everyone to import their metahumans into the project so that any issues can be addressed early. Tyler has already migrated his environment in, so the team just needs to explore level nesting so we can experiment with modifying instances of the environment to fit the needs of our different segments. Finally, we need to review the take data and decide what will best fit our different concepts. Some of this discussion already took place during the meeting, and from a brief overview most of the team would prefer to use either take 8 or 9 for their segments; however, further review is needed to determine the exact start and end frames of each of our segments, as well as to figure out how we will transition between them.
By Week 10 we should therefore be getting into the data retargeting and cleanup side of things, and have a good idea of the modifications for each of our instances of the environment and be working on them.
We also discussed a roadmap for the project's development. In Week 9 and the semester break we would focus on project setup and planning; Weeks 10 and 11 would focus on data cleanup and retargeting as well as finalising environment modifications; in Week 12 we would start putting together the level sequences that make up the music video, with Week 13 focused on getting it finalised and submitted. That leaves Week 14 for us to finalise and submit our blogs. All in all, this meeting helped the team understand the workflows involved and establish a plan for this assessment's development schedule.
Seeing as my metahuman and environment are already migrated, my focus for the rest of the week and the semester break is making refinements to my metahuman, experimenting with modifications to the base environment, and further reviewing the capture data. Currently I am leaning towards using take 8 for my segment, as that take features more vigorous movement, so the review is mostly to determine the start and end points of my segment, as well as some early identification of where I need to clean up the data.
The first task I handled during the semester break was making refinements to my metahuman.
Overall I am quite happy with how my woodsman metahuman in my project proposal turned out. Comparing it to my original concept art, I think the produced metahuman in Unreal resembles it quite well. In terms of fundamental improvements, then, I don't think there is much to be done.
That does not mean there are no improvements to be made, though. For example, in the above image, the firefly particle effect system I developed obstructs the hands and antlers of the metahuman quite a bit, which isn't ideal. The fireflies also seem a lot brighter than in the previous image, where the environment lighting and textures were slightly different, suggesting that certain settings there affect the appearance of these particle effects.
Looking at the metahuman in motion, the firefly effect also has this blurring quality, which I feel lowers the perceived quality of the movement. On top of that, while the firefly effect looks great when the metahuman is moving a lot, it doesn't look as good when the metahuman is more idle, contributing to it obstructing the view of the metahuman's form. Ideally I want the particle system to only spawn fireflies when the metahuman is moving, so an improvement I can make is to base the particle spawning on something like velocity, as sketched below.
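The idea behind velocity-based spawning is simple: sample the emitter's speed and scale the spawn rate by it, so an idle metahuman emits little to nothing. In Niagara this logic would live in a dynamic input driving the Spawn Rate module; the sketch below only illustrates the mapping I have in mind, and every constant in it is made up.

```python
# Sketch of velocity-gated spawning: spawn rate scales with emitter speed.
# All thresholds and rates are made-up illustration values; in Niagara this
# would be a dynamic input on the Spawn Rate module rather than Python.
def firefly_spawn_rate(speed_cm_per_s: float) -> float:
    IDLE_THRESHOLD = 20.0   # below this speed, treat the emitter as idle
    MAX_SPEED = 400.0       # speed at which spawning saturates
    MAX_RATE = 60.0         # particles per second at full speed

    if speed_cm_per_s <= IDLE_THRESHOLD:
        return 0.0
    t = min((speed_cm_per_s - IDLE_THRESHOLD) / (MAX_SPEED - IDLE_THRESHOLD), 1.0)
    return t * MAX_RATE

# A slow head sway barely emits, while a big arm swing leaves a dense trail.
print(firefly_spawn_rate(30.0), firefly_spawn_rate(350.0))
```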
While the former issue will most likely be fixed by calibrating either the lighting or the particle effect system when I port the metahuman into Tyler's base level environment, to help fix the latter issue I have isolated the woodsman metahuman in my test level scene in the group project, so I can focus on observing it while improving the motion and spawning of the particle system.
Isolating it in this manner also lets me better pick up any other areas of the metahuman I need to improve. For example, after porting my level sequence animation from the project proposal into my test level, in the first image above, before the animation starts, you can see the wood plates on the thighs sit nicely on top of them; however, in the second image, when the animation begins, the wood plates move slightly, sinking into the thighs and clipping into them.
It's even more noticeable when you scrub further into the animation and can see the wooden plates on both thighs clip further into the legs. This indicates that these plates are likely assigned to the wrong bone socket and need to be changed.
While not as noticeable, it also occurs on the wooden plates on the shins as shown on the shin on the right in the image above.
On the upper arm, the wood plate does clip slightly into the arm on large bends. Fortunately, the forearm plate does not seem to clip into the arm bone it is connected to throughout the animation, indicating it is likely attached to the correct socket and placed suitably. For the other wooden plates, then, some changes to the socket they are assigned to, as well as their placement, will need to be made.
To start making refinements I went into the blueprint of the woodsman metahuman. I decided to start by adjusting the wooden plates, as it would be easier to do and therefore allow me more time to experiment with the particle effect systems. I started with the left thigh. To diagnose the problem, I zeroed its position so it would sit at the location of its assigned socket, which was currently thigh_l. As can be seen here, the location of the thigh_l socket is quite a lot higher than where the plate was placed (using the right thigh wooden plate as a reference), which is why, when the woodsman bent down, it was usually the top part of the plate that clipped into the leg. I therefore needed to find a socket closer to the desired location of the wooden plate.
I was able to find a socket called thigh_twist_02_l that was a lot more promising, with its root location being a lot closer to where I wanted to place the wooden plate.
I then positioned and rotated the plate into the desired position, making sure it was still somewhat in line with the socket it was assigned to.
Examining the left thigh wooden plate at the start of the animation, it seems to sit a lot better on top of the thigh than it did before.
Examining it further, the wooden plate now continues to sit a lot better on top of the leg even when the metahuman is doing a bending motion, with the difference clearly being seen when comparing it to the right thigh wooden plate which is yet to be altered.
I then made similar adjustments to the right thigh wooden plate using the same process: I zeroed out the position of the object, found a suitable socket (in this case thigh_twist_02_r, it being the right-side counterpart of the left leg's socket), and then positioned and rotated the object around that socket so it would sit nicely on top of the leg.
Viewing the effect of these changes during the animation, you can see that the wooden plates sit on top of the thighs a lot better. There is still a bit of clipping at the ends of the plates; however, it is a lot less noticeable than in the previous iteration of the woodsman metahuman.
It should also be noted that the animation currently applied to the metahuman features a lot of extreme knee bends, which is where the clipping occurs. Reviewing the segment of the group motion capture data pertaining to my metahuman, that animation features far fewer of these extreme knee bends, so the current placement of the wooden plates should suit the purposes of the animation just fine; the above image shows that when the metahuman is in a more upright position with a slight bend in the legs, the wooden plates sit a lot nicer than they used to. Once I have retargeted and cleaned up my segment of the animation, I can re-evaluate whether further adjustments are needed to these elements of the metahuman. At this point, though, understanding what the issue is and making some initial refinements will cut down on the work I need to do later on if further adjustments are needed.
I proceeded to fix up the rest of the plates. The issues with the shin plates were quite evident once I started zeroing them out, as the calf socket they were attached to was a lot higher than anticipated. I'm not quite sure why I didn't catch these issues during the project proposal phase, but it was likely in the interest of time, and at that stage it was only supposed to be a concept, so the refinement process I'm doing now is my chance to fix these issues up.
The sockets for these shin plates were quite similar to the ones used on the thigh plates, these ones being named calf_twist_02, so it seems these twist sockets are the most suitable attachment points for the plates.
The leg plates now sit on top of the shins a lot better.
Zeroing out the upper arm plates revealed their issues weren't as noticeable because the plates were somewhat parallel in direction to the socket; however, the issues were once again a result of the mismatch between the position of the plate and the bone.
Once again, it was a matter of zeroing out and reassigning the wood plates to a more suitable socket, this time the upperarm_twist_01 sockets. As shown in the image above, the whole segment of the wood plate now sits on top of the arm instead of clipping into it, even when the arm is raised or bent.
While the forearm wooden plates were pretty much fine, I decided to investigate why this was the case. I zeroed out their position relative to the socket they are attached to and found that the plate was much more parallel with the bone. Combined with the fact that the forearm's range of motion makes its wooden plate less likely to clip into the arm, this is likely why fewer issues were visible when examining the animation applied to the woodsman metahuman.
Regardless, I still refined the placement of these forearm plates. Like the other plates, it turned out another set of twist sockets made the most suitable attachment points. These twist sockets are likely more suitable than what I had selected before because they presumably account for the twisting/torsion of body parts in motion, meaning items attached to them follow the motion of the body more accurately. In the above image, not much difference can be seen on the forearm plate, although it does seem to sit on top of the forearm a bit better. Along with the other reasons stated previously, this area likely needed less improvement because the position of these plates is quite close to the root of the socket they're attached to, compared with something like the thigh plates.
With those adjustments made, the wooden plates on the woodsman metahuman now sit on top of the body correctly throughout a range of motions.
After reviewing the metahuman in motion, I also feel this gives the metahuman slightly more dimension than it had before, when these wooden plates were clipping into the body.
Next up is refining the particle systems that emit from the antlers and hands. Referring back to the proof of concept video, we can see that the fireflies create this trail effect behind the woodsman as it moves. Reflecting on the original purpose of this particle system, it is supposed to enhance the performance of the metahuman by trailing the movement of the specific body parts they follow, namely the antlers and the hands. Reviewing this footage now, however, I feel this can be tuned, as the particles currently become a messy cloud that distracts from the performance rather than enhancing it. My intention is to make more defined trails that showcase the movement of the hands and antlers of the woodsman as it moves, so I will need to make some changes to my particle system to convey this.
To do this, I started experimenting with the parameters of the particle system. I felt the particles were spawning too slowly and perhaps lingering a bit too much, so I increased the spawn rate but decreased their lifetime. I also decreased the radius they spawn in, as at its previous size it was obstructing the view of the hands and antlers a bit.
I also opened the blueprint for the emissive material the fireflies were using. It turned out its emissive property was set to 50, so I brought it down to 0.5, as I felt the bloom would be enough to make them glow brightly.
The new particle system setup is shown above. I decided to test the particle system in this environment rather than my test level environment, as I would have the proof of concept and project proposal images to compare against to see whether the changes are better or worse, though I may need to further calibrate parameters such as the material's emissive value once I port the metahuman into an instance of the group's base environment.
It's a bit awkward viewing the new particle system in a static image, so here is a video showcasing it in motion. As can be seen, the particles now form more defined trails as the woodsman moves, even with more subtle movements, which I feel helps bring attention to the way the woodsman metahuman moves rather than detracting from it as the previous iteration did. There are still refinements to be made, however: I feel the trails should spread out over time to mimic a more natural firefly movement, and the current placement of the particle emitters is a bit awkward in this iteration.
In the next refinement pass, the first fix I made was to reposition the particle emitters. I ended up deciding I was fine with where the hand ones were placed as having them located more towards the wrist ensured that hand movement could still be seen.
I did move the antler particle emitters, however, as I felt the previous placement was a bit unnatural. I moved them up slightly so that the particle emitters look as if they are being contained within the end branches of the antlers, which I felt looked a lot more organic than before.
An issue that crops up when viewing the metahuman from far away, however, is that the antlers are not very visible, making it look like there are two glowing orbs floating above the head of the woodsman, which doesn't look natural.
To fix this issue, I thought: since the antlers have glowing fireflies around them, wouldn't it be cool if the antlers glowed as well? I set about creating an emissive antler material by creating a new material and, in the emissive node, multiplying a colour value with the antler branch texture, with the emissive strength controlled by a parameter I set up. I also retained the base branch texture and normal map in this new material, as I didn't want the antlers to look like a flat colour when not glowing.
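For the record, the same material setup can be scripted through Unreal's MaterialEditingLibrary rather than wired up by hand in the graph. The sketch below approximates what I built: asset names and paths are placeholders, and the normal map hookup is omitted for brevity.

```python
# Sketch of the emissive antler material built via the editor Python API.
# Asset names/paths are placeholders for my actual project content.
import unreal

tools = unreal.AssetToolsHelpers.get_asset_tools()
mat = tools.create_asset("M_AntlerEmissive", "/Game/MetaHumans/Woodsman",
                         unreal.Material, unreal.MaterialFactoryNew())
lib = unreal.MaterialEditingLibrary

# Branch texture sample (also reused for base colour so the antlers keep
# their original look when not glowing).
tex = lib.create_material_expression(mat, unreal.MaterialExpressionTextureSample, -600, 0)
# Glow colour and strength, exposed as parameters for easy tuning.
colour = lib.create_material_expression(mat, unreal.MaterialExpressionVectorParameter, -600, 300)
strength = lib.create_material_expression(mat, unreal.MaterialExpressionScalarParameter, -600, 500)

# Emissive = texture * colour * strength
mul1 = lib.create_material_expression(mat, unreal.MaterialExpressionMultiply, -350, 150)
mul2 = lib.create_material_expression(mat, unreal.MaterialExpressionMultiply, -150, 200)
lib.connect_material_expressions(tex, "", mul1, "A")
lib.connect_material_expressions(colour, "", mul1, "B")
lib.connect_material_expressions(mul1, "", mul2, "A")
lib.connect_material_expressions(strength, "", mul2, "B")
lib.connect_material_property(mul2, "", unreal.MaterialProperty.MP_EMISSIVE_COLOR)
lib.connect_material_property(tex, "", unreal.MaterialProperty.MP_BASE_COLOR)

lib.recompile_material(mat)
```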
Applying this emissive antler material to the antlers of the woodsman, they instantly become a lot more visible, which is good to see.
Even from far away, you can better distinguish the antlers, and the placement of the antler particle emitters doesn't feel as out of place.
Up close the antlers also retain their base texture, so if I ever want a scenario where the antlers are not emissive (e.g. animating them to blink on and off), they will still have their original texture.
I ended up liking the glow effect on the antlers so much that I decided to apply it to the eyes of the woodsman as well, as I felt they were another element that was not clearly visible on the metahuman. I also feel it adds to the ominous aura of the woodsman.
Overall I think the addition of the glowing antlers and eyes adds to the intended look and feel of making the woodsman mysterious and ominous.
Returning to the particle system, the remaining improvement was to make the movement of the individual particles more firefly-like, which meant making them more erratic in their movements. The base particle system I was using came with settings to adjust the wind force applied to the particles. After experimenting for a bit, I found that applying a negative force along the Y axis allowed the particles to drift backwards slightly, while applying a positive force along the Z axis allowed them to fly up. Combining this with increased wind turbulence, I was able to get particles that were a lot more erratic in their movement.
It can already be seen from the refinements added that the particle system looks a lot more natural. The particle emitters around the antlers especially benefit from this change, making their emission look less orb-like and flow better with the shape of the antlers, although that may just be how it is portrayed in this freeze frame.
The following video shows all the refinements discussed put together. Of note is the refined particle system: the individual particles move more erratically, but still retain somewhat of a trail to the object they are linked to. In this way it is a good in-between of the original system shown in the project proposal and the refinement pass shown in the previous video. With these changes implemented, I'm quite happy with my refined woodsman metahuman. There may be more changes to make, however, as mentioned before, since I may need to calibrate the parameters of the particle system depending on how they look when ported into Tyler's base environment, so the next step will be exploring level nesting and setting up an instance of the base environment for me to work in.
As detailed before, the team selected Tyler's environment, as we felt it could fit our five concepts the best. Tyler was able to import this level into the group project; however, it would be good to have a base version of this level that we could refine if needed and build off individually by creating our own instances to work in. This requires level nesting, which is what I will explore in this segment of the project.
The first part of this process was to create a duplicate of Tyler's environment. This keeps a version of Tyler's initial environment intact, while giving us a duplicate we can freely edit and use, via level nesting, as the base for the individual levels in which we will construct our segments of the music video.
To start preparing the base environment, I deleted Tyler's metahuman, as it will not be needed in the base level that everyone will instance.
Next, I felt the lighting of the environment was quite bright (I temporarily added my metahuman as a reference to see how the lighting looked), and after talking with Tyler about how to go about it, I began making adjustments to the base level lighting. For the moment the lighting will be contained in this base level; however, if the team decides to use their own lighting for their particular segments of the music video, we might have to separate the lighting into its own level later on. I will wait to discuss that with the group before doing anything.
Adjusting the lighting to be less bright while retaining good lighting quality was quite easy. Using the Environment Light Mixer, I lowered the directional light intensity so the lighting wasn't as intense, and then enabled real-time capture in the sky light settings. According to the tooltip, enabling real-time capture makes elements such as the sky, sky dome and volumetric clouds be accounted for in the calculation of environmental lighting, resulting in more accurate lighting, although likely at some cost to performance. A weird side effect of the new lighting is that the vines Tyler placed on the ground now appear a lot more blue.
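Since we may end up re-tuning this base lighting several times as a team, it's worth noting that the same two adjustments can be scripted via the editor's Python API. A sketch follows, with the intensity value arbitrary and the component lookups assumed to match the level's standard light actors:

```python
# Sketch: dim the directional light and enable real-time capture on the
# sky light via the editor Python API. The intensity value is arbitrary.
import unreal

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if isinstance(actor, unreal.DirectionalLight):
        comp = actor.get_editor_property("light_component")
        comp.set_editor_property("intensity", 3.0)  # tune to taste
    elif isinstance(actor, unreal.SkyLight):
        comp = actor.get_editor_property("light_component")
        # Accounts for the sky, sky dome and volumetric clouds in the
        # environment lighting, at some performance cost.
        comp.set_editor_property("real_time_capture", True)
        comp.recapture_sky()
```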
Inspecting the material applied to the vines, I found it had an emissive map that applies a blue glow to parts of the vines. For whatever reason, however, the glow seems to be applied to the whole vine.
For the moment, the fix I implemented was to duplicate the material and create an emissive strength parameter to control the influence of this emissive map, currently set to 0. That way, if the team does want to use the original material, it is still there.
With this new material, the vines appear a lot more natural and in place in the environment, so for now I will keep it that way and discuss further changes at our next team meeting.
Lastly, I did some level cleanup, as there were assets, such as these lights and this grass plane, that would never be seen by the camera.
I also hid the spotlights on the street, as I felt the main lighting of the scene washed them out, making their effect negligible. If the team is happy with this, we can permanently delete these lights; if not, we can easily re-enable them.
Adjusting the angle and intensity of the directional lighting further, we get the following. While the lighting is still a bit bright for my liking, further improvements can be made once the team has discussed it. The idea here is to create nice neutral lighting the team can work off, so spending more time adjusting the lighting to my personal taste would be wasted effort at this point.
With the initial base level setup done, I deleted my metahuman reference and saved the level so I could explore how level nesting works.
I first created a duplicate of my individual level. The original will be used as a testing ground, while the second instance will be where I put together my segment of the music video.
After reviewing the Week 8 videos, I found the Levels window, which is where I should be able to add the assessment 2 base environment as a layer of this level. To help in this testing process, the base of this level includes my metahuman, which should still exist after I add the base environment as its own layer.
After dragging the AS2 base environment level into the Levels window, the environment appeared in the level. What can be noticed straight away, however, is that the lighting is quite bright again.
The issue is that the persistent level I am working in already contains its own lighting, and the instance of the base environment also comes with its own. As shown in the above image, two directional lights and two sky lights now exist within the scene.
By deleting the extra lighting objects in the persistent level I am working in, we are left with just the base environment's lighting. This does suggest I should attempt to separate the base lighting into its own level, as it will make it much easier for team members to set up their own lighting down the road: they won't have to contend with lighting already being present when porting the environment in as a layer of the level they are working in.
To set this up, I created a duplicate of the AS2BaseEnvironment level called AS2BaseLighting and then deleted everything except the main lighting and the related elements such as the volumetric cloud.
I then deleted those same elements in AS2BaseEnvironment. Note that there are still spotlights present in this scene providing light (as opening the scene unhid them).
Going back to the individual level I work in, the level is now dark due to the changes made to the base environment removing its lighting.
By dragging the AS2BaseLighting level into the Levels window of my working level, however, the lighting is brought back and looks how it should.
To help explain how level nesting works at our next group meeting, I set up a new level called LevelNestExample containing the two base levels nested within it. Overall, this level nesting system should give the group a lot of flexibility when designing their specific segments of the music video, affording us the ability to do things such as creating our own custom lighting while retaining the same base environment, or adding extra level elements as their own nested levels, which can then easily be added to other group members' segments. At the next meeting, I'll discuss with the group how this level nesting system can assist the development of the project.
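For reference at the meeting, the same nesting can be reproduced through the editor's Python API, which also makes the streaming behaviour explicit by choosing the streaming class up front. A sketch, assuming our level paths:

```python
# Sketch: add the shared base levels as always-loaded sublevels of the
# currently open persistent level. Level paths assume our project layout.
import unreal

world = unreal.EditorLevelLibrary.get_editor_world()
for level_path in ("/Game/Levels/AS2BaseEnvironment",
                   "/Game/Levels/AS2BaseLighting"):
    streaming = unreal.EditorLevelUtils.add_level_to_world(
        world, level_path, unreal.LevelStreamingAlwaysLoaded)
    if streaming is not None:
        print("Nested", level_path, "as Always Loaded")
```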
During our semester break group meeting, the team discussed what had been done during the remainder of Week 9 and the break. Will had been working on materials for his refined metahuman, which he intends to migrate into the project once he is done working on them in another project. Tex had started experimenting with level nesting, leveraging it to add to the base environment and customise it to their liking; from this they found that when you play the level, you need to ensure the nested level layers are set to 'Always Streaming' so that they appear at runtime. Tyler had been refining their metahuman and starting to clean up their motion capture data; an interesting finding of theirs was that you can specify which segment of the animation you want to use in MotionBuilder by entering frames in the start and end frame fields (denoted S: and E:) of the timeline, which could be useful in ensuring we only retarget and clean up data relevant to our particular segments of the music video. Due to other assessments and illness, Conor wasn't able to get much done during this period; however, at the end of the meeting we assisted them with migrating their metahuman into the project, a process we were all comfortable with by this point as everyone else in the team had done it.
I then went over the refinements I had made to my metahuman, detailed in the previous section, but of more importance to the group, I discussed my investigation into level nesting and how we can use it to share one common base environment while adding our own customisations and personal flair on top.
As mentioned before, Tex had already been experimenting with this as I had notified the team of my work with level nesting prior to the meeting, and so Tex had been able to modify their own instance of the base environment to include the lighting and water effects from their assessment 1 project proposal, providing the team a good idea of what is possible with the system. I then went through and did a demo of how to make use of level nesting so that the whole team could then go on and independently start making customisations to their instance of the base environment.
We then discussed some improvements that could be made to the base environment, including fixing up some decals in the level that were stretching, which would be handled by Tyler, and separating the lighting of the lamps in the level into its own level for nesting purposes, which I would handle.
Our final topic of discussion was the possible transitions between the different segments of our music video. Tyler and I discussed that, given the themes of our metahumans, we could have leaves or insects fly in front of the camera as a transition. Apart from that, we struggled to come up with ideas. We felt this was because we had yet to fully review the motion capture data and start retargeting and cleaning it up. The group therefore decided our immediate focus would be on retargeting and cleaning up our relevant motion capture data and importing it into Unreal to get a better idea of how it looks in the space; from there we can properly brainstorm transition ideas. To help with that, if we have time we will also make modifications to our individual instances of the level to suit our specific segments, using level nesting to ensure we do not affect the base environment shared amongst our segments.
With that all decided, the task list shown above was developed.
We used the Week 10 lab session as an opportunity to check how well we were progressing with the project. Will and Tex were unavailable due to attending PAX, and Conor had caught COVID, so Tyler and I were the only ones to attend.
We showed Paul all the work the group had done so far throughout the project and what we were planning to do. Paul agreed that our focus should now be on getting the relevant data retargeted and cleaned up, so that we can have a better idea on how the animations look in the space we have set up in Unreal and can start putting together level sequences and the music video. Overall, the group is on track.
Due to the unavailability of team members throughout this week, we've decided to have a follow-up meeting this coming Sunday to catch up with the members absent from the workshop and reaffirm the direction of the project, with a tentative deadline set for the data cleanup and retargeting to be done by Wednesday of next week. As we have no official classes that week, we will use that time to hold a meeting to discuss transitions and start putting together the music video.
For the data retargeting process, I started by exporting my metahuman and putting it into a MotionBuilder project. While I could have used the same characterised metahuman as before, I wanted to redo the process with a more refined pipeline, as during the project proposal phase I ran into a number of issues during the characterisation of the rigs and the retargeting of the data.
After characterising the metahuman rig, I brought in the motion capture data from take 008, as from reviewing the takes we exported, I felt it contained the best performance for my particular segment. I noticed, however, that the imported take rig was a lot smaller than the metahuman rig.
After realigning the take rig, I confirmed that it was indeed smaller. I'm not too sure what happened here; the only difference between the last time I did this and now is that we used the BVH format instead of FBX when exporting our motion capture takes.
After discussing the issue with Tyler, who had already begun the data cleanup and retargeting process, they said they had also experienced it, and that they had simply scaled up the motion capture data when retargeting, noting they had not experienced many issues, with any clipping they found being handled in cleanup. I therefore followed suit, scaling up the take rig so it was around the size of the metahuman rig. It's not perfectly aligned, but once I zero out both rigs to prepare them for retargeting, I should be able to make further adjustments.
After characterising the take rig, I made sure to check that the bones were all assigned correctly, as this was an issue that slowed progress quite a bit during the project proposal phase. Fortunately, all the bones were assigned properly, with only the expected symmetry issues appearing. It seems the issues I encountered during the project proposal phase were due to there being multiple rigs and/or unassigned place markers imported into the project. Paul had explained that ideally you would import just the one rig you want to use for retargeting and cleanup; that had not been done for the data used in the project proposal, hence the issues. Seeing as we only had one performer in our capture session, we didn't need to worry about that here.
I went through and started zeroing out the take data, and when I was done, I noticed the character panel was showing that some limbs were not parallel even though I had zeroed them so they would be. I then realised that the character was still locked and that I had been working on the motion capture take instead of a clean take like Take 001. It's likely this, or something similar, is what occurred when I was doing data retargeting and cleanup for the project proposal phase, where it caused me a lot of trouble; that experience let me catch the issue much sooner here.
I redid the zeroing out of the rig with the character unlocked, and this time the character panel showed the character was all green except for the shoulders, but that was to be expected.
When I began to zero out the metahuman rig, I started with the hips, and noticed that zeroing them out angled the character slightly. I found this weird, as I am using the same metahuman mesh as in the project proposal and I don't recall encountering this, so it was a bit of a concern, but I decided to see how the rest of the model would look zeroed out.
After zeroing out the rest of the model, I found it was oriented how it was supposed to be, so the angle issue from before was of no concern.
With the two models zeroed out, I took some time to make sure both were well aligned before beginning to retarget the metahuman rig to the take rig.
After retargeting, I switched to the take 8 timeline, and it seems the retarget was done properly. I did not encounter the same issues as in the project proposal phase, so it seems I was able to learn from past mistakes and improve my workflow in the pipeline this time around.
I then plotted the take data to my metahuman so I could begin cleaning it up. It should be noted that at this point the whole take has been worked on. As each team member is responsible for retargeting and cleaning up their own segment of the music video, I only need to clean up part of this take. I will keep the whole take timeline for now, though, as once I take it into Unreal I can cut the animation down to what I need, and this will allow for easier overlap with the segments that come before and after my own.
I began reviewing the take to get the exact frame markers for where my segment of the animation begins. My segment involves a lot of rigid rotational movements to mimic a tree-like entity, with emphasis on the head and arm movement, as that is where the particle emitters on the metahuman are located. I found that frame 2830 was where my segment of the animation began, transitioning from the more jabby, insect-like movements of Tyler's segment.
The jab movement shown above could act as a great transition point between Tyler's segment and mine. We had discussed having a flash of insect or leaf particles appear on screen between our segments, and this action could provide the motivation for it. We could even potentially do a straight cut between our segments if the camera angle and position were exactly the same; however, that depends on whether we are using the exact same takes. While I am using take 008, from discussions with Tyler I believe he is using take 009, so there will be subtle differences between the two, making the straight-cut option less feasible. During our capture session we had planned out the type of movements for each segment of the performance, so the movements will be similar regardless of take, but it is still likely we will do a particle-based transition between our segments.
I found that my segment ended at around frame 4460, as it was at this point that the animation transitioned into the more flowing movements that would be used for Tex's segment. Again, a bit of overlap isn't bad and is somewhat needed, so setting the start and end markers a little earlier and later respectively isn't a big deal. From here I can begin reviewing my segment and cleaning it up.
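Related to Tyler's S:/E: finding from our meeting, the same trim can be scripted in MotionBuilder's Python environment, which could be handy if we end up batching this for everyone's segments. A sketch using my frame numbers (everyone else's would differ):

```python
# Sketch: restrict the current take's working range to my segment
# (frames 2830-4460) using MotionBuilder's Python API (pyfbsdk).
from pyfbsdk import FBSystem, FBTime, FBTimeSpan

SEGMENT_START, SEGMENT_END = 2830, 4460  # my segment of take 008

take = FBSystem().CurrentTake
# FBTime(hour, minute, second, frame) - only the frame field is used here.
take.LocalTimeSpan = FBTimeSpan(FBTime(0, 0, 0, SEGMENT_START),
                                FBTime(0, 0, 0, SEGMENT_END))
```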
Like the transition between Tyler's segment and the beginning of mine, the end of my segment and the start of Tex's features a big jab movement, so we can use a similar transition. Like Tyler, however, I believe Tex will also use take 009, making the more feasible option spawning a particle effect in front of the camera to smoothly transition between the two.
With the data retargeted to the metahuman and the markers for where my segment roughly starts and ends, I could begin data cleanup. Overall, while reviewing the animation I found it was quite smooth, with not much jitteriness, so my focus would be on looking for clipping issues in my segment.
The first issue I identified at the start of my segment is that the right foot of my metahuman clips into the ground plane.
I set a keyframe at the start of the animation for the right foot node and then aligned the camera at frame 2830 so I could get a clear view of how much I needed to raise the right foot.
I then raised up the right foot node so that the toes would be resting on the ground plane during the frame.
The same issue occurred with the left foot at the end frame of the animation.
Applying the same methodology, I set a keyframe on the left foot node at frame 4460 so that the left foot sat properly on the ground plane.
I then made similar keyframe changes wherever the feet were clipping into the ground, or wherever they were up in the air when they were supposed to be lying flat on the ground.
Around frame 3312, the balls of the feet started pointing upwards rather than lying flat on the ground like they were supposed to.
To fix this, I adjusted the foot's rotation and keyframed it so that it would properly sit on the ground surface, and so it would be more closely aligned to the take rig.
These ball-of-the-foot issues were quite frequent, so I went through and fixed them by applying the same technique of rotating and moving the foot closer into alignment and position with the original take rig.
Making these keyframe changes allowed the feet to stay flat when they were supposed to be, making the animation look a lot more natural than before.
After fixing the issues with the feet, I reviewed the rest of the animation, looking particularly at the fingers, as they are extremities like the feet and I thought they could have similar alignment issues. I found, however, that there weren't really any issues with the hands, with not much being wrong with their positioning or rotation. Evaluating this, I feel I faced so many issues with the feet because they are in contact with the ground plane almost at all times, making clipping issues there very noticeable, while the hands are usually free in the air. In addition, the optical camera system likely had an easier time capturing the hands, while the feet would more often have been obstructed by other parts of the body.
Investigating further into the bone assignments of the feet on the two rigs, the Vicon take rig defines a heel bone, while the metahuman rig doesn't and instead defines the toes. Such differences may be why there was such a discrepancy in position and alignment between the rigs in the raw retarget. Regardless, I was able to more or less clean up the data and fix these issues.
This video showcases the final cleaned-up segment of the take. Overall I am quite happy with the result: the animation maintains its smoothness, while the footwork now looks far more appropriate and natural.
Happy with the cleanup, I plotted the animation to the character's skeleton.
I deleted the scene tree for the take rig so that I would keep only the metahuman rig with the animation applied to it.
With the animated metahuman isolated, I then exported it out and brought it back into the Unreal project.
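The export step can likewise be scripted in MotionBuilder; a rough pyfbsdk sketch, with a hypothetical output path (FileExport infers the format from the file extension):

```python
from pyfbsdk import FBApplication

# Export the isolated, plotted metahuman animation as FBX for Unreal.
# The path below is a hypothetical placeholder.
FBApplication().FileExport("D:/Project/Exports/Woodsman_Take008_Cleaned.fbx")
```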
Here we can see the animation in action within Unreal. Because I kept the whole animation, it does contain the other segments my team members will be working on, but I am able to easily cut the animation to my specific segment, and as mentioned before, it also gives me flexibility in extending my segment to overlap more with my team's other individual segments.
With my animation segment cleaned up and applied to my metahuman, I moved on to making some refinements and modifications to my environment.
When I opened the project to work on my individual scene, however, I found a strange effect occurring with the skybox of the scene.
I initially thought the skybox material had been corrupted, but after some investigating, I found that there were two sky spheres. It turned out my level still had its own default sky sphere, which was clipping with the sky sphere in the AS2 Base Lighting level nested within my level, causing this strange pattern to appear in the sky. It is likely I forgot to remove all of the default lighting objects from my level when I did that process earlier.
Removing the level's default lighting returned the sky to normal. Evaluating the scene, however, I felt the first environment modification I could make was adjusting the lighting. While the base lighting is quite good, I wanted to adjust it slightly to better accentuate the glowing elements of my metahuman, as currently they feel a bit washed out by the rest of the scene. Making these adjustments should enhance the presence of these elements in the scene, and thereby enhance the performance.
Overall I think the base lighting provides a really good foundation to work off of, which makes sense given I developed it to provide the team some neutral lighting to use as the default for their levels. I therefore duplicated the base lighting level as Level_Mark_Lighting and swapped it in for the base lighting level within my own level. From here, I could make edits to this lighting without changing the lighting of anyone else using the base lighting level. Additionally, the level nesting approach means that if anyone liked my lighting setup, they could nest it into their own levels.
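Nesting a lighting level like this is a drag-and-drop job in the Levels panel, but the same operation can be sketched with Unreal's editor Python scripting. The asset path below is a hypothetical placeholder for this project's actual levels:

```python
import unreal

# Add my duplicated lighting level as a streamed sublevel of the current world.
# The level path is a hypothetical placeholder.
world = unreal.EditorLevelLibrary.get_editor_world()
unreal.EditorLevelUtils.add_level_to_world(
    world,
    "/Game/Levels/Level_Mark_Lighting",
    unreal.LevelStreamingAlwaysLoaded,
)
```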
The first thing I noticed was that the environment's bloom was far too bright, contributing to the washing out of the metahuman's own glow.
To fix this, I added a post processing volume to my lighting scene that affected the whole level and then lowered the exposure's intensity so the sky wasn't as bright.
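The same setup can be expressed through editor scripting; a hedged sketch of spawning an unbound post process volume and pinning the exposure manually, using property names from unreal.PostProcessSettings (the exposure value itself is a guess, as mine was tuned by eye):

```python
import unreal

# Spawn a post process volume and make it affect the whole level.
ppv = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.PostProcessVolume, unreal.Vector(0, 0, 0)
)
ppv.set_editor_property("unbound", True)

# Pin exposure to a fixed value instead of letting auto exposure brighten the sky.
settings = ppv.settings
settings.override_auto_exposure_method = True
settings.auto_exposure_method = unreal.AutoExposureMethod.AEM_MANUAL
settings.override_auto_exposure_bias = True
settings.auto_exposure_bias = -1.0  # placeholder value; lower = darker scene

# Struct properties come back as copies in Python, so write the struct back.
ppv.set_editor_property("settings", settings)
```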
I felt the sky was still too bright, however, and I couldn't get the lighting to my liking by changing the directional light or its colour. I tried changing the colour of the sky sphere but found it made no difference; after some experimenting, I disabled the sky sphere's option for colours determined by sun position.
Doing this allowed me to manually change the colours of the sky and make it less bright, allowing the glowing metahuman elements to be more distinctive in the scene.
I could then increase the exposure of the scene to bring more prominence to those glowing elements, and the sky would no longer wash them out.
I was then able to adjust the direction of the lighting so that it better suited the look I was after, and then adjusted my post-processing settings further so that the lighting did not clash with the glowing metahuman elements but instead enhanced them.
Next I moved on to adding additional elements to the scene. In my segment of the animation, there's a part where the performer looks around. I want this action to be somewhat motivated by a change in scenery. To remain in theme with my metahuman, I feel adding trees could help achieve this. To test this out, I took my scene from assessment 1 and nested it within my level. Lighting clash issues aside (from the level nesting), these trees added exactly what I was looking for.
I created a duplicate of my assessment 1 scene, nested it into my own level, and then deleted the lighting from the duplicate so that only the main level's lighting remained. I then adjusted that lighting so it properly lit the metahuman amongst the added trees, which got me the result above.
There were still too many trees in the scene, though. Many were clipping into buildings in unnatural ways, or sat so far outside of view that keeping them was unnecessary. Along with deleting those, I removed some extra trees, such as the ones in front of the big building shown in the image above. I felt that keeping elements of the base environment easily visible was important in signposting to the viewer that it is the same environment, just one that has changed between each segment.
Additionally, this was also done in consideration of the transitions. Tex's segment comes after mine, and their level modified the scene to include a glow emitting from the windows of the building, which draws a lot of attention to it. By keeping it clearly visible in my own scene through a clearing in the trees, it acts as a reference point between mine and Tex's segments despite the contrast in our two environments' aesthetics.
Given the urban base of the environment, I added some rubble piles around some of the trees closer to the centre of the environment to make them feel less out of place and imply they have erupted out of the ground, conveying this narrative that nature has taken over.
With those additions done, I was quite happy with how the final environment for my segment looked.
With the refined metahuman and environment done, I have everything in place to start developing my level sequence for my segment of the music video. I first imported my animation sequence into my work level much like I had done before with my test level, which is shown above.
This section will summarise the week 10 and 11 meetings due to their close proximity to each other. The week 10 meeting took place on the Sunday of that week, with the week 11 meeting taking place the following Wednesday, so not much work was expected to be done in between the two due to the team's other commitments. The week 10 meeting focused on catching up members who were absent during that week's workshop and having some initial discussion on how transitions and level sequences would be handled, with the week 11 meeting dedicated to a more in-depth discussion about the transitions and camera setup for the music video.
During the week 10 meeting, after catching up the absent group members and establishing that the focus would be on getting data retargeting and clean-up done before the upcoming Wednesday meeting, we discussed how transitions and camera movements would work in our project. We discussed leveraging level nesting by having one camera track level we could share amongst all of our levels, where keyframes would define where the camera would be at transition points regardless of whose level it is. This would potentially give us smooth transitions between segments in the form of fade-ins and fade-outs, while still allowing us to do our own camera movements and cuts in between these key points. While we had some light discussion on the exact transitions between our segments, we would have a more in-depth discussion on Wednesday once the team had time to finish retargeting and experiment with camera setups.
During the week 11 team meeting, the group spent some time discussing the potential transitions between our different segments of the music video. At this point, Will and Conor still needed to do some retargeting work; however, Tex, Tyler and I had completed this work, so our focus would be on establishing exactly what our transitions would be. That way the whole team could work towards completing them, so that by the following Wednesday we could start putting together the music video. Previously we had discussed using a single camera track level nested in our individual levels to allow for consistency and smooth transitions between segments, which Tex had been experimenting with since our last meeting. We had trouble getting the camera sequence to work with my level, however, with the camera for some reason going below the ground whenever we tried to render out the sequence.
The discussion therefore shifted to what ideas the team had for transitions between our segments. For the most part, the ideas involved particles obscuring the camera view and/or camera movements to facilitate a transition. After discussing these, we realised that because we had a clear idea of what transitions we wanted and how they would work, having a single camera track would be unnecessary, with the potential of getting messy if multiple people tried to work on it at the same time. Instead, with the list created above, we decided we would each work on our own individual level sequences with a clear idea of what additional assets (i.e. particle systems) we would need to develop for them. Once they were done, with a target deadline of completing them by next week, we could review them, make edits if needed, and put them together in Premiere Pro to make the final music video. We felt this workflow would be a lot more convenient: given the different stages of development the team members were in, as well as the team's other commitments, it would allow us to work independently as much as possible without getting in each other's way.
With the team's direction for developing our level sequences decided, I began developing my individual segment by making a level sequence with shots to start off with.
During the week 11 meeting, Tex had pointed out that when we render out our sequences, the camera view may start off black and then slowly fade in due to auto exposure. We had turned off global auto exposure in our project to stop this, and just to be sure, I turned off auto exposure in my post process volume and set the exposure manually to my liking.
Despite these changes, when I did a test render I had an issue where, at the beginning of the sequence, the scene would fade in with this dot-like pattern. I made sure my shot cameras were not set to auto exposure, and yet I still got the same results.
After a bit of research, I found that the Movie Scene Capture system I was using was a deprecated legacy system, so I enabled the Movie Render Queue system instead.
The Movie Render Queue menu provided a much more comprehensive and customisable way of adjusting the settings for the render.
When rendering out through the Movie Render Queue, the beginning images of the sequence no longer had any of the fade-in dotting issues I was experiencing earlier, and I could move on to putting together my shots.
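For reference, queuing a render through the Movie Render Queue can also be driven from editor Python; a minimal sketch, with hypothetical sequence and map paths standing in for this project's actual assets:

```python
import unreal

# Build a Movie Render Queue job for my segment's level sequence.
# Asset paths below are hypothetical placeholders.
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/Sequences/Seq_Mark_Segment")
job.map = unreal.SoftObjectPath("/Game/Levels/Level_Mark")

# Render a deferred pass out as a PNG image sequence.
config = job.get_configuration()
config.find_or_add_setting_by_class(unreal.MoviePipelineDeferredPassBase)
config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)

# Kick the queue off in a Play-In-Editor session.
subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```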
For my first shot, I wanted to start in close-up on the metahuman and then slowly zoom out to reveal more of the environment. During our week 11 meeting, Will had shown us a feature for blending between different cameras in a sequence. I decided to experiment with this, as I was curious whether it would be a viable alternative to translating the camera via animation.
Blending between the two cameras produced the intended effect; however, as the sequence transitions from one camera shot to the other, the camera loses focus. I tried looking up ways to fix this, with the most immediate suggestion being that the settings of the two cameras have to be the same, but even then it didn't help. From my experimentation, it seems the longer the blend period, the more blurry and unfocused it gets, so it's likely camera blends aren't suitable for this specific use case.
The above video shows a clearer view of the camera blend test in a render. As the camera zooms out, the focus on the Woodsman metahuman is lost until near the end of the sequence when the blending has fully transitioned to the second camera. Another strange effect is that near the beginning of the render, the lighting within the camera view changes, as if it is adjusting once the blend period commences.
Since the camera blends were not achieving what I wanted to do, I decided to resort to just keyframe animating the position of the camera to create the intended effect.
Shown here is the keyframe-animated camera transition rendered out. It achieves a similar effect to the camera blend attempt; however, the Woodsman is kept in focus and the lighting stays consistent. Camera keyframe animation should therefore be effective for what I aim to do, though I will still look to see if I can make use of camera blends somewhere within my sequence.
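The same push-out movement can also be keyed through Sequencer scripting; a rough sketch assuming a cine camera already bound in the sequence, where the asset path, binding name, frame numbers and positions are all hypothetical:

```python
import unreal

# Key a simple zoom-out on the sequence's camera transform track.
# Asset path, binding name, frames and positions are hypothetical.
seq = unreal.load_asset("/Game/Sequences/Seq_Mark_Segment")
binding = seq.find_binding_by_name("CineCameraActor")
track = binding.find_tracks_by_exact_type(unreal.MovieScene3DTransformTrack)[0]
section = track.get_sections()[0]

# Channel 0 is Location.X on a transform section; key start and end values.
channel = section.get_all_channels()[0]
channel.add_key(unreal.FrameNumber(0), 150.0)    # close-up position
channel.add_key(unreal.FrameNumber(240), 900.0)  # pulled-back position
```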
From here I placed my metahuman animation as a subsequence of my main sequence along with the audio track, so that I would have an easier time compositing my shots in sync with the animation and audio.
Shown above is my first pass of my segment of the music video. The segment is broken up into eight different shots, each designed with a specific purpose in mind. The first shot starts in close-up on the Woodsman metahuman, as the intent is to have the transition take place in the form of fireflies obscuring the scene before dissipating, then zooming out to reveal more of the scene. The second shot continues this, rotating around to reveal more of the environment as the metahuman looks around. The third shot focuses on tracking the metahuman as it starts walking towards the left of the scene, with the focus being on the movement of the Woodsman, in particular the footwork and the particles around the antlers. The fourth shot builds on the previous shot, but this time tracks the movement of the metahuman's right arm, which then transitions into the fifth shot, where a bird's-eye view is used to highlight the arm being up in the air. Since the previous shots have had a lot of movement, I decided the sixth shot would be a front-on static shot. As the Woodsman goes through a large range of movements here, I thought this was a good opportunity to use a static shot to let that movement be highlighted and focused on. The transition to the seventh shot is motivated by an over-arm, wave-like movement, with this shot being closer up to the metahuman so the viewer can get a better look at all its details as it moves. To make the shot more dynamic, the camera tracks the movement of the arms like previous shots. For the eighth and final shot, the intention is for it to transition into Tex's segment. The motivating action here will be the pushing motion the metahuman does at the end of the segment, which will lead into a particle effect that obscures the screen; to telegraph and bring attention to that move, much of this shot once again tracks the movements of the hands.
Next I needed to develop the fireflies that would be used as part of the transition between Tyler's segment and mine. While they would be identical in appearance to the fireflies used on the Woodsman, they would need to behave slightly differently, as these particles needed to spawn a large number of fireflies to obscure the screen before dissipating. Therefore, I created a duplicate of my original particle system for the transition.
For this new system, along with increasing the number of particles spawned and making them spawn only once, I increased the warm-up time of the particles so that when the scene played, the particles would already have spawned and be midway through their lifetime. This has them obscure the camera view from the very start rather than just beginning to spawn.
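Assuming the fireflies are a Niagara system, the warm-up can be set on the system asset itself; a hedged snippet with a hypothetical asset path and placeholder values (the property names here are assumptions based on the system's warm-up settings):

```python
import unreal

# Pre-simulate the transition fireflies so they're already mid-lifetime
# on frame one. Asset path and values are hypothetical placeholders.
system = unreal.load_asset("/Game/FX/NS_Fireflies_Transition")
system.set_editor_property("warmup_time", 2.0)             # seconds pre-simulated
system.set_editor_property("warmup_tick_delta", 1.0 / 30)  # warm-up step size
```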
Adding this particle system into the sequence, the result is shown above. Overall, I quite like this initial test: because this new particle system shares the same properties as the original system, combined with the movement of the metahuman at the beginning of the sequence, it looks like the firefly cluster is following the movement of the metahuman's head.
During the week 12 workshop the team discussed where we were at in terms of our individual progress, and got feedback from Paul regarding the project. As most of the team were yet to fully put together the camera shots for their individual segments, we set a deadline of the upcoming Sunday to get them done. On Sunday, we would have a meeting to discuss our renders, provide feedback on each other's work, and then start putting them together in Premiere Pro to create the music video. The following Wednesday we would have another meeting to review the music video and discuss any final refinements that needed to be made.
Also during the session, a few improvements for my particular segment were discussed. Tex and I discussed that with changes made to their metahuman, it would be best to switch to using my firefly particles as the method of transitioning from my segment to theirs, and so I would need to work on putting that together at the end of my sequence. In addition to that, I noticed that some of the moss decals on the floor of the environment were stretching on some surfaces, and so I wanted to fix that as well.
Later in the week after the week 12 lab session, I worked on the refinements to my segment discussed with the group. For the fireflies at the end of my segment, I had some trouble figuring out how to spawn the fireflies so they would appear when I wanted them to, as the ones at the beginning of the segment just existed in the scene.
What I did initially was create a copy of my firefly particle from the start of the segment and then loop the particle effect while delaying it so it would play around 53 seconds in, appearing at the end of the segment to obscure the screen. The result is shown above; however, I felt it was quite abrupt.
I felt this may be due to the transition being quite quick, and that when combined with the next segment, it should still read as a smooth, quick transition. To test that, I duplicated the video in Premiere Pro and added a fade-to-white transition between the two, and overall I think it creates quite a seamless result. Despite that, I still felt the particle effect itself was off, as the particles somewhat just appear, when my intention was for them to be motivated by the movement of the metahuman's arms.
My team member Tyler, who was doing a similar particle system transition to mine, helped me out by letting me know we could add a track called 'System Life Cycle' to control when a particle system spawns within a sequence.
Once added, you could specify the lifecycle of the particle system in the sequence, and therefore control when it spawns.
With this set up, I then created a new version of my end particle system that was now configured to move towards the camera when it spawns. I then rendered this out, and got the result above. Overall, I am quite happy with how the transition looks. I believe it will look even better when transitioning to Tex's segment as it is intended to, as their metahuman will continue off the movements my metahuman does at the end of my segment, allowing for better continuity.
The other thing I had to improve was the moss decals in the environment, which would often stretch on vertical surfaces such as the side of the pavement.
To fix this, I first scaled the decals so that they wouldn't be affecting the vertical sides of objects such as the pavement to ensure no stretching.
I felt, however, that the absence of moss on the sides of the pavement looked weird since they were on top of it and on the road beside it. To get around this, I created additional moss decals and rotated them sideways.
That way, the moss decals appearing on the sides of these pavement blocks would be aligned to the same plane and therefore wouldn't stretch and distort.
Lastly, I added more moss decals spanning the environment so that the decals at the outer edges weren't as visible, as the boxes surrounding the decals resulted in awkward cutoffs. With the decals extended, these cutoffs aren't as visible in the camera shots I have set up. The fixes I implemented for these decals should also benefit other teammates whose segments have these moss decals quite visible in them.
Putting this all together, here is the draft test render I have put together for my segment.
Once the majority of the group's segments were done, I was also responsible for putting them together in Premiere Pro to form the music video. This process was relatively easy, as we had planned out our transitions in advance, largely making use of particle effects, as well as fade-to-white and fade-to-black effects to make the transitions between our segments more seamless.
Here is the first draft of the music video put together. Conor's segment was missing: due to illness and other assessment commitments, they were yet to get their segment done and would likely be making use of the 48-hour extension period. Once done, it will be added at the end, as it is the final segment of the music video. In addition, Will's segment still needed some lighting and material work that they were finishing when this was put together. Overall, however, the transitions worked somewhat well.
I did feel the transition between mine and Tex's segments was a little abrupt. To alleviate this, I spliced and duplicated the beginning of my segment, which has frames full of fireflies, and placed them at the end of my take to lengthen how long the fireflies appear on screen at the end of the segment. Then, in between our segments, I added a purely white screen to further lengthen the transition, using dip-to-white fades to make it more seamless. This resulted in a transition that felt more natural, with the bonus benefit that Tex's segment was now more on beat with the background music.
After Will finalised their segment, and Tyler made some minor adjustments to make their transition more seamless, here is the final music video minus Conor's segment at the end. The group decided to name the music video 'The Heat Death Of The Metaverse'. We chose this name because the music video features metahumans that become progressively more abstract, as if we are delving into the metaverse and getting further from reality; as this happens, reality becomes less stable, the environment becomes more and more fractured, and fewer elements of the real world remain, hence the 'Heat Death' part of the title. Overall, the refinements made to the segments and the edit allow for seamless transitions, and for the movements of the metahumans to be on beat with the backing track, resulting in a pretty nice music video. All that remained for our group was to add in Conor's segment once it was ready.
After adding in Conor's segment (which they were able to finish before the 48-hour extension period), here is the final music video, The Heat Death Of The Metaverse.
The following is a rationale describing my interpretation of the intended vision of the final music video outcome:
The Heat Death Of The Metaverse music video was produced with the aim of portraying a narrative through the changes made to the environment, and the metahuman at its centre throughout the music video. As the music video progresses, the metahuman subject becomes more abstract, being taken further from reality, and the environment responds by changing to reflect the changes to the metahuman.
The music video starts off with the most human-like of the metahumans; however, even this scene is not fully grounded in reality, as strange technological vines have sprouted out of the ground, with a similar visual effect present on the metahuman, which combined with the destroyed cityscape in the background implies an almost post-apocalyptic environment. The metahuman discarding the computer they were sitting at during the beginning of the segment conveys the message of them abandoning the only thing that had some semblance of actual reality. A tunnel of light is used to transition to the next segment, almost as a way to imply the passage of time, or that another dimension of the metaverse has been entered.
In this new segment, the metahuman resembles an insect human hybrid surrounded by egg sacs. The implication here is that a new form of life has taken over in this environment, and while still retaining some human elements, it has become more animalistic in nature. The figure shoots out swarms of insects throughout the segment, with the final swarm beginning to light up into fireflies.
As they clear up, a new metahuman figure appears, this time a wooden, tree-like figure devoid of human flesh, with fireflies trailing its hands and antlers that in a way incorporate the insects from the previous segment. As the woodsman lumbers around the environment, it's clear there have been changes, with trees populating the streets, which are covered in more moss. The combination of this environment with the woodsman metahuman implies that nature has taken over, and conveys a further regression of the human form as the abstraction of the metahuman increases. The music video started with a very human-like form, transitioned to a more animalistic form, and now we have a more plant-based life form. The segment ends with the woodsman shooting a flurry of fireflies at the screen, which then fades to white.
As the fireflies dissipate, they seem to morph into particles of light, which come to reveal a being made of pure light. A very stark contrast between the current and previous environments can be seen, as the scene is now in almost complete darkness, illuminated only by the metahuman and the glow of the rocks and buildings behind them. Despite its differences, it continues the narrative built in the previous segments. The metahuman and environment have regressed further, to the point where they are devoid of any forms of life, and it's as if the fireflies from the previous segment merged with the woodsman metahuman to become one being. This segment adds some water, which could allude to life; however, its shallowness, stillness and emptiness only further emphasise the absence of it. At this point, the only element implying the existence of the 'metaverse' is the light present in the scene.
At the end of the segment, the glowing metahuman begins to violently shake before exploding in movement, with a flash of white light following. When it settles, we see a metahuman begin to form out of particles, which are the same colour as the glowing building and the afterimages of the metahuman in the previous segment. This seems to imply that in an attempt to save the 'metaverse', the glowing metahuman's light exploded outwards so that the existence of the environment could remain, hence why the environment is a lot more visible in this last segment. In order to remain themselves, they absorbed what light they could from the environment and their afterimages; however, it is not enough, and they are no longer a full-bodied being, but a collection of particles masquerading as one. Their frantic movements and sprays of particles suggest they continue to try to save the metaverse by spreading as much light as they can into the environment, but their efforts seem to be in vain, as the metahuman made of a nebula of particles lets out one last blast before the music video fades to black.
This regression in the metahuman form, and the subsequent transformation of the environment over time is how the music video gets its name: The Heat Death Of The Metaverse.
Throughout this project, several considerations were made by the team and its individuals to arrive at the final music video. The project covered several aspects of CGI production, making use of motion capture and real-time environment technologies to realise the team's vision, and for most of its members it marked the first time working with such technologies, or at least the combination of them. In this final blog post, I will reflect on the development of the project, some key workflow aspects of it, observations made throughout, and considerations for if I or anyone else were to undertake a similar project again.
The development of this project built upon knowledge gathered in previous units, extending our understanding of how we can make use of real-time CG technologies. The pipeline of researching, conceptualising, and creating 3D characters and environments was explored in previous units such as CGI Foundations (KNB127) and Digital Worlds (KNB137); however, this unit went further in that we made use of motion capture technologies to apply a performance to a character within an environment designed around it. The first few weeks of the unit were dedicated to allowing students such as myself to gain an intensive understanding of the technologies involved in capturing motion capture performances. In doing this, new considerations needed to be made regarding how the 3D character and its movements would be portrayed in the environment, and how the environment would support it. When making the woodsman metahuman, I often had to consider the placement of certain elements composing it to ensure no 'clipping' occurred, maintaining realism in its movements. While developing an environment to fit the characters it houses is not new to me, considering how exactly the character would move around the space was more novel in the early stages of this project's development, and influenced the design of my individual project proposal environment as well as the team's final environment, with both featuring a large open space at their centre to emulate and accommodate the motion capture space where performances were captured. This unit therefore helped me learn to better keep in mind certain considerations when building 3D characters and environments for a specific application, which in this case was motion capture performance.
One of the more interesting takeaways from this project was that the use of motion capture meant we could extend beyond the limitations of the real world to enhance the portrayed performance. For example, in this blog I have discussed the development of a firefly particle effect located on the antlers and hands of the woodsman metahuman, citing that its inclusion came from feeling that it enhanced the large sweeping movements of my metahuman’s performance. A paper by choreographers Stephanie Hutchinson & Kim Vincs (2013, p.3) on a performer’s perspective of motion capture performance details this very thought process, stating that “different graphical and avatar environments require different ‘orders’ of physical control”. Hutchinson & Vincs (2013, p.3) further describe how trails are an easy way of tracking large arcs of movement, and how they are one way of making the physical presence of the dancer be “mediatized and re-presented on screen in the form of avatar/figure/motion graphics that form an extended sense of ‘presence’ within and through the generated imagery”. While I am adding to, rather than replacing, the human form with a more abstract representation, the essence of what Hutchinson & Vincs describe still applies, as I am making use of computer-generated imagery in the form of particle trails to enhance the performance of the metahuman in a way not possible, or at least not as feasible, within the confines of the real world. Therefore, the use of motion capture provided us with the ability to produce a realistic performance, but also the ability to extend beyond and enhance it.
In saying all that, looking at the project post-mortem, more exploration and experimentation could have been done to abstract my metahuman and consequently further extend the performance beyond reality. While the woodsman metahuman is clearly not a human, its overall form still quite closely resembles one. In an article by Paul Van Opdenbosch (2022, p.34) detailing a framework for motion-capture-derived abstract animation, one segment stuck out to me wherein he describes selectively using or disregarding individual data points captured in a motion capture performance as a way of further abstracting the final outcome. In Van Opdenbosch’s (2022, p.35) case, data points from the motion capture data were selected to showcase an abstraction of a human performance in the form of a cloth simulation, highlighting that through careful selection of these data points, the performance can still retain human-like characteristics in its movement despite being applied to a non-human form. Given that my goal with the woodsman was to create a tree-like entity, I could have leaned further into the tree aspects of my character by disregarding some points of articulation in the motion capture data, so that the overall movement of the character more closely followed the design intent of a character who moves in large, lumbering, rigid movements. Given this was my first foray into applying motion capture data to a real-time 3D outcome, this would perhaps have been out of scope for this project; however, it is something to keep in consideration for future projects that further explore these technologies.
There are certainly other considerations to keep in mind when pursuing future motion capture projects, particularly concerning the data clean-up process. This was something that consumed a lot of time for myself and other team members as we worked through the project. I personally ran into a lot of clean-up issues with the legs and feet of my character, while my team members reported issues with the fingers of their characters, a prominent example being Will, whose segment begins with their character typing on a keyboard. This was something they found challenging to fix, so they cleaned it up as best they could, with the camera angle of the shot carefully chosen to convey the intention of the motion to the viewer without revealing enough to expose the issues still present. This problem seems to be caused by the fidelity of capturing such fine movements on a small part of the human body. Van Opdenbosch (2022, p.32) details that “the type of motion capture technology used in practice, along with how it is set up and calibrated, is a key issue that can affect the balance of human movement”. The easiest solution, therefore, seems to be to increase the fidelity of the motion capture setup by introducing more cameras; however, the availability and cost of these systems means such additions are often not feasible. An article analysing the accuracy of optical motion capture (Eichelberger et al., 2016, p.8) details that increasing the number of cameras in a motion capture system only has a noticeable improvement up to a point, beyond which extra cameras become unnecessary. Instead, increased capture accuracy can potentially be attained by ensuring the camera setup is tailored to the exact activity being captured; for example, if we were using the same camera setup for two different performances, the “system accuracy for measuring level walking without any obstacles occluding markers may be different than measuring a stair climbing task with potential marker visibility challenges” (Eichelberger et al., 2016, p.8). Therefore, changes could have been made to the setup of the camera array so that it was better suited to capturing our dance performance. Again, while this unit acted as an introduction to motion capture technologies, learnings from such articles could be applied to future motion capture projects to increase the quality of captured performances and reduce the time needed to clean up the data.
Regardless of the issues faced with motion capture data, its use in producing an animated work is fascinating. It has been mentioned multiple times throughout this post-mortem that this was my first time working with motion capture technologies, and getting the opportunity to work with them was new and exciting, with their potential to reduce the workload of animation being of particular interest to me. As someone who isn't very experienced with animation, producing a work like the one developed for this project through more conventional forms of animation, such as keyframing, would have drastically increased the time required to develop the final outcome, so the role of motion capture technologies in this project is greatly appreciated. That is not to say the role of the animator is now unnecessary, with an article on the very subject (Carter et al., 2013, p.4) highlighting that animators are still needed to preserve the intended outcome of the performance by removing tics and shakiness introduced by the system, and in some cases to add weight or life to the performance to better suit the needs of the final outcome. Through this unit, I have therefore learned how motion capture as a tool can be extremely useful in reducing the amount of work needed in the pipeline, while also coming with its own set of challenges that need to be addressed.
In summary, this unit served as a great introduction to the application and combination of motion capture and real-time 3D technologies. I have been able to observe multiple aspects related to its use: how it needs to be considered within the CGI pipeline, how it can be used to extend past the limitations of and enhance real-life performance, and how it plays a vital role in virtual production and animation. Through these observations, key considerations for myself or anyone else looking to undertake a similar project would be to consider how the motion capture camera system can be tailored to the specific performance being captured, and, to aid in that, to plan beforehand how you intend to abstract the performance as well as how you intend to use the space. Being aware of such observations and building these considerations into a project early on should allow teams and individuals to make use of motion capture technologies more easily and efficiently. I therefore look forward to seeing how I can further apply these learnings, and make more use of motion capture and real-time 3D technologies, in future projects.
Carter, C. P., Van Opdenbosch, P., Bennett, J., & Mohr, S. (2013). Motion capture and the future of virtual production.
Eichelberger, P., Ferraro, M., Minder, U., Denton, T., Blasimann, A., Krause, F., & Baur, H. (2016). Analysis of accuracy in optical motion capture–A protocol for laboratory setup evaluation. Journal of Biomechanics, 49(10), 2085–2088. https://doi.org/10.1016/j.jbiomech.2016.05.007
Hutchinson, S., & Vincs, K. (2013). Dancing in suits: A performer’s perspective on the collaborative exchange between self, body, motion capture, animation and audience. In Cleland, K., Fisher, L., & Harley, R. (Eds.), Proceedings of the 19th International Symposium of Electronic Art, ISEA2013, Sydney. http://ses.library.usyd.edu.au/handle/2123/9475
Van Opdenbosch, P. M. (2022). Towards a conceptual framework for abstracted animation derived from motion captured movements. Animation: An Interdisciplinary Journal, 17(2), 244–261. https://doi.org/10.1177/17468477221102499