Research Question: How can photogrammetry be used to create a convincing 3D character for animation?
Original Plan: I will create a computer-based character ready for animation based on a pre-existing physical model. More specifically, I will make a short animated clip of the great William Shakespeare reading a line or two from his famous work, The Tragedy of Hamlet, Prince of Denmark. To accomplish this, I will use photogrammetry to capture the surface geometry of a head bust of Shakespeare and convert it into a closed manifold object. Next, I will import it into Maya and rig the face. I will then bring an audio clip into Maya and complete the lip sync for the model. Lastly, I will compile everything into a video and upload it as an MP4 file.
Improvisation: Due to complications with the rigging process, I opted to use cut-out animation instead of 3D rigged animation to animate Shakespeare's movements and achieve a similar look.
a. Historical Precedents and Research That Relate Directly to the Project: The Uses of Photogrammetry
Photogrammetry, the process of deriving the dimensions of physical objects from 2D images, has been utilized in a variety of disciplines, such as forensics, gaming, mapping and digital recording of terrain, and film and television. In games like Star Wars Battlefront, for example, photogrammetry has been used to efficiently capture locations and props from the original film series. Aside from building the environments in a timely manner, photogrammetry was used to create the game's main assets, such as spaceships. In films, especially animated films, photogrammetry has been used in a similar fashion: a desired landscape, piece of architecture, or environment is captured and replicated digitally as a hyper-realistic 3D model. One TV series that utilized photogrammetry was Big Little Lies: instead of filming at the Monterey restaurant, the production captured the interior of the building through photographs and built a digital version of it, a decision that saved time, money, and travel.
A stop-motion project that utilized photogrammetry for a different reason was the video game Vokabulantis. The creators physically built the environments and characters and then used 3D scanning to maintain the stop-motion aesthetic. Most notably, the game's lead characters were digitized through this process to preserve their handcrafted look and prevent them from appearing too digital. Photogrammetry has also been used in VFX to create convincing digital doubles of actors and characters; the process involves taking hundreds of detailed pictures of the subject and converting them into a convincing model that can be animated. In short, photogrammetry makes realistic backgrounds and characters possible without reallocating resources to build them from scratch.
Resources:
https://www.artec3d.com/learning-center/what-is-photogrammetry
https://blog.frame.io/2021/06/14/photogrammetry-future-of-filmmaking/
https://www.autodesk.com/au/solutions/photogrammetry-software
b. Significance of the Project to Animation
The significance of the project is that it will test the limitations of models created through photogrammetry (instead of traditional modeling) and determine whether it is indeed a better alternative to traditional digital modeling and sculpting. If the final product is convincing and can be rigged and animated, it will show that photogrammetry can serve as a powerful tool for building environments and characters for 3D animation in a time- and cost-efficient manner. Furthermore, the project will demonstrate that AI, though quite effective, lacks the hand-made quality of traditional artistic practices.
c. Criteria/Exemplars Against Which the Project Will Be Compared
I would like my final product to be compared to the AI avatars that are available online and through apps and software like Adobe Firefly and Synthesia.
Description: The MP4 file to the left is an AI video of a talking painting made on the Vozo website, which is how I wanted my project to look after completion. However, instead of using AI, I created a digital replica of the bust and animated it through Maya and Adobe Photoshop. The YouTube video to the right is Hamlet's iconic soliloquy delivered by Andrew Scott.
Description: I took a series of photos of the Shakespeare bust from all sides and angles using my Canon camera, then uploaded them into the photogrammetry app Polycam as an image sequence. Once I was satisfied with how the model looked, I downloaded it as a glTF file and, using Blender, converted it into an FBX file, unwrapped it, and imported it into Maya.
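The Polycam and Maya steps happened in their respective interfaces, but the Blender conversion in the middle could also be scripted. Below is a minimal Blender Python (bpy) sketch of that glTF-to-FBX step; the file paths are placeholders, not the actual project files.

```python
# Minimal Blender (bpy) sketch of the glTF -> FBX conversion step.
# Run from Blender's Scripting workspace; file paths are placeholders.
import bpy

# Start from an empty scene so only the scanned bust gets exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the glTF model downloaded from Polycam (hypothetical path).
bpy.ops.import_scene.gltf(filepath="shakespeare_bust.gltf")

# Export the scene as FBX for import into Maya.
bpy.ops.export_scene.fbx(filepath="shakespeare_bust.fbx")
```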
Original photograph of the Shakespeare Bust.
How Shakespeare Bust Looked in Maya After Exporting from Polycam
Description: Before I had to improvise, the original plan was to rig the model in Maya for animation. First, I needed to convert the object's mesh from triangles to quads to make it animatable in Maya, since Polycam only produces and exports models as triangular meshes. To accomplish this, I exported the triangular mesh to Blender as an FBX file, duplicated it, and remeshed the duplicate to get a cleaner mesh made up of quads. I then unfolded the UV map of the remeshed object and baked the texture from the original object onto the new one. I exported the final product as an FBX file, along with the new UV map as a PNG, and imported it back into Maya.
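For reference, the duplicate-remesh-unwrap portion of this workflow could be scripted in Blender roughly as follows. This is only a sketch: the object name and target face count are assumptions, and the texture bake (performed with the original selected and the remeshed copy active, using Cycles' "Selected to Active" option) is left to the interface.

```python
# Rough Blender (bpy) sketch of the triangles-to-quads workflow:
# duplicate the scan, quad-remesh the copy, and unwrap its UVs.
# Object name and face count are assumptions; texture baking is done afterwards.
import bpy

src = bpy.data.objects["BustScan"]  # original triangulated scan (assumed name)

# Duplicate the scan so the original mesh, UVs, and texture stay untouched.
dup = src.copy()
dup.data = src.data.copy()
dup.name = "BustScan_quads"
bpy.context.collection.objects.link(dup)

# Make the duplicate the only selected/active object.
bpy.ops.object.select_all(action='DESELECT')
dup.select_set(True)
bpy.context.view_layer.objects.active = dup

# Quad-remesh the duplicate (QuadriFlow) into a cleaner, animation-friendly mesh.
bpy.ops.object.quadriflow_remesh(mode='FACES', target_faces=8000)

# Unwrap the new mesh so the baked texture has UVs to land on.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')
```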
How Mesh Looked in Maya Before Remesh
Remeshing in Blender (Object in Wireframe Mode)
Baking Textures from Original Object (hidden) to Remeshed Object (in viewport) in Blender
How Mesh Looked in Maya After Remesh
Description: Rigging a scanned character, especially the face, can be very tricky, as I learned with this project. It reverses the traditional 3D modeling and rigging process because the texture is already created and ready to use, but depending on how detailed the scan is, the mesh itself may still need to be sculpted. After remeshing in Blender, I tried to insert and rearrange edge loops around the model's facial features in Maya, only to realize that the scan itself had captured very little actual surface detail. To elaborate, although the model looks like it has eyes, a mustache, and a mouth, without the texture the mesh has none of these features. I tried to redefine them in ZBrush using the sculpt tools, but ran into complications with the texture and feared that if I distorted the model too much, it would no longer resemble the original bust, which would defeat the purpose of the entire project. I also did not want to lose the original texture in any way, as it was already slightly distorted from the earlier baking process in Blender. After spending a couple of weeks trying to create an animatable mesh, I realized that I needed an alternative solution because of time constraints and obligations to other projects, like my animation capstone project.
How Mesh Looks Without Texture
Lack of Definition in Facial Features
Description: I first wanted to create a dark, gothic library that would look like it was from the 18th or 19th century, much like the one in the reference image to the left. I wanted it to look a bit creepy and mysterious, a place where a talking statue wouldn't seem too out of place or strange. However, after looking at my family's living-room-turned-library (the image to the right), I decided to create a room instead. This room, much like the featured living room, would consist of warm colors and a bookcase filled with books. Unlike the living room, I made the room on the emptier side, gave the walls a lifeless grey color, and significantly darkened the scene's lighting; I still strove to achieve the dark, mysterious aesthetic by introducing creepy music and a desaturated color palette.
I used some furniture models that I found online, and for the room itself I used textured polygon planes. After completing the modeling process, I created and animated a camera and playblasted the scene at a high sample rate. Lastly, I compiled all the output files together in Adobe Premiere Pro.
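The camera and playblast step could also be handled through Maya's Python API. The sketch below is only illustrative: the camera move, frame range, and output name are assumptions rather than the actual scene settings.

```python
# Rough Maya (maya.cmds) sketch of the camera-and-playblast step.
# Camera values, frame range, and the output name are placeholders.
import maya.cmds as cmds

# Create a camera and key a simple push-in over the playback range.
cam, cam_shape = cmds.camera(name="roomCam")
cmds.setKeyframe(cam, attribute="translateZ", time=1, value=20)
cmds.setKeyframe(cam, attribute="translateZ", time=120, value=12)

# Look through the camera, then playblast the range to an AVI for Premiere Pro.
cmds.lookThru(cam)
cmds.playblast(startTime=1, endTime=120, format="avi",
               filename="renders/room_flythrough", viewer=False,
               percent=100, quality=100, forceOverwrite=True)
```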
Early Inspiration for a Library from Wednesday Series
Inspiration for a Room from my Family's Personal Library
Modeled Room in Maya
Modeled Room with Textures in Maya
Modeled Room with Textures in Maya
Modeled Room with Dark Atmospheric Lighting in Maya
Compilation of AVI files in Adobe Premiere Pro
Animated Clip of the Environment (Before and After Shakespeare Talking)
Description: After scratching my head for a few days over alternative ways to animate the bust, I decided to resort to digital cut-out animation. I would still use Maya for rendering and animating the bust in its environment, but I would then take one frame from the video file and animate it in Adobe After Effects using the puppet pin tool. I later chose to manipulate and duplicate the frame in Adobe Photoshop instead, so that I could manually insert the interior of the mouth as needed.
Cut-out animation, according to Toon Boom, consists of "... breaking down a puppet into pieces that are moved frame by frame to animate the character." In this case, I used the lasso tool in Photoshop to select and move certain parts of the face, like the upper and lower lips, across multiple layers. I then exported every frame and labeled them accordingly. To achieve a convincing cut-out animated piece, I first created the main keyframes and then went back to create the in-betweens. For some frames, I took two adjacent poses, set both to 50% opacity, and combined them to make the overall movement look more fluid. After weeks of work, I imported all the frames into Adobe Premiere Pro and adjusted the lighting so that they would match the clips of the environment.
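The 50%-opacity trick is essentially an image blend, and the same kind of in-between could be produced in a few lines of Python with the Pillow library; the frame file names below are placeholders for the exported Photoshop frames.

```python
# Small sketch of the 50%-opacity in-between trick, using Pillow.
# Frame file names are placeholders for the exported Photoshop frames.
from PIL import Image

frame_a = Image.open("frames/frame_012.png").convert("RGBA")
frame_b = Image.open("frames/frame_014.png").convert("RGBA")

# A 50/50 blend of two adjacent poses acts as the in-between frame.
inbetween = Image.blend(frame_a, frame_b, alpha=0.5)
inbetween.save("frames/frame_013.png")
```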
The process was extremely tedious, and it was difficult to keep track of my progress as I went. Positioning the different mouths separately required a great deal of attention to lighting, and I used heavy shadows to help blend everything together. I tried to blend all the pieces together as smoothly as possible.
Example of Cut-out Animation Process in Photoshop
Example of Cut-out Animation Process in Photoshop Continuation
All Fifty Frames Exported from Photoshop
Video Compilation of All Manipulated Frames
Description: I didn't want the model to look like a ventriloquist dummy, so lip syncing was pretty important. To accomplish this, I recorded videos of myself talking in an exaggerated manner and referred back to the Andrew Scott video. I also referred to the phoneme charts from the following link: https://fmspracticumspring2017.blogs.bucknell.edu/2017/04/18/odds-ends-lip-syncing/. I prioritized the keyframes, the moments when the mouth is completely open or completely closed, because they would either make or break the animated clip. I wanted it to resemble an AI video as closely as possible.
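In practice, the phoneme chart served as a lookup from sounds to the exaggerated mouth poses. The sketch below is hypothetical: the groupings follow common lip-sync charts, and the pose labels are placeholders rather than my actual frame names.

```python
# Hypothetical lookup from phoneme groups to mouth poses, in the spirit of
# common lip-sync charts. Pose labels are placeholders, not actual frame names.
MOUTH_POSES = {
    "A/I": "open_wide",
    "E": "stretched",
    "O": "rounded_open",
    "U/W": "rounded_small",
    "M/B/P": "closed",
    "F/V": "teeth_on_lip",
    "L": "tongue_up",
    "rest": "closed_relaxed",
}

def pose_for(phoneme: str) -> str:
    """Return the mouth pose label for a phoneme group, defaulting to rest."""
    return MOUTH_POSES.get(phoneme, MOUTH_POSES["rest"])

# Example: a rough pose plan for the word "be" (B followed by a long E).
print([pose_for(p) for p in ["M/B/P", "E"]])  # ['closed', 'stretched']
```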
Thumbnails of Shakespeare Head Movement
Exaggerated Mouth Expressions for Lip Sync
Description: As previously mentioned, I imported all the frames into Premiere Pro and nested them into one large clip along with the AVI files of the environment. I found some creepy music and an MP3 clip of Andrew Scott's voice. Finally, I exported everything in video format from Premiere Pro and uploaded it to YouTube for easy access.
Placement and Adjustment of Cut-out Frames in Adobe Premiere Pro
Integration of AVI Files with Cut-out Frames and Music in Adobe Premiere Pro
Conclusion: In conclusion, photogrammetry can be used to create a convincing 3D character for animation in most cases involving generic full-body movement, like a walk cycle or flexing fingers. However, when preparing to animate a scanned character's face, one should pay close attention to the amount of detail actually captured in the scan. For this project, I was unaware of how little detail had been captured; when it came time to rig the figure, I finally realized that the mesh had no indication of the character's eyes, mouth, or mustache, even though they were visible on the texture. To resolve this issue, one would theoretically sculpt the facial features using the sculpt tools in ZBrush or Blender, remesh the model, and then bake the textures from the original. Another solution is to use a laser scanner or a white-light scanner, two devices that measure distance directly and can capture minute but important details with very high accuracy. The result would then be remeshed and retopologized in Maya in preparation for the rigging process.
This project also demonstrates that traditional artistic practices, like cut-out and computer animation, can be effective in achieving looks similar to AI-generated products, with the only downside being that the animation process can be time-consuming, depending on the artist's experience and the complexity of the scene.