On a slightly related note, we had an idea for a possible project/experience to showcase the VR and QTM tracking, one that could eventually also take advantage of the multiplayer aspect. The premise is to record a live actor tracked with QTM inside Unity, then play that recording back as an instanced prefab, while still allowing the actor to be tracked live in the original avatar.
This idea coincided with the arrival of a new work-study student from the School of Dance named Aspen. At first, I was just curious and brainstorming with her about ideas that could benefit other dance students: being in VR with a mirrored avatar, or recording and playing back a performance. With this, a dancer could carefully examine their movements fully in 3D, or do something interesting like a waltz, tango, or some other partnered dance with themselves.
Hopefully we can inspire future dance instructors to come in for a workshop, explore this technology, and see how it could help their students.
In addition, one of our main ideas would be to create a small experience for one or more guests. They would come into the space and put a headset on. In the physical space and in VR, we would have Aspen tracked in the suit and also wearing a headset. She would tour the guests around, interact with them, then perform a repetitive looping motion, like tapping away on a computer. We would record that and play it back, and her still-active avatar would go invisible until she started a new character. Repeat this a couple of times, and suddenly we have a fully populated virtual space with what are essentially videogame NPCs, all created with just one actor.
The script was adapted from a previous project Alan had worked on. It works fairly simply, with a toggle for a "recording state": while recording, we grab the position and rotation of all the joints we care about and store them in a string array; when recording stops, the script disables the QTM tracking and plays the recorded transforms back by parsing the list.
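A minimal sketch of that record/playback toggle, assuming the joints are driven by a QTM streaming component that can simply be enabled and disabled. The class and field names here are placeholders, not Alan's actual script:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the record/playback toggle described above.
public class AvatarRecorder : MonoBehaviour
{
    public Transform[] joints;      // joints assigned in the prefab
    public Behaviour qtmSkeleton;   // the QTM streaming component to disable during playback
    public bool recording;

    // One entry per frame: "posX,posY,posZ,rotX,rotY,rotZ,rotW" for each joint
    public List<string[]> frames = new List<string[]>();
    private int playbackFrame;

    void Update()
    {
        if (recording)
        {
            var frame = new string[joints.Length];
            for (int i = 0; i < joints.Length; i++)
            {
                var p = joints[i].localPosition;
                var r = joints[i].localRotation;
                frame[i] = $"{p.x},{p.y},{p.z},{r.x},{r.y},{r.z},{r.w}";
            }
            frames.Add(frame);
        }
        else if (frames.Count > 0)
        {
            if (qtmSkeleton != null)
                qtmSkeleton.enabled = false;   // stop live QTM from driving the joints

            var frame = frames[playbackFrame];
            for (int i = 0; i < joints.Length; i++)
            {
                var v = System.Array.ConvertAll(frame[i].Split(','), float.Parse);
                joints[i].localPosition = new Vector3(v[0], v[1], v[2]);
                joints[i].localRotation = new Quaternion(v[3], v[4], v[5], v[6]);
            }
            playbackFrame = (playbackFrame + 1) % frames.Count;   // loop the take
        }
    }
}
```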
There's still some polish and functionality we could work on, like writing the list out to a text file so it can be saved and reused later, or letting the QTM actor, in this case Aspen, control the recording herself with a button press on the Oculus controller, rather than me deciding when to start and stop.
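As a hedged sketch of those two ideas, assuming the Oculus Integration's OVRInput is in the project and reusing the hypothetical AvatarRecorder above:

```csharp
using System.IO;
using UnityEngine;

// Sketch only: controller-driven record toggle plus dumping a take to a text file.
public class RecorderControls : MonoBehaviour
{
    public AvatarRecorder recorder;

    void Update()
    {
        // The A button on the right Touch controller toggles the recording state
        if (OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch))
            recorder.recording = !recorder.recording;
    }

    public void SaveTake(string path)
    {
        // One line per frame, joints separated by ';' -- just one possible format
        var lines = recorder.frames.ConvertAll(f => string.Join(";", f));
        File.WriteAllLines(path, lines.ToArray());
    }
}
```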
*Image here of Aspen in VR with the tracked avatar*
Originally, we were a little uncertain how we'd track both the headset and the mocap actor at the same time, as the QTM Skeleton requires a few markers on a head cap, which would be blocked by the headset they're wearing. I had a few ideas to work around this, like tracking them separately with the headset acting as a 6DOF rigid-body (which is typically what we've done until now) and then combining them in-engine, but in a somewhat rare circumstance, the first and simplest solution I tried ended up working better than I expected. That was to just forgo the cap entirely and place the markers in roughly the same position on the headset itself. (The AIM model that QTM builds for skeletons allows a little freedom with marker placement and tracking, especially compared to a rigid-body.) I was expecting to have to build separate AIM models for these use cases, one for when Aspen was without the headset and another for when she was wearing it, but to my surprise QTM recognizes and captures her with the current AIM model well enough on its own. (This is only as of 9/27/22, so with more sessions this may change, but fingers crossed for now.)
With the VR correctly tracked by QTM, and Avatar Recording on the way, Aspen and I played with an experiment to have the avatar mirrored, and for her to do a partner performance with it. This is something dancers may do in their own classes and performances, but here your partner is a perfect mirror of you, so you're able to be much more free-flowing because you know exactly what you're going to do and how your "partner" is going to respond. Aspen did a neat experiment moving around her mirror to play with negative space, which was fascinating to watch from the split view seen above, showing both her perspective and a hovering third-person camera angle. Much like watching playbacks of your performance in 3D space and being able to move around it and see it from all angles, being able to watch yourself with a full mirror has interesting uses for performers examining themselves and their movement.
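For reference, a very rough sketch of how the mirroring could be done per joint, reflecting each tracked pose across a world plane onto a second copy of the avatar (the joint pairing and choice of plane here are assumptions, not our exact setup):

```csharp
using UnityEngine;

// Sketch: copy each live joint's pose, reflected across the world X = 0 plane,
// onto the matching joint of a mirrored avatar copy.
public class AvatarMirror : MonoBehaviour
{
    public Transform[] sourceJoints;   // joints on the live, QTM-driven avatar
    public Transform[] mirrorJoints;   // matching joints on the mirrored copy

    void LateUpdate()
    {
        for (int i = 0; i < sourceJoints.Length; i++)
        {
            var p = sourceJoints[i].position;
            var r = sourceJoints[i].rotation;
            // Reflect position and rotation across the X = 0 plane
            mirrorJoints[i].position = new Vector3(-p.x, p.y, p.z);
            mirrorJoints[i].rotation = new Quaternion(r.x, -r.y, -r.z, r.w);
        }
    }
}
```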
Right now, we just need to adjust the script to be a bit more multi-purpose and polished. As it currently stands, we still have some unused public variables left over from when the original script was written just to record animations, not necessarily with QTM.
You'll also see that the joints whose positions and rotations we capture are manually assigned in the prefab, rather than found and assigned automatically by a smarter system.
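A possible sketch of that smarter assignment, walking the avatar's hierarchy and picking joints up by name (the name list here is just a placeholder and would have to match the QTM skeleton's actual naming):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: find the joints to record automatically instead of hand-assigning them.
public static class JointFinder
{
    static readonly string[] jointNames =
        { "Hips", "Spine", "Head", "LeftArm", "RightArm", "LeftLeg", "RightLeg" };

    public static Transform[] FindJoints(Transform avatarRoot)
    {
        var found = new List<Transform>();
        foreach (var t in avatarRoot.GetComponentsInChildren<Transform>())
            if (System.Array.IndexOf(jointNames, t.name) >= 0)
                found.Add(t);
        return found.ToArray();
    }
}
```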
Another issue is that this script is rudimentary and only captures and uses data from the currently animated model. With the current setup, when we record and stop, the model disables the QTM script and begins the recorded playback, then instantiates the prefab again with QTM enabled, ready to record. This is inconvenient because I have to manually re-select the new prefab to record from it, and more importantly, when we set up the OVR prefab to reference the head joint, it stays attached to the original model, which after recording is the one doing the playback. The only way to point the VR reference at the newly instantiated model is to adjust it manually, which is obviously a pain.
So, the current plan is to restructure the script so it's open to more functionality, the main idea being one "Master" prefab that is always tracked by QTM (and possibly invisible when not recording, to allow for the multiple-NPC idea), and that, after each recording, instantiates a new playback prefab that only plays the recorded animation.
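A rough sketch of how that Master setup might look, reusing the hypothetical AvatarRecorder from earlier (the playback prefab here would be a copy of the avatar with no live QTM streaming):

```csharp
using UnityEngine;

// Sketch: the Master avatar stays tracked by QTM; each finished take is handed
// to a freshly instantiated playback-only copy, leaving the Master free to
// record the next character immediately.
public class MasterRecorder : MonoBehaviour
{
    public AvatarRecorder recorder;     // the record/playback component on the Master
    public GameObject playbackPrefab;   // avatar copy whose QTM streaming is disabled
    public Renderer[] avatarRenderers;  // for hiding the Master between takes

    public void StartTake()
    {
        foreach (var r in avatarRenderers) r.enabled = true;
        recorder.recording = true;
    }

    public void FinishTake()
    {
        recorder.recording = false;

        // Spawn an "NPC" that loops the recorded take where the Master is standing
        var npc = Instantiate(playbackPrefab, transform.position, transform.rotation);
        var playback = npc.GetComponent<AvatarRecorder>();
        playback.frames.AddRange(recorder.frames);   // give it its own copy of the take

        recorder.frames.Clear();        // Master is ready for the next character

        // Optionally hide the Master until the next recording starts
        foreach (var r in avatarRenderers) r.enabled = false;
    }
}
```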