Another major, and likely long-running, project is our own custom solution for standalone networked VR. This project will serve as a template and starting point for future projects and games.
The basic setup is a 3D scan of our studio space, with Unity Netcode players, and the Qualisys Mocap plugin. With this, we can build standalone projects to the Oculus Quest, and have visitors explore a virtual version of our space that aligns with the real space. From there, we have countless ideas for future projects which can expand off this base.
The starting point for this template project was getting networking working in Unity. We initially started with the now-deprecated UNet solution, which worked, but since this would be a possible template for many future projects, we decided to go with the currently updated and actively developed Unity Netcode. It took a bit of time to figure out how to transfer our working UNet solution over to Netcode, but ultimately it proved relatively simple.
Before long, we had simple cubes moving around, with the project built to the Oculus Quest running standalone and connecting to a host build running on the computer.
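For reference, here's a minimal sketch of what one of those networked test-cube movers might look like in Netcode for GameObjects. The class name and input handling are illustrative assumptions, not our exact script; the key idea is that only the owning client drives the transform, and a NetworkTransform on the same object syncs it out.

```csharp
using Unity.Netcode;
using UnityEngine;

// Illustrative test script: each client moves only the cube it owns,
// and a NetworkTransform component on the same object syncs the result.
public class TestCubeMover : NetworkBehaviour
{
    [SerializeField] private float speed = 1.5f;

    private void Update()
    {
        // Only the owning client drives this object's transform.
        if (!IsOwner) return;

        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.position += input * speed * Time.deltaTime;
    }
}
```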
That said, we went on to have a lot of trouble getting the right IP address to connect to, as we only used a time-delayed auto-connect. Any change to the IP, the Wi-Fi, or any number of other small issues we encountered meant the project continually had to be rebuilt onto the headsets.
Currently, we're working on getting buttons working using the templates provided by Oculus. We'll then be able to let players connect on their own, or eventually type in the IP address to connect to, so as to avoid needing to rebuild.
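A rough sketch of what that button-driven connection could look like with Netcode's UnityTransport is below. The ConnectionMenu class, field names, and default port are assumptions for illustration, not our finished menu.

```csharp
using Unity.Netcode;
using Unity.Netcode.Transports.UTP;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI hook: reads an IP typed by the player and connects as a client,
// instead of relying on the old time-delayed auto-connect.
public class ConnectionMenu : MonoBehaviour
{
    [SerializeField] private InputField ipField;   // e.g. "192.168.1.42"
    [SerializeField] private ushort port = 7777;   // assumed default port

    public void OnConnectButtonPressed()
    {
        var transport = NetworkManager.Singleton.GetComponent<UnityTransport>();
        transport.SetConnectionData(ipField.text, port);  // point the client at the host
        NetworkManager.Singleton.StartClient();
    }
}
```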
We also took a 360° capture of our studio space, as seen on the left, and used it both to create an HDRI for lighting and as a reference to model the space in 3D with accurate textures and proportions. This was more of a fun experiment than strictly necessary, though since everything is hosted within the space, I've had to create fake walls.
Next, we implemented QTM tracking into the project, a thankfully easy and painless process thanks to the ready-to-go plugin Qualisys offers for free. This uses the same auto-connect script mentioned above for the networking, though it connects to the same computer every time; the only difference is whether it connects over LAN or Wi-Fi. Connecting over Wi-Fi is only necessary when building to the headsets, so thankfully there's no confusion switching back and forth.
The primary purpose of using QTM tracking, besides tracking live performers in the space, is to track the Oculus HMDs as well.
We noticed during initial testing, when we brought the virtual space in, that the Oculus inside-out tracking only works well near the origin point in Unity. When you walk around the virtual space, which we've closely lined up to match the actual space, the position and rotation tracking can start to drift, especially as you move further away from the origin. This obviously isn't ideal, as we wouldn't want guests to think they can walk somewhere only to bump into a table or other structure in real life. The tracking will also occasionally lose its position and jump, meaning the user periodically has to walk back to a predefined spot in the real space and reset their position/view.
All of this really just boils down to the fact that if we could use the QTM's positional and rotational tracking, we would be far better off and more accurate.
After butting heads with the code for a while, we finally got something working reasonably well; a basic concept of how it works can be seen in the illustration to the right. The Oculus position is based on the OVR Tracking Space, which, using the inside-out tracking, gives us a certain position in space, though it's not entirely accurate. We then get the real-world position with QTM, find where the player actually is, and compute the offset between the two.
At first, I believe we were just setting the OVR Tracking Space to whatever the QTM position was, which gave us double transforms and other weird issues. Eventually, we figured out we just needed to take the inside-out position and add the offset toward wherever QTM said we actually were.
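A simplified sketch of that offset idea, with hypothetical field names (the real script lives inside our tracking setup and does more than this):

```csharp
using UnityEngine;

// Sketch of the offset approach: instead of overwriting the tracking space with the
// QTM position (which double-transforms), shift the OVR Tracking Space by however far
// the inside-out head position disagrees with the QTM head position.
public class HybridPositionOffset : MonoBehaviour
{
    [SerializeField] private Transform trackingSpace;   // OVRCameraRig's TrackingSpace
    [SerializeField] private Transform centerEyeAnchor; // inside-out tracked head
    [SerializeField] private Transform qtmHeadMarker;   // head rigid body streamed from QTM

    private void LateUpdate()
    {
        // Where the headset thinks the head is vs. where QTM says it actually is.
        Vector3 offset = qtmHeadMarker.position - centerEyeAnchor.position;

        // Nudge the whole tracking space so the two agree; the child camera keeps
        // its smooth inside-out motion on top of this correction.
        trackingSpace.position += offset;
    }
}
```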
We're now working on adjustable sliders, available and usable by the player in standalone, to adjust how much QTM position and rotation tracking is used. This way, each individual can adjust it based on what feels right. Ideally, with enough testing, we can find a good default that doesn't cause the user to become dizzy or disoriented from the interpolation between two different reference points.
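Conceptually, those sliders boil down to a pair of blend weights, something like the following (illustrative names, not the actual component):

```csharp
using UnityEngine;

// Hypothetical blend settings: 0 = pure inside-out tracking, 1 = pure QTM,
// exposed as sliders so each user can tune how aggressively QTM data is mixed in.
public class HybridBlendSettings : MonoBehaviour
{
    [Range(0f, 1f)] public float positionWeight = 0.5f;
    [Range(0f, 1f)] public float rotationWeight = 0.1f;

    public Vector3 BlendPosition(Vector3 insideOut, Vector3 qtm)
    {
        return Vector3.Lerp(insideOut, qtm, positionWeight);
    }

    public Quaternion BlendRotation(Quaternion insideOut, Quaternion qtm)
    {
        return Quaternion.Slerp(insideOut, qtm, rotationWeight);
    }
}
```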
Implementing the controller tracking was quite simple, as Oculus provides a prefab that's easy to drop in. Hand tracking was also simple, though I over-complicated it by using one of their prefabs with hand tracking built in, which worked well on its own, but we had issues getting it to work with the QTM tracking. I would blame this more on Oculus though, as the prefab I attempted to use was very strangely set up: the hands were essentially tied to two different parents, and moving the tracking space (which is how everything else is moved) caused double transforms because the hands were also getting transforms from another game object in the hierarchy.
Either way, it turned out to be pretty simple to set up hand tracking the same way the controllers were tracked, and both tracked well with the QTM and inside-out hybrid tracking.
This will obviously be important when we add more interactions for the users, from simple things mentioned previously, like choosing when to connect to QTM or the multiplayer server rather than relying on a time-delayed auto-connect script, to more interesting things like physics interactions.
*Image of actor in VR with hand tracking*
On a slightly related note, we had an idea for a possible project/experience to showcase the VR and QTM tracking, one that could eventually also take advantage of the multiplayer aspect. The idea relies on recording a live actor tracked with QTM within Unity, then playing that back as an instanced prefab, while still allowing the actor to be tracked live in the original avatar.
This idea coincided with the fact that we had a new work-study student from the School of Dance named Aspen. At first, I was just curious and brainstorming ideas with her that could be beneficial for other dance students: being in VR with a mirrored avatar, or recording and playing back a performance. With this, a dancer could carefully examine their movements fully in 3D, or do something interesting like a waltz, tango, or some other partnered dance with themselves.
Hopefully we can inspire future dance instructors to come in for a workshop and explore this technology and how it could help their students.
In addition, one of our main ideas would be to create a small experience for one or multiple guests. They would come into the space and put the headset on. In VR and in the space, we would have Aspen tracked in the suit and also present in VR. She would tour the guests around, interact with them, then pretend to do a repetitive looping motion, like tapping away on a computer. We would record that and play it back, and her still-active avatar would go invisible until she took on a new character. We could repeat this a couple of times, and suddenly we'd have a fully populated virtual space with what are essentially videogame NPCs, all created with just one actor.
The script was adapted from a previous project Alan had worked on. It works fairly simply, with a toggle for a "recording state": while recording, we get the position and rotation of all the joints we want and store them in a string array; when recording stops, it disables the QTM tracking and plays back the recorded transforms by parsing the list.
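Here's a boiled-down sketch of that record/playback loop. The real script stores the frames as strings and toggles the QTM components; typed lists and the class name here are just to keep the idea readable.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the record/playback idea: joint transforms are sampled every
// frame while recording, then looped back over the same set of joints afterwards.
public class AvatarRecorder : MonoBehaviour
{
    [SerializeField] private Transform[] joints;   // joints driven by QTM while live
    public bool isRecording;

    private readonly List<Vector3[]> positionFrames = new List<Vector3[]>();
    private readonly List<Quaternion[]> rotationFrames = new List<Quaternion[]>();
    private int playbackFrame;

    private void LateUpdate()
    {
        if (isRecording)
        {
            // Capture this frame's pose from the live (QTM-driven) joints.
            var positions = new Vector3[joints.Length];
            var rotations = new Quaternion[joints.Length];
            for (int i = 0; i < joints.Length; i++)
            {
                positions[i] = joints[i].localPosition;
                rotations[i] = joints[i].localRotation;
            }
            positionFrames.Add(positions);
            rotationFrames.Add(rotations);
        }
        else if (positionFrames.Count > 0)
        {
            // Playback: loop over the recorded frames and re-apply them
            // (in the actual project, onto the instanced copy of the avatar).
            for (int i = 0; i < joints.Length; i++)
            {
                joints[i].localPosition = positionFrames[playbackFrame][i];
                joints[i].localRotation = rotationFrames[playbackFrame][i];
            }
            playbackFrame = (playbackFrame + 1) % positionFrames.Count;
        }
    }
}
```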
There's still some polish and functionality we could work on, like writing the list out to a text file so it can be saved and reused later, or letting the QTM actor, in this case Aspen, control the recording herself with a button press on the Oculus controller, rather than me having to decide when to start and stop...
Initially, more information and updates following this line of work continued on this page, and the multiplayer aspect took a back seat for a while. For the sake of organization, I'll cut it off here and instead redirect you to a separate page dedicated to the Avatar Recording project.
While the initial tests getting the controller and hand tracking to work in Unity were relatively simple, we also had ideas for using the finger tracking built into the Oculus HMD to manipulate the player avatar's fingers.
The prefabs provided by the Oculus SDK use a detached hand model that floats in front of the user when hand tracking is active. So, both for the sake of the player's Final-IK VR avatar, as well as Aspen and her live performance avatar, we wanted to take that finger tracking data and apply it to our own avatars/models.
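A rough sketch of the retargeting idea, assuming the Oculus Integration's OVRSkeleton component and a hand-mapped bone array on our side (both the bone mapping and the class name are assumptions, not our exact setup):

```csharp
using UnityEngine;

// Sketch: copy the tracked finger bone rotations from the Oculus hand skeleton
// onto a matching list of bones on our own hand rig.
public class FingerRetargeter : MonoBehaviour
{
    [SerializeField] private OVRSkeleton skeleton;      // tracked hand from the Oculus SDK
    [SerializeField] private Transform[] ownHandBones;  // our avatar's finger bones, same order (assumed mapping)

    private void LateUpdate()
    {
        if (!skeleton.IsDataValid) return;  // no reliable hand tracking this frame

        for (int i = 0; i < ownHandBones.Length && i < skeleton.Bones.Count; i++)
        {
            // Copy rotation only; our avatar keeps its own bone lengths/positions.
            ownHandBones[i].localRotation = skeleton.Bones[i].Transform.localRotation;
        }
    }
}
```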
We've had continuous issues with the hybrid tracking solution, especially as the project has grown increasingly complex: networking, a viewable avatar for the player, setting up different scenes and levels for a demo, and porting a lot of this work back and forth between the Avatar Recording project and the Multiplayer VR template.
It's become increasingly difficult to troubleshoot bugs when they arise, and one of the constant ones sits at the core of all these projects: our implementation of the hybrid tracking approach, which uses both positional and rotational data from the Oculus HMD and our Qualisys motion capture system.
Due to this, we've temporarily stepped away from the hybrid tracking approach until we can implement a more polished, working solution. Currently, the position tracking seems to work perfectly fine; however, the hybrid rotation is giving us errors. (Note: the reason we even need to do this is that, given enough time and rotations, the Oculus Tracking Space's idea of the player's rotation begins to drift compared to their actual rotation in real-world space. This means that eventually one player's forward will not necessarily be another player's forward, leading to a possible collision. The Oculus' position tracking is much the same, so we must use this hybrid method.)
The ideal scenario was to use full QTM position tracking (with some smoothing) to constantly track each user's position accurately. We were also originally using full QTM rotational tracking, but even with smoothing, it didn't feel right: sometimes there was lag, or ghosting effects, or it caused a general sense of nausea. To get around this, we instead opted to use Oculus' built-in rotational tracking, which is buttery smooth and far less disorienting. Then we could do one of two things: either occasionally have the guests (or ourselves) reset their rotation to match what QTM is telling us, or allow a margin of error. This way, the Oculus mostly handles rotation tracking itself, but when there's too much drift/offset from the QTM rotations, it either smoothly corrects or instantly snaps, based on our error correction speed.
I still believe this margin-of-error solution is likely the best scenario, but unfortunately, as mentioned previously, we are still experiencing some bugs and issues with it. Sometimes the error magnitude (the amount of offset between QTM and Oculus) is constantly too high and fluctuating, so it's always correcting. Sometimes, if we try to manually re-align with a key press, it might work, or it might offset on the roll a bit, or it might start spinning out of control. I can only imagine these errors are the result of strangeness with quaternion math, and/or the fact that our QTM space in the software uses a different XYZ coordinate system than Unity's, which has also caused various issues. Personally, I don't have the programming skill set to fix these issues; most of the advanced programming work and math is done by my boss Alan, and I'll fiddle around or make adjustments when I can, but quaternions are way outside my wheelhouse.
The latest iteration of our hybrid QTM & inside-out tracking script. Options include toggles for position correction and rotation correction using QTM, an error threshold value (reached when the corresponding QTM values are offset from what the inside-out tracking reports), and a smoothing setting that can blend to the new QTM value, or snap to it if the smoothing is low enough.
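To give an idea of what that boils down to, here's an illustrative outline of the rotation-correction side. Comparing yaw only is a simplification to sidestep the quaternion issues mentioned above; the actual script is Alan's and more involved, and all the names here are placeholders.

```csharp
using UnityEngine;

// Illustrative outline: let the smooth inside-out rotation drive the view, and only
// pull the tracking space toward the QTM heading once the disagreement passes a
// threshold, either smoothly or as a near-instant snap depending on the speed value.
public class HybridRotationCorrection : MonoBehaviour
{
    [SerializeField] private Transform trackingSpace;    // OVR Tracking Space
    [SerializeField] private Transform centerEyeAnchor;  // inside-out head rotation
    [SerializeField] private Transform qtmHead;          // QTM head rigid body

    [SerializeField] private float errorThreshold = 10f;   // degrees of drift we tolerate
    [SerializeField] private float correctionSpeed = 30f;  // degrees/second; very high ≈ snap

    private void LateUpdate()
    {
        // Compare heading (yaw) only, as a simplification of the full rotation error.
        float ovrYaw = centerEyeAnchor.eulerAngles.y;
        float qtmYaw = qtmHead.eulerAngles.y;
        float error = Mathf.DeltaAngle(ovrYaw, qtmYaw);

        if (Mathf.Abs(error) > errorThreshold)
        {
            // Rotate the whole tracking space around the head so forward re-aligns with QTM.
            float step = Mathf.Sign(error) * Mathf.Min(Mathf.Abs(error), correctionSpeed * Time.deltaTime);
            trackingSpace.RotateAround(centerEyeAnchor.position, Vector3.up, step);
        }
    }
}
```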
One of the first projects spawned from this VR-QTM-MP template that we've been building over the course of the year.
Inspired by popular VR art making apps like Tiltbrush, Gravity, Openbrush and so on, we started creating our own VR art making app that included network functionality so multiple users could create together.
In the middle of working on this project, we decided to play around with the ability to draw and create art in VR while using the hybrid tracking approach and the new Netcode for GameObjects solution Unity is offering. It's only mentioned here because this in-progress template acts as the basis for that new project; more about it can be found on its specific page.
Another idea for our rotation issues was to create a virtual target of sorts that tracks where QTM thinks your rotational forward direction is. The player would then align a crosshair with the center of the target and use the built-in Reset Tracking button in the Oculus menu.
We would also have liked an easier way to do this manually, through a button press in Unity on the developer's end/workstation computers, for users who aren't as experienced with VR and technology. Unfortunately, this functionality no longer seems to be readily available on the Oculus Quest, though it used to work with the Rift S. I can't say I'm too surprised when useful functions are taken away from developers in subsequent updates to software and hardware, but I sure am disappointed.
Either way, between having the users manually reset tracking to fix rotation, or dealing with the slight awkwardness of the script that attempts to smooth it, I'd say we reached a decent enough spot to pause this line of work and move back into networking.
When we first started this project, we were originally using Unity's old HLAPI system, which worked at a basic level; all we wanted to do was track players and network their positions and rotations to clients.
However, Unity recently came out with the new Netcode for GameObjects, and conversion over was pretty simple. Essentially, outside of scripting, game objects just need a "NetworkObject" and "NetworkTransform" component: the first marks the object as a networked object so it can be spawned and identified across the network, while the NetworkTransform is what syncs its position, rotation, and scale each frame, at least as far as I understand it. The NetworkObject only needs to be on the topmost parent object, while every child object that needs to be synced should have its own NetworkTransform.
Additionally, we add a Network Manager object to the level with the corresponding "NetworkManager" component, which contains options for the Player Prefab, the Network Prefabs list, buttons for starting a Host, Server, or Client, and the connection data (IP address and port).
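For testing, those start buttons can be as simple as a few debug OnGUI buttons; this is a generic sketch rather than our exact menu, and the class name is just illustrative.

```csharp
using Unity.Netcode;
using UnityEngine;

// Simple debug buttons for starting a session while testing in the editor or on device.
public class NetworkStartButtons : MonoBehaviour
{
    private void OnGUI()
    {
        // Hide the buttons once a session is already running.
        if (NetworkManager.Singleton.IsClient || NetworkManager.Singleton.IsServer) return;

        if (GUILayout.Button("Start Host")) NetworkManager.Singleton.StartHost();
        if (GUILayout.Button("Start Server")) NetworkManager.Singleton.StartServer();
        if (GUILayout.Button("Start Client")) NetworkManager.Singleton.StartClient();
    }
}
```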
I believe this is all that's really necessary to have multiple players see each other spawn and move; anything beyond that starts to require some scripting. Considering this page is just a template for other projects, I'll end this section here; more about how we handle networking for the VR Multiuser Art Making can be found on its respective page.