This is one of the first projects spawned from the VR-QTM-MP template that we've been building over the course of the year.
Inspired by popular VR art-making apps like Tilt Brush, Gravity Sketch, Open Brush, and so on, we started creating our own VR art-making app with network functionality so multiple users could create together.
Currently, I don't believe there are any hard goals for this speculative project/template, other than creating a multiplayer art-making app that we can showcase with our various headsets inside our CIM studio space. The CIM also occasionally hosts Design or Illustration classes that come in for a short time to demonstrate and test art-making in VR using the various apps I mentioned before, so it would be pretty cool to have them play around with our own in-house solution.
This started with some basic scripts and prefabs that my boss, Alan, provided; he either created them for this or had them lying around from an older project.
DrawLineRender - a simple script that creates a LineRenderer component and grabs a cursor's position. It adds a series of points, spaced by an adjustable distance, which are used to build the line.
DrawLineVR - the script responsible for actually getting input, as well as instantiating the prefab that holds the LineRenderer component and script.
LineBrush - a prefab that holds the LineRenderer component as well as the script that updates its position based on the cursor.
Essentially, the script works like this: the user holds the trigger on the Oculus controller, which instantiates the LineBrush prefab, which starts drawing a line from a series of points placed periodically, determined by a distance threshold that can be changed (setting it really high allows straight lines to be dragged around, while really small values are obviously very smooth). Once the user lets go, the line stays at its last position from when the trigger was released, and the process repeats. (All of this is based on a small spherical "cursor" that's parented to the right-hand controller.)
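Roughly, the pieces fit together like this. This is only a minimal sketch, assuming OVRInput for the trigger and made-up field names; the project's actual scripts surely differ in detail:

using UnityEngine;

public class DrawLineVR : MonoBehaviour
{
    public GameObject lineBrushPrefab;
    public Transform cursor;   // the small spherical cursor on the right controller
    DrawLineRender currentLine;

    void Update()
    {
        bool held = OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.RTouch);

        // Trigger just pressed: spawn a fresh LineBrush at the cursor.
        if (held && currentLine == null)
            currentLine = Instantiate(lineBrushPrefab, cursor.position, Quaternion.identity)
                              .GetComponent<DrawLineRender>();

        // Stroke grows while the trigger is held.
        if (held && currentLine != null)
            currentLine.UpdateLine(cursor.position);

        // Released: the stroke stays where it is, and we wait for the next press.
        if (!held)
            currentLine = null;
    }
}

public class DrawLineRender : MonoBehaviour
{
    public float distanceThreshold = 0.01f;   // high = draggable straight segments, low = smooth curves
    LineRenderer line;
    Vector3 lastPoint;

    void Awake()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 0;
    }

    // Called by DrawLineVR; only adds a point once the cursor has moved far enough.
    public void UpdateLine(Vector3 cursorPos)
    {
        if (line.positionCount == 0 || Vector3.Distance(cursorPos, lastPoint) >= distanceThreshold)
        {
            line.positionCount++;
            line.SetPosition(line.positionCount - 1, cursorPos);
            lastPoint = cursorPos;
        }
    }
}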
Adjusting the size of the LineBrush wasn't too complicated; I created an "UpdateCursorSize()" function that activates when the thumbstick on the right-hand controller is tilted.
The math was a bit abstract to figure out (as someone who didn't pay much attention in math class in high school) but ultimately not too bad. Looking at it now, it's a bit hard to tell what's happening, but essentially it just grabs the left/right axis value (-1 to 1) of the thumbstick and adds or subtracts it from the current cursor/brush size, while lerping between hard-set limits to ensure that the player doesn't make it too big or too small.
Script for updating the cursor size can be seen below:
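Since the original screenshot isn't reproduced here, here's a sketch of what that function boils down to. The field names and the axis read are my assumptions, and I've used a clamp to stand in for the hard-set limits described above:

public float resizeSpeed = 1f, minSize = 0.01f, maxSize = 0.5f;
public Transform cursor;
float cursorSize = 0.1f;

void UpdateCursorSize()
{
    // Left/right tilt of the right thumbstick, -1 to 1.
    float axis = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.RTouch).x;

    // Grow or shrink, kept between hard-set limits so the brush can't get too big or too small.
    cursorSize = Mathf.Clamp(cursorSize + axis * resizeSpeed * Time.deltaTime, minSize, maxSize);
    cursor.localScale = Vector3.one * cursorSize;
}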
The other component is the menu, which is parented to the left hand. This is pretty self-explanatory; there's not much going on there. Using Unity's UI system, it's a basic menu with an empty button that lights up when pressed using the OnClick() function. Eventually it'll be used in a similar way to apps like Tilt Brush, containing various options and tools, but for now it's mainly a placeholder. It does have a button to toggle the "Target", a script I explained a bit more on the VRQTMMP project page; essentially it's just a floating target at a set distance from the player, which acts as a target for players to face to realign their view by resetting the tracking.
After getting basic functionality up and running, we attempted to get it working with networking so users could see each other's drawings, but we ran into a few hiccups that required changes to both the DrawLineRender and DrawLineVR scripts.
Specifically, we created a new NetworkDrawLineVR to handle all the changes for networked drawing in VR, and we kept the original in case we only need the single-player functionality.
In order to get it to work, we only needed the [ClientRpc] attribute (and its counterparts) so the host/server and the clients could send data to each other. One of the initial issues we had to think about was how to handle sending data back and forth. We had gotten a basic implementation working, but only on the host's side: the player running the host could draw and a client would see it, but the client themselves couldn't draw. We had done some simple tests before with ClientRpc and Netcode, but I hadn't yet realized that everything we did relied on the host sending data to the clients, while the clients had never sent data back, so this was a bit of a new area for me.
Essentially, in the end, we figured the best method was to have the host worry only about tracking each cursor's position, and the trigger press that signals instantiating the LineBrush prefab.
The client presses the trigger to indicate the start of drawing, the host tracks the client's cursor position and does the drawing itself, then finally sends that brush stroke back to the respective client and all others.
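In Netcode for GameObjects terms, the shape of that exchange looks something like the sketch below. The method names are invented for illustration; this is not the actual NetworkDrawLineVR:

using Unity.Netcode;
using UnityEngine;

public class NetworkDrawLineVR : NetworkBehaviour
{
    public GameObject lineBrushPrefab;

    // Client -> server: "my trigger is down, here's where my cursor is".
    [ServerRpc]
    public void StartStrokeServerRpc(Vector3 cursorPos)
    {
        // The host does the actual instancing and drawing...
        GameObject brush = Instantiate(lineBrushPrefab, cursorPos, Quaternion.identity);
        brush.GetComponent<NetworkObject>().Spawn();

        // ...then pushes the stroke data back out to every client.
        UpdateStrokeClientRpc(cursorPos);
    }

    // Server -> clients: everyone, including the original sender, sees the stroke.
    [ClientRpc]
    void UpdateStrokeClientRpc(Vector3 point)
    {
        // Append the point to the local copy of the stroke here.
    }
}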
The only check we're really using is Netcode's "IsLocalPlayer", which determines whether the cursor parents itself to the local host or the remote client.
It took a bit more fiddling, but eventually we got it all working, mostly with Alan's help, as networking is another subject outside my wheelhouse. This method should be fine for the future as long as we're only worried about networking brush strokes, which in this case are just instanced prefabs with a LineRenderer component.
At this point in time, networking was established and working, but the actual drawing aspect wasn't: we could draw a line and change its size, and that was it. We needed to implement the ability to change brushes, change colors, and maybe even change the material or texture.
The first thing was changing the color, which proved harder than I expected. I was going to implement the simple HSV color picker commonly seen in most programs, yet I had no idea how. I found some free examples around the internet that work, but after looking into the code, I realized I had severely underestimated how much work and math goes into making one; even adapting the ones I downloaded to work in VR seemed fairly daunting. Alan at one point mentioned just using an image of an HSV picker and a raycast to grab the pixel color. I figured that would be a pretty rough solution, but likely easier and worth attempting anyway, so I got to it.
Initially I just found some Unity forum posts where someone asked essentially the same question, "how do I get the pixel color of a texture using a raycast?", which, as I found out, wasn't too difficult, though I was initially quite confused. The forum posts were quite the help, as they typically are; I barely had to modify anything to make it work.
I did have to re-research how raycasts work; even though I've used them quite a few times, I still seem to always get tripped up. I got that set up and did a basic debug to make sure it was working. With that done, I got a nice high-quality image of a color picker and threw it into Unity, setting it up as a simple Unlit Texture material.
In my initial tests, I found myself confused: the raycast (as can be seen in the image below) would only shoot in the forward direction, no matter which way the cursor was facing. I supposed it was a simple matter of global vs. local space, as is often the case with a lot of things, but I still can't quite get it to work.
The code's pretty simple: fire a ray, and check to make sure it's only looking for an object with the tag "ColorWheel".
Set up some temporary variables for the Renderer and Texture2D objects, as well as coordinates in UV space.
The RaycastHit data can't output a color, but it can output a texture coordinate, so we simply sample that, grab the texture, find out what pixel matches those coordinates, and assign it to the brush (in this test instance, a simple cube).
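Pieced together from that description, the raycast pixel-pick looks roughly like this. It's a sketch: the "cursor" transform and "pickedColor" field are my stand-in names, and the color wheel needs a MeshCollider and a Read/Write-enabled texture for the texture coordinate to come through:

using UnityEngine;

public class ColorPicker : MonoBehaviour
{
    public Transform cursor;
    public Color pickedColor = Color.black;

    void FireRay()
    {
        Ray ray = new Ray(cursor.position, cursor.forward);

        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider.CompareTag("ColorWheel"))
        {
            // Temporary variables for the renderer, texture, and UV coordinates.
            Renderer rend = hit.collider.GetComponent<Renderer>();
            Texture2D tex = rend.material.mainTexture as Texture2D;
            Vector2 uv = hit.textureCoord;

            // The hit can't give us a color directly, but it can give us the texture
            // coordinate, so find the pixel that matches and take its color.
            pickedColor = tex.GetPixel((int)(uv.x * tex.width), (int)(uv.y * tex.height));
        }
    }
}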
And that's it. Once again, not the neatest or most professional method of color selection, but as an amateur at coding, this was much easier to get working and, hey, it works.
I spent a bit more time with the script, attempting to change the line color to whatever was picked. I ran into some issues in the beginning, mainly that the color picker would change the color of the last drawing, and occasionally, if multiple different drawing instances were connected, they would all change colors.
This was pretty simple to figure out, though. First, I had to use ".sharedMaterial.color" because I was accessing the prefab rather than the instance itself. This should change the material for the prefab, and thus all instances... except that's not what happened; it only changed the most recently instantiated prefab, including ones that were different instances but connected...
I'm not sure if that makes any sense; it was a strange bug. Either way, the solution was to access the material property of the instance itself, rather than the prefab, and to move the code from the FireRay() function up to just after the brush is instantiated when drawing.
Now, if a color isn't picked, the line defaults to black, and only after picking a color will all new instances take the new color.
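The distinction in code, for anyone who hits the same bug (a sketch; "pickedColor" and "cursor" are my stand-in names):

// Wrong: .sharedMaterial edits the shared asset that every instance points at.
// lineBrushPrefab.GetComponent<Renderer>().sharedMaterial.color = pickedColor;

// Right: color the freshly spawned instance, just after it's instantiated.
GameObject brush = Instantiate(lineBrushPrefab, cursor.position, Quaternion.identity);
brush.GetComponent<Renderer>().material.color = pickedColor;   // .material affects this instance only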
At some point, I'd like to be able to select line instances and change their color after the fact (which I did by accident in the beginning, but that code is likely flawed and would break something else); for now this basic functionality will work.
Color picking is now mostly in place and working... only partially, because it works by selecting a pixel color from the HSV color wheel seen above, and that image contains no dark values, so no grey or black can currently be selected. I think I can fix this by adding some code to adjust the RGB values of the selected color, adding or subtracting from the current values in a 0-1 range through the tilting of the left-hand analog stick.
Either way, I've grown bored of lines and wanted to add what I'll be calling "tools": essentially a new prefab/object that's instantiated when "drawing". So far we just have the Brush Tool, and I figured it would be quite simple to add a "Cube Tool" by just instantiating a prefab of a default cube, following the size and rotation of the cursor.
Setting up the code was pretty simple. All the brush instancing is handled in the DrawLineVR script; in this case, I just used a switch statement with a public integer that can be set from the left-hand menu for switching by the user:
int selectedTool = 1 sets the right-hand trigger to instantiate the LineBrush prefab and apply the appropriate settings.
int selectedTool = 2 sets the right-hand trigger to instantiate the new CubeBrush prefab, which is essentially just a cube with a MeshRenderer that uses the same material as the LineBrush, so the color-picking code is easy to hook up.
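In sketch form (the prefab and field names here are assumptions):

public int selectedTool = 1;   // set from the left-hand menu
GameObject currentBrush;

void SpawnBrush()
{
    switch (selectedTool)
    {
        case 1:   // LineBrush: the line-drawing prefab and its settings
            currentBrush = Instantiate(lineBrushPrefab, cursor.position, cursor.rotation);
            break;
        case 2:   // CubeBrush: a plain cube sharing the LineBrush material
            currentBrush = Instantiate(cubeBrushPrefab, cursor.position, cursor.rotation);
            break;
    }
}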
Now that we have two different working "tools", it's only appropriate to actually let the player switch the tool in use from the left-hand menu.
Considering we already have UI buttons working with the color picker, setting up tool switching is pretty simple.
The current way tool switching works is by, well, using a switch statement, with an integer assigning what the trigger will instantiate: in this case, either the LineBrush tool or the new CubeBrush tool.
I've added two buttons, one for each tool, and they simply fire an OnClick() event which activates the corresponding functions in our left-hand menu script.
Pretty simple: when each function is called, it calls into the main DrawLineVR script on the player, setting the public selected-tool value.
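Which amounts to little more than this in the menu script (the function names are mine):

public DrawLineVR drawLineVR;   // reference to the player's drawing script

// Wired to each button's OnClick() event in the inspector.
public void SelectLineBrush() { drawLineVR.selectedTool = 1; }
public void SelectCubeBrush() { drawLineVR.selectedTool = 2; }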
We now have a pretty rough but functional overall demo: two different brushes to use, and a way to change the color and size of each "brush stroke".
That said, none of these new updates have been adjusted to use networking yet. I'm keeping the code pretty light, however, so adjusting the NetworkDrawLineVR script shouldn't be too difficult: just adding RPC calls for brush instantiation and color swapping. I figure that adding the functionality at a basic level in single-player is currently the top priority, and taking time to refactor for networking can be done a bit later.
So, as mentioned a few panels above, the color customization is working at a basic level. It uses a raycast to get the pixel color from an image of an HSV color wheel; however, that image doesn't actually contain any dark values, greys, or blacks. So the player can only choose fully bright colors: mixes of R, G, and B, or all of them together for white.
Taking a first pass at this was about as straightforward as I was hoping. I mainly just copied the code from the UpdateCursorSize() function and created an UpdateColorValue() that did much the same, but adjusting the pickedColor RGB values rather than the cursorSize XYZ scale values.
It worked mostly well, but as you might be able to guess, something was a little off. I could adjust the value from black to white; however, after setting the color to, say, red, adjusting it changed it back to a greyscale value. I kind of expected this behavior, as I was setting the pickedColor variable to a new RGB value based purely on the adjustments made by the left thumbstick. I contemplated how best to solve this before Alan mentioned that I should be able to modify the HSV values directly, rather than the RGB values, and sure enough, that was the simplest fix.
Here you can see the whole function, surprisingly simple. The first two lines just copy the cursor size script with the necessary changes; the commented-out line was the first attempt, where I set the pickedColor variable to an entirely new RGB value based on the thumbstick. Instead, I created temporary floats for H, S, and V.
Calling Color.RGBToHSV, I assigned the current pickedColor values to them, then re-assigned pickedColor to use the same hue and saturation but a different value based on the thumbstick. Results were successful.
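A reconstruction of that function from the description (the names and the axis read are assumptions; the commented-out line is the greyscale first attempt):

void UpdateColorValue()
{
    // Same thumbstick read as UpdateCursorSize(), just driving color instead of scale.
    float axis = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch).x;
    float delta = axis * adjustSpeed * Time.deltaTime;

    // First attempt: overwrote the whole color, so everything went greyscale.
    // pickedColor = new Color(pickedColor.r + delta, pickedColor.g + delta, pickedColor.b + delta);

    // Fix: convert to HSV, nudge only V, and convert back.
    Color.RGBToHSV(pickedColor, out float h, out float s, out float v);
    pickedColor = Color.HSVToRGB(h, s, Mathf.Clamp01(v + delta));
}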
Now the color keeps the current hue and saturation of whatever the user picks, while the value changes based on the left thumbstick.
With color selection working and out of the way, I decided to go ahead and add options to adjust the material further: things like Opacity, Metalness, Smoothness/Roughness, and Emission.
Metalness and Smoothness were the easiest; they use the same code as the UpdateCursorSize function. It gets the left thumbstick's axis value, multiplies it by a smoothing factor and Time.deltaTime, and sets it on the instance's material using material.SetFloat. An example can be seen below.
cubeColor.material.SetFloat("_Metallic", newMetalness);
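For context, that line sits in roughly this pattern (a sketch; "adjustSpeed" and the axis read are my assumptions):

float axis = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch).x;
newMetalness = Mathf.Clamp01(newMetalness + axis * adjustSpeed * Time.deltaTime);
cubeColor.material.SetFloat("_Metallic", newMetalness);
// Smoothness is the same idea via the Standard Shader's "_Glossiness" property.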
In doing this, I also adjusted the Value option to be enabled/disabled in the menu with a button, so that all these material changes can be made the same way with the left thumbstick, simply by toggling the appropriate menu option.
The menu consists of:
A Color button for toggling between this menu and the main one.
Value, Opacity, Metalness, and Smoothness buttons to toggle which property the left thumbstick controls.
The Color Wheel itself.
An Emission toggle to turn emission on or off, which takes its color from the albedo.
And an Adjustment text that changes to show the current value of whatever you're changing, e.g. Metalness: 0.5.
Getting opacity to work was a bit more difficult at first, mostly because the RGB-to-HSV conversion (and vice-versa) doesn't carry the opacity/alpha value.
I banged my head against this for a bit until I realized I could just assign a new Color to the pickedColor variable, using the same R, G, B values it already has, but updating the alpha channel with the newOpacity.
The function and an example picture can be seen below:
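Since the screenshot isn't reproduced here, a sketch of the workaround (names assumed, matching the earlier sketches):

void UpdateOpacity()
{
    float axis = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch).x;
    float newOpacity = Mathf.Clamp01(pickedColor.a + axis * adjustSpeed * Time.deltaTime);

    // RGBToHSV/HSVToRGB drop the alpha channel, so rebuild the color by hand:
    // same R, G, B it already had, with only the alpha updated.
    pickedColor = new Color(pickedColor.r, pickedColor.g, pickedColor.b, newOpacity);
}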
Last on the list was Emission, and this one was a bit of a doozy. In hindsight, it really wasn't much different from setting the color; mainly I just had trouble figuring out the correct things to write.
It wasn't clear to me exactly how to toggle it on and off. Looking at the shader for the material, it exposes all the properties and their names (_Color, _Metallic, and so on), but turning on Emission is a boolean that requires a tick, and it's not listed in the Standard Shader inspector.
Unity's documentation was a bust, but thankfully I found some forum posts that identified the correct way to control emission through code, which was pretty simple in the end. Setting the color is the same as it is for albedo, and to toggle it on or off, one just needs to use material.EnableKeyword with "_EMISSION" as the name. I could not find this in the Unity editor, nor in the Unity Scripting API documentation; maybe I didn't look hard enough.
(Note from future Joe: I was wrong. It turns out the _EMISSION keyword can indeed be found in the shader properties; it was hidden under the keywords section. Furthermore, keywords, and documentation about them and their uses, including enabling material keywords through code, are indeed in the Unity documentation. I simply didn't know the correct terminology and thus couldn't find it originally.)
The function uses a basic toggle with if statements and sets the appropriate settings, pulling in the color from the pickedColor variable and enabling it. The function itself is really more a way to set the bool and check it visually; the actual code for setting the color runs when the chosen tool is instantiated.
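Roughly, in code (a sketch; the bool and material names are mine):

bool useEmission;

public void ToggleEmission()
{
    useEmission = !useEmission;   // the menu button just flips the flag
}

// Later, when the chosen tool is instantiated:
Material mat = brush.GetComponent<Renderer>().material;
if (useEmission)
{
    mat.EnableKeyword("_EMISSION");                 // the inspector tick-box, in code form
    mat.SetColor("_EmissionColor", pickedColor);    // emission takes its color from the albedo pick
}
else
{
    mat.DisableKeyword("_EMISSION");
}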
A button that calls the function is set up in the menu script.
With that out of the way, we finally have a nice material options menu that allows the user to adjust various material properties for whatever tool/brush they're using.
The next tool to introduce I've decided to call the Cube Wall tool, which, to be honest, isn't super accurate to what it does, but it's now firmly implanted in my brain as such, and it will stay that way to save me having to go back and refactor a couple of names. Worst case, I can just change the display name for users and keep everything the same under the hood.
Anyway, this tool was inspired by an AR demonstration that my boss, Alan, did a while back, where there was a grid of cubes and people could come in, tap a side, and spawn a new cube next to it, building in a Minecraft-esque fashion.
This tool will be much the same: the player will be able to press a button and spawn a simple cube wherever they hold out their right hand. After spawning said cube, they can touch any side of it with the controller, and a new cube will spawn perfectly in place next to it.
This was actually a lot of fun to work on, because it was something relatively simple that I could picture how to do, and do mostly on my own without needing a tutorial or butchering someone else's code to fit the project. Though, despite the simplicity, I still had to do quite a bit of brainstorming and mental math to fully wrap my head around what was happening and how to do it.
For this tool, I had two things I wanted to keep in mind.
First, the cubes should be placeable anywhere, without restricting or snapping them to a grid, and...
Second, the cubes should be able to be any size (within reason) that the player wants.
This is really where the difficulty came from. As anyone who uses 3D DCC tools or game engines likely knows, if you have a simple default cube at a default uniform scale of 1.0, it's easy to place another cube next to it without a gap or intersection: if a cube is at (0, 0, 0) with a scale of 1, then another cube at (0, 1, 0), also with a scale of 1, will sit right above the first one, at the perfect distance, neither intersecting nor floating above it.
When not working on a grid or with any snapping, and allowing any uniform scale, this becomes a bit more difficult to do, math-wise. Though looking back, the math is quite simple; the harder part is wrapping your head around it and figuring out exactly what math to use. I'm getting ahead of myself, though.
On my first approach, I brainstormed for a while to figure out exactly how to achieve this effect. I needed to determine which side of the cube the player was hitting, to instantiate a new one in that direction, and also some math to offset the new instance so it was perfectly aligned and distanced from the first one. First things first: the direction. I spent a little time looking up collisions and hit data in Unity, and eventually decided to create a prefab cube with six empty child objects, each with a box collider; using these, I could determine the direction the player wanted the new cube to go.
With this, I threw it into the scene, placed it arbitrarily at (0, 1, 0.5) in the world, and set the scale to a uniform 0.1. I didn't bother implementing the tool into the toolkit or the player spawning the first instance yet, as I just wanted to get the tool to work. I found that in this basic circumstance, instancing a new cube on any axis only required adding 0.1 to its current value on that axis (I'll go more into this later, as I spent a little too much time figuring it out).
I also spent an embarrassing amount of time stuck because I thought my collisions/triggers weren't working at all, until I realized I was attempting to detect collisions on the OVRCameraRig (where most of the other code currently lives), which isn't how collisions work. The code for this script needed to be on one of the colliding objects: either the cube wall prefab or the controller cursor.
I decided on the cursor and created the code below, which simply checks the child game objects of the cube wall to determine which box collider was hit, and instantiates a new cube at the current position plus the offset for the corresponding axis.
The effect was... not quite what I expected, although looking back on the code now, I know why. I also now see multiple offsets being used, which is also wrong; I'm not sure if I just didn't catch that the first time, or if I was attempting something else and gave up.
Whenever the player hits one of the colliders, it instances a new cube based on the position of the most recent cube, which isn't what I wanted. If I hit the right side of the bottom-most cube, I wanted it to put a cube there, not all the way at the top of the tower.
Looking at the code, it's obvious: I set a new position every time I instance a new cube, and use that position plus the offset for the following instance. In my head at the time, I figured the only way to get the correct direction for the correct cube was some sort of name or ID system for each cube, to keep track of which one the player was hitting, and to work from there. I'm almost positive that isn't the case, and this code could be fixed for the proper effect, but either way, I've since scrapped this attempt and started afresh.
I kept this code around, though, since I think it could be neat for a snake-type tool, where the player rapidly swipes on one cube and it grows and spirals out continuously from one point.
My new idea was to just use math to figure everything out; that way I could have a very simple prefab of just a cube with one box collider, and put all the necessary code on that prefab as well.
First, I had to figure out what math would get the cubes perfectly distanced, independent of their scale. To do this, I just moved cubes around manually, trying to wrap my head around the correct formula.
Like before, I spent an embarrassing amount of time scratching notes down and doing calculations until I realized that the correct distance is simply the current position on whatever axis, plus or minus (depending on the direction) the uniform scale.
That said, math isn't my greatest strength, and as of today (1/12/2023), it's been maybe 5 or 6 years since I've been in a math class, so I don't blame myself too much. Really, the hardest part is wrapping my head around it in a theoretical sense, which is why I played around moving the cubes manually to get a practical feel for what I was dealing with.
Anyway, now I knew that no matter the scale of the cube (a value I've called scalarOffset), the new position is simply the collided cube's position plus or minus the scalarOffset: plus for the positive direction, minus for the negative.
Now I just had to figure out how to determine which side of the cube the player was colliding with.
Here's an example for clarity: the cube on the left is at (0, 0, 2), with a uniform scale of 0.1.
The cube next to it meets it perfectly, without intersecting or floating, and it sits simply at (0, 0, 2.1) due to the scale being 0.1.
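In code, the rule that example demonstrates is a one-liner (variable names assumed):

// Neighbor position = original position, offset along one axis by the uniform scale.
// e.g. (0, 0, 2) at scale 0.1 -> (0, 0, 2.1)
Vector3 newPos = originalPos + axisDirection * scalarOffset;   // axisDirection is a unit +/-X, Y, or Z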
Figuring out the intended direction was a bit more difficult in practice; in theory, it didn't seem so bad. I figured all I had to do was take the current position of the cube instance (the original) and compare it to the position where the collision happens to get the direction.
So, at first, I simply checked whether the original position was less than the position of the collision: if so, choose the positive direction of that axis, and if not, choose the negative direction of that axis.
The result of that can be seen below.
I was glad to see it working, and knew instantly what the problem was: unless I touched the original instance perfectly centered, it was always going to add some offset on one of the other axes, because I was checking all of the axes at the same time. The result was this kind of checker pattern where all the vertices are connected, but there's always a cube's space between each new instance.
Instead, I had to figure out what the user's intended direction was, and I was stumped on this for quite a bit of time, actually.
Eventually I was inspired by a post at: https://gamedev.stackexchange.com/questions/183469/mathf-approximately-should-i-use-it-for-what-cases
I saw this bit of code: isApprEqual = abs(x-y) <= marginOfError
It led me to believe that all I had to do was some kind of comparison of the different axis values, which is what I did. I began testing manually again and figured out that the intended axis was always going to have a greater value than the other axes, as can be seen below.
This example isn't the clearest, but you can see the original instance and the new instance. The original is still in the same spot as before, (0, 0, 2), with the same uniform scale.
The new instance is roughly half the distance it should be, at Z = 2.05 instead of 2.1, and the X and Y values are below that. So, all I had to do was check which axis value was greater than or equal to the "margin of error", which in this case was the scalarOffset divided by 2.
Here you can see the final code I used, which turned out to be quite a bit simpler than I thought it would be.
I check each axis to see which exceeds my margin of error, and if it does, I check which direction to go based on whether the collision position is greater than or less than the original instance's.
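A reconstruction of that logic from the description (a sketch; the real script likely differs in details, and the "Cursor" tag is an assumption):

using UnityEngine;

public class CubeWall : MonoBehaviour
{
    public float scalarOffset = 0.1f;   // the cube's uniform scale

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Cursor")) return;

        Vector3 diff = other.transform.position - transform.position;
        float margin = scalarOffset / 2f;
        Vector3 offset = Vector3.zero;

        // Only the intended axis clears the margin of error (half the scale)...
        if (Mathf.Abs(diff.x) >= margin) offset.x = Mathf.Sign(diff.x) * scalarOffset;
        else if (Mathf.Abs(diff.y) >= margin) offset.y = Mathf.Sign(diff.y) * scalarOffset;
        else if (Mathf.Abs(diff.z) >= margin) offset.z = Mathf.Sign(diff.z) * scalarOffset;

        // ...and the new cube lands exactly one scale-width away on that side.
        if (offset != Vector3.zero)
            Instantiate(gameObject, transform.position + offset, transform.rotation);
    }
}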
You can see the final working result below:
Implementation into the main scene was simple, following the same methodology as the other tools.
With that, the CubeWalls tool is now complete.
With all the current toolsets implemented in a usable fashion, and a looming deadline for a showcase/information event we had coming up in February (currently it's 1/23/2023), it was time to once again come back and refocus on networking, so we could let at least two users be active in the same scene, creating art with each other.