Software - Unity, C#, shader code
Team size - 2 game developers
Duration - finished
Here you can read about my experience making a Depth of Field shader.
We use this panel in the inspector to control how much blur there is, how far into the background the blur starts and how intense the blur becomes.
Focus Distance - changes the distance at which the blur starts, between the foreground and the background.
Focus Range - eases the transition area of the blur effect.
Bokeh Radius - changes the intensity of the blur effect.
Visualize Focus - shows where the blur stops and how much of the map it covers (only used while testing new changes to the range, radius and distance).
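The panel above could be exposed from a serializable settings class like this. This is a minimal sketch: the field names mirror the panel, but the exact class layout and default values are assumptions, not our actual code.

```csharp
using UnityEngine;

// Hypothetical settings container mirroring the inspector panel above.
[System.Serializable]
public class DepthOfFieldSettings
{
    [Tooltip("Distance at which the blur starts, between foreground and background.")]
    public float focusDistance = 10f;

    [Tooltip("Range used to ease the transition area of the blur.")]
    public float focusRange = 3f;

    [Tooltip("Size of the bokeh kernel; larger values mean a more intense blur.")]
    [Range(1f, 10f)]
    public float bokehRadius = 4f;

    [Tooltip("Debug view that shows where the blur stops and how much of the map it covers.")]
    public bool visualizeFocus = false;
}
```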
Here you can see a little preview of how the depth of field shader works.
Keep in mind that it is just a prototype and is not the finished product yet.
These are the two render textures we use on the cameras for the Depth of Field shader: one renders everything in the background and the other renders everything in the foreground (the playable character is included in the foreground texture).
We used two cameras for the depth of field shader, and with this script we merge the render textures of the two cameras into one image that contains the depth buffer, the depth of field render texture and the view of both cameras.
What happens in the code is actually quite simple: we access the two render textures we want to merge, then look up the positions of the objects rendered into each texture and the transforms they are rendered with.
Lastly we check which of the rendered objects are transparent and which are not; once that is done, we assign the two render textures to a material and the views of the two cameras are merged into one.
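A merge like the one described can be sketched with Graphics.Blit and a material that blends the two textures. Note that the component name, the property names _ForegroundTex and _BackgroundTex and the blend shader itself are assumptions for illustration, not our exact implementation.

```csharp
using UnityEngine;

// Hypothetical merge component: combines the render textures of the
// foreground and background cameras into one image using a blend material.
public class RenderTextureMerger : MonoBehaviour
{
    public RenderTexture foregroundTexture; // rendered by the foreground camera
    public RenderTexture backgroundTexture; // rendered by the main/background camera
    public Material mergeMaterial;          // shader that blends the two by alpha

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Feed both camera outputs to the merge shader...
        mergeMaterial.SetTexture("_ForegroundTex", foregroundTexture);
        mergeMaterial.SetTexture("_BackgroundTex", backgroundTexture);

        // ...and let Graphics.Blit run it over the full screen.
        Graphics.Blit(source, destination, mergeMaterial);
    }
}
```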
There was a problem where the depth of field shader blurred through the foreground, the platforms and the player. [resolved]
I resolved it by changing the render queue order, by changing the render event in the custom render pass that we made.
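In a URP ScriptableRenderPass the queue position is controlled through renderPassEvent, so the fix described above roughly comes down to something like this. The class names and the specific event chosen here are assumptions for illustration.

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical minimal pass; the actual depth of field blit would go in Execute.
public class DepthOfFieldPass : ScriptableRenderPass
{
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // Depth of field rendering happens here.
    }
}

public class DepthOfFieldFeature : ScriptableRendererFeature
{
    private DepthOfFieldPass pass;

    public override void Create()
    {
        pass = new DepthOfFieldPass
        {
            // Scheduling the pass at a later event fixes the draw-order
            // problem where the blur drew over the foreground and the player.
            renderPassEvent = RenderPassEvent.AfterRenderingTransparents
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }
}
```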
There was a problem with the sprites and VFX not being included in the depth buffer because they had the wrong layer active. [resolved]
I resolved this by changing the layers in the prefabs of the sprites and VFX effects.
The bloom effect is being cut out after the post processing. [not resolved yet]
The background stayed a solid color on the foreground camera, while the main camera (with the background and the depth buffer) used the skybox. [resolved]
What we (a colleague and I) did was copy the Uber shader script and make a version of our own, because if we change something in the automatically generated uber shader script, Unity will revert those changes. In our own uber shader script we changed the copied code so that the alpha is set to 0.0 instead of 1.0, which makes the solid color background on the foreground camera transparent. After that we also made a custom StopNaN shader script and applied the same change as we did to the UberPost shader script.
First of all, I learned that making a depth of field shader is not an easy task, which I did expect, but not on the scale of how difficult it truly was, and that it takes a while to fully understand how it works in the beginning.
Secondly, I learned how shader code works; this was the first time I worked with shader code and shaders in general.
Lastly, I learned how to access render textures via C# scripts (using Graphics.Blit), and how to drive the shader from a C# script instead of through a material or the engine itself, to make it a little less heavy on the program.