Blending, Water, and Fog Effects

BLENDING:
 
Let Csrc be the color of the ijth pixel we are currently rasterizing (the source pixel), and let Cdst be the color of the ijth pixel currently on the back buffer (the destination pixel). Without blending, Csrc would overwrite Cdst (assuming it passes the depth/stencil test) and become the new color of the ijth back buffer pixel. But with blending, Csrc and Cdst are combined to get the new color C that will overwrite Cdst (i.e., the blended color C will be written to the ijth pixel of the back buffer). Direct3D uses the following blending equation to blend the source and destination pixel colors:

C = Csrc x Fsrc + Cdst x Fdst

The colors Fsrc (source blend factor) and Fdst (destination blend factor) may be any of the blend values described in the DX10 SDK, and they allow us to modify the original source and destination pixels in a variety of ways, so that different effects can be achieved. The x operator denotes componentwise multiplication, while the + operator may be any of the binary blend operations (add, subtract, reverse subtract, min, max) defined in the DX10 SDK.

The above blending equation holds only for the RGB components of the colors. The alpha component is actually handled by a separate, similar equation:

A = Asrc x Fsrc + Adst x Fdst

The equation is essentially the same, but it is possible for the blend factors and binary operation to be different. The motivation for separating RGB from alpha is simply so that we can process them independently, and hence, differently.
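For concreteness, here is a minimal C++ sketch (not taken from the book) of creating and binding a typical transparency blend state through the D3D10 API, where Fsrc is the source alpha, Fdst is one minus the source alpha, and both binary operators are addition. It assumes d3d10.h is included and that md3dDevice is an already-initialized ID3D10Device*; error checking is omitted.

// Sketch: C = Csrc*As + Cdst*(1 - As), A = Asrc*1 + Adst*0
D3D10_BLEND_DESC blendDesc = {};
blendDesc.AlphaToCoverageEnable    = FALSE;
blendDesc.BlendEnable[0]           = TRUE;
blendDesc.SrcBlend                 = D3D10_BLEND_SRC_ALPHA;     // Fsrc for RGB
blendDesc.DestBlend                = D3D10_BLEND_INV_SRC_ALPHA; // Fdst for RGB
blendDesc.BlendOp                  = D3D10_BLEND_OP_ADD;        // the + operator for RGB
blendDesc.SrcBlendAlpha            = D3D10_BLEND_ONE;           // Fsrc for alpha
blendDesc.DestBlendAlpha           = D3D10_BLEND_ZERO;          // Fdst for alpha
blendDesc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;        // the + operator for alpha
blendDesc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

ID3D10BlendState* transparentBS = 0;
md3dDevice->CreateBlendState(&blendDesc, &transparentBS);

// Bind the state before drawing blended geometry (the blend factor
// array is unused by these particular settings).
float blendFactor[4] = {0.0f, 0.0f, 0.0f, 0.0f};
md3dDevice->OMSetBlendState(transparentBS, blendFactor, 0xffffffff);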

When blending is involved, the general advice for draw order is the following:

Draw the objects that do not use blending first (such as the terrain). Next, sort the objects that use blending by their distance from the camera. Finally, draw the objects that use blending in back-to-front order.

The reason for the back-to-front draw order is so that objects are blended with the objects behind them. If an object is transparent, we can see through it to see the scene behind it. Therefore, it is necessary that all the pixels behind the transparent object be written to the back buffer first, so that we can blend the transparent source pixels with the destination pixels of the scene behind it.
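A minimal sketch of the sorting step, assuming each blended object exposes a world-space position; the Item struct and SortBackToFront name are illustrative, not from the source:

#include <algorithm>
#include <vector>

struct Item { float x, y, z; /* plus mesh, material, etc. */ };

// Sort blended objects so the one farthest from the eye is drawn first.
void SortBackToFront(std::vector<Item>& items, float ex, float ey, float ez)
{
    std::sort(items.begin(), items.end(),
        [&](const Item& a, const Item& b)
        {
            // Compare squared distances to the camera position (ex, ey, ez).
            float da = (a.x-ex)*(a.x-ex) + (a.y-ey)*(a.y-ey) + (a.z-ez)*(a.z-ez);
            float db = (b.x-ex)*(b.x-ex) + (b.y-ey)*(b.y-ey) + (b.z-ez)*(b.z-ez);
            return da > db; // greater distance drawn earlier
        });
}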


FOG:

To simulate certain types of weather conditions in our games, we need to be able to implement a fog effect. In addition to its obvious uses, fog provides some fringe benefits. For example, it can mask distant rendering artifacts and prevent popping. Popping occurs when an object that was previously behind the far plane suddenly comes inside the frustum, due to camera movement, and thus becomes visible; it seems to "pop" into the scene abruptly. By having a layer of fog in the distance, the popping is hidden. Note that even if your scene takes place on a clear day, you may still wish to include a subtle amount of fog at far distances, because, even on clear days, distant objects such as mountains appear hazy and lose contrast as a function of depth.

Our strategy for implementing fog works as follows: We specify a fog color, a fog start distance from the camera, and a fog range (i.e., the range from the fog start distance until the fog completely hides any objects). Then the color of a point on a triangle is a weighted average of its usual color and the fog color:

foggedColor = litColor + s * (fogColor - litColor) = (1 - s) * litColor + s * fogColor

The parameter s ranges from 0 to 1 and is a function of the distance between the camera position and the surface point. As the distance between a surface point and the eye increases, the point becomes more and more obscured by the fog. The parameter s is defined as follows:

s = saturate( (dist(p, E) - fogStart) / fogRange )

where dist(p, E) is the distance between the surface point p and the camera position E. The saturate function clamps the argument to the range [0, 1].
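The fog computation is normally done per pixel in the pixel shader, but as a sketch of the math only, the fog factor and the fogged color could be computed like this (the names here are illustrative):

#include <algorithm>

struct Color { float r, g, b; };

// Clamp x to [0, 1], like the HLSL saturate intrinsic.
float Saturate(float x) { return std::min(std::max(x, 0.0f), 1.0f); }

// Blend the lit color toward the fog color based on eye-to-point distance.
Color ApplyFog(Color litColor, Color fogColor, float distToEye,
               float fogStart, float fogRange)
{
    float s = Saturate((distToEye - fogStart) / fogRange);
    Color c;
    c.r = (1.0f - s) * litColor.r + s * fogColor.r;
    c.g = (1.0f - s) * litColor.g + s * fogColor.g;
    c.b = (1.0f - s) * litColor.b + s * fogColor.b;
    return c;
}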

BIBLIOGRAPHY:
Frank Luna, Introduction to 3D Game Programming with DirectX 10, 2008.
VIDEO DEMONSTRATION:


SOURCE CODE: