Introduction

Many topics in image synthesis involve rendering photorealistic scenes. However, the literature is also full of non-photorealistic techniques. There are many styles within NPR (non-photorealistic rendering): for example, painterly rendering, silhouette extraction, and cel (or toon) shading. For this project, we will focus on hatching, a style of pen-and-ink sketching composed of many small strokes that are sometimes drawn perpendicular to one another (this is called cross-hatching). Hatching helps reveal the texture, curvature, and lighting of objects in the scene.



The paper by Emil Praun et al. used in this project attempts to create a hatching effect in real time. Many real-time effects before this paper took a screen-space approach, in which the effect is applied after the entire scene has been rendered. One issue with this approach is that it can cause a "shower-door" effect, in which the objects simply appear to sit behind a stroked pattern fixed to the screen. This paper opts for an object-space approach, in which the strokes are placed on the geometry itself. Unfortunately, this also has issues. First, the object's distance from the camera can distort the density and size of the strokes. Second, re-creating the strokes every frame can lead to frame-to-frame incoherence. Praun's paper solves both problems by creating a tonal art map.



Tonal Art Map Generation

A tonal art map (TAM) is essentially a mipmap with an extra dimension: tone. As we move up and down a column in the TAM, we move through the individual mip levels of a single tone; we move through tones by going across. In order to preserve tone and maintain coherence between tones, the TAM employs a nesting structure: any stroke that exists in a smaller mip level also exists in the larger mip levels, and all strokes from one tonal column also exist in all darker tonal columns. Finally, the textures themselves are toroidal, meaning that strokes wrap around the edges so each texture tiles seamlessly. This prevents the viewer from detecting sudden seams in the texture, which could reveal the underlying tessellation of the geometry.
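
To make the layout concrete, the TAM can be pictured as a two-dimensional grid of textures indexed by tonal column and mip level. A minimal sketch in Processing (the names numTones, numLevels, and tam are ours, not the paper's):

    // tam[col][level]: column 0 is the lightest tone, level 0 the smallest mip.
    int numTones = 6;
    int numLevels = 4;
    PImage[][] tam = new PImage[numTones][numLevels];
    // Nesting invariants the generator maintains:
    //  - every stroke in tam[c][l] also appears in the larger levels tam[c][l+1...]
    //  - every stroke in tam[c][l] also appears in the darker columns tam[c+1...][l]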


At the beginning of the TAM generation algorithm, we start with a clean column of successively smaller white images (or whatever color your background is). The image sizes must be powers of 2, with each level half the size of the previous one. In our example, the largest image is 256x256, followed by 128x128, 64x64, and 32x32. We pick powers of 2 mainly because many graphics libraries require texture dimensions to be powers of 2, especially for mipmaps.
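
Assuming the tam array from the sketch above, initializing the first (lightest) column might look like the following in Processing (later columns start as copies of their predecessors instead):

    // Fill the first tonal column with blank white textures: 32x32 up to 256x256.
    int baseSize = 256;
    for (int level = 0; level < numLevels; level++) {
      int size = baseSize >> (numLevels - 1 - level); // level 0 is 32x32
      PImage img = createImage(size, size, RGB);
      img.loadPixels();
      for (int i = 0; i < img.pixels.length; i++) img.pixels[i] = color(255);
      img.updatePixels();
      tam[0][level] = img;
    }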



We start adding strokes to the smallest texture (32x32), with each stroke randomly sized relative to the texture (e.g. 0.3 to 1.0 times the texture size). When we add a stroke to this texture, we also add it to the larger textures at the same relative coordinates (i.e. we treat the coordinates as values from 0 to 1). We keep doing this until the top texture reaches the desired tone. We calculate tone as the percentage of texels that are not white, though the system allows the user to define their own way of calculating tone. Note that even though the top texture has reached the desired tone, the larger textures might not have. Hence, we go down one level and continue the process, leaving the first level alone. Once we are done with a column, we copy its entire contents over to the next column and restart at the smallest texture.
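
As an illustration, the tone measurement and the "draw into every affected level" step could be sketched in Processing as follows (measureTone and drawStroke are our own names, and the toroidal wrap-around is omitted for brevity):

    // Tone = fraction of texels that are not pure white.
    float measureTone(PImage img) {
      img.loadPixels();
      int inked = 0;
      for (int i = 0; i < img.pixels.length; i++) {
        if (img.pixels[i] != color(255)) inked++;
      }
      return inked / (float) img.pixels.length;
    }

    // Draw one stroke, given in normalized [0,1] coordinates, into a level
    // and every larger level of column 'col' so the nesting property holds.
    void drawStroke(int col, int level, float x1, float y1, float x2, float y2) {
      for (int l = level; l < numLevels; l++) {
        PImage img = tam[col][l];
        PGraphics pg = createGraphics(img.width, img.height);
        pg.beginDraw();
        pg.image(img, 0, 0);
        pg.stroke(0);
        pg.line(x1 * img.width, y1 * img.height, x2 * img.width, y2 * img.height);
        pg.endDraw();
        tam[col][l] = pg.get();
      }
    }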

In order to avoid repeated strokes and to make sure each stroke contributes to tone, we do not simply add one stroke after another. Instead, the generator tries multiple candidate strokes and picks the best one. The generator allows the user to change the number of candidate strokes for each column; a rule of thumb is to use many candidates for the first column and gradually decrease the number across columns. To see how much progress toward the final tone a stroke makes, we subtract the tone of the image without the stroke from the tone of the image with it. In fact, we do this for all images affected by the stroke.
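
Putting this together, the best-of-N candidate loop might be sketched like this (numCandidates, randomStroke, fitness, and the col/level variables are hypothetical stand-ins; the real fitness also includes the Gaussian-pyramid term described below):

    // Repeat until measureTone(tam[col][level]) reaches the target tone.
    float bestFitness = -1;
    float[] best = null;
    for (int c = 0; c < numCandidates; c++) {
      // Hypothetical helper: random endpoints in [0,1], length 0.3 to 1.0.
      float[] s = randomStroke(0.3, 1.0);
      // Summed tone progress over all images affected by the stroke.
      float f = fitness(col, level, s);
      if (f > bestFitness) {
        bestFitness = f;
        best = s;
      }
    }
    drawStroke(col, level, best[0], best[1], best[2], best[3]);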



However, real hatched drawings tend to have a uniformity among the strokes, and this formulation does not by itself create that uniformity. Hence, for each affected image, we also look at the tone in its Gaussian pyramid: we repeatedly apply a Gaussian blur and shrink the image down. The number of times we do this depends on the level (0 for the smallest, 1 for the second smallest, 2 for the third smallest, etc.). If a stroke is too close to another stroke, then at least one of the blurred images will see these two strokes as a single stroke, essentially destroying the candidate stroke's progress toward the desired tone. Hence, we sum the tone differences, with and without the candidate stroke, over every image and every blurred image. We are still not done: as it stands, this measure favors long strokes, so once we find the sum, we normalize it by the length of the stroke.
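
A sketch of this pyramid term for a single affected image, again with all names ours, might look like:

    // Tone progress of a candidate over an image and its Gaussian pyramid,
    // normalized by stroke length so long strokes are not favored.
    // 'depth' grows with the mip level (0 for the smallest image).
    float pyramidFitness(PImage before, PImage after, int depth, float strokeLen) {
      float sum = measureTone(after) - measureTone(before);
      PImage a = after.get();   // get() returns a copy we can blur freely
      PImage b = before.get();
      for (int i = 0; i < depth; i++) {
        a.filter(BLUR, 1);
        b.filter(BLUR, 1);
        a.resize(a.width / 2, a.height / 2);
        b.resize(b.width / 2, b.height / 2);
        sum += measureTone(a) - measureTone(b);
      }
      return sum / strokeLen;
    }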

To support cross-hatching, we simply have a second stroke direction (called a down stroke). After a certain column index, we start applying only this stroke; then, after another index, we pick between the regular stroke and the down stroke on the flip of a coin.
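
That choice could be sketched as follows (crossStart and mixStart are hypothetical thresholds standing in for the two indices):

    int crossStart = 3; // first column that uses only the down stroke
    int mixStart = 5;   // first column that mixes both directions

    // Pick the stroke direction for tonal column 'col'.
    boolean useDownStroke(int col) {
      if (col < crossStart) return false; // regular stroke only
      if (col < mixStart) return true;    // down stroke only
      return random(1) < 0.5;             // coin flip between the two
    }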

Rendering

Rendering with the TAM is fairly straightforward: we perform a 6-way blend. In a shader, we calculate the diffuse lighting contribution at a vertex and multiply it by 6 (the number of tones) to get the tone of the vertex. We then find the floor and ceiling textures for that tone and supply normalized weights for those two textures. Since the textures are grayscale, we pack all 6 of them into the channels of two RGB textures and put the weights into two 3D vectors; we then dot the values of the two texture lookups with these vectors. Finally, we use any remaining weight on the implied pure-white tone. Once we have the actual color, we write it to all three color channels of the fragment. Note that the blend is really a 12-way blend because of the mipmapping. Furthermore, in order for the mipmapping to work, we have to manually set each mip level from the tonal columns.
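
The actual implementation is a DirectX 9 shader, but the blend arithmetic can be illustrated in Processing-style Java (our own simplification: the six samples arrive unpacked in an array here, whereas the shader fetches them as two RGB lookups and dots them with the two weight vectors):

    // 6-way tone blend at one fragment. tex[0..5] holds the six grayscale
    // tonal texture samples, tex[0] being the darkest and tex[5] the lightest.
    float hatchBlend(float diffuse, float[] tex) {
      float v = diffuse * 6;             // continuous tone; 6 means pure white
      int lo = constrain((int) v, 0, 5); // floor texture index
      float frac = v - lo;               // weight of the ceiling neighbor
      float c = (1 - frac) * tex[lo];
      if (lo < 5) {
        c += frac * tex[lo + 1];         // ceiling texture
      } else {
        c += frac * 1.0;                 // leftover weight on implied white
      }
      return c;                          // written to all three color channels
    }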






Source Code and Executable Files

The TAM Generator was written in Processing.

The rendering requires DirectX 9. The source code is in a Visual Studio 2008 solution.
(Note: even though the name says teapot, the files actually render a sphere.)

Controls:
O/L - Z location of light
I/K - Y location of light
U/J - X location of light
ESC - Close application. Do not press the x button.

Reference
 
E. Praun, H. Hoppe, M. Webb, and A. Finkelstein. Real-time hatching. Proceedings of SIGGRAPH 2001, page 581, 2001.