Arts and Science

Hand-drawn images to 3D

We present a new approach to reconstructing high-relief surface models from hand-made drawings. Our method is tailored to an interactive modeling scenario where the input drawing can be separated into a set of semantically meaningful parts whose relative depth order is known beforehand. For this kind of input, our technique inflates the individual components to a semi-elliptical profile, positions them to satisfy the prescribed depth order, and connects them seamlessly. Compared to previous methods, our approach is the first to formulate this reconstruction process as a single non-linear optimization problem. Because direct optimization is computationally challenging, we propose an approximate solution that delivers comparable results orders of magnitude faster, enabling an interactive user workflow (for more detail, please refer to this page).
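The full optimization is beyond a short snippet, but the semi-elliptical inflation profile itself is easy to state. Below is a minimal Python sketch of that idea only (the function name and parameters are my own illustration, not part of the system): for a part cross-section of half-width a and peak height b, the inflated height at offset x from the centerline follows the upper half of an ellipse.

```python
import math

def semi_elliptical_height(x, half_width, peak_height):
    """Height of a semi-elliptical cross-section at signed offset x
    from the part's centerline; zero outside the part.
    Solves (x/a)^2 + (z/b)^2 = 1 for z >= 0."""
    if abs(x) >= half_width:
        return 0.0
    return peak_height * math.sqrt(1.0 - (x / half_width) ** 2)

# Sample the profile of a part with half-width 2 and peak height 1.
profile = [semi_elliptical_height(x * 0.5, 2.0, 1.0) for x in range(-4, 5)]
```

Sweeping this profile along a part's medial axis yields the rounded, inflated look the method targets before the parts are positioned and blended.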

From left to right, the figure above shows the final result together with the input sketch overlaid on top of the geometry generated by our system. The rendering style was chosen arbitrarily.

Maya Rendering Experience for Physics-based Simulation

I ran this experiment as part of my colleagues' paper. The models and materials were selected for their potential for lively motion. I used the Arnold renderer in Maya.

GPU based 3D Line Drawing

For this project, I used an OpenGL geometry shader to render line drawings of 3D models interactively (far faster than a CPU-based implementation).

The drawing is composed of silhouette edges and sharp edges. Sharp edges are fixed features of an object, independent of the view direction, whereas silhouette edges change as the camera moves.
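To make "view-independent" concrete, sharp edges can be found once, offline, by thresholding the dihedral angle between the normals of the two triangles sharing each edge. This is a Python sketch of that idea, not my shader code; the 60-degree threshold is an arbitrary example value.

```python
import math

def sharp_edges(vertices, triangles, angle_deg=60.0):
    """Return mesh edges whose adjacent face normals differ by more
    than angle_deg; this depends only on the geometry, not the view."""
    def normal(tri):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in tri)
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
        length = math.sqrt(nx*nx + ny*ny + nz*nz) or 1.0
        return (nx/length, ny/length, nz/length)

    # Map each undirected edge to the triangles that share it.
    edge_faces = {}
    for t, tri in enumerate(triangles):
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(t)

    cos_thresh = math.cos(math.radians(angle_deg))
    sharp = []
    for e, faces in edge_faces.items():
        if len(faces) == 2:
            n0 = normal(triangles[faces[0]])
            n1 = normal(triangles[faces[1]])
            # Dot product below the threshold means a large fold angle.
            if sum(a * b for a, b in zip(n0, n1)) < cos_thresh:
                sharp.append(e)
    return sharp
```

Because the result never changes with the camera, these edges can be precomputed and simply drawn every frame alongside the per-frame silhouettes.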

CPU Side

At first, I implemented the edge-extraction step on the CPU, and it was very slow: in each frame it iterates over all triangles and classifies them as front-facing or back-facing so that the silhouette edges can be determined.
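As an illustration of this per-frame CPU loop (a Python sketch rather than my original implementation): classify every triangle against the eye position, then collect the edges shared by one front-facing and one back-facing triangle.

```python
def silhouette_edges(vertices, triangles, eye):
    """CPU silhouette extraction for one frame: an edge is a silhouette
    if its two adjacent triangles face opposite ways relative to eye."""
    def front_facing(tri):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in tri)
        # Unnormalized face normal via the cross product.
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
        # Vector from the triangle toward the eye.
        wx, wy, wz = eye[0] - ax, eye[1] - ay, eye[2] - az
        return nx*wx + ny*wy + nz*wz > 0.0

    facing = [front_facing(t) for t in triangles]
    edge_faces = {}
    for t, tri in enumerate(triangles):
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(t)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```

Every triangle is touched on every frame, which is exactly why moving this classification into a geometry shader pays off.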

GPU Side - Geometry Shader

To implement edge extraction on the GPU, I used a geometry shader. Its input primitive is triangles_adjacency, in which 6 vertices (4 triangles: the center triangle plus its three neighbors) are passed from the CPU for each triangle. The same front-face/back-face computation is then performed in the geometry shader. In the vertex shader we transform the vertices into camera space so that the geometry shader can determine whether a triangle's normal points toward the view direction.
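The shader itself is GLSL, but its per-primitive logic can be emulated in Python to show the adjacency layout: in the standard GL_TRIANGLES_ADJACENCY ordering, indices 0, 2, 4 hold the center triangle and indices 1, 3, 5 hold the opposite vertices of its three neighbors. A silhouette edge is emitted when the center triangle is front-facing and the neighbor across that edge is back-facing. A sketch under those assumptions:

```python
def silhouette_edges_from_adjacency(prim, eye):
    """Emulates the geometry-shader test for one triangles_adjacency
    primitive: prim is 6 vertices where 0, 2, 4 form the center
    triangle and 1, 3, 5 are the adjacent vertices across each edge."""
    def front_facing(a, b, c):
        ux, uy, uz = b[0]-a[0], b[1]-a[1], b[2]-a[2]
        vx, vy, vz = c[0]-a[0], c[1]-a[1], c[2]-a[2]
        nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
        wx, wy, wz = eye[0]-a[0], eye[1]-a[1], eye[2]-a[2]
        return nx*wx + ny*wy + nz*wz > 0.0

    edges = []
    if front_facing(prim[0], prim[2], prim[4]):          # center triangle
        # Neighbor triangles in GL_TRIANGLES_ADJACENCY order.
        for a, adj, b in ((0, 1, 2), (2, 3, 4), (4, 5, 0)):
            if not front_facing(prim[a], prim[adj], prim[b]):
                edges.append((prim[a], prim[b]))          # EmitVertex pair
    return edges
```

In the real shader this runs once per input triangle on the GPU, and the emitted vertex pairs become the line segments that the next passes rasterize.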

GPU Side - Hidden Line Removal

This part is done in the fragment shader. In the first pass, the depth map of the object is rendered to a framebuffer. In the second pass, the edges emitted by the geometry shader are rasterized into fragments. The z value of each fragment is compared with the corresponding sample in the depth map to check whether the fragment is occluded by the surface; occluded fragments are discarded.
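The per-fragment test amounts to a depth comparison with a small bias. This Python sketch shows the idea only (the actual test lives in the GLSL fragment shader, and the bias value here is an arbitrary example):

```python
def line_fragment_visible(frag_depth, depth_map, x, y, bias=1e-3):
    """Hidden-line removal test: keep a line fragment only if it is no
    farther from the camera than the surface depth stored at (x, y).
    Depths are in [0, 1] with smaller meaning closer; the bias prevents
    z-fighting between the lines and the surface they lie on."""
    return frag_depth <= depth_map[y][x] + bias
```

Fragments that fail this test are exactly the ones the shader discards, so only edges on the visible side of the object reach the screen.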

Automatic Cartoon Face Generation

Some results from my automatic cartoon face generator. The top row shows the input face photos, and the bottom row shows the corresponding converted cartoon faces.

For my Master's thesis, I researched cartoon rendering of face images that preserves the likeness of each individual, along with bringing the cartoon faces to life through facial expressions and animation. The approach uses deformation techniques and non-photorealistic rendering methods (some results: www.ovotovo.com).

Some examples of facial expression and animation are illustrated below. The leftmost column shows the source image, and the second column shows the transferred expression. Moving from the neutral cartoon face (top row) to the expressive one (rows 2 to 4), the frames of the animation are interpolated.

Automatic Typography Art

The images above are computer-generated typography art produced by our automatic typography system. The input is just a silhouette image and an arbitrary set of words. Our system efficiently computes polygons that are then filled with random combinations of the input words. Computational geometry and mathematical optimization are employed in this system.