Team Members: Aayush Gupta, Ethan Yu, William Molnar-Brock, Yulia Nugroho
Bridging the gap between traditional artistry and computational graphics, this project outlines a novel approach to reconstructing textured impasto paintings using digital brushstrokes. The methodology revolves around generating a two-dimensional image of the resulting artwork alongside a height map that captures the texture and depth carved by each brushstroke. From this basis, several 3D rendering techniques are examined, from straightforward heightmap-based shading to the assembly of multiple OBJ files into a unified surface model. The core method sequentially adds brushstroke layers to build a stratified texture in which each stroke retains its own color and opacity; this requires significant computational resources but yields a rendering that most accurately emulates the physical properties of layered paint. The process aims to deliver not only a visually compelling digital counterpart but also to preserve the topographical fidelity characteristic of impasto paintings.
Keywords: 3D brushstrokes, textured primitives
Digitizing traditional impasto paintings requires a subtle interplay of 3D modeling techniques that faithfully replicate the style's physical brushwork and layered textures. While digital art has made significant strides, capturing impasto's tactile depth and varied strokes presents unique challenges in computer graphics.
This project introduces a method for simulating impasto paintings through a hybrid approach that combines 3D modeling and procedural generation. Starting from a simple 2D brush stroke, the process extrapolates this input into a three-dimensional model that carries the attributes of its real-world analog. The model is then manipulated within a Python-based environment that emulates the nuanced application of paint onto canvas, layer by layer. By strategically placing and orienting each 3D model based on the input height maps and employing alpha-compositing algorithms, the code produces an image that projects the illusion of a rich, textured surface.
By using these methods, the project aims to transcend the limitations of flat texture application and produce a digital painting that not only embodies the look and feel of impasto but also preserves the depth and material qualities that give such art its distinctive character. The introduction of heightfield maps provides a novel way to manage the placement and interaction of brush strokes, paving the way for a rendering that not only mimics but virtually realizes the impasto technique in digital form.
This project seeks to establish a bridge between the tactile world of physical paint and the pixels of digital canvases, broaden the toolkit available to digital artists, and provide insights into the complexities of translating impasto's visceral experience into the digital realm.
A foundational understanding of traditional paint simulation is vital in developing digital methods for stylized and textured primitive-based paintings. Previous research by Baxter, Wendt, and Lin (2004) pioneered the "IMPaSTo" system, an interactive model for paint that adeptly captures a broad spectrum of styles reminiscent of traditional oils or acrylics. Their work notably integrated numerical simulations for the physical flow of paint and an optical model for its appearance. Their contributions have set a high standard in simulating the rich textural qualities and dynamic behavior of traditional painting mediums.
The similarity to our project lies in pursuing a richly textured, responsive painting environment that accommodates the intricacies of paint behavior, from brush dynamics to layering effects. While "IMPaSTo" adeptly simulates impasto's visual appearance and topographical dynamics, our model advances this by embedding actual 3D textured primitives into the painting system. This innovation not only aims to replicate the tactile feel and sculptural qualities of impasto with higher fidelity but also allows for the accurate rendering of light interplay on the textured strokes, further narrowing the gap between digital and traditional mediums.
Our anticipated contribution is a system that elevates the viewer's sensory experience, enabling them to see and feel depth and texture as if they were handling real, thick paint. By simulating the precise way light interacts with the raised surfaces of impasto, we expect our model to deliver a closer approximation to the actual visual and tactile experience of impasto painting. This depth of simulation can provide artists with a digital canvas that is virtually indistinguishable from its physical counterpart, capturing the essence of impasto's rich textures in a way that "IMPaSTo" and other systems have yet to achieve.
The advancements in our work align with the trajectory set by Baxter, Wendt, and Lin but are distinct in scope and execution. By leveraging their foundational model, we introduce advancements in digital brush dynamics and three-dimensional textural simulations that deepen the interactive painting experience, offering artists new opportunities for creative expression.
For the 2D part, the algorithm begins with an empty canvas. At each step, we place a randomly positioned brushstroke on the canvas.
3.1 3D Brushstrokes Model
Initially, the default shape is a sphere, specifically a DynaMesh sphere. To quickly approximate the 3D brush-stroke shape, parts of the sphere are trimmed with a rectangular trim curve to create a flatter, less bulky surface. Subsequent sculpting employs commonly used brushes such as Clay Buildup, Move, Dam Standard, Inflate, and others.
DynaMesh, a pivotal feature in ZBrush, facilitates dynamic geometric adaptation. It enables artists to maintain a uniform polygon distribution throughout the sculpting process by automatically remeshing the model, preventing excessive polygon stretching during extreme geometric transformations. This feature allows for freeform shape alterations without concern for the original topology.
Brush parameters like 'ZAdd', 'ZSub', and 'Focal Shift' replicate the physical experience of using traditional painting tools. 'ZAdd' adds material to the model's surface, 'ZSub' subtracts it, and 'Focal Shift' adjusts the core width of the brush stroke. In computer graphics terms, this translates to locally altering vertex data to add or remove detail without changing the overall shape.
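The effect of 'ZAdd' and 'ZSub' on vertex data can be illustrated with a minimal sketch. This is not ZBrush's actual algorithm; the function name, the falloff formula, and the way `focal_shift` narrows the brush core are all illustrative assumptions, showing only the general idea of displacing vertices along their normals inside a brush radius.

```python
import numpy as np

def sculpt(vertices, normals, center, radius, strength, focal_shift=0.0):
    """Displace vertices along their normals inside a brush radius.

    A toy analog of ZAdd (strength > 0) / ZSub (strength < 0); the
    falloff narrows with a positive focal_shift, loosely mimicking
    ZBrush's Focal Shift. Not ZBrush's real implementation.
    """
    d = np.linalg.norm(vertices - center, axis=1)   # distance to brush center
    t = np.clip(1.0 - d / radius, 0.0, 1.0)         # 1 at center, 0 at the edge
    falloff = t ** (2.0 ** focal_shift)             # sharpen/soften the core
    return vertices + strength * falloff[:, None] * normals

# Example: push a flat 5x5 grid of points upward under a centered brush.
verts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
norms = np.tile([0.0, 0.0, 1.0], (len(verts), 1))
out = sculpt(verts, norms, center=np.array([2.0, 2.0, 0.0]),
             radius=2.0, strength=0.5)
```

The center vertex rises by the full stroke strength while vertices outside the radius stay untouched, which is the qualitative behavior the brush parameters describe.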
To preserve topological consistency, the remeshing process via ZRemesher recalculates mesh topology to achieve a more even polygon distribution, crucial for maintaining details without unnecessarily increasing the polygon count. ZRemesher automatically creates meshes with cleaner topology and more efficient edge flows.
ZBrush sculpting tools embody various computer graphics concepts, such as computational geometry, 3D visualization, and interactive graphics. An in-depth understanding of these principles is essential for developing the algorithms that support these tools and ensuring they are practical and responsive for users.
3.2 2D Painting Stylization and Reconstruction
Our algorithm for constructing the 2D painting from an input image adds 2D brushstroke primitives to a blank canvas to approximate that input. Each 2D brushstroke primitive is a single-channel image denoting the height map of a 3D brushstroke. During alpha compositing, when we add a brushstroke we use the height at each pixel (normalized between 0 and 1) as the alpha value.
The total number of brushstrokes is a parameter passed in, and adding each stroke references the previously added strokes only in that the canvas resulting from all previous strokes, rather than a blank canvas, is used as the background. We use a loss function based on the L2 difference between the target image and the result of alpha compositing the new brush stroke.
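The compositing step and loss described above can be sketched as follows. Function names and shapes are illustrative, not the project's exact code: the stroke's normalized height map acts directly as per-pixel alpha in a standard "over" composite, and the loss is the summed squared difference against the target.

```python
import numpy as np

def composite(canvas, height, color):
    """Alpha-composite one stroke over the canvas.

    canvas: (H, W, 3) float image; height: (H, W) stroke height map
    normalized to [0, 1], used directly as the per-pixel alpha.
    """
    alpha = height[..., None]
    return alpha * color + (1.0 - alpha) * canvas

def l2_loss(result, target):
    """Sum of squared per-pixel differences against the target image."""
    return float(np.sum((result - target) ** 2))

canvas = np.zeros((4, 4, 3))            # blank (black) background
height = np.full((4, 4), 0.5)           # a uniform half-height "stroke"
target = np.full((4, 4, 3), 0.5)
result = composite(canvas, height, np.array([1.0, 1.0, 1.0]))
print(l2_loss(result, target))          # 0.0: half-alpha white over black hits the target
```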
The color of each stroke is chosen to be strictly optimal; this is done efficiently with NumPy vectorization, computing a weighted average of the desired overlay that minimizes the aforementioned error.
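The weighted average has a closed form. Minimizing the squared error of the composite over the stroke's footprint, the per-channel optimum is c = Σ α(t − (1 − α)b) / Σ α², where t is the target, b the background, and α the stroke alpha. The sketch below is a hedged reconstruction of that computation, not the project's actual function.

```python
import numpy as np

def optimal_color(target, background, alpha):
    """Per-channel least-squares stroke color.

    Minimizes sum over pixels of (alpha*c + (1-alpha)*background - target)^2,
    whose closed form is c = sum(alpha * (target - (1-alpha)*background))
    / sum(alpha^2), computed independently per channel via vectorization.
    """
    a = alpha[..., None]
    num = np.sum(a * (target - (1.0 - a) * background), axis=(0, 1))
    den = np.sum(a * a)
    return num / max(den, 1e-12)        # guard against an all-zero alpha mask

# Sanity check: over a black canvas with a fully opaque stroke, the optimal
# color is just the mean of the target under the stroke.
target = np.full((3, 3, 3), 0.25)
bg = np.zeros((3, 3, 3))
alpha = np.ones((3, 3))
print(optimal_color(target, bg, alpha))  # ≈ [0.25 0.25 0.25]
```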
At each iteration, multiple workers are instantiated, each starting from a randomly placed brush stroke. Each worker uses local hill climbing to adjust the translation and rotation of its stroke (picking a new optimal color after each mutation), keeping the greatest error reducer among its random restarts. After every worker has found its best stroke, we pick the best stroke across workers, and only that one is added to the image.
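The worker loop above is ordinary greedy local search with random restarts. A generic sketch, under the assumption that a stroke's state is a (translation x, translation y, rotation) tuple and `score` evaluates the compositing loss; the toy objective at the bottom stands in for that loss.

```python
import random

def hill_climb(score, init_state, mutate, iters=100):
    """Greedy local search: keep a mutation only if it lowers the loss.

    score(state) -> loss to minimize; mutate(state) -> perturbed copy.
    """
    best, best_loss = init_state, score(init_state)
    for _ in range(iters):
        cand = mutate(best)
        loss = score(cand)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best, best_loss

def best_of_workers(score, random_state, mutate, n_workers=8):
    """Each worker restarts from a random placement; keep the overall best."""
    results = [hill_climb(score, random_state(), mutate) for _ in range(n_workers)]
    return min(results, key=lambda r: r[1])

# Toy stand-in for the compositing loss: find (x, y, theta) near (5, 5, 0).
score = lambda s: (s[0] - 5) ** 2 + (s[1] - 5) ** 2 + s[2] ** 2
rand = lambda: (random.uniform(0, 10), random.uniform(0, 10), random.uniform(-3, 3))
mut = lambda s: tuple(v + random.uniform(-0.5, 0.5) for v in s)
state, loss = best_of_workers(score, rand, mut)
```

In the real system, `score` would re-colorize the stroke optimally before evaluating the L2 loss, so color never needs to be part of the mutated state.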
OpenCV is used to efficiently rotate, translate, and overlay the colorized brushstroke primitives onto the original canvas.
Github repository: https://github.com/fogleman/primitive
Figure: original input (left) and stylized reconstruction (right).