Interactive Texture Editing for Garment Line Drawings [CAVW 2022] [project page]
Tsukasa Fukusato, Ryohei Shibata, Seung-Tak Noh and Takeo Igarashi
Adding two-dimensional (2D) textures to garment line drawings (e.g., cartoon characters) remains challenging in the production pipeline of comics and illustrations, since garment line drawings often contain self-occluded wrinkles. Although several techniques have been proposed to automatically deform and map 2D texture patterns onto 2D line drawings, their quality is insufficient for representing realistic, 3D-like garment designs, and manually editing UV coordinates is labor-intensive. In this paper, we introduce an interactive tool to efficiently edit the UV coordinates of 2D garment line drawings on the modeling panel with curve and point handles. Our algorithm is simple to integrate into existing image authoring tools. We conduct a user study with novice users and confirm that the proposed tool can effectively handle the texture mapping envisioned by the users.
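As one hypothetical illustration of handle-driven UV editing, the sketch below deforms UV coordinates with point handles using affine moving least squares (Schaefer et al. 2006); this is a standard stand-in, not the paper's actual deformation model, and it omits the curve handles.

```python
# Hypothetical sketch of point-handle UV deformation via affine moving
# least squares (Schaefer et al. 2006); a standard stand-in, not the
# paper's actual deformation model (curve handles omitted).
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-8):
    """Deform a 2D UV point v, given handle rest positions p (n x 2)
    and user-dragged handle targets q (n x 2)."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)  # handle weights
    p_star = w @ p / w.sum()          # weighted centroid of rest handles
    q_star = w @ q / w.sum()          # weighted centroid of dragged handles
    ph, qh = p - p_star, q - q_star   # centered handle positions
    # Best-fit affine matrix M minimizing sum_i w_i |ph_i M - qh_i|^2.
    M = np.linalg.solve(ph.T @ (w[:, None] * ph),
                        ph.T @ (w[:, None] * qh))
    return (v - p_star) @ M + q_star
```

Applying such a deformation to every UV sample of the garment region would give a smooth, handle-driven UV edit.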
PPW curves: a C² interpolating spline with hyperbolic blending of rational Bézier curves [code]
Seung-Tak Noh*, Hiroki Harada*, Tsukasa Fukusato, Xi Yang, and Takeo Igarashi (*equal contribution)
It is important to consider curvature properties around the control points to produce natural-looking results in vector illustration. C² interpolating splines satisfy point interpolation with local support. Unfortunately, they cannot control the sharpness of a segment because they use a trigonometric blending function that has no degrees of freedom. In this paper, we alter the definition of C² interpolating splines in both the interpolation curve and the blending function. For the interpolation curve, we adopt a rational Bézier curve that enables the user to tune the shape of the curve around a control point. For the blending function, we generalize the weighting scheme of C² interpolating splines and replace the trigonometric weight with our novel hyperbolic blending function. By extending this basic definition, we can also handle exact non-C² features, such as cusps and fillets, without losing generality. In our experiments, we provide both quantitative and qualitative comparisons to existing parametric curve models and discuss the differences among them.
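For orientation, the blending structure shared by this family of splines can be sketched as below; the trigonometric weight is the classic choice, while the concrete hyperbolic weight (the paper's contribution) is only alluded to here.

```latex
% Illustrative structure only: each spline span blends two overlapping
% interpolation functions F_i, F_{i+1} (here, rational Bézier segments).
\[
  C_i(t) \;=\; \bigl(1 - w(t)\bigr)\, F_i(t) \;+\; w(t)\, F_{i+1}(t),
  \qquad t \in [0, 1], \quad w(0) = 0, \; w(1) = 1,
\]
% with w smooth enough to preserve C^2 continuity at the joints.
% The classic construction fixes the trigonometric weight
% w(t) = \sin^2(\pi t / 2), which offers no degree of freedom; PPW
% curves replace w with a hyperbolic blending function that carries a
% tunable sharpness parameter (see the paper for its exact form).
```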
Inverse Free-form Deformation (FFD) for interactive UV map editing [SA 2021 TechComm] [project page] [pdf]
Seung-Tak Noh and Takeo Igarashi
Free-form deformation (FFD) is useful for manual 2D texture mapping in a 2D domain. The user first places a coarse regular grid in the texture space, and then adjusts the positions of the grid points in the image space. In this paper, we consider the inverse of this problem, namely inverse FFD. In this problem setting, we assume that an initial dense image-to-texture mapping has already been obtained by some automatic method, such as data-driven inference. This initial dense mapping may not be satisfactory, so the user may want to modify it; however, manually editing the dense mapping is difficult due to its huge number of degrees of freedom. We therefore convert the dense mapping into a coarse FFD mapping to facilitate manual editing. Inverse FFD is formulated as a least-squares optimization, so it can be solved very efficiently.
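Since the mapping is linear in the grid points, a minimal sketch is easy to give. The code below assumes a bilinear FFD on a regular grid and NumPy's least-squares solver; the function names and grid resolution are illustrative, not the paper's implementation.

```python
# Minimal sketch of inverse FFD as linear least squares (illustrative,
# not the authors' code). Assumes a bilinear FFD on a regular gx x gy
# grid in texture space; dense samples map texture coordinates
# (u, v) in [0, 1]^2 to image positions.
import numpy as np

def bilinear_weights(uv, gx, gy):
    """Row of bilinear FFD weights for one texture-space sample."""
    w = np.zeros(gx * gy)
    # Cell index and local coordinates within the cell.
    fx = np.clip(uv[0] * (gx - 1), 0, gx - 1 - 1e-9)
    fy = np.clip(uv[1] * (gy - 1), 0, gy - 1 - 1e-9)
    i, j = int(fx), int(fy)
    s, t = fx - i, fy - j
    for di, dj, wk in [(0, 0, (1 - s) * (1 - t)), (1, 0, s * (1 - t)),
                       (0, 1, (1 - s) * t),       (1, 1, s * t)]:
        w[(j + dj) * gx + (i + di)] = wk
    return w

def inverse_ffd(uv_samples, img_positions, gx=5, gy=5):
    """Fit grid points P (in image space) so the bilinear FFD reproduces
    the dense texture->image mapping in the least-squares sense."""
    W = np.stack([bilinear_weights(uv, gx, gy) for uv in uv_samples])
    P, *_ = np.linalg.lstsq(W, img_positions, rcond=None)  # solves W P = X
    return P.reshape(gy, gx, 2)
```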
Interactive Meshing of User-Defined Point Sets [JCGT 2020]
Tsukasa Fukusato, Seung-Tak Noh, Takeo Igarashi, and Daichi Ito
This paper introduces an interactive framework for designing low-poly 3D models from 2D model sheets, which show how a model (e.g., a sketched character) looks from the front and the side. First, we present a prototype tool that allows 2D artists without 3D modeling skills to generate 3D point sets from 2D model sheets. This tool is simple but still useful for artists to manually convert their 2D designs into point sets one by one. We also implement a novel meshing tool based on an alpha-shape mechanism combined with a painting metaphor, in which the user assigns spatially varying alpha values to the point set while observing the intermediate results. We conducted a user study and confirmed that all participants could create 3D models within approximately 20 minutes.
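A minimal sketch of a spatially varying alpha-shape filter, assuming the common circumradius criterion over a Delaunay tetrahedralization (via SciPy); the paper's actual mechanism and data structures may differ.

```python
# Illustrative sketch of a spatially varying alpha shape (not the
# paper's implementation): a Delaunay tetrahedron is kept if its
# circumradius is below the mean painted alpha of its vertices.
import numpy as np
from scipy.spatial import Delaunay

def circumradius(pts):
    """Circumradius of a tetrahedron given as a (4, 3) array."""
    a = pts[1:] - pts[0]                    # edge vectors from vertex 0
    b = 0.5 * np.einsum('ij,ij->i', a, a)   # right-hand side of the system
    c = np.linalg.solve(a, b)               # circumcenter relative to vertex 0
    return np.linalg.norm(c)

def varying_alpha_shape(points, alpha):
    """points: (n, 3); alpha: (n,) user-painted per-point alpha values.
    Returns the kept tetrahedra; their boundary faces form the mesh."""
    tri = Delaunay(points)
    kept = [s for s in tri.simplices
            if circumradius(points[s]) < alpha[s].mean()]
    return np.array(kept)
```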
Parametric Fur from an Image [video] [TVC 2020] [code]
Seung-Tak Noh, Kenichi Takahashi, Masahiko Adachi, Takeo Igarashi
Parametric fur is a powerful tool for content creation in computer graphics. However, setting parameters to realize the desired result is difficult. To address this problem, we propose a method to automatically estimate appropriate parameters from an image. We formulate the process as an optimization problem wherein the system searches for parameters such that the appearance of the rendered parametric fur is as similar as possible to the appearance of the real fur. In each optimization step, we render an image using an off-the-shelf fur renderer and measure image similarity using a pre-trained deep convolutional neural network (CNN) model. We demonstrate that the proposed method can estimate fur parameters appropriately for a wide range of fur types.
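A minimal sketch of the estimate-by-rendering loop, assuming VGG16 features as the perceptual distance and a hypothetical renderer wrapper render_fur; the paper's renderer, network, and optimizer may differ.

```python
# Illustrative sketch of the appearance-matching loop. Assumptions:
# `render_fur` is a hypothetical wrapper around an off-the-shelf fur
# renderer (returning an image the same size as the photo), and VGG16
# features stand in for the pre-trained CNN.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from scipy.optimize import minimize

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def cnn_features(img):
    """img: (H, W, 3) float array in [0, 1] -> CNN feature tensor."""
    x = TF.normalize(torch.from_numpy(img).permute(2, 0, 1).float(),
                     mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        return vgg(x.unsqueeze(0))

def appearance_loss(params, target_feat):
    rendered = render_fur(params)  # hypothetical renderer call
    return torch.nn.functional.mse_loss(cnn_features(rendered),
                                        target_feat).item()

def estimate_fur_params(photo, initial_params):
    """Black-box search for fur parameters matching the photo."""
    target = cnn_features(photo)
    result = minimize(appearance_loss, initial_params, args=(target,),
                      method='Nelder-Mead')
    return result.x
```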
SkelSeg: Segmentation and Rigging of Raw-Scanned 3D Volume with User-Specified Skeleton [GI 2019] [CAG 2020]
Seung-Tak Noh, Kenichi Takahashi, Masahiko Adachi, Takeo Igarashi
RGB-D camera-based scanning has increased in popularity; however, raw-scanned three-dimensional (3D) models contain several issues, such as fused arms and legs, that hinder animation. Here, we describe a semiautomatic method that generates a rigged 3D mesh from a raw-scanned 3D volume with simple annotations. The user annotates a skeleton structure on registered photographs captured during the scanning step; the system then automatically cuts the fused body parts using the skeleton information and beautifies the cut surfaces by applying mesh smoothing. Finally, the system generates a skinned 3D mesh based on the user-specified 3D skeleton. We tested our method on several raw-scanned 3D plush toy models and successfully generated plausible animations.
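Purely as an illustration of binding a mesh to a user-specified skeleton, the sketch below assigns each vertex to its nearest bone segment; this rigid binding is a crude stand-in, not SkelSeg's skinning method.

```python
# Illustrative only (not SkelSeg's algorithm): rigidly binding each
# mesh vertex to the nearest bone of a user-specified skeleton, as a
# crude stand-in for the system's skinning step.
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the bone segment (a, b)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def nearest_bone_binding(vertices, bones):
    """vertices: (n, 3); bones: list of (head, tail) pairs.
    Returns the index of the closest bone for each vertex."""
    d = np.array([[point_segment_distance(v, a, b) for (a, b) in bones]
                  for v in vertices])
    return d.argmin(axis=1)
```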
Retouch Transfer for 3D Printed Face Replica with Automatic Alignment [CGI 2017-short]
Seung-Tak Noh, Takeo Igarashi
We present a system that automates the retouching process by transferring a result retouched by an expert to an arbitrary target face model. Our method first identifies facial features, such as the eyes and nose, of the 3D models by exploiting 2D face detection. For each facial part, the method establishes dense correspondences between the exemplar and the target using the detection result, and then transfers the geometry of the exemplar to the target by coating transfer. We show that our method improves not only the geometry of the model but also the 3D-printed replica.
(*During this period, I served in supplemental service, a substitute for military service in South Korea.)
TunnelSlice: Freehand Subspace Acquisition for Wearable AR [video] [IEEE THMS]
Hyeongmook Lee, Seung-Tak Noh, Woontack Woo
In this paper, we propose TunnelSlice, which enables natural acquisition of a subspace in an augmented scene from an egocentric view. TunnelSlice determines a cuboid transform by excluding unnecessary areas of a user-defined tunnel through two-handed, pinch-based procedural slicing from an egocentric view. Compared with two existing approaches, TunnelSlice was preferred by the subjects, showed greater stability in all scenarios, and outperformed the other approaches in a scenario involving strong occlusion without a central object.
HMD-Based Telepresence MR System [video] [author's copy] [ICAT-EGVE 2015] [part of code, demo]
Seung-Tak Noh, Hui-Shyong Yeo, Woontack Woo
We present a novel mixed-reality-based remote collaboration system that enables a local user to interact and collaborate with another user in a remote space using natural hand motion. Unlike conventional systems, where the remote user appears only inside a screen, our system can summon the remote user into the local space, where they appear as a virtual avatar in the real-world view seen by the local user. To support our avatar-mediated remote collaboration concept, we derive a systematic framework design that covers the hardware and software configuration across various devices. We explore novel techniques for calibrating and managing the coordinate system in an asymmetric setup, for sensor fusion between devices, and for generating human-like motion for the avatar.
Mirror Mirror [project page] [CHI 2016 Note] [SIGGRAPH 2015 Studio]
Daniel Saakes, Hui-Shyong Yeo, Seung-Tak Noh, Gyeol Han, Woontack Woo
Virtual fitting rooms equipped with magic mirrors let people evaluate fashion items without actually putting them on. The mirrors superimpose virtual clothes on the user’s reflection. We contribute the Mirror Mirror system, which not only supports mixing and matching of existing fashion items but also lets users design new items in front of the mirror and export designs to fabric printers. While much of the related work deals with interactive cloth simulation on live user data, we focus on collaborative design activities and explore various ways of designing on the body with a mirror.
3D Finger CAPE: Clicking Action and Position Estimation [project page] [video] [IEEE VR 2015/IEEE TVCG]
Youngkyoon Jang, Seung-Tak Noh, Hyung Jin Chang, Tae-Kyun Kim, Woontack Woo
In this paper, we present a novel framework for the simultaneous detection of click actions and estimation of occluded fingertip positions from single depth-image sequences captured from an egocentric view. For the detection and estimation, we present a novel probabilistic inference based on knowledge priors of the clicking motion and the clicked position. Based on the detection and estimation results, we achieve fine-grained bare-hand interaction with virtual objects from an egocentric viewpoint. Experimental results show that the proposed method delivers promising performance under the frequent self-occlusions that arise when selecting objects in AR/VR space while wearing an HMD with an attached egocentric depth camera.
Lighty: A Painting Interface for Room Illumination by Robotic Light Array [project page] [video] [ISMAR 2012 poster] [TVC 2014]
Seung-Tak Noh, Sunao Hashimoto, Daiki Yamanaka, Youichi Kamiyama, Masahiko Inami, Takeo Igarashi
We propose an AR-based painting interface that enables users to design an illumination distribution for a real room using an array of computer-controlled lights. Users specify the illumination distribution of the room by painting on an image obtained from a camera mounted in the room. The painting result is overlaid on the camera image as contour lines of the target illumination intensity. The system interactively runs an optimization to calculate the light parameters that deliver the requested illumination condition. In our implementation, we used actuated lights that can change their lighting direction, which generate the requested illumination condition more accurately and efficiently than static lights. We built a miniature-scale experimental environment and ran a user study to compare our method with a standard direct-manipulation method using widgets. The results showed that the users preferred our method for informal light control.
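For fixed light directions, the illumination at each camera pixel is linear in the per-light intensities, so matching a painted target reduces to non-negative least squares. The sketch below shows only that reduced problem; the actuated-direction search described in the paper is omitted, and the function names are illustrative, not the paper's solver.

```python
# Illustrative reduction (not the paper's solver): with fixed light
# directions, pixel illumination is linear in per-light intensities,
# so matching the painted target is a non-negative least-squares fit.
# `light_basis` would be captured by turning on one light at a time.
import numpy as np
from scipy.optimize import nnls

def solve_intensities(light_basis, target):
    """light_basis: (num_pixels, num_lights), the camera image of each
    light at unit intensity; target: (num_pixels,) painted target."""
    intensities, _residual = nnls(light_basis, target)
    return intensities
```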
Dynamic Comics for Hierarchical Abstraction of 3D Animation Data [project page] [video] [PG 2013/CGF]
Myung Geol Choi, Seung-Tak Noh, Taku Komura, Takeo Igarashi
In this paper, we propose a system that allows users to interactively browse an animation and produce a comic sequence out of it. Each snapshot in the comic optimally visualizes a duration of the original animation, taking into account the geometry and motion of the characters and objects in the scene. This is achieved by a novel algorithm that automatically produces a hierarchy of snapshots from the input animation. Our user interface allows users to arrange the snapshots according to the complexity of the movements of the characters and objects, the duration of the animation, and the page area available to visualize the comic sequence. Our system is useful for quickly browsing through a large amount of animation data and semi-automatically synthesizing a storyboard from a long sequence of animation.
Augmented-Virtuality based CPR Training System (2015~2016) [I.M.LAB, South Korea]
I also participated in the development of an augmented-virtuality-based CPR (cardiopulmonary resuscitation) training system as the main programmer.
Programming Environment for GPU Programming (2011~2012) (Japanese only) [Mitou-ipedia page]
[this will be updated soon...]