Sang Ge-Nan

Researcher at Alltuu Zhitu Laboratory

MCS, Hangzhou Normal University

Email: sangenan1@gmail.com



Commercialization of Research

Real-time image enhancement based on deep learning

Although deep convolutional neural networks have achieved strong results in image enhancement, existing work often targets only a specific degradation, such as low contrast or underexposure, and is too slow for practical use. I therefore designed a new image enhancement network together with a loss function that combines local and global feature constraints with prior terms, and I collected a dataset of 108,000 image pairs for training. Compared with previous methods, my results show more natural and appealing colors, better contrast, smaller deviation from images retouched by professional retouchers, no visible artifacts, and shorter inference time; the model runs in real time on mobile phones and other devices. Benchmarks against other state-of-the-art methods on the MIT-Adobe FiveK dataset show that my network outperforms existing approaches.
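To make the loss design concrete, below is a minimal PyTorch sketch of a loss that mixes a local pixel-wise term, a global image-statistics term, and a smoothness prior. The particular terms and weights are illustrative assumptions for the example, not the exact formulation used in this work.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhancementLoss(nn.Module):
    """Illustrative loss mixing a local (pixel-wise) term, a global
    (image-statistics) term, and a smoothness prior; the terms and
    weights here are placeholders, not the project's actual loss."""

    def __init__(self, w_local=1.0, w_global=0.1, w_prior=0.01):
        super().__init__()
        self.w_local, self.w_global, self.w_prior = w_local, w_global, w_prior

    def forward(self, pred, target):
        # Local constraint: per-pixel fidelity to the retouched reference.
        local = F.l1_loss(pred, target)
        # Global constraint: match image-level color/brightness statistics.
        global_term = F.mse_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))
        # Prior: total-variation smoothness to suppress artifacts.
        tv = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean() + \
             (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()
        return self.w_local * local + self.w_global * global_term + self.w_prior * tv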

High-precision human skin segmentation

The model produces refined, pixel-level segmentation of all skin regions in an image. The segmentation is robust to occlusion and adapts to a variety of complex scenes, such as frontal and profile portraits, different color temperatures, and overlapping human bodies.

High-definition portrait beautification based on skin-tone segmentation

Combining a variety of advanced AI algorithms, the system provides the main retouching functions, including skin smoothing, whitening, dark-spot removal, sharpening, facial-feature contouring, freckle removal, acne removal, and nasolabial-fold removal, while preserving a natural look. The algorithm I implemented has reached commercial quality.
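As a minimal sketch of the mask-guided skin-smoothing step: the code below assumes a skin mask from the segmentation model above and uses an OpenCV bilateral filter as the smoothing operator; the filter choice and parameters are illustrative assumptions, not the production pipeline.

import cv2
import numpy as np

def smooth_skin(image_bgr, skin_mask, d=9, sigma_color=60, sigma_space=60):
    """Blend an edge-preserving (bilateral) blur back into the image only
    where the skin mask is confident; parameters are illustrative."""
    blurred = cv2.bilateralFilter(image_bgr, d, sigma_color, sigma_space)
    # Skin mask as a soft alpha in [0, 1], broadcast over the color channels.
    alpha = (skin_mask.astype(np.float32) / 255.0)[..., None]
    out = alpha * blurred.astype(np.float32) + (1.0 - alpha) * image_bgr.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)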

Automatic image rotation correction with a convolutional neural network

I designed a new convolutional neural network model that estimates the correct horizontal angle of an image and rotates the image accordingly. The model ignores image metadata and format, instead using latent semantic features and object recognition to predict the correct horizontal angle. Extensive evaluation shows that the method handles real-world datasets and achieves accuracy comparable to humans.
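Below is a minimal sketch of the correction step only, assuming the CNN has already regressed the tilt; the predicted_angle_deg argument is a placeholder for the model's output, and the geometric rotation is done with OpenCV.

import cv2

def correct_rotation(image_bgr, predicted_angle_deg):
    """Rotate the image by the correction angle a CNN predicted for it.
    `predicted_angle_deg` is assumed to come from the angle-regression
    model described above; here it is just a function argument."""
    h, w = image_bgr.shape[:2]
    # Rotation matrix about the image center (counterclockwise, in degrees).
    m = cv2.getRotationMatrix2D((w / 2, h / 2), predicted_angle_deg, 1.0)
    return cv2.warpAffine(image_bgr, m, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)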

Automatic portrait beautification

Built on accurate facial key-point detection and an efficient image-warping algorithm, the system processes multiple faces in a single image, at arbitrary resolution, in real time. It supports face slimming, chin adjustment, face shrinking, overall face-size reduction, face narrowing, eye enlargement, eye-distance and eye-angle adjustment, nose slimming, and more, providing an all-round, star-like beautification experience.
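As a minimal sketch of one building block of such reshaping, the code below shows a landmark-guided local warp implemented with OpenCV remapping. The landmark source (e.g., dlib or MediaPipe) and the pull parameters are assumptions for illustration, not the actual deformation algorithm used in the product.

import cv2
import numpy as np

def local_shift_warp(image, center, target, radius):
    """Minimal 'liquify'-style local warp: content near `center` is pulled
    toward `target`, with the pull fading to zero at `radius`. A face-slimming
    step would call this with cheek landmarks as `center` and points nearer
    the face midline as `target`; the landmarks are assumed inputs here."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    dist2 = dx * dx + dy * dy
    # Falloff weight: 1 at the center, 0 at and beyond the radius.
    weight = np.clip(1.0 - dist2 / float(radius * radius), 0.0, 1.0)
    shift = np.array(target, np.float32) - np.array(center, np.float32)
    # Sample source coordinates shifted opposite to the desired motion.
    map_x = xs - weight * shift[0]
    map_y = ys - weight * shift[1]
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)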

Patents

  • Applications

    • A joint task learning method based on single-image super-resolution and perceptual image enhancement, Chinese Patent Application, 2020

    • Saliency-aware image cropping method based on latent region pairs, Chinese Patent Application, 2020

    • An image enhancement method based on deep learning, Chinese Patent Application, 2020

    • A method and device for generating a jigsaw puzzle, Chinese Patent Application, 2020

    • A photo screening system based on image saliency detection and eye state detection, Chinese Patent Application, 2019

    • A blurry-photo detection method based on deep learning, Chinese Patent Application, 2019

Papers

  • Xu, Y., Xu, W., Wang, M., Li, L., Sang, G., Wei, P., & Zhu, L. (2021). Saliency aware image cropping with latent region pair. Expert Systems with Applications, 171, 114596.

  • Xu, Y., Zhang, N., Li, L., Sang, G., Zhang, Y., Wang, Z., & Wei, P. (2021). Joint Learning of Super-Resolution and Perceptual Image Enhancement for Single Image. IEEE Access, 9, 48446-48461.

  • Xu, Y., Zhang, N., Wei, P., Sang, G., Li, L., & Yuan, F. (2020). Deep neural framework with visual attention and global context for predicting image aesthetics. IEEE Access.

  • Sang, G., & Han, X. (2019). Balance characteristics of scientists' research collaboration relationships. Journal of University of Electronic Science and Technology of China, 48(5), 786-793. (in Chinese)