Computer Science ›› 2022, Vol. 49 ›› Issue (4): 195-202.doi: 10.11896/jsjkx.210300140

• Computer Graphics & Multimedia •

Sketch Colorization Method with Drawing Prior

DOU Zhi, WANG Ning, WANG Shi-jie, WANG Zhi-hui, LI Hao-jie   

  1. College of Software Technology, Dalian University of Technology, Dalian, Liaoning 116000, China
  • Received: 2021-03-12    Revised: 2021-07-03    Published: 2022-04-01
  • About author: DOU Zhi, born in 1996, postgraduate. His main research interests include computer vision and image generation. LI Hao-jie, born in 1972, professor, Ph.D. supervisor, is a member of China Computer Federation. His main research interests include computer vision and image processing.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61772108, 61932020, 61976038).

Abstract: Automatic sketch colorization has become an important research topic in computer vision. Previous methods intend to improve colorization quality with advanced network architectures or innovative pipelines. However, they usually generate results with concentrated hue and unreasonable saturation and gray distributions. To alleviate these problems, this paper proposes a sketch colorization method with drawing priors. Inspired by the actual coloring process, the method learns widely used drawing priors (such as hue variation, saturation contrast, and gray contrast) to improve the quality of automatic sketch colorization. Specifically, it incorporates a pixel-level loss in the HSV color space to obtain more natural results with fewer artifacts. Meanwhile, three heuristic loss functions that introduce the drawing priors of hue variation, saturation contrast, and gray contrast are used during training, so that the method generates results with harmonious color composition. We compare our method with current state-of-the-art methods on a test dataset constructed from real sketch images. Fréchet inception distance (FID) and mean opinion score (MOS) are adopted to measure the similarity between the distributions of real and generated images and the visual quality, respectively. Experimental results show that, compared with the second-best method, our method decreases FID by 21.00 and increases MOS by 0.96. All the experimental results prove that the proposed method effectively improves the visual quality of automatic sketch colorization.
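The abstract names the losses only at a high level (a pixel-level HSV loss plus heuristic hue-variation, saturation-contrast, and gray-contrast terms) and gives no formulas. The snippet below is a minimal PyTorch sketch of how such HSV-space terms could look; the helpers rgb_to_hsv, hsv_pixel_loss, and saturation_contrast_loss and the margin value are illustrative assumptions, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def rgb_to_hsv(rgb, eps=1e-8):
    # rgb: (B, 3, H, W) in [0, 1]; returns HSV with all channels in [0, 1]
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    maxc, _ = rgb.max(dim=1)
    minc, _ = rgb.min(dim=1)
    delta = maxc - minc + eps
    v = maxc                                  # value
    s = (maxc - minc) / (maxc + eps)          # saturation
    # hue is piecewise, depending on which channel holds the maximum
    h = torch.where(maxc == r, ((g - b) / delta) % 6.0,
        torch.where(maxc == g, (b - r) / delta + 2.0,
                               (r - g) / delta + 4.0))
    return torch.stack([h / 6.0, s, v], dim=1)

def hsv_pixel_loss(fake_rgb, real_rgb):
    # Pixel-level L1 distance in HSV space (hue wrap-around ignored for brevity).
    return F.l1_loss(rgb_to_hsv(fake_rgb), rgb_to_hsv(real_rgb))

def saturation_contrast_loss(fake_rgb, margin=0.2):
    # Rough proxy for the saturation-contrast prior: penalise generated images
    # whose per-image saturation spread falls below a margin. A gray-contrast
    # term could be written analogously on the V (or luminance) channel.
    s = rgb_to_hsv(fake_rgb)[:, 1]
    spread = s.flatten(1).std(dim=1)          # per-image saturation spread
    return F.relu(margin - spread).mean()
```

In training, such terms would typically be weighted and added to the adversarial and reconstruction losses of the colorization GAN.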

Key words: Automatic sketch colorization, Deep learning, Drawing prior, Generative adversarial networks (GAN), HSV color space

CLC Number: 

  • TP391