Computer Science ›› 2023, Vol. 50 ›› Issue (7): 129-136. doi: 10.11896/jsjkx.220700008

• Computer Graphics & Multimedia •

Arbitrary Image Style Transfer with Consistent Semantic Style

YAN Mingqiang, YU Pengfei, LI Haiyan, LI Hongsong   

  1. School of Information, Yunnan University, Kunming 650000, China
  • Received: 2022-07-01  Revised: 2022-11-17  Online: 2023-07-15  Published: 2023-07-05
  • About author: YAN Mingqiang, born in 1996, postgraduate. His main research interests include pattern recognition and image style transfer. YU Pengfei, born in 1974, Ph.D, associate professor. His main research interests include pattern recognition and image processing.
  • Supported by:
    National Natural Science Foundation of China (62066046).

Abstract: The goal of image style transfer is to synthesize an output image by transferring the style of a target image onto a given content image. Although a large number of image style transfer methods exist, their stylization results ignore the manifold distributions of the different semantic regions of the content image. At the same time, most methods match style features to content features using global statistics (for example, the Gram matrix or the covariance matrix), which inevitably causes content loss, style leakage, and artifacts, producing inconsistent stylization results. To address these problems, a self-attention-based progressive manifold feature mapping module (MFMM-AM) is proposed to coordinately match features between related content and style manifolds. Exact histogram matching (EHM) is then applied to achieve higher-order distribution matching between the style and content feature maps, reducing the loss of image information. Finally, two contrastive losses are introduced that exploit the external information of large-scale style datasets to learn human-perceived style information, making the color distribution and texture patterns of the stylized images more reasonable. Experimental results show that, compared with existing typical arbitrary image style transfer methods, the proposed network greatly narrows the gap between human-created and AI-created artworks and generates visually more harmonious and satisfying artistic images.
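To make the contrast drawn in the abstract concrete, the following minimal NumPy sketch (an illustration under assumed feature shapes and function names, not the authors' MFMM-AM or EHM implementation) compares a second-order global style statistic (the Gram matrix) with exact, sorted-value histogram matching of feature maps, which matches the full per-channel value distribution rather than only its first two moments.

import numpy as np

def gram_matrix(feat: np.ndarray) -> np.ndarray:
    """Second-order global style statistic: channel-wise Gram matrix.
    feat: (C, H, W) feature map from an encoder such as VGG."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def exact_histogram_match(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Exact (higher-order) per-channel distribution matching: replace the
    sorted values of the content feature with the sorted values of the style
    feature while keeping the content's rank order (spatial arrangement)."""
    assert content.shape == style.shape, "sketch assumes equal feature shapes"
    c, h, w = content.shape
    out = np.empty_like(content)
    for ch in range(c):
        cf = content[ch].ravel()
        sf = style[ch].ravel()
        order = np.argsort(cf)           # ranks of content activations
        matched = np.empty_like(cf)
        matched[order] = np.sort(sf)     # assign style values rank-by-rank
        out[ch] = matched.reshape(h, w)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    content_feat = rng.normal(size=(8, 16, 16)).astype(np.float32)
    style_feat = rng.normal(loc=1.0, scale=2.0, size=(8, 16, 16)).astype(np.float32)
    print("Gram matrix shape:", gram_matrix(style_feat).shape)
    stylized = exact_histogram_match(content_feat, style_feat)
    # Each output channel now shares the style channel's full value
    # distribution, not just its mean and variance.
    print("matched mean/std:", stylized[0].mean(), stylized[0].std())

Because exact matching preserves the content feature's rank order while borrowing the style feature's value distribution, it captures higher-order statistics that Gram- or covariance-based matching alone cannot, which is the motivation the abstract gives for applying EHM.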

Key words: Image style transfer, Manifold distribution, Self-attention mechanism, Feature mapping, Higher-order distribution matching

CLC Number: TP391