Computer Science ›› 2021, Vol. 48 ›› Issue (1): 241-246. doi: 10.11896/jsjkx.200700187

• Artificial Intelligence •

Conditional Generative Adversarial Network Based on Self-attention Mechanism

YU Wen-jia, DING Shi-fei   

  1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China
  • Received: 2020-07-29  Revised: 2020-09-22  Online: 2021-01-15  Published: 2021-01-15
  • About author: YU Wen-jia, born in 1994, postgraduate, is a student member of China Computer Federation. His main research interests include deep learning and computer vision.
    DING Shi-fei, born in 1963, Ph.D, professor, Ph.D supervisor, is a director of China Computer Federation. His main research interests include artificial intelligence, machine learning, pattern recognition and data mining.
  • Supported by:
    National Natural Science Foundation of China (61672522, 61976216).

Abstract: In recent years, generative adversarial networks (GANs) have appeared in more and more fields of deep learning. The conditional generative adversarial network (cGAN) was the first to introduce supervised learning into the unsupervised GAN, making it possible for adversarial networks to generate labeled data. A traditional GAN generates images through stacked convolution operations, which model dependencies only among neighboring regions. cGAN, however, only modifies the objective function of GAN and leaves its network structure unchanged, so it inherits the same limitation: features that lie far apart in the generated image are only weakly related, which leaves the details of the generated image unclear. To solve this problem, this paper introduces the self-attention mechanism into cGAN and proposes a new model named SA-cGAN. The model generates consistent objects and scenes by exploiting long-range feature dependencies in the image, thereby improving the generative ability of the conditional GAN. SA-cGAN is evaluated on the CelebA and MNIST handwritten digit datasets and compared with several commonly used generative models such as DCGAN and cGAN. The results show that the proposed model achieves progress in the field of image generation.
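To make the idea concrete, the following PyTorch sketch shows how a SAGAN-style self-attention block can be inserted into a conditional generator. It is a minimal illustration only: the layer sizes, the placement of attention after the 16x16 feature map, and the conditioning by concatenating a label embedding with the noise vector are assumptions made for this example, not the exact SA-cGAN architecture described in the paper.

    # Illustrative sketch only: SAGAN-style self-attention inside a conditional generator.
    # Layer sizes, attention placement and label conditioning are assumptions, not the
    # authors' exact SA-cGAN design.
    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Self-attention over the spatial positions of a feature map."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key   = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight, starts at 0

        def forward(self, x):
            b, c, h, w = x.size()
            q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # (B, N, C//8)
            k = self.key(x).view(b, -1, h * w)                      # (B, C//8, N)
            attn = torch.softmax(torch.bmm(q, k), dim=-1)           # (B, N, N) long-range weights
            v = self.value(x).view(b, -1, h * w)                    # (B, C, N)
            out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
            return self.gamma * out + x                             # residual connection

    class ConditionalGenerator(nn.Module):
        """Conditional generator: label embedding concatenated with noise, attention at 16x16."""
        def __init__(self, z_dim=100, n_classes=10, base=64):
            super().__init__()
            self.embed = nn.Embedding(n_classes, z_dim)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(2 * z_dim, base * 4, 4, 1, 0), nn.BatchNorm2d(base * 4), nn.ReLU(True),  # 4x4
                nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),   # 8x8
                nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),           # 16x16
                SelfAttention(base),                                 # models long-range dependencies
                nn.ConvTranspose2d(base, 1, 4, 2, 1), nn.Tanh(),     # 32x32 output
            )

        def forward(self, z, labels):
            cond = torch.cat([z, self.embed(labels)], dim=1).unsqueeze(-1).unsqueeze(-1)
            return self.net(cond)

    # Example: generate a batch of class-conditioned 32x32 images
    g = ConditionalGenerator()
    fake = g(torch.randn(8, 100), torch.randint(0, 10, (8,)))
    print(fake.shape)  # torch.Size([8, 1, 32, 32])

Because gamma starts at zero, training begins with a purely convolutional generator and gradually learns how much long-range attention to mix into the feature maps.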

Key words: Deep learning, Generative adversarial network, cGAN, Self-attention, SA-cGAN

CLC Number: TP391
[1] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative Adversarial Nets[J].arXiv:1406.2661,2014.
[2] CAO Y J,JIA L L,CHEN Y X,et al.Review of computer vision based on generative adversarial networks[J].Journal of Image and Graphics,2018,23(10):1433-1449.
[3] WANG K F,GOU C,DUAN Y J,et al.Generative Adversarial Networks:The State of the Art and Beyond[J].ACTA Automatica Sinica,2017,43(3):321-332.
[4] LECUN Y,BENGIO Y,HINTON G.Deep learning[J].Nature,2015,521(7553):436.
[5] JÜRGEN S.Deep learning in neural networks:An overview[J].Neural Netw,2015,61:85-117.
[6] CHENG J,WANG P S,LI G,et al.Recent advances in efficient computation of deep convolutional neural networks[J].Frontiers of Information Technology & Electronic Engineering,2018,19(1):67-80.
[7] KOZIARSKI M,CYGANEK B.Impact of Low Resolution on Image Recognition with Deep Neural Networks:An Experimental Study[J].International Journal of Applied Mathematics and Computer Science,2018,28(4):735-744.
[8] RADFORD A,METZ L,CHINTALA S.Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks[J].arXiv:1511.06434v2,2016.
[9] MIRZA M,OSINDERO S.Conditional Generative Adversarial Nets[J].arXiv:1411.1784,2014.
[10] ARJOVSKY M,CHINTALA S,BOTTOU L.Wasserstein GAN[J].arXiv:1701.07875v3,2017.
[11] FUGLEDE B,TOPSOE F.Jensen-Shannon divergence and Hilbert space embedding[C]//International Symposium on Information Theory.IEEE,2004:31.
[12] LU B,HANCOCK E R.Graph Kernels from the Jensen-Shannon Divergence[J].Journal of Mathematical Imaging and Vision,2013,47(1):60-69.
[13] GULRAJANI I,AHMED F,ARJOVSKY M,et al.Improved Training of Wasserstein GANs[J].arXiv:1704.00028v3,2017.
[14] LAWRENCE S,GILES C L,TSOI A C,et al.Face recognition:a convolutional neural-network approach[J].IEEE Transactions on Neural Networks,1997,8(1):98-113.
[15] VRHEL M,SABER E,TRUSSELL H J.Color image generation and display technologies[J].IEEE Signal Processing Magazine,2005,22(1):23-33.
[16] BODLA N,GANG H,CHELLAPPA R.Semi-supervised FusedGAN for Conditional Image Generation[C]//Computer Vision and Pattern Recognition.2018:669-683.
[17] STEFAN D,RUSSO R,DAVID M,et al.Disjunction Category Labels[C]//Nordic Conference on Information Security Technology for Applications.Springer-Verlag,2011.
[18] GOLDSTONE R L,LIPPA Y,SHIFFRIN R M.Altering object representations through category learning[J].Cognition,2001,78(1):27-43.
[19] ZHANG N,DING S F,ZHANG J.Multi Layer ELM-RBF for Multi-Label Learning[J].Applied Soft Computing,2016,43(6):535-545.
[20] STOCKMAN G C.Computer Vision[M].Prentice Hall,2001.
[21] CAO K,WU,LUO L Z,et al.Face completion algorithm based on conditional generative adversarial network[J].Transducer and Microsystem Technologies,2019,38(6):129-132.
[22] TANG X L,DU Y M,LIU Y W,et al.Image Recognition With Conditional Deep Convolutional Generative Adversarial Networks[J].ACTA Automatica Sinica,2018,44(5):855-864.
[23] ZHANG H,GOODFELLOW I,METAXAS D,et al.Self-Attention Generative Adversarial Networks[J].arXiv:1805.08318v2,2019.
[24] VASWANI A,SHAZEER N,PARMAR N,et al.Attention is All you Need[C]//Neural Information Processing Systems.2017:5998-6008.
[25] LU J J,GONG Y.Text sentiment classification model based on self-attention and expanded convolutional neural network[J].Computer Engineering and Design,2020,41(6):1645-1651.
[26] COLLOBERT R,WESTON J,BOTTOU L,et al.Natural Language Processing (Almost) from Scratch[J].Journal of Machine Learning Research,2011,12:2493-2537.
[27] LIU Z W,LUO P,WANG X G,et al.Large-scale CelebFaces Attributes (CelebA) Dataset[J].Retrieved August,2018,15.
[28] DENG L.The MNIST Database of Handwritten Digit Images for Machine Learning Research[J].IEEE Signal Processing Magazine,2012,29(6):141-142.
[29] KINGMA D P,BA J.Adam:A Method for Stochastic Optimization[J].arXiv:1412.6980v9,2014.