Computer Science ›› 2022, Vol. 49 ›› Issue (4): 215-220.doi: 10.11896/jsjkx.210200174

• Computer Graphics & Multimedia •

Infrared and Visible Image Fusion Network Based on Optical Transmission Model Learning

YAN Min1, LUO Xiao-qing1, ZHANG Zhan-cheng2   

  1 School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China;
    2 School of Electronic & Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu 215009, China
  • Received: 2021-02-26  Revised: 2021-07-13  Published: 2022-04-01
  • About author: YAN Min, born in 1996, master. His main research interests include image fusion. LUO Xiao-qing, born in 1980, associate professor. Her main research interests include image fusion and computer vision.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China(61772237) and Six Talent Peak Projects in Jiangsu Province(XYDXX-030).

Abstract: The fusion of infrared and visible images yields more comprehensive and richer information. Because existing datasets contain no ground-truth reference images, existing fusion networks merely try to strike a balance between the two modalities, and supervised learning methods cannot be applied to image fusion directly. In this paper, a multimodal image synthesis method based on the ambient light transmission model is proposed. Using the NYU-Depth labeled dataset and its depth annotations, a set of infrared and visible image pairs with ground-truth fusion images is synthesized. An edge loss function and a detail loss function are introduced into the conditional GAN, and the network is trained end-to-end on the synthesized multimodal image dataset to obtain a fusion network. The trained network makes the fused image retain the details of the visible image and the salient characteristics of the infrared image, and sharpens the boundaries of thermal targets from the infrared image. Compared with state-of-the-art methods including IFCNN, DenseFuse and FusionGAN on the public TNO benchmark dataset, the effectiveness of the proposed method is verified by subjective and objective image quality evaluation.
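The abstract does not give the exact synthesis equations, but the ambient light transmission model it refers to is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)) (cf. [13-15]). The minimal sketch below shows how a clear NYU-Depth RGB image and its depth map could be degraded into a visible-like image under that model; the scattering coefficient, the airlight value, and the function name are illustrative assumptions, and the companion infrared image and ground-truth fusion image would be derived separately from the dataset's annotations.

```python
import numpy as np

def synthesize_visible(clear_rgb: np.ndarray,
                       depth: np.ndarray,
                       beta: float = 1.0,
                       airlight: float = 0.8) -> np.ndarray:
    """Degrade a clear RGB image (float, [0, 1], shape HxWx3) with the
    ambient light transmission model
        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x)).
    `beta` (scattering coefficient) and `airlight` (A) are illustrative values.
    """
    d = depth / (depth.max() + 1e-8)          # normalize depth to [0, 1]
    t = np.exp(-beta * d)[..., None]          # per-pixel transmission map
    return clear_rgb * t + airlight * (1.0 - t)
```

Pixels far from the camera receive low transmission and are dominated by the airlight term, mimicking the attenuation of scene radiance that the visible modality suffers, while the original clear image can serve as part of the ground-truth reference for supervised training.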

Key words: GAN, Image fusion, Optical transmission model, Synthesized dataset
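The edge and detail loss functions mentioned in the abstract are not specified on this page; the sketch below shows one plausible formulation, assuming a Sobel-gradient L1 term for the edge loss and a pixel-wise L1 term for the detail loss, both computed against the synthesized ground-truth fusion image. The PyTorch formulation and all names are assumptions for illustration, not the authors' exact definitions.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal / vertical gradients (single-channel images).
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def sobel_grad(img: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of an (N, 1, H, W) image."""
    gx = F.conv2d(img, _SOBEL_X.to(img), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """L1 distance between gradient maps: encourages sharp thermal-target boundaries."""
    return F.l1_loss(sobel_grad(fused), sobel_grad(reference))

def detail_loss(fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """L1 pixel-intensity distance: encourages preservation of visible-image detail."""
    return F.l1_loss(fused, reference)
```

In training, such terms would be added to the conditional adversarial loss of the generator with suitable weights, which is the usual way content losses are combined with a conditional GAN objective.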

CLC Number: TP18
[1] MA J Y,MA Y.Infrared and visible image fusion methods and applications:A survey[J].Information Fusion,2019,45:153-178.
[2] LIU Y,CHEN X,CHENG J,et al.Infrared and visible image fusion with convolutional neural networks[J].International Journal of Wavelets,Multiresolution and Information Processing,2018,16(3):1.
[3] TOET A.Image fusion by a ratio of low-pass pyramid[J].Pattern Recognition Letters,1989,9(4):245-253.
[4] LIU Y,JING J,WANG Q,et al.Region level based multi-focus image fusion using quaternion wavelet and normalized cut[J].Signal Processing,2014,97(7):9-30.
[5] WANG J,PENG J,FENG X,et al.Fusion method for infrared and visible images by using non-negative sparse representation[J].Infrared Physics & Technology,2014,67:477-489.
[6] LI H,WU X J.Infrared and visible image fusion using Latent Low-Rank Representation[J].arXiv:1804.08992,2018.
[7] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative Adversarial Networks[J].Advances in Neural Information Processing Systems,2014,3:2672-2680.
[8] WANG H,LI S,SONG L,et al.A novel convolutional neural network based fault recognition method via image fusion of multi-vibration-signals[J].Computers in Industry,2019,105:182-190.
[9] MA J Y,WEI Y,LIANG P W,et al.FusionGAN:A generative adversarial network for infrared and visible image fusion[J].Information Fusion,2019,48:11-26.
[10] MA J Y,LIANG P W,et al.Infrared and visible image fusion via detail preserving adversarial learning[J].Information Fusion,2020,54:85-98.
[11] MA J Y,HAN X.DDcGAN:A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J].IEEE Transactions on Image Processing,2020,29:4980-4995.
[12] ZHANG Y,LIU Y.IFCNN:a general image fusion framework based on convolutional neural network[J].Information Fusion,2020,54:99-118.
[13] NARASIMHAN S G,NAYAR S K.Vision and the Atmosphere[J].International Journal of Computer Vision,2002,48(3):233-254.
[14] HUANG S C,YE J H,CHEN B H.An Advanced Single Image Visibility Restoration Algorithm for Real World Hazy Scenes[J].IEEE Transactions on Industrial Electronics,2015,62(5):2962-2972.
[15] HE K M,TANG X.Single image haze removal using dark channel prior[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(12):2341-2353.
[16] SILBERMAN N,HOIEM D,KOHLI P,et al.Indoor Segmentation and Support Inference from RGBD Images[J].Lecture Notes in Computer Science,2012,7576(1):761-774.
[17] MIRZA M,OSINDERO S.Conditional Generative Adversarial Nets[J].arXiv:1411.1784,2014.
[18] PATHAK D,KRAHENBUHL P,DONAHUE J,et al.Context Encoders:Feature Learning by Inpainting[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops.Washington:IEEE Computer Society,2016:2536-2544.
[19] HE K M,ZHANG X Y.Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington:IEEE Computer Society,2016:770-778.
[20] HE K M,ZHANG X.Delving Deep into Rectifiers:Surpassing Human-Level Performance on ImageNet Classification[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision.Washington:IEEE Computer Society,2015:1026-1034.
[21] LI H,WU X J.DenseFuse:A Fusion Approach to Infrared and Visible Images[J].IEEE Transactions on Image Processing,2018,28(5):2614-2623.
[22] LI H,WU X J.Infrared and Visible Image Fusion using a Deep Learning Framework[C]//Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR).Piscataway:IEEE,2018:2705-2710.