Computer Science ›› 2022, Vol. 49 ›› Issue (4): 215-220. doi: 10.11896/jsjkx.210200174

• Computer Graphics & Multimedia •

Infrared and Visible Image Fusion Network Based on Optical Transmission Model Learning

YAN Min1, LUO Xiao-qing1, ZHANG Zhan-cheng2

  1 School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China;
    2 School of Electronic & Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu 215009, China
  • Received: 2021-02-26  Revised: 2021-07-13  Published: 2022-04-01
  • Corresponding author: LUO Xiao-qing (xqluo@jiangnan.edu.cn)
  • About the authors: YAN Min (amin_biu@163.com), born in 1996, master. His main research interests include image fusion. LUO Xiao-qing, born in 1980, associate professor. Her main research interests include image fusion and computer vision.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61772237) and the Six Talent Peak Projects in Jiangsu Province (XYDXX-030).

Abstract: The fusion of infrared and visible images can provide more comprehensive and richer information. Because no ground-truth fused image exists for reference, existing fusion data sets lack fused images to serve as supervision, supervised training methods cannot be applied directly to image fusion, and existing fusion networks simply try to strike a balance between the two modalities. This paper therefore proposes a multimodal image synthesis method based on the ambient light transmission model. Using the labeled NYU-Depth data set and its depth annotations, a set of infrared and visible multimodal image pairs with ground-truth fusion references is synthesized. Edge and detail loss functions are designed in a conditional GAN, and the network is trained end-to-end on the synthesized multimodal data set, yielding a fusion network. The trained network allows the fused image to retain the details of the visible image and the target features of the infrared image, and sharpens the boundaries of thermal radiation targets in the infrared image. Compared with state-of-the-art methods such as IFCNN, DenseFuse, and FusionGAN on the public TNO benchmark data set, the effectiveness of the proposed method is verified by subjective and objective image quality evaluation.
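To make the synthesis step concrete, the following minimal Python sketch applies the standard optical transmission model I(x) = J(x)t(x) + A(1 - t(x)) with t(x) = exp(-βd(x)) (cf. [13,15]) to an RGB frame and its aligned depth map, as one plausible way to build such image pairs. The function names, the β and A values, and the grayscale infrared surrogate are illustrative assumptions, not the authors' released code.

# Minimal sketch: synthesize a degraded "visible" image and an intensity-based
# "infrared" surrogate from an RGB image and its aligned depth map (e.g. NYU-Depth).
# Assumptions: beta/airlight values and the grayscale surrogate are illustrative only.
import numpy as np

def transmission(depth, beta=1.0):
    """t(x) = exp(-beta * d(x)): optical transmission along the line of sight."""
    return np.exp(-beta * depth)

def apply_transmission_model(rgb, depth, beta=1.0, airlight=0.8):
    """I(x) = J(x) * t(x) + A * (1 - t(x)): scene radiance attenuated by the medium."""
    t = transmission(depth, beta)[..., None]          # H x W x 1
    return rgb * t + airlight * (1.0 - t)

def synthesize_pair(rgb, depth):
    # "Visible" branch: the scene seen through the scattering medium.
    visible = apply_transmission_model(rgb, depth, beta=1.2, airlight=0.8)
    # "Infrared" surrogate: a low-detail intensity image that keeps bright targets,
    # standing in for thermal radiance (illustrative, not a physical IR model).
    infrared = rgb.mean(axis=-1, keepdims=True)
    # The clean RGB image J(x) serves as the ground-truth fusion reference.
    return visible, infrared, rgb

if __name__ == "__main__":
    rgb = np.random.rand(480, 640, 3).astype(np.float32)       # placeholder for an NYU-Depth frame
    depth = np.random.rand(480, 640).astype(np.float32) * 4.0  # placeholder depth in metres
    vis, ir, gt = synthesize_pair(rgb, depth)
    print(vis.shape, ir.shape, gt.shape)

Under this reading, the clean image J(x) plays the role of the reference fusion image that makes supervised end-to-end training possible.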

Key words: GAN, Image fusion, Optical transmission model, Synthesized dataset
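The abstract states that edge and detail loss functions are designed in the conditional GAN. The fragment below sketches one common formulation of such terms (Sobel-gradient agreement for edges, pixel-wise L1 for detail) in PyTorch, purely as an assumed form; the operators and loss weights are not taken from the paper.

# Sketch of possible edge and detail loss terms for the generator (assumed form,
# not the paper's exact definition): Sobel-gradient agreement for edges,
# pixel-wise L1 against the synthesized reference for detail.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def sobel_edges(img):
    """Gradient magnitude of a single-channel image batch (N x 1 x H x W)."""
    gx = F.conv2d(img, _SOBEL_X.to(img), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img), padding=1)
    return torch.sqrt(gx * gx + gy * gy + 1e-6)

def edge_loss(fused, reference):
    return F.l1_loss(sobel_edges(fused), sobel_edges(reference))

def detail_loss(fused, reference):
    return F.l1_loss(fused, reference)

def generator_content_loss(fused, reference, lambda_edge=10.0, lambda_detail=1.0):
    # Weights are placeholders; the adversarial term of the conditional GAN is added elsewhere.
    return lambda_edge * edge_loss(fused, reference) + lambda_detail * detail_loss(fused, reference)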

CLC number: TP18

References
[1] MA J Y,MA Y.Infrared and visible image fusion methods and applications:A survey[J].Information Fusion,2019,45:153-178.
[2] LIU Y,CHEN X,CHENG J,et al.Infrared and visible image fusion with convolutional neural networks[J].International Journal of Wavelets,Multiresolution and Information Processing,2018,16(3):1.
[3] TOET A.Image fusion by a ratio of low-pass pyramid[J].Pattern Recognition Letters,1989,9(4):245-253.
[4] LIU Y,JING J,WANG Q,et al.Region level based multi-focus image fusion using quaternion wavelet and normalized cut[J].Signal Processing,2014,97(7):9-30.
[5] WANG J,PENG J,FENG X,et al.Fusion method for infrared and visible images by using non-negative sparse representation[J].Infrared Physics & Technology,2014,67:477-489.
[6] LI H,WU X J.Infrared and visible image fusion using Latent Low-Rank Representation[J].arXiv:1804.08992,2018.
[7] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative Adversarial Networks[J].Advances in Neural Information Processing Systems,2014,3:2672-2680.
[8] WANG H,LI S,SONG L,et al.A novel convolutional neural network based fault recognition method via image fusion of multi-vibration-signals[J].Computers in Industry,2019,105:182-190.
[9] MA J Y,WEI Y,LIANG P W,et al.FusionGAN:A generative adversarial network for infrared and visible image fusion[J].Information Fusion,2019,48:11-26.
[10] MA J Y,LIANG P W,et al.Infrared and visible image fusion via detail preserving adversarial learning[J].Information Fusion,2020,54:85-98.
[11] MA J Y,HAN X.DDcGAN:A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J].IEEE Transactions on Image Processing,2020,29:4980-4995.
[12] ZHANG Y,LIU Y.IFCNN:a general image fusion framework based on convolutional neural network[J].Information Fusion,2020,54:99-118.
[13] NARASIMHAN S G,NAYAR S K.Vision and the Atmosphere[J].International Journal of Computer Vision,2002,48(3):233-254.
[14] HUANG S C,YE J H,CHEN B H.An Advanced Single Image Visibility Restoration Algorithm for Real World Hazy Scenes[J].IEEE Transactions on Industrial Electronics,2015,62(5):2962-2972.
[15] HE K M,TANG X.Single image haze removal using dark channel prior[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(12):2341-2353.
[16] SILBERMAN N,HOIEM D,KOHLI P,et al.Indoor Segmentation and Support Inference from RGBD Images[J].Lecture Notes in Computer Science,2012,7576(1):761-774.
[17] MIRZA M,OSINDERO S.Conditional Generative Adversarial Nets[J].arXiv:1411.1784,2014.
[18] PATHAK D,KRAHENBUHL P,DONAHUE J,et al.Context Encoders:Feature Learning by Inpainting[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington:IEEE Computer Society,2016:2536-2544.
[19] HE K M,ZHANG X Y,et al.Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington:IEEE Computer Society,2016:770-778.
[20] HE K M,ZHANG X Y,et al.Delving Deep into Rectifiers:Surpassing Human-Level Performance on ImageNet Classification[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision.Washington:IEEE Computer Society,2015:1026-1034.
[21] LI H,WU X J.DenseFuse:A Fusion Approach to Infrared and Visible Images[J].IEEE Transactions on Image Processing,2018,28(5):2614-2623.
[22] LI H,WU X J.Infrared and Visible Image Fusion Using a Deep Learning Framework[C]//Proceedings of the 2018 24th International Conference on Pattern Recognition(ICPR).Washington:IEEE Computer Society,2018:2705-2710.