Computer Science ›› 2024, Vol. 51 ›› Issue (10): 311-319. doi: 10.11896/jsjkx.230800069

• Computer Graphics & Multimedia •


Infrared and Visible Deep Unfolding Image Fusion Network Based on Joint Enhancement Image Pairs

YUAN Tianhui, GAN Zongliang   

  1. School of Communication and Information Engineering,Nanjing University of Posts and Telecommunications,Nanjing 210003,China
  • Received:2023-08-10 Revised:2024-01-19 Online:2024-10-15 Published:2024-10-11
  • Corresponding author: GAN Zongliang(ganzl@njupt.edu.cn)
  • About author:YUAN Tianhui,born in 1999,postgraduate,is a member of CCF(No.P5168G).Her main research interests include computer vision,image fusion and deep neural networks(1021010503@njupt.edu.cn).
    GAN Zongliang,born in 1979,Ph.D,associate professor,master supervisor.His main research interests include video analysis,image processing and neural networks.
  • Supported by:
    National Natural Science Foundation of China(61471201).



Abstract: Affected by the acquisition environment, fused infrared and visible images sometimes suffer from low brightness and insufficient detail. To address this, a novel infrared and visible deep unfolding image fusion network based on joint enhancement image pairs is proposed, which takes both the original infrared/visible image pair and its enhanced counterpart as input to strengthen the network's ability to fuse information. First, a residual unfolding module is proposed, and on this basis an iterative residual unfolding convolutional network is built for feature extraction, which extracts the background or detail features of the corresponding image according to different initialization parameters. Then, a concatenation operation and an up/down-sampling convolution pair are introduced into the convolutional feature fusion network, so that the features of the enhanced infrared/visible image pair are aggregated into the fusion task and the discrepant features of the source images are preserved to the maximum extent. Meanwhile, the loss function weights are optimized to obtain the best fusion results. Extensive experiments on multiple datasets demonstrate that, compared with existing representative fusion methods, the proposed method yields fusion images that are competitive in both subjective visual quality and objective metrics, and outperforms the other methods in low-light environments.
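
To make the described data flow concrete, below is a minimal PyTorch sketch of the two components the abstract names: an iterative residual unfolding extractor and a concatenation-based fusion network with a down/up-sampling convolution pair. This is a sketch under stated assumptions, not the paper's implementation: the class names (ResidualUnfoldingBlock, UnfoldingExtractor, FusionNet), channel widths, iteration count, and the gamma-curve enhancement placeholder are all illustrative assumptions.

import torch
import torch.nn as nn

class ResidualUnfoldingBlock(nn.Module):
    # One unrolled iteration: refine the current feature estimate
    # with a learned residual step.
    def __init__(self, channels=32):
        super().__init__()
        self.step = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat):
        return feat + self.step(feat)  # residual update per iteration

class UnfoldingExtractor(nn.Module):
    # Iterative residual unfolding network; per the abstract, different
    # initialization parameters steer it toward background or detail features.
    def __init__(self, in_ch=1, channels=32, iterations=4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            [ResidualUnfoldingBlock(channels) for _ in range(iterations)])

    def forward(self, x):
        feat = self.head(x)
        for blk in self.blocks:  # fixed number of unrolled iterations
            feat = blk(feat)
        return feat

class FusionNet(nn.Module):
    # Fuses features of the raw pair and the enhanced pair via channel
    # concatenation followed by a down/up-sampling convolution pair.
    def __init__(self, channels=32):
        super().__init__()
        self.down = nn.Conv2d(4 * channels, 2 * channels, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1)
        self.out = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, f_ir, f_vis, f_ir_enh, f_vis_enh):
        cat = torch.cat([f_ir, f_vis, f_ir_enh, f_vis_enh], dim=1)  # joint pairs
        return self.out(self.up(self.down(cat)))

# Usage sketch: two extractors with different initializations supply background
# and detail features; a gamma curve stands in for the enhancement step.
bg_net, detail_net, fuse = UnfoldingExtractor(), UnfoldingExtractor(), FusionNet()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
ir_enh, vis_enh = ir ** 0.5, vis ** 0.5  # placeholder enhancement
feats = [bg_net(x) + detail_net(x) for x in (ir, vis, ir_enh, vis_enh)]
fused = fuse(*feats)

The actual network's fusion stage and loss weighting are more elaborate than this; the sketch only mirrors the overall flow of joint raw/enhanced pairs through unfolded feature extraction and convolutional fusion.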

Key words: Image fusion, Deep algorithm unrolling network, Image enhancement, Feature extraction, Feature fusion

CLC Classification Number: TP391