Computer Science ›› 2022, Vol. 49 ›› Issue (5): 58-63. doi: 10.11896/jsjkx.210200148

• Computer Graphics & Multimedia •

Infrared and Visible Image Fusion Based on Feature Separation

GAO Yuan-hao¹,², LUO Xiao-qing¹,², ZHANG Zhan-cheng³

  1 School of Artificial Intelligence and Computer Science,Jiangnan University,Wuxi,Jiangsu 214122,China
    2 Pattern Recognition and Computational Intelligence Engineering Laboratory,Wuxi,Jiangsu 214122,China
    3 School of Electronics and Information Engineering,Suzhou University of Science and Technology,Suzhou,Jiangsu 215009,China
  • Received:2021-02-23 Revised:2021-07-10 Online:2022-05-15 Published:2022-05-06
  • About author:GAO Yuan-hao,born in 1995,postgraduate.His main research interests include image fusion and deep learning.
    LUO Xiao-qing,born in 1980,Ph.D.,associate professor,is a member of China Computer Federation.Her main research interests include image fusion and computer vision.
  • Supported by:
    National Natural Science Foundation of China(61772237) and Six Talent Peaks Project in Jiangsu Province(XYDXX-030).

Abstract: Although a pair of infrared and visible images captured in the same scene belong to different modalities,they share common public information and carry complementary private information.A complete fused image can be obtained by learning and integrating both kinds of information.Inspired by residual networks,in the training stage each branch is forced to map to a label carrying global features through the exchange and addition of feature maps between network branches.Moreover,each branch is encouraged to learn the private features of its corresponding image.Learning the private features directly avoids designing complex fusion rules and preserves the integrity of feature details.In the fusion stage,the maximum fusion strategy is adopted to fuse the private features,which are then added to the learned public features at the decoding layer,and the fused image is finally decoded.The model is trained on a multi-focus data set synthesized from NYU-D2 and tested on the real-world TNO data set.Experimental results show that,compared with current mainstream infrared and visible image fusion algorithms,the proposed algorithm achieves better results in both subjective visual effects and objective evaluation metrics.
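To make the fusion stage described above concrete, the following is a minimal sketch (assuming PyTorch; the module names, channel sizes and decoder layout are illustrative assumptions, not the authors' released implementation) of how private features from the infrared and visible branches could be combined with an element-wise maximum and added to the shared public features before decoding.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy decoder that maps fused feature maps back to a single-channel image (hypothetical layout)."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def fuse(private_ir, private_vis, public_feat, decoder):
    # Maximum fusion strategy on the modality-specific (private) features
    fused_private = torch.maximum(private_ir, private_vis)
    # Add the learned public (shared) features at the decoding layer, then decode
    return decoder(fused_private + public_feat)

if __name__ == "__main__":
    dec = Decoder(channels=64)
    f_ir, f_vis, f_pub = (torch.randn(1, 64, 128, 128) for _ in range(3))
    fused_image = fuse(f_ir, f_vis, f_pub, dec)
    print(fused_image.shape)  # torch.Size([1, 1, 128, 128])
```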

Key words: Feature extraction, Image fusion, Private feature, Public feature, Residual learning

CLC Number: TP301.6