Computer Science ›› 2023, Vol. 50 ›› Issue (11A): 230200072-7. doi: 10.11896/jsjkx.230200072

• Image Processing & Multimedia Technology •

Remote Sensing Image Fusion with Dual-branch Attention Network

LI He, NIE Rencan, YANG Xiaofei, ZHANG Gucheng   

  1. School of Information Science and Engineering, Yunnan University, Kunming 650091, China
  • Published: 2023-11-09
  • Corresponding author: NIE Rencan (rcnie@ynu.edu.cn)
  • About author: LI He, born in 1996, master candidate (19960225lh@mail.ynu.edu.cn). His main research interests include deep learning and image processing.
    NIE Rencan, born in 1982, Ph.D, professor, doctoral supervisor. His main research interests include neural networks, image processing and machine learning.
  • Supported by:
    National Natural Science Foundation of China (61966037, 61463052), China Postdoctoral Science Foundation (2017M621586), Program of Yunnan Key Laboratory of Intelligent Systems and Computing (202205AG070003) and Postgraduate Science Foundation of Yunnan University (KC-22221956, 2021Y263).

Abstract: In remote sensing imaging, PAN images have higher spatial resolution, while MS images contain more spectral information; fusing them to obtain a high-resolution multispectral image is therefore an important technique. Because CNNs often fail to accurately capture long-range spatial features, the spatial detail of pansharpening is limited. To fully extract the spatial information of the panchromatic image and the spectral information of the multispectral image, this paper proposes a dual-branch attention network for remote sensing image fusion. Unlike previous methods that use pure convolutional neural networks to extract spatial and spectral information, the proposed method introduces a spatial attention module and a channel attention module into the convolutional block to attend to spatial and spectral information respectively, and exchanges information between layers to fully extract both. In parallel, a Transformer-based global branch is built to learn the spatial and spectral features of the image, and the high-spatial-resolution multispectral image is finally obtained after decoding. Full-resolution and reduced-resolution experiments are carried out on the IKONOS and WorldView-2 datasets, and the results show that the proposed method outperforms the comparison methods in both objective indicators and subjective visual quality.
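To make the described architecture concrete, below is a minimal PyTorch sketch of a dual-branch pansharpening network in the spirit of the abstract: a local convolutional branch whose blocks carry channel and spatial attention, a Transformer-based global branch over image patches, and a decoder that fuses the two branches. All module names, layer sizes, the simple patch embedding, and the residual output are illustrative assumptions; the cross-layer information interaction described in the paper is omitted, and this is not the authors' released implementation.

```python
# Minimal sketch of a dual-branch attention pansharpening network (hypothetical sizes).
import torch
import torch.nn as nn


class SpatialChannelAttnBlock(nn.Module):
    """Convolutional block with channel and spatial attention for the local branch."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Channel attention: squeeze spatial dimensions, reweight each feature map (spectral emphasis).
        self.channel_attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Spatial attention: reweight each pixel location (spatial-detail emphasis).
        self.spatial_attn = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        feat = self.conv(x)
        feat = feat * self.channel_attn(feat)
        feat = feat * self.spatial_attn(feat)
        return feat + x  # residual connection


class DualBranchPansharpening(nn.Module):
    """Local CNN-attention branch + global Transformer branch, then a fusion decoder."""
    def __init__(self, ms_bands=4, dim=32, patch=4, depth=2, heads=4):
        super().__init__()
        in_ch = ms_bands + 1                       # upsampled MS concatenated with PAN
        self.embed = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.local_branch = nn.Sequential(*[SpatialChannelAttnBlock(dim) for _ in range(depth)])
        # Global branch: patch embedding followed by a standard Transformer encoder.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=depth)
        self.unpatch = nn.Upsample(scale_factor=patch, mode='bilinear', align_corners=False)
        # Decoder: fuse both branches and predict the high-resolution MS image.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, ms_bands, 3, padding=1),
        )

    def forward(self, ms_up, pan):
        # ms_up: MS already upsampled to PAN size; H and W assumed divisible by `patch`.
        x = torch.cat([ms_up, pan], dim=1)
        local = self.local_branch(self.embed(x))
        tokens = self.patch_embed(x)               # (B, dim, H/p, W/p)
        b, c, h, w = tokens.shape
        tokens = self.transformer(tokens.flatten(2).transpose(1, 2))          # (B, h*w, dim)
        global_feat = self.unpatch(tokens.transpose(1, 2).reshape(b, c, h, w))
        return self.decoder(torch.cat([local, global_feat], dim=1)) + ms_up   # detail injection


if __name__ == "__main__":
    ms_up = torch.randn(1, 4, 64, 64)              # 4-band MS, upsampled to PAN resolution
    pan = torch.randn(1, 1, 64, 64)                # single-band PAN
    print(DualBranchPansharpening()(ms_up, pan).shape)  # torch.Size([1, 4, 64, 64])
```

In this sketch the network predicts a residual that is added to the upsampled MS input, a common detail-injection design in CNN-based pansharpening; the paper's actual fusion and decoding stages may differ.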

Key words: Deep learning, Convolutional neural network, Multispectral and panchromatic image, Attention mechanism, Image fusion
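The reduced-resolution experiments mentioned in the abstract are conventionally run by degrading both inputs by the sensor resolution ratio so that the original MS image can serve as the reference (Wald's protocol). The following is a hedged sketch of that evaluation loop with ERGAS as one representative objective indicator; the bicubic degradation, the 4:1 ratio, and the helper names are illustrative assumptions that reuse the model signature of the sketch above, not the paper's exact evaluation code.

```python
# Hypothetical reduced-resolution evaluation loop (Wald's protocol) with ERGAS.
import torch
import torch.nn.functional as F


def ergas(fused, reference, ratio=4, eps=1e-8):
    """ERGAS (lower is better); `ratio` is the PAN/MS resolution ratio (4 for IKONOS and WorldView-2)."""
    rmse = torch.sqrt(((fused - reference) ** 2).mean(dim=(0, 2, 3)))   # per-band RMSE
    mean = reference.mean(dim=(0, 2, 3))                                # per-band mean
    return 100.0 / ratio * torch.sqrt(((rmse / (mean + eps)) ** 2).mean())


def reduced_resolution_eval(model, ms, pan, ratio=4):
    """Degrade MS and PAN by `ratio`, fuse, and score against the original MS as reference."""
    ms_lr = F.interpolate(ms, scale_factor=1 / ratio, mode='bicubic', align_corners=False)
    pan_lr = F.interpolate(pan, scale_factor=1 / ratio, mode='bicubic', align_corners=False)
    ms_lr_up = F.interpolate(ms_lr, size=pan_lr.shape[-2:], mode='bicubic', align_corners=False)
    with torch.no_grad():
        fused = model(ms_lr_up, pan_lr)            # fused output at the original MS resolution
    return ergas(fused, ms, ratio=ratio)
```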

CLC Number: 

  • TP391