Computer Science ›› 2023, Vol. 50 ›› Issue (11A): 220900264-6. doi: 10.11896/jsjkx.220900264

• Image Processing & Multimedia Technology •


Remote Sensing Image Fusion Method Combining Edge Detection and Parameter-adaptive PCNN

SHI Ying, HE Xinguang, LIU Binrui   

  1. School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
     Key Laboratory of Geospatial Big Data Mining and Application, Hunan Province, Changsha 410081, China
  • Published: 2023-11-09
  • Corresponding author: HE Xinguang (xghe@hunnu.edu.cn)
  • About author: SHI Ying, born in 1998, postgraduate (shiying4560@163.com). Her main research interests include remote sensing image fusion and classification.
    HE Xinguang, born in 1973, Ph.D, professor. His main research interests include geographic spatiotemporal big data mining and application.
  • Supported by:
    Science and Technology Project of the Department of Natural Resources of Hunan Province (2021-45).


Abstract: To improve the fusion quality of panchromatic (PAN) and multispectral (MS) images, and to address the difficulty of parameter tuning in the pulse coupled neural network (PCNN) and the incomplete preservation of edge features in fused images, this paper proposes a remote sensing image fusion method that combines the Canny operator with a parameter-adaptive PCNN. First, the MS image is converted into the HSV color space to obtain the value (V) component, and the Canny operator is applied to extract the edge features of the PAN image. The PAN image and the V component of the MS image are then fused according to the edge feature factor, yielding an edge-enhanced PAN image. Next, the new PAN image and the V component of the MS image are each decomposed into high-frequency and low-frequency coefficient sub-bands by the non-subsampled shearlet transform (NSST). The high-frequency sub-bands are fused with a parameter-adaptive PCNN model, in which all PCNN parameters are estimated adaptively from the input sub-bands so that the model operates with optimal parameters, while the low-frequency sub-bands are fused by a selective weighted-summation rule. Finally, the new V component is obtained by the inverse NSST, and the final fused image is produced by the inverse HSV transform. The proposed method is compared with other recently proposed methods, and seven objective evaluation indicators are used to assess the spatial details and spectral information of the fused images. Experimental results show that the proposed method achieves better fusion performance, with advantages in both visual quality and objective evaluation.
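To make the front end of this pipeline concrete, the following is a minimal sketch (using OpenCV and NumPy) of the HSV conversion, Canny edge extraction on the PAN image, and edge-guided blending of the PAN image with the MS V component. The Canny thresholds, the blending weights, and the synthetic inputs are illustrative assumptions rather than the authors' exact settings, and the NSST decomposition and parameter-adaptive PCNN stages are only indicated as placeholders.

```python
# Illustrative sketch of the edge-enhancement front end; not the authors' implementation.
import cv2
import numpy as np

def edge_enhanced_pan(pan: np.ndarray, ms_bgr: np.ndarray,
                      canny_lo: int = 50, canny_hi: int = 150,
                      alpha: float = 0.7) -> tuple[np.ndarray, np.ndarray]:
    """Return (edge-enhanced PAN image, MS V component upsampled to PAN size)."""
    # 1. HSV transform of the MS image; keep the V (value/intensity) component.
    hsv = cv2.cvtColor(ms_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    v_up = cv2.resize(v, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)

    # 2. Canny edge map of the PAN image (thresholds are assumed values).
    edges = cv2.Canny(pan, canny_lo, canny_hi).astype(np.float32) / 255.0

    # 3. Edge-guided fusion: trust PAN detail more on edge pixels, and blend
    #    PAN with the MS intensity elsewhere (a simple, assumed weighting rule).
    pan_f, v_f = pan.astype(np.float32), v_up.astype(np.float32)
    w = alpha * edges + 0.5 * (1.0 - edges)          # per-pixel PAN weight
    fused = w * pan_f + (1.0 - w) * v_f
    return np.clip(fused, 0, 255).astype(np.uint8), v_up

if __name__ == "__main__":
    # Synthetic stand-ins for a higher-resolution PAN image and an MS image.
    rng = np.random.default_rng(0)
    ms = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    pan = rng.integers(0, 256, (256, 256), dtype=np.uint8)

    new_pan, v_up = edge_enhanced_pan(pan, ms)

    # Placeholder for the remaining steps of the method:
    #   - NSST decomposition of new_pan and v_up into low/high-frequency sub-bands,
    #   - parameter-adaptive PCNN fusion of the high-frequency sub-bands,
    #   - selective weighted summation of the low-frequency sub-bands,
    #   - inverse NSST, substitution of the fused V component, then inverse HSV.
    print(new_pan.shape, new_pan.dtype)
```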

Key words: Image fusion, Pulse coupled neural network, Canny operator, Shearlet transform, Parameter optimization
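Complementing the sketch above, the snippet below illustrates the high-frequency fusion stage in its general form: a PCNN is run on each sub-band and the coefficient whose neuron fires more often is kept. It uses a standard simplified PCNN with fixed, illustrative parameters (alpha_f, alpha_e, beta, V_L, V_E) in place of the paper's parameter-adaptive estimation, and random arrays in place of NSST sub-bands; it is an assumption-laden illustration, not the authors' model.

```python
# Simplified PCNN firing-time fusion rule for two high-frequency sub-bands (sketch only).
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_times(band: np.ndarray, n_iter: int = 40,
                      alpha_f: float = 0.1, alpha_e: float = 0.3,
                      beta: float = 0.2, v_l: float = 1.0, v_e: float = 20.0) -> np.ndarray:
    """Run a simplified PCNN on one sub-band and return per-pixel firing counts."""
    s = np.abs(band).astype(np.float64)
    s = s / (s.max() + 1e-12)                      # normalized stimulus (feeding input)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                # 3x3 linking weights
    u = np.zeros_like(s)                           # internal activity
    e = np.ones_like(s)                            # dynamic threshold
    y = np.zeros_like(s)                           # pulse output
    fires = np.zeros_like(s)                       # accumulated firing times
    for _ in range(n_iter):
        link = v_l * convolve(y, w, mode="constant")   # linking input from neighbors
        u = np.exp(-alpha_f) * u + s * (1.0 + beta * link)
        y = (u > e).astype(np.float64)
        e = np.exp(-alpha_e) * e + v_e * y
        fires += y
    return fires

def fuse_high_frequency(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Keep, per pixel, the coefficient whose sub-band neuron fired more often."""
    fa, fb = pcnn_firing_times(band_a), pcnn_firing_times(band_b)
    return np.where(fa >= fb, band_a, band_b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.normal(size=(64, 64))                  # stand-in for a PAN high-frequency sub-band
    b = rng.normal(size=(64, 64))                  # stand-in for an MS V high-frequency sub-band
    fused = fuse_high_frequency(a, b)
    print(fused.shape)
```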

CLC Number:

  • TP751