Computer Science ›› 2021, Vol. 48 ›› Issue (2): 160-166. doi: 10.11896/jsjkx.200400095

Special Topic: Medical Image

• Computer Graphics & Multimedia •

Multimodal Medical Image Fusion Based on Dual Residual Hyper-Densely Networks

WANG Li-fang, WANG Rui-fang, LIN Su-zhen, QIN Pin-le, GAO Yuan, ZHANG Jin

  1. Shanxi Provincial Key Laboratory of Biomedical Imaging and Imaging Big Data, College of Big Data, North University of China, Taiyuan 030051, China
  • Received: 2020-04-22  Revised: 2020-07-06  Online: 2021-02-15  Published: 2021-02-04
  • Corresponding author: WANG Rui-fang (724266891@qq.com)
  • About author: WANG Li-fang, born in 1977, Ph.D, professor, master supervisor, is a member of China Computer Federation. Her main research interests include computer vision, big data processing and medical image fusion.
    WANG Rui-fang, born in 1995, postgraduate. Her main research interests include medical image fusion and machine learning.
  • Supported by: Natural Science Foundation of Shanxi Province, China (201901D111152).

Abstract: Image fusion methods based on residual networks and dense networks suffer from two problems: part of the useful information in the middle layers of the network is lost, and the details of the fused image are unclear. To address these problems, a multimodal medical image fusion method based on Dual Residual Hyper-Densely Networks (DRHDNs) is proposed. DRHDNs consists of two parts: feature extraction and feature fusion. The feature extraction part combines hyper-dense connections with residual learning to construct dual residual hyper-dense blocks for extracting features. The hyper-dense connections occur not only between layers in the same path but also between layers in different paths; this connectivity makes feature extraction more complete, enriches detail information, and performs a preliminary fusion of the source images. The feature fusion part then performs the final fusion. In experiments, DRHDNs is compared with six other image fusion methods on four groups of brain images, with objective comparison on four evaluation indexes. The results show that DRHDNs performs well in detail retention, contrast and clarity: its fused images contain rich, clear detail, which facilitates disease diagnosis.

Key words: Hyper dense connection, Multi-modal, Convolutional neural network (CNN), Dual residual learning, Medical image fusion
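The dual-path, hyper-dense connectivity described in the abstract can be sketched as follows. This is a minimal toy illustration of the wiring only, not the authors' implementation: real DRHDNs layers are 2-D convolutions over image features, whereas here each "layer" is a random linear map plus ReLU, and the exact placement of the two residual skips in `dual_residual_block` is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, out_dim):
    """Stand-in for a conv layer: random linear map + ReLU (illustrative only)."""
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.maximum(x @ w, 0.0)

def hyper_dense_paths(x_a, x_b, n_layers=3, growth=4):
    """Two modality paths (e.g. CT and MRI). The input to each new layer is the
    concatenation of the outputs of ALL previous layers from BOTH paths."""
    feats_a, feats_b = [x_a], [x_b]
    for _ in range(n_layers):
        dense_in = np.concatenate(feats_a + feats_b, axis=-1)  # hyper-dense input
        feats_a.append(layer(dense_in, growth))
        feats_b.append(layer(dense_in, growth))
    return np.concatenate(feats_a + feats_b, axis=-1)

def dual_residual_block(x_a, x_b):
    """Hyper-dense feature extraction wrapped in two residual skips (assumed layout)."""
    fused = hyper_dense_paths(x_a, x_b)
    proj = layer(fused, x_a.shape[-1])        # project back to the input width
    inner = proj + x_a                        # first residual skip
    return layer(inner, x_a.shape[-1]) + x_a  # second residual skip

a = rng.standard_normal((5, 4))  # "CT" features: 5 pixels, 4 channels
b = rng.standard_normal((5, 4))  # "MRI" features
# channels: (2 inputs * 4) + (2 paths * 3 layers * growth 4) = 32
print(hyper_dense_paths(a, b).shape)   # -> (5, 32)
print(dual_residual_block(a, b).shape) # -> (5, 4)
```

Because every layer sees all earlier features from both paths, detail information from each modality is reused throughout the block, which is the property the abstract credits for richer fused details.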

CLC Number: TP391
[1] GAI D,SHEN X J,CHENG H,et al.Medical Image Fusion via PCNN Based on Edge Preservation and Improved Sparse Representation in NSST Domain[J].IEEE Access,2019,7:85413-85429.
[2] WANG L F,SHI C Y,LIN S Z,et al.Multi-modal Medical Image Fusion Based on Joint Patch Clustering of Adaptive Dictionary Learning[J].Computer Science,2019,46(7):238-245.
[3] BISWAJIT B,BIPLAB K S.Medical image fusion using PCNN and Poisson-hidden Markov model[J].International Journal of Signal and Imaging Systems Engineering,2018,11(2):73-84.
[4] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems.Cambridge:MIT Press,2014:2672-2680.
[5] HE K M,ZHANG X Y,REN S Q,et al.Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[6] HUANG G,LIU Z,VAN DER MAATEN L,et al.Densely connected convolutional networks[C]//IEEE Conference on Computer Vision and Pattern Recognition.2017.
[7] LIU S Q,WANG J,LU Y C,et al.Multi-Focus Image Fusion Based on Residual Network in Non-Subsampled Shearlet Domain[J].IEEE Access,2019,7:152043-152063.
[8] QIU K,YI B S,XIANG M.Fusion of hyperspectral and multispectral image by dual residual dense networks[J].Optical Engineering,2019,58(2):023110.
[9] LI H,WU X J.DenseFuse:A Fusion Approach to Infrared and Visible Images[J].IEEE Transactions on Image Processing,2019,28(5):2614-2623.
[10] DOLZ J,GOPINATH K,YUAN J,et al.HyperDense-Net:A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation[J].IEEE Transactions on Medical Imaging,2019,38(5):1116-1126.
[11] HE K M,ZHANG X Y,REN S Q,et al.Delving deep into rectifiers:surpassing human-level performance on ImageNet classification[C]//IEEE International Conference on Computer Vision.2015:1026-1034.
[12] HUANG G,SUN Y,LIU Z,et al.Deep networks with stochastic depth[C]//Proc.ECCV.Cham,Switzerland:Springer,2016:646-661.
[13] LARSSON G,MAIRE M,SHAKHNAROVICH G.FractalNet:Ultra-deep neural networks without residuals[EB/OL].https://arxiv.org/abs/1605.07648.
[14] JOHNSON K A,BECKER J A.The Whole Brain Atlas of Harvard Medical School[EB/OL].http://www.med.harvard.edu/AANLIB/.
[15] LIU Y,CHEN X,WARD R K,et al.Medical image fusion via convolutional sparsity based morphological component analysis[J].IEEE Signal Processing Letters,2019,26(3):485-489.
[16] LIU Y,WANG Z F.Simultaneous image fusion and denoising with adaptive sparse representation[J].IET Image Processing,2015,9(5):347-357.
[17] KIM M,HAN D K,KO H.Joint patch clustering-based dictionary learning for multimodal image fusion[J].Information Fusion,2016,27:198-214.
[18] YIN M,LIU X N,LIU Y,et al.Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain[J].IEEE Transactions on Instrumentation and Measurement,2019,68(1):49-64.
[19] LIU Y,CHEN X,CHENG J,et al.A medical image fusion method based on convolutional neural networks[C]//20th International Conference on Information Fusion (ICIF).Xi'an:IEEE,2017:1070-1076.