Computer Science ›› 2021, Vol. 48 ›› Issue (9): 187-193. doi: 10.11896/jsjkx.200800099

• Computer Graphics & Multimedia •

Glioma Segmentation Network Based on 3D U-Net++ with Fusion Loss Function

ZHANG Xiao-yu1, WANG Bin1, AN Wei-chao1, YAN Ting2, XIANG Jie1

  1. 1 College of Information and Computer, Taiyuan University of Technology, Taiyuan 030606, China
    2 Translational Medicine Research Center, Shanxi Medical University, Taiyuan 030606, China
  • Received: 2020-08-16  Revised: 2020-09-21  Online: 2021-09-15  Published: 2021-09-10
  • Corresponding author: WANG Bin (wangbin01@tyut.edu.cn)
  • About author: 957659144@qq.com
  • Supported by:
    National Natural Science Foundation of China (81702449) and National Key R&D Program of China (2018AAA0102604)

Glioma Segmentation Network Based on 3D U-Net++ with Fusion Loss Function

ZHANG Xiao-yu1, WANG Bin1, AN Wei-chao1, YAN Ting2, XIANG Jie1

  1. 1 College of Information and Computer, Taiyuan University of Technology, Taiyuan 030606, China
    2 Shanxi Key Laboratory of Carcinogenesis and Translational Research, Shanxi Medical University, Taiyuan 030606, China
  • Received: 2020-08-16  Revised: 2020-09-21  Online: 2021-09-15  Published: 2021-09-10
  • About author: ZHANG Xiao-yu, born in 1996, postgraduate. His main research interests include deep learning and medical imaging analysis.
    WANG Bin, born in 1983, Ph.D, associate professor, is a member of China Computer Federation. His main research interests include medical imaging analysis and neuroimaging.
  • Supported by:
    National Natural Science Foundation of China (81702449) and National Key R&D Program of China (2018AAA0102604)

Abstract: Glioma, which arises when glial cells in the brain and spinal cord become cancerous, is the most common primary intracranial tumor. Reliable segmentation of glioma tissue from multimodal MRI has important clinical value, but the complexity of the tumor and its surrounding tissue, together with the boundary blurring caused by infiltrative growth, makes automatic glioma segmentation difficult. This paper builds a 3D U-Net++ network with a fusion loss function to segment the different glioma regions. The network densely nests U-Net models of different depths, takes the outputs of its four branches as deep supervision so that deep and shallow features are better combined for segmentation, and combines the Dice loss function and the cross-entropy loss function into a fusion loss function to improve the segmentation accuracy of small regions. On an independent test set split from the public dataset of the 2019 Multimodal Brain Tumor Segmentation Challenge (BraTS), the proposed method is evaluated with the Dice coefficient, the 95% Hausdorff distance, the mean intersection over union (mIoU) and the precision (PPV). The Dice coefficients of the whole tumor, tumor core and enhancing tumor regions are 0.873, 0.814 and 0.709, respectively; the 95% Hausdorff distances are 15.455, 12.475 and 12.309; the mIoU values are 0.789, 0.720 and 0.601; and the PPV values are 0.898, 0.846 and 0.735. Compared with the basic 3D U-Net and the 3D U-Net with deep supervision, the proposed method makes effective use of deep and shallow multimodal information and of spatial information, and the fusion of the Dice and cross-entropy losses effectively improves the segmentation accuracy of every tumor region, especially that of the small enhancing tumor region.
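The abstract does not spell out how the Dice and cross-entropy terms are weighted or smoothed. The following is a minimal PyTorch sketch of one common way to fuse a soft multi-class Dice loss with voxel-wise cross-entropy for 3D segmentation; the epsilon smoothing term and the equal 1:1 weighting are assumptions made only for illustration.

```python
# Hypothetical sketch of a Dice + cross-entropy fusion loss; the exact
# formulation used in the paper is not given, so eps and the 1:1 weighting
# are assumptions.
import torch
import torch.nn.functional as F


def fusion_loss(logits, target, num_classes=4, eps=1e-5):
    """logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels."""
    # Voxel-wise cross-entropy over the C classes.
    ce = F.cross_entropy(logits, target)

    # Soft multi-class Dice computed on the softmax probabilities.
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                        # sum over batch and spatial dims
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    dice_loss = 1.0 - dice.mean()

    return ce + dice_loss                      # assumed equal weighting
```

Because the Dice term is normalized by the size of each class's region rather than by the total voxel count, it keeps small structures such as the enhancing tumor from being dominated by background voxels, which is the motivation stated above for fusing it with cross-entropy.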

Key words: Multimodal MRI, Glioma, Tumor segmentation, 3D U-Net++, Fusion loss function

Abstract: Glioma is the most common primary brain tumor, caused by cancerous glial cells in the brain and spinal cord. Reliable segmentation of glioma tissue from multi-modal MRI is of great clinical value. However, due to the complexity of the glioma itself and the surrounding tissues, and the blurring of boundaries caused by invasion, automatic segmentation of glioma is difficult. In this paper, a 3D U-Net++ network using a fusion loss function is constructed to segment different areas of glioma. The network uses different levels of U-Net models for densely nested connections, uses the output results of the four branches of the network as deep supervision so that the combination of deep and shallow features can be better exploited for segmentation, and combines the Dice loss function and the cross-entropy loss function into a fusion loss function to improve the segmentation accuracy of small regions. On the independent test set split from the public dataset of the 2019 Multimodal Brain Tumor Segmentation Challenge (BraTS), the proposed method is evaluated with the Dice coefficient, 95% Hausdorff distance, mIoU (mean intersection over union) and PPV (precision) indicators. The Dice coefficients of the whole tumor, tumor core and enhancing tumor regions are 0.873, 0.814 and 0.709, respectively; the 95% Hausdorff distances are 15.455, 12.475 and 12.309; the mIoU values are 0.789, 0.720 and 0.601; the PPV values are 0.898, 0.846 and 0.735. Compared with the basic 3D U-Net and the 3D U-Net with deep supervision, the proposed method makes more effective use of the deep and shallow multi-modal information and of the spatial information, and the fusion loss function combining the Dice coefficient and the cross-entropy loss effectively improves the segmentation accuracy of each tumor area, especially that of small areas such as the enhancing tumor.
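As a companion sketch, the deep supervision described above can be implemented by attaching a segmentation head to each of the four nested U-Net++ branches and averaging a loss over their outputs. The function below assumes the `fusion_loss` sketched earlier and an equal average over branches; neither the branch weighting nor the training-step names (`model`, `optimizer`) are specified by the paper, so they are placeholders.

```python
# Hypothetical sketch of deep supervision over four nested U-Net++ branch
# outputs; equal averaging of the per-branch losses is an assumption, and
# fusion_loss is the Dice + cross-entropy loss sketched earlier.
import torch


def deep_supervision_loss(branch_logits, target):
    """branch_logits: list of four (N, C, D, H, W) tensors; target: (N, D, H, W)."""
    losses = [fusion_loss(logits, target) for logits in branch_logits]
    return torch.stack(losses).mean()

# Typical training step with placeholder names:
#   outputs = model(multimodal_mri)            # four branch outputs
#   loss = deep_supervision_loss(outputs, labels)
#   loss.backward(); optimizer.step()
```

Supervising every branch gives the shallower sub-networks their own gradient signal, which is one common rationale for the deep-and-shallow feature combination claimed above.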

Key words: Multimodal magnetic resonance imaging, Glioma, Tumor segmentation, 3D U-Net++, Fusion loss function

CLC number: TP391