Computer Science ›› 2024, Vol. 51 ›› Issue (8): 176-182. doi: 10.11896/jsjkx.230700088

• Computer Graphics & Multimedia •


Color Transfer Method for Unpaired Medical Images Based on Color Flow Model

WANG Xiaojie, LIU Jinhua, LU Shuyi, ZHOU Yuanfeng   

  1. School of Software, Shandong University, Jinan 250101, China
  • Received: 2023-07-12  Revised: 2023-11-15  Online: 2024-08-15  Published: 2024-08-13
  • Corresponding author: ZHOU Yuanfeng (yfzhou@sdu.edu.cn)
  • About author: WANG Xiaojie, born in 1990, doctoral student. Her main research interests include medical image processing. (wxj_811@163.com)
    ZHOU Yuanfeng, born in 1980, Ph.D, professor. His main research interests include geometric modeling, information visualization, and image processing.
  • Supported by:
    National Key R&D Plan on Strategic International Scientific and Technological Innovation Cooperation Special Project (2021YFE0203800), NSFC-Zhejiang Joint Fund of the Integration of Informatization and Industrialization (U1909210) and National Natural Science Foundation of China (62172257).

Abstract: In clinical applications, CT images are relatively easy to acquire, but their appearance differs greatly from the true colors of the human body. Tomographic color images of the human body capture these true colors, but such data are rare. Combining the two, so that every case can obtain its own color CT data, would better support both the surgeon's work and the patient's understanding of the disease. Therefore, this paper proposes a medical image colorization framework based on a color flow model. First, the CT data and the real human color data are fed into the color flow model, which extracts their content features and color features. Then, color and texture transfer is performed at the feature level. Finally, the processed feature information is fed back into the reversible color flow model for image reconstruction. A texture constraint loss is added after each flow module so that the colorized image carries richer texture, and an edge constraint loss is added between the colorized image and the source image to ensure that fine structures such as small blood vessels are not lost. Qualitative and quantitative experiments show that the proposed method is more robust than other colorization methods and produces more realistic colorized images. Extensive experiments on different data domains show that the method is unaffected by domain shift and yields stable results. Moreover, the proposed method can display clear tissue structure without adjusting the window width/level.
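
To make the described mechanics concrete, the sketch below illustrates, in PyTorch, one plausible reading of the abstract: a generic RealNVP-style affine coupling layer standing in for a reversible flow module, a Gram-matrix texture loss (in the spirit of neural style transfer) accumulated over per-module features, and a Sobel-based edge loss between the colorized output and the source CT. Every name and design detail here is an assumption for illustration; the abstract does not specify the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineCoupling(nn.Module):
    """A generic invertible flow module (RealNVP-style affine coupling);
    a stand-in for the paper's flow modules, not their actual design."""
    def __init__(self, ch: int):                 # ch must be even
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch // 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1))

    def forward(self, x):                        # feature-extraction direction
        xa, xb = x.chunk(2, dim=1)
        log_s, t = self.net(xa).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)           # bounded scale keeps inversion stable
        return torch.cat([xa, xb * s + t], dim=1)

    def inverse(self, y):                        # image-reconstruction direction
        ya, yb = y.chunk(2, dim=1)
        log_s, t = self.net(ya).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([ya, (yb - t) / s], dim=1)

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def texture_constraint_loss(gen_feats, ref_feats):
    """Texture loss accumulated over the output of each flow module."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(r))
               for g, r in zip(gen_feats, ref_feats))

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Gradient-magnitude edge map of a single-channel (B, 1, H, W) image."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3).contiguous()
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_constraint_loss(colorized: torch.Tensor, source_ct: torch.Tensor):
    """Penalize edge differences so fine structures of the source CT
    (e.g., small blood vessels) survive colorization."""
    gray = colorized.mean(dim=1, keepdim=True)   # simple luminance proxy
    return F.l1_loss(sobel_edges(gray), sobel_edges(source_ct))

In such a setup, the training objective would combine the flow's likelihood term with weighted texture and edge terms; the weights and the exact attachment points of each loss are design choices the abstract leaves open.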

Key words: Flow model, Colorization, Texture constraint, Stability, Edge constraint

CLC Number: TP391.41