%A ANG Li-fang, SHI Chao-yu, LIN Su-zhen, QIN Pin-le, GAO Yuan %T Multi-modal Medical Image Fusion Based on Joint Patch Clustering of Adaptive Dictionary Learning %0 Journal Article %D 2019 %J Computer Science %R 10.11896/j.issn.1002-137X.2019.07.036 %P 238-245 %V 46 %N 7 %U {https://www.jsjkx.com/CN/abstract/article_18421.shtml} %8 2019-07-15 %X To address the poor image reconstruction quality caused by the large amount of redundant information in overcomplete adaptive dictionaries for medical image fusion, this paper proposed a multi-modal medical image fusion method based on joint image patch clustering and adaptive dictionary learning. First, the method computes the Euclidean distance between image patches and removes redundant patches by comparing the minimum patch distance against a cut-off threshold. Then, it extracts the local gradient information of image patches via steering kernel regression (SKR) weights to serve as clustering centers, and jointly clusters patches from the two modalities that share the same local gradient information. On the basis of this joint patch clustering, it trains each cluster with an improved K-SVD algorithm to obtain sub-dictionaries, which are then merged into a single adaptive dictionary. Finally, sparse representation coefficients are obtained with the orthogonal matching pursuit (OMP) algorithm over the adaptive dictionary and fused under the "2-norm max" rule, and the fused image is obtained by reconstruction. Experimental results, compared against two methods based on multi-scale transform and six methods based on sparse representation, show that the proposed method constructs a compact and informative dictionary and endows the fused image with higher clarity and stronger contrast, facilitating clinical diagnosis and adjuvant treatment.