Research progress of multimodal medical image fusion methods
CHEN Wei1,2,3,4, SUN Kangkang1,2,3,4, LI Qixuan2,3,4, XIE Kai2,3,4, NI Xinye2,3,4
1. School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China; 2. Department of Radiotherapy, the Second People's Hospital of Changzhou Affiliated to Nanjing Medical University, Changzhou 213003, China; 3. Central Laboratory of Medical Physics, Nanjing Medical University, Changzhou 213003, China; 4. Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China
CHEN Wei, SUN Kangkang, LI Qixuan, XIE Kai, NI Xinye. Research progress of multimodal medical image fusion methods. Chinese Journal of Radiological Health, 2023, 32(5): 580-585.