Laser & Optoelectronics Progress, Vol. 60, Issue 8, 0811002 (2023)
Jianglei Di1,*, Juncheng Lin1, Liyun Zhong1, Kemao Qian2,**, and Yuwen Qin1,***
Author Affiliations
  • 1Guangdong Key Laboratory of Information Photonics Technology, Institute of Advanced Photonics Technology, School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, Guangdong, China
  • 2School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798
DOI: 10.3788/LOP230488
Jianglei Di, Juncheng Lin, Liyun Zhong, Kemao Qian, Yuwen Qin. Review of Sparse-View or Limited-Angle CT Reconstruction Based on Deep Learning[J]. Laser & Optoelectronics Progress, 2023, 60(8): 0811002
Fig. 1. Attenuation process of X-ray penetrating an object[18]
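The attenuation in Fig. 1 follows the Beer-Lambert law: the measured intensity decays exponentially with the line integral of the attenuation coefficient along the ray, and the log-transformed measurement recovers that line integral, which is exactly the quantity CT reconstruction estimates. A minimal numpy sketch (the attenuation profile and step size are hypothetical values, not data from the paper):

```python
import numpy as np

# Hypothetical attenuation profile mu (1/cm) sampled every dl (cm) along one ray.
mu = np.array([0.0, 0.2, 0.5, 0.5, 0.2, 0.0])
dl = 0.1

I0 = 1.0                            # incident X-ray intensity
line_integral = np.sum(mu * dl)     # discrete integral of mu along the ray
I = I0 * np.exp(-line_integral)     # Beer-Lambert law: exponential attenuation

# The log-transformed measurement recovers the line integral of mu,
# which is the projection value that CT reconstruction works from.
p = -np.log(I / I0)
```

The detector only sees `I`; taking `-log(I / I0)` linearizes the measurement so that each projection sample is a straight line integral of the object.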
Fig. 2. Schematic of beam projection[19]. (a) Radon transform-based parallel-beam projection process; (b) fan-beam projection process
Fig. 3. Schematic of the CT reconstruction process[21]
Fig. 4. Visual comparison of reconstructed CT image degradation. (a) Full-dose reference CT image; (b) full-view (180°) reconstructed CT image; (c) sparse-view reconstructed CT image with 1/6 sampling; (d) reconstructed CT image under a limited angle of [0°, 120°]
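The degradation in Fig. 4 can be reproduced in a few lines: forward-project a phantom with the parallel-beam geometry of Fig. 2 (implemented here by rotating and summing), reconstruct by filtered back-projection as in Fig. 3, then repeat with only every sixth view. This is a pure-numpy sketch under simplifying assumptions; the rotation-based projector, the plain ramp filter, and the 32×32 square phantom are illustrative choices, not the implementations used in the cited works:

```python
import numpy as np

def rotate_bilinear(img, theta):
    """Rotate a square image by angle theta (rad) about its center (bilinear)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # source coordinates for each output pixel (inverse mapping)
    x = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
    y = -(xs - c) * np.sin(theta) + (ys - c) * np.cos(theta) + c
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0

    def sample(yy, xx):
        ok = (xx >= 0) & (xx < n) & (yy >= 0) & (yy < n)
        out = np.zeros(img.shape)
        out[ok] = img[yy[ok], xx[ok]]
        return out

    return ((1 - fx) * (1 - fy) * sample(y0, x0) + fx * (1 - fy) * sample(y0, x0 + 1)
            + (1 - fx) * fy * sample(y0 + 1, x0) + fx * fy * sample(y0 + 1, x0 + 1))

def radon(img, angles):
    """Parallel-beam projection: rotate the object, then integrate along rays."""
    return np.stack([rotate_bilinear(img, a).sum(axis=0) for a in angles])

def fbp(sino, angles):
    """Filtered back-projection with a plain ramp filter."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for a, row in zip(angles, filt):
        recon += rotate_bilinear(np.tile(row, (n, 1)), -a)  # smear and rotate back
    return recon * np.pi / len(angles)

# Square phantom: full-view vs 1/6 sparse-view reconstruction
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
angles = np.linspace(0, np.pi, 60, endpoint=False)
sino = radon(phantom, angles)
recon_full = fbp(sino, angles)
recon_sparse = fbp(sino[::6], angles[::6])      # only 10 of 60 views
err_full = np.abs(recon_full - phantom).mean()
err_sparse = np.abs(recon_sparse - phantom).mean()
```

With every sixth view the back-projected contributions no longer cancel between angles, which is the origin of the streak artifacts visible in Fig. 4(c).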
Fig. 5. CNN[25]. (a) Basic structure of a CNN; (b) basic structure of a neural network neuron
Fig. 6. Embedding modules in CNN. (a) Residual network module[29]; (b) dense connection module[30]; (c) channel attention module[31]; (d) spatial attention module[32]
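The channel attention module of Fig. 6(c) can be sketched as squeeze-and-excitation: global average pooling per channel, a small bottleneck of fully connected layers, and a sigmoid gate that rescales each channel. The numpy version below uses random weights as stand-ins for learned parameters and is only a structural illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) map."""
    s = feat.mean(axis=(1, 2))       # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)      # excitation: bottleneck FC + ReLU
    w = sigmoid(w2 @ z)              # FC + sigmoid: per-channel weights in (0, 1)
    return feat * w[:, None, None]   # rescale each channel of the input

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # toy feature map with 8 channels
w1 = 0.1 * rng.standard_normal((2, 8))  # bottleneck 8 -> 2 (placeholder weights)
w2 = 0.1 * rng.standard_normal((8, 2))  # expand 2 -> 8 (placeholder weights)
out = channel_attention(feat, w1, w2)
```

The spatial attention module of Fig. 6(d) follows the same gating idea but pools across channels instead, producing one weight per pixel rather than one per channel.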
Fig. 7. CT image domain post-processing process
Fig. 8. Network structure and comparison of reconstruction results[43]. (a) U-net based on multi-level wavelet transform; (b) comparison of CT reconstruction results
Fig. 9. Artifact removal model combining TV-regularized iterative reconstruction with U-net[47]
Fig. 10. Artifact removal models based on GAN or DDPM. (a) U-WGAN model[53]; (b) DDPM[55]
Fig. 11. Reconstruction results of the artifact removal models based on GAN or DDPM. (a) CT reconstruction results in Ref. [53]; (b) CT reconstruction results in Ref. [55]
Fig. 12. FCN artifact removal models based on different network structures. (a) R2-Net model[59]; (b) MS-RDN model[64]
Fig. 13. Artifact removal model based on Transformer[69]
Fig. 14. Sinogram domain preprocessing process
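A concrete form of the sinogram-domain preprocessing in Fig. 14 is the classical interpolation stage that models such as the one in Ref. [72] refine with a U-net: missing views are first filled by linear interpolation along the angle axis. A sketch on a hypothetical, synthetic sinogram (the sinusoidal model is an assumption made only to keep the example self-contained):

```python
import numpy as np

# Hypothetical full sinogram: 180 views x 64 detector bins, smooth in angle.
angles_full = np.arange(180)
detectors = np.arange(64)
sino_full = np.sin(np.radians(angles_full))[:, None] \
    * np.cos(detectors / 10.0)[None, :] + 2.0

# Sparse-view acquisition: keep every 6th view (30 of 180).
kept = angles_full[::6]
sino_sparse = sino_full[::6]

# Pre-processing stage: linearly interpolate the missing views along the
# angle axis, one detector bin (sinogram column) at a time.
sino_interp = np.stack(
    [np.interp(angles_full, kept, sino_sparse[:, d]) for d in detectors],
    axis=1)
```

The interpolated sinogram restores the sampling density but blurs fine angular detail, which is the residual error a trained network is then asked to correct.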
Fig. 15. Sinogram interpolation models based on U-net. (a) Model combining linear interpolation with U-net[72]; (b) DPC-CT model[75]
Fig. 16. Sinogram interpolation models based on GAN. (a) CT-Net model[76]; (b) SI-GAN model[80]
Fig. 17. Reconstruction results of the sinogram interpolation models based on GAN. (a) CT reconstruction results in Ref. [76];
Fig. 18. Dual-domain network joint processing process
Fig. 19. Dual-domain reconstruction networks based on U-net and FCN. (a) SPID model[84]; (b) multi-channel sinogram restoration model[86]; (c) DuDoDR-Net model[87]
Fig. 20. Reconstruction results of dual-domain reconstruction networks based on U-net and FCN. (a) CT reconstruction results in Ref. [84]; (b) CT reconstruction results in Ref. [86]; (c) CT reconstruction results in Ref. [87]
Fig. 21. DGAN model[97]
Fig. 22. Network structure and reconstruction results[102]. (a) Dual-domain reconstruction network based on Transformer; (b) comparison of CT reconstruction results
Fig. 23. Iterative reconstruction process based on a CNN regularization term[103]
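The iteration in Fig. 23 alternates a data-fidelity gradient step with a regularization step whose gradient the surveyed methods learn with a CNN. The sketch below substitutes a fixed quadratic smoothness prior for the learned term so it stays runnable; the toy 1D system matrix, step size, and regularization weight are arbitrary illustrative choices, not values from any cited model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.standard_normal((40, n)) / np.sqrt(40)  # toy projection operator (40 "rays")
x_true = np.zeros(n)
x_true[5:11] = 1.0                              # piecewise-constant 1D object
b = A @ x_true                                  # noiseless measurements

def prior_grad(x):
    # Stand-in for the learned regularizer gradient: a quadratic smoothness
    # prior. In the unrolled networks surveyed here, a CNN replaces this term.
    g = np.zeros_like(x)
    g[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]
    return g

# Unrolled gradient iteration: x <- x - alpha * (A^T (A x - b) + lam * dR(x))
x = np.zeros(n)
alpha, lam = 0.3, 0.01
for _ in range(300):
    x = x - alpha * (A.T @ (A @ x - b) + lam * prior_grad(x))
```

Unrolling a fixed number of such iterations, with `prior_grad` (and often `alpha` and `lam`) replaced by trainable modules, is exactly what makes these methods end-to-end trainable.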
Fig. 24. CNN-based optimization model for regularization terms and balance parameters, and reconstruction results[108]. (a) RegFormer model; (b) comparison of CT reconstruction results
Fig. 25. Sub-problem iterative expansion optimization models based on CNN. (a) FISTA-based iterative reconstruction model[112]; (b) shearlet-based iterative reconstruction model[113]
Fig. 26. Reconstruction results of sub-problem iterative expansion optimization models based on CNN. (a) CT reconstruction results in Ref. [112]; (b) CT reconstruction results in Ref. [113]
Fig. 27. Unsupervised iterative model and reconstruction results[120]. (a) REDAEP iterative reconstruction model; (b) comparison of CT reconstruction results
Fig. 28. End-to-end mapping reconstruction process
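The simplest instance of the end-to-end mapping in Fig. 28 is a single fully connected layer from sinogram to image, as in the fully connected reconstruction networks of Fig. 29(a). The numpy sketch below fits that linear map by least squares on simulated pairs instead of SGD; the system matrix, sizes, and training data are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas = 16, 24
A = rng.standard_normal((n_meas, n_pix))   # hypothetical system matrix

# Training pairs: random "objects" (rows of X) and their simulated sinograms.
X = rng.standard_normal((500, n_pix))
Y = X @ A.T

# Full-learning idea in its simplest form: one linear (fully connected) layer
# mapping sinogram -> image, fitted by least squares rather than SGD.
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

# Inference on an unseen object: project it, then map back through the layer.
x_new = rng.standard_normal(n_pix)
x_hat = (x_new @ A.T) @ W
```

In this overdetermined noiseless toy the learned layer inverts the system exactly; on real CT geometries the fully connected map is enormous, which is the parameter-count limitation noted in Table 11.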
Fig. 29. Full-learning reconstruction models. (a) Reconstruction network based on fully connected layers[124-125]; (b) reconstruction network based on stacked U-net[128]
Fig. 30. Reconstruction results of full-learning reconstruction models. (a) CT reconstruction results in Ref. [125];
Fig. 31. Reconstruction model based on a learnable physical analytic algorithm and comparison of reconstruction results[130]. (a) iRadonMAP model; (b) comparison of CT reconstruction results
Fig. 32. Self-supervised untrained projection reconstruction model and reconstruction results[137]. (a) IntraTomo model; (b) comparison of CT reconstruction results
Reference | Network detail | Loss function | Dataset
[40] | Residual learning, skip connection | MSE | Biomedical, Ellipsoidal, Human Knee
[41-42] | Residual learning, skip connection, wavelet transform | MSE | AAPM Low Dose CT
[43] | Residual learning, skip connection, wavelet transform | - | Chest and Catphan phantom
[44] | Skip connection | MAE | 3D Spectral Slices
[45-46] | Residual learning, skip connection | - | TCIA
[47] | Residual learning, skip connection | SSIM loss | AAPM Low Dose CT
Advantages: artifact removal in different frequency bands; simple to implement
Limitations: only a single network structure and loss function are used
Table 1. Summary of artifact removal models based on U-net
Reference | Network detail | Loss function | Dataset
[48] | Residual learning | GAN loss, perceptual loss | Human Knee
[49] | Skip connection | GAN loss, MSE | TCGA-CESC
[50] | Residual learning, skip connection | MAE, MSE | TCIA
[52] | Skip connection | Wasserstein loss, MSE | Dental CT
[53] | Dense block, skip connection | Wasserstein loss, MSE, SSIM loss | AAPM Low Dose CT
[54] | DDPM | MSE, KL divergence | Checked-in Luggage, C4KC-KiTS
[55] | DDPM | MSE, KL divergence | LIDC, LDCT
Advantages: resulting CT images are rich in detail; DDPM is more controllable and does not require labels
Limitations: GANs are difficult to train and converge poorly; DDPM sampling is slow
Table 2. Summary of artifact removal models based on GAN or DDPM
Reference | Network detail | Loss function | Dataset
[57] | Dense block, skip connection | MSE, MS-SSIM loss | NBIA
[58] | Residual learning, GoogLeNet | MSE | Clinical Routine CT
[59] | Residual learning, channel attention, recursive transformer | MSE | AAPM Low Dose CT
[60] | Multi-scale dilated convolution, multi-scale pooling | - | LiTS
[61] | Multi-scale dilated convolution, Clique Block[62] | MSE | AAPM Low Dose CT
[63] | Residual learning | MAE | LIDC-IDRI
[64] | Dense block, residual learning | MAE | Breast CT
[65] | Dense block, residual learning | MAE | Head CT
[66] | Residual learning, skip connection | MSE | AAPM Low Dose CT
[67] | Skip connection | MSE | 4D-Lung, DIR-LAB
Advantages: network structures can be tailored to task requirements and data characteristics; fast reconstruction
Limitations: only a single loss function is used
Table 3. Summary of artifact removal models based on other FCNs
Reference | Network detail | Loss function | Dataset
[70] | Residual learning | - | XCAT
[71] | Residual learning, skip connection | MSE | Lung CT
[72] | Residual learning, skip connection | MSE | Micro-CT
[73] | Residual learning, skip connection | - | -
[74] | Skip connection | MSE | Phantoms
[75] | Skip connection, dense block, residual learning | MSE, MS-SSIM loss | Phantoms
Advantages: simple network structure design; high operational efficiency
Limitations: only a single loss function is used
Table 4. Summary of sinogram interpolation models based on U-net and FCN
Reference | Network detail | Loss function | Dataset
[76] | 1D convolution | MSE, GAN loss | Checked-in luggage CT
[77] | Skip connection | MSE, GAN loss | Siemens Somatom CT
[78] | Residual learning, skip connection | MSE, GAN loss | Oral CT
[79] | Skip connection | MAE, GAN loss | Cranial cavity CT
[80] | Skip connection | MAE, GAN loss | Cranial cavity CT, head phantom CT
[82] | Skip connection, residual learning | MAE, GAN loss | Modified FORBILD abdomen phantom CT
[83] | Skip connection | MSE, GAN loss | AAPM Low Dose CT
Advantages: generates complete projection data at extremely sparse views with high feature similarity
Limitations: GANs are difficult to train and converge poorly
Table 5. Summary of sinogram interpolation models based on GAN
Reference | Network detail | Loss function | Dataset
[84] | Residual learning, skip connection | MSE, TV loss | AAPM Low Dose CT
[85] | Residual learning, skip connection, wavelet transform | MAE | TCIA
[86] | Residual learning, skip connection | MSE | Thoracic CT
[87] | Dense block, channel attention, residual learning, skip connection | MAE | DeepLesion
[88] | Residual learning | MAE, MSE | AAPM Low Dose CT
[89] | Skip connection | MSE | AAPM Low Dose CT
[90] | Residual learning, skip connection, dual channel fusion | MAE, SSIM loss, DIFF loss | AAPM Low Dose CT
[91] | Residual learning, skip connection | - | Small animal Xtrim PET
[92] | Skip connection | MSE | AAPM Low Dose CT
[93] | Skip connection | Cross-entropy loss | Xenopus kidney embryos
[94] | Skip connection | MSE | Real 9-view CT EDS
[95] | Residual learning, skip connection | MSE | AAPM Low Dose CT
[96] | Residual learning | MSE, GAN loss, perceptual loss | Data Science Bowl 2017
[97] | Skip connection | MAE, GAN loss | Heart craniocaudally CT
[98] | Skip connection, cosine similarity, Softmax attention | Hole_L1 loss, perceptual loss, CycleGAN loss | DeepLesion, LDCT and Projection data
Advantages: dual-domain data fidelity; end-to-end reconstruction from projection data
Limitations: dual CNN structures are simple; dual GANs further increase training cost and convergence difficulty
Table 6. Summary of dual-domain reconstruction networks based on CNN and GAN
Reference | Network detail | Loss function | Dataset
[99] | Swin Transformer | MSE | AAPM Low Dose CT
[100] | Transformer | MSE | LIDC-IDRI
[101] | Swin Transformer | MSE, Charbonnier loss | LDCT and Projection data
[102] | Swin Transformer, Sobel operator | MSE | AAPM Low Dose CT
Advantages: long-range dependency modeling capability; extracts global feature information
Limitations: large number of parameters in the self-attention mechanism
Table 7. Summary of dual-domain reconstruction networks based on Transformer
Reference | Network detail | Loss function | Dataset
[103] | Residual learning | MSE | AAPM Low Dose CT
[104] | Residual learning | MSE, perceptual loss | AAPM Low Dose CT
[105] | 1D convolution | MSE | -
[106] | Residual learning | MSE | Ellipses, head phantom
[107] | Residual learning, skip connection | MSE | TCIA
[108] | Transformer | MSE | AAPM Low Dose CT
Advantages: avoids manual selection of regularization terms and balance parameters; reduces manual experiment cost and computational complexity
Limitations: high number of reconstruction iterations
Table 8. Summary of CNN-based optimization models for regularization terms and balance parameters
Reference | Network detail | Loss function | Dataset
[109] | Convolution-based | MSE | AAPM Low Dose CT, Clinical Head
[110] | Convolution-based | MSE, SSIM loss, semantic loss | AAPM Low Dose CT
[111] | Convolution-based | MSE | AAPM Low Dose CT
[112] | Residual learning | MSE | Simulated EMT
[113] | Skip connection | MSE | Ellipses, AAPM Low Dose CT
[114] | Skip connection | MSE | AAPM Low Dose CT
[115] | Residual learning, skip connection | MSE, SSIM loss | AAPM Low Dose CT
Advantages: maps solutions to non-convex problems; CNN accelerates the reconstruction rate
Limitations: few parameters for network training; high number of reconstruction iterations
Table 9. Summary of sub-problem iterative expansion optimization models based on CNN
Reference | Network detail | Loss function | Dataset
[116] | Residual learning, skip connection | Wasserstein loss, MSE | NBIA
[117] | Dense block, residual learning, channel and spatial attention | MSE | AAPM Low Dose CT, DeepLesion
[118] | Residual learning | MSE | Chest and abdomen CT
[119] | Fully connected | MSE, TV loss | AAPM Low Dose CT
[120] | Residual learning | MSE | Ellipses, Chest CT
[121] | Convolutional analysis operator learning | MSE | XCAT
Advantages: the attention mechanism increases reconstruction accuracy; unsupervised training reduces dependence on labeled data and generalizes better
Limitations: the attention mechanism increases network parameters and reduces reconstruction speed; unsupervised network optimization is difficult
Table 10. Summary of other CNN iterative expansion and unsupervised iterative reconstruction models
Reference | Network detail | Loss function | Dataset
[122] | Fully connected | MSE | Human FDG PET
[123] | Fully connected | MSE | -
[124-125] | Fully connected | MSE | AAPM Low Dose CT
[126] | Fully connected, residual learning, multi-channel fusion | MSE | TCGA-ESCA
[127] | Skip connection | MSE | Shepp-Logan phantom, Forbild phantom
[128] | Skip connection | MSE | -
Advantages: the algorithm design is simple to implement; does not require CT reconstruction expertise
Limitations: low reconstruction accuracy; large number of parameters in the fully connected layers
Table 11. Summary of full-learning reconstruction models based on neural networks
Reference | Network detail | Loss function | Dataset
[129] | Fully connected | MSE | AAPM Low Dose CT
[130] | Fully connected, residual learning | MSE | AAPM Low Dose CT
[131] | Residual learning, upsampling and downsampling block | MSE | AAPM Low Dose CT
[132] | Hard shrinkage operator, multi-channel fusion | MSE | Coronary artery, abdomen CT
[133] | Skip connection | SSIM loss, MAE, Wasserstein loss | Breast CT
Advantages: incorporates the physical reconstruction process; reduced model parameters
Limitations: reconstruction accuracy and network structure need further optimization
Table 12. Summary of reconstruction models based on learnable physical analytic algorithms
Reference | Network detail | Loss function | Dataset
[134] | Convolution-based | MSE | Multi-grain structures CT
[135] | LSTM, residual learning, skip connection | MSE, Profiles loss, GAN loss | AAPM Low Dose CT
[137] | Fourier feature projection layer, fully connected | MSE | Logan phantom, ATLAS, Covid-19, SL and LoDoPaB-CT, Pepper, Rose
[138] | Fourier feature projection layer, fully connected | MSE | XCAT, AAPM Low Dose CT
[139] | Convolution-based | MSE, TV loss | Shepp-Logan phantom, LIDC-IDRI, random ellipses
Advantages: network training does not depend on labels; self-supervised networks generalize better
Limitations: self-supervised reconstruction requires optimizing the weights, resulting in long reconstruction times; the accuracy of untrained network reconstruction is still relatively low
Table 13. Summary of unsupervised or self-supervised end-to-end reconstruction models
Application problem | Network structure | Input → Output | Advantage | Limitation
Image post-processing | FCN, GAN, U-net, Transformer, DDPM | CT → CT | Adaptive artifact removal; simple and practical | Lacks fidelity to the sinogram; MSE loss leads to structural ambiguity
Sinogram pre-processing | FCN, GAN, U-net | Sinogram → Sinogram | Adaptive interpolation; simple and practical | Lacks fidelity to the CT image; may introduce small false structures
Dual-domain data processing | FCN, GAN, U-net, Transformer | Sinogram → CT | End-to-end reconstruction; dual-domain data fidelity | Existing model structures are relatively simple; increased amount of computation
Iterative reconstruction | FCN, GAN, U-net, Transformer | Sinogram/CT → CT | Reduces computational complexity and manual experiment costs | Multiple iterations cannot be avoided; reconstruction time remains long
End-to-end mapping reconstruction | MLP, FCN, GAN, U-net | Sinogram → CT | MLP or CNN mapping is simple to design; the learnable analytical reconstruction algorithm is guided by the physical process | MLP or CNN mapping lacks the physical reconstruction process, and reconstruction accuracy is not high
Table 14. Applications of sparse-view or limited-angle CT reconstruction based on deep learning