Fig. 1. Attenuation process of X-rays penetrating an object [18]
Fig. 2. Schematic of beam projection [19]. (a) Radon-transform-based parallel-beam projection process; (b) fan-beam projection process
Fig. 3. Schematic of the CT reconstruction process [21]
Fig. 4. Visual comparison of reconstructed CT degradation. (a) Full-dose reference CT image; (b) full-view (180°) reconstructed CT image; (c) 1/6-sampling sparse-view reconstructed CT image; (d) reconstructed CT image under a limited angle of [0°, 120°]
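To make the degradation in Fig. 4 concrete, the following Python sketch simulates full-view, sparse-view, and limited-angle filtered back-projection with scikit-image's Radon-transform utilities; the phantom, angle grids, and sampling factor are illustrative assumptions, not the setup of the cited experiments.

```python
# A minimal sketch of the sparse-view and limited-angle degradations in Fig. 4,
# using scikit-image's Radon-transform utilities (not the authors' code).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                                  # ground-truth object

full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)     # full view, Fig. 4(b)
sparse_angles = full_angles[::6]                               # 1/6 sparse sampling, Fig. 4(c)
limited_angles = np.linspace(0.0, 120.0, 120, endpoint=False)  # limited angle, Fig. 4(d)

def fbp(angles):
    """Forward-project, then filtered back-project, for a given angle set."""
    sino = radon(image, theta=angles)                          # Radon transform, Fig. 2(a)
    return iradon(sino, theta=angles, filter_name="ramp")

full_view = fbp(full_angles)        # near artifact-free reference
sparse_view = fbp(sparse_angles)    # streak artifacts from angular undersampling
limited_view = fbp(limited_angles)  # directional artifacts from the missing wedge
```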
Fig. 5. CNN [25]. (a) Basic structure of a CNN; (b) basic structure of a neuron in a neural network
Fig. 6. Embedded modules in CNNs. (a) Residual network module [29]; (b) dense connection module [30]; (c) channel attention module [31]; (d) spatial attention module [32]
Fig. 7. CT image-domain post-processing process
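As a structural illustration of two of the modules in Fig. 6, the following PyTorch sketch implements a residual block (Fig. 6(a)) and a squeeze-and-excitation-style channel attention gate (Fig. 6(c)); channel counts and the reduction ratio are illustrative assumptions rather than the cited designs.

```python
# A minimal sketch of two Fig. 6 modules; sizes are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = x + F(x): the identity skip connection eases gradient flow (Fig. 6(a))."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class ChannelAttention(nn.Module):
    """Re-weights feature channels by globally pooled statistics (Fig. 6(c))."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)

x = torch.randn(1, 16, 64, 64)
y = ChannelAttention(16)(ResidualBlock(16)(x))           # shapes are preserved
```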
Fig. 8. Network structure and comparison of reconstruction results [43]. (a) U-net based on multi-level wavelet transform; (b) comparison of CT reconstruction results
Fig. 9. Artifact-removal model combining TV-regularized iterative reconstruction with U-net [47]
Fig. 10. Artifact-removal models based on GAN or DDPM. (a) U-WGAN model [53]; (b) DDPM [55]
Fig. 11. Reconstruction results of the artifact-removal models based on GAN or DDPM. (a) CT reconstruction results in Ref. [53]; (b) CT reconstruction results in Ref. [55]
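The TV regularization term that Fig. 9 couples with U-net can be written in a few lines; the sketch below computes the anisotropic TV penalty (the l1 norm of finite differences), which suppresses streak artifacts while preserving edges.

```python
# A minimal sketch of the total-variation (TV) penalty used in Fig. 9.
import torch

def tv_loss(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic TV of a (B, C, H, W) image batch."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()  # vertical differences
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()  # horizontal differences
    return dh + dw

x = torch.rand(1, 1, 64, 64)
print(tv_loss(x))
```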
Fig. 12. FCN artifact-removal models based on different network structures. (a) R2-Net model [59]; (b) MS-RDN model [64]
Fig. 13. Artifact-removal model based on Transformer [69]
Fig. 14. Sinogram-domain preprocessing process
Fig. 15. Sinogram interpolation models based on U-net. (a) Model combining linear interpolation with U-net [72]; (b) DPC-CT model [75]
Fig. 16. Sinogram interpolation models based on GAN. (a) CT-Net model [76]; (b) SI-GAN model [80]
Fig. 17. Reconstruction results of the sinogram interpolation models based on GAN. (a) CT reconstruction results in Ref. [76]
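A minimal sketch of the Fig. 15(a) strategy follows: missing views are first filled by linear interpolation along the angular axis of the sinogram, after which a network such as the U-net of Ref. [72] would refine the estimate; the array sizes and sampling factor here are illustrative assumptions.

```python
# A minimal sketch of sinogram upsampling by linear interpolation (Fig. 15(a)).
import numpy as np

def upsample_sinogram(sparse_sino: np.ndarray, factor: int) -> np.ndarray:
    """Fill skipped views of a (num_views, num_detectors) sinogram by
    linear interpolation along the view (angle) axis."""
    n_sparse, n_det = sparse_sino.shape
    sparse_idx = np.arange(n_sparse) * factor              # measured view positions
    dense_idx = np.arange(n_sparse * factor - factor + 1)  # all view positions
    dense = np.empty((dense_idx.size, n_det), dtype=sparse_sino.dtype)
    for d in range(n_det):                                 # interpolate each detector bin
        dense[:, d] = np.interp(dense_idx, sparse_idx, sparse_sino[:, d])
    return dense

sparse = np.random.rand(30, 128)      # 30 measured views (1/6 sampling), toy data
dense = upsample_sinogram(sparse, 6)  # 175 views; a CNN would then refine this estimate
```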
Fig. 18. Dual-domain network joint processing process
Fig. 19. Dual-domain reconstruction networks based on U-net and FCN. (a) SPID model [84]; (b) multi-channel sinogram restoration model [86]; (c) DuDoDR-Net model [87]
Fig. 20. Reconstruction results of dual-domain reconstruction networks based on U-net and FCN. (a) CT reconstruction results in Ref. [84]; (b) CT reconstruction results in Ref. [86]; (c) CT reconstruction results in Ref. [87]
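The dual-domain flow of Figs. 18 and 19 can be sketched as below: a sinogram-domain network restores the projections, a fixed differentiable reconstruction operator maps them into the image domain, and an image-domain network removes residual artifacts, so a single loss trains both networks end to end. The random linear operator stands in for a proper differentiable FBP layer, and both networks are toy stand-ins, not the cited models.

```python
# A minimal sketch of a dual-domain pipeline (Figs. 18-19); all parts are stand-ins.
import torch
import torch.nn as nn

n_views, n_det, img = 60, 64, 64
sino_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(16, 1, 3, padding=1))   # sinogram-domain network
img_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                        nn.Conv2d(16, 1, 3, padding=1))    # image-domain network
# Fixed random linear map standing in for a differentiable FBP operator.
fbp_op = torch.randn(img * img, n_views * n_det) / (n_views * n_det) ** 0.5

sino = torch.randn(1, 1, n_views, n_det)                   # degraded sinogram (toy)
sino_restored = sino + sino_net(sino)                      # sinogram-domain restoration
recon = (fbp_op @ sino_restored.flatten()).reshape(1, 1, img, img)
ct = recon + img_net(recon)                                # image-domain refinement
loss = ct.square().mean()                                  # placeholder loss
loss.backward()                                            # gradients reach both networks
```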
Fig. 22. Network structure and reconstruction results [102]. (a) Dual-domain reconstruction network based on Transformer; (b) comparison of CT reconstruction results
Fig. 23. Iterative reconstruction process with a CNN-based regularization term [103]
Fig. 24. Optimization model of regularization terms and balance parameters based on CNN, and reconstruction results [108]. (a) RegFormer model; (b) comparison of CT reconstruction results
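The unrolled iteration behind Fig. 23 alternates a data-fidelity gradient step with a learned regularization step; the PyTorch sketch below unrolls a few such stages, with a random matrix standing in for the true projection operator and a small CNN standing in for the learned regularization term.

```python
# A minimal sketch of unrolled iterative reconstruction with a learned
# regularizer (Fig. 23); the operator A and both sizes are toy assumptions.
import torch
import torch.nn as nn

n_meas, n_stages = 512, 5
A = torch.randn(n_meas, 32 * 32) / n_meas ** 0.5     # stand-in projection matrix

class Stage(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
        self.reg = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x, y):
        grad = A.T @ (A @ x.flatten() - y)           # gradient of ||Ax - y||^2 / 2
        x = x - self.step * grad.reshape(x.shape)    # data-fidelity update
        return x - self.reg(x.unsqueeze(0).unsqueeze(0)).squeeze()  # learned prior step

stages = nn.ModuleList(Stage() for _ in range(n_stages))
y = torch.randn(n_meas)                              # measured sinogram (flattened, toy)
x = torch.zeros(32, 32)                              # initial image estimate
for stage in stages:                                 # K unrolled iterations
    x = stage(x, y)
```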
Fig. 25. Sub-problem iterative unrolling optimization models based on CNN. (a) FISTA-based iterative reconstruction model [112]; (b) shearlet-based iterative reconstruction model [113]
Fig. 26. Reconstruction results of the sub-problem iterative unrolling optimization models based on CNN. (a) CT reconstruction results in Ref. [112]; (b) CT reconstruction results in Ref. [113]
Fig. 27. Unsupervised iterative model and reconstruction results [120]. (a) REDAEP iterative reconstruction model; (b) comparison of CT reconstruction results
Fig. 28. End-to-end mapping reconstruction process
Fig. 29. Full-learning reconstruction models. (a) Reconstruction network based on fully connected layers [124-125]; (b) reconstruction network based on stacked U-nets [128]
Fig. 30. Reconstruction results of full-learning reconstruction models. (a) CT reconstruction results in Ref. [125]
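The fully connected mapping of Fig. 29(a) amounts to learning a dense sinogram-to-image transform; the sketch below shows the idea with toy sizes, and also makes the main limitation visible, since the first dense layer alone holds (sinogram size) x (hidden size) weights.

```python
# A minimal sketch of a fully connected sinogram-to-image mapping (Fig. 29(a));
# all sizes are toy assumptions, not the cited architectures.
import torch
import torch.nn as nn

n_views, n_det, img = 60, 64, 64
mapper = nn.Sequential(
    nn.Flatten(),                                      # flatten the sinogram
    nn.Linear(n_views * n_det, 2048), nn.ReLU(inplace=True),
    nn.Linear(2048, img * img),                        # dense mapping to pixels
    nn.Unflatten(1, (1, img, img)),
)
sino = torch.randn(8, 1, n_views, n_det)
ct = mapper(sino)                                      # direct end-to-end mapping
```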
Fig. 31. Reconstruction model based on a learnable physical analytic algorithm and comparison of reconstruction results [130]. (a) iRadonMAP model; (b) comparison of CT reconstruction results
Fig. 32. Self-supervised untrained projection reconstruction model and reconstruction results [137]. (a) IntraTomo model; (b) comparison of CT reconstruction results
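The self-supervised scheme of Fig. 32 can be sketched as fitting a coordinate MLP with a Fourier-feature input layer so that its synthesized line integrals match the measured projections, with no clean labels involved. The sketch below reduces ray integration to column sums over a sampled grid for a single 0° view; this simplification and all sizes are assumptions, not the IntraTomo implementation.

```python
# A minimal sketch of self-supervised coordinate-MLP reconstruction (Fig. 32).
import torch
import torch.nn as nn

B = torch.randn(2, 64) * 10.0                        # fixed Fourier-feature bank (assumed)

def fourier_features(xy):                            # xy: (N, 2) coordinates in [-1, 1]
    proj = xy @ B                                    # (N, 64)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

mlp = nn.Sequential(nn.Linear(128, 128), nn.ReLU(inplace=True),
                    nn.Linear(128, 1))               # image intensity at a coordinate

n = 64
xs = torch.linspace(-1, 1, n)
ys = torch.linspace(-1, 1, n)
grid = torch.stack(torch.meshgrid(xs, ys, indexing="ij"), dim=-1).reshape(-1, 2)

measured = torch.rand(n)                             # one measured 0-degree projection (toy)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(100):                                 # fit the measurements directly
    intensity = mlp(fourier_features(grid)).reshape(n, n)
    synthesized = intensity.sum(dim=1)               # line integrals along y (0-degree view)
    loss = nn.functional.mse_loss(synthesized, measured)
    opt.zero_grad(); loss.backward(); opt.step()
```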
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [40] | Residual learning, skip connection | MSE | Biomedical, Ellipsoidal, Human Knee | Advantages: artifact removal in different frequency bands; simple implementation. Limitations: single network structure and loss function |
| [41-42] | Residual learning, skip connection, wavelet transform | MSE | AAPM Low Dose CT | |
| [43] | Residual learning, skip connection, wavelet transform | | Chest and Catphan phantom | |
| [44] | Skip connection | MAE | 3D Spectral Slices | |
| [45-46] | Residual learning, skip connection | | TCIA | |
| [47] | Residual learning, skip connection | SSIM loss | AAPM Low Dose CT | |

Table 1. Summary of artifact-removal models based on U-net
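Operationally, the models in Table 1 share the training setup sketched below: a network (here a toy CNN standing in for a U-net) maps an artifact-laden reconstruction to its full-view reference under an MSE loss, often predicting the residual rather than the image itself.

```python
# A minimal sketch of image-domain post-processing training (Table 1);
# the tiny network and random tensors stand in for a real U-net and dataset.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                 # stand-in for a U-net
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

sparse_fbp = torch.randn(4, 1, 64, 64)    # FBP from sparse views (input)
reference = torch.randn(4, 1, 64, 64)     # full-view/full-dose label

pred = sparse_fbp + denoiser(sparse_fbp)  # residual learning: predict the artifact
loss = nn.functional.mse_loss(pred, reference)
opt.zero_grad(); loss.backward(); opt.step()
```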
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [48] | Residual learning | GAN loss, perceptual loss | Human Knee | Advantages: resulting CT images are rich in detail; DDPM is more controllable and does not require labels. Limitations: GANs are difficult to train and converge poorly; DDPM sampling is slow |
| [49] | Skip connection | GAN loss, MSE | TCGA-CESC | |
| [50] | Residual learning, skip connection | MAE, MSE | TCIA | |
| [52] | Skip connection | Wasserstein loss, MSE | Dental CT | |
| [53] | Dense block, skip connection | Wasserstein loss, MSE, SSIM loss | AAPM Low Dose CT | |
| [54] | DDPM | MSE, KL divergence | Checked-in Luggage, C4KC-KiTS | |
| [55] | DDPM | MSE, KL divergence | LIDC, LDCT | |

Table 2. Summary of artifact-removal models based on GAN or DDPM
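The Wasserstein-style objective used by several Table 2 models combines a critic score with a pixel-wise loss; the sketch below shows both loss terms on toy tensors that stand in for generator outputs and real images, omitting the gradient penalty and network details.

```python
# A minimal sketch of the WGAN-style losses from Table 2; toy tensors stand in
# for generator outputs and real CT images.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                       nn.Flatten(), nn.Linear(16 * 32 * 32, 1))
fake, real = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)

critic_loss = critic(fake.detach()).mean() - critic(real).mean()  # critic minimizes this
adv = -critic(fake).mean()                                        # generator raises critic score
gen_loss = nn.functional.mse_loss(fake, real) + 1e-3 * adv        # pixel loss + adversarial term
```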
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [57] | Dense block, skip connection | MSE, MS-SSIM loss | NBIA | Advantages: network structures can be designed to match task requirements and data characteristics; fast reconstruction. Limitations: single loss function |
| [58] | Residual learning, GoogLeNet | MSE | Clinical Routine CT | |
| [59] | Residual learning, channel attention, recursive transformer | MSE | AAPM Low Dose CT | |
| [60] | Multi-scale dilated convolution, multi-scale pooling | | LiTS | |
| [61] | Multi-scale dilated convolution, Clique Block [62] | MSE | AAPM Low Dose CT | |
| [63] | Residual learning | MAE | LIDC-IDRI | |
| [64] | Dense block, residual learning | MAE | Breast CT | |
| [65] | Dense block, residual learning | MAE | Head CT | |
| [66] | Residual learning, skip connection | MSE | AAPM Low Dose CT | |
| [67] | Skip connection | MSE | 4D-Lung, DIR-LAB | |

Table 3. Summary of artifact-removal models based on other FCNs
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [70] | Residual learning | | XCAT | Advantages: simple network structure design; high runtime efficiency |
| [71] | Residual learning, skip connection | MSE | Lung CT | |
| [72] | Residual learning, skip connection | MSE | micro-CT | |
| [73] | Residual learning, skip connection | | | Limitations: single loss function |
| [74] | Skip connection | MSE | Phantoms | |
| [75] | Skip connection, dense block, residual learning | MSE, MS-SSIM loss | Phantoms | |

Table 4. Summary of sinogram interpolation models based on U-net and FCN
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [76] | 1D convolution | MSE, GAN loss | Checked-in luggage CT | Advantages: generates complete projection data at extremely sparse views with high feature similarity. Limitations: GANs are difficult to train and converge poorly |
| [77] | Skip connection | MSE, GAN loss | Siemens Somatom CT | |
| [78] | Residual learning, skip connection | MSE, GAN loss | Oral CT | |
| [79] | Skip connection | MAE, GAN loss | Cranial cavity CT | |
| [80] | Skip connection | MAE, GAN loss | Cranial cavity CT, Head Phantom CT | |
| [82] | Skip connection, residual learning | MAE, GAN loss | Modified FORBILD abdomen phantom CT | |
| [83] | Skip connection | MSE, GAN loss | AAPM Low Dose CT | |

Table 5. Summary of sinogram interpolation models based on GAN
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [84] | Residual learning, skip connection | MSE, TV loss | AAPM Low Dose CT | Advantages: dual-domain data fidelity; end-to-end reconstruction from projection data |
| [85] | Residual learning, skip connection, wavelet transform | MAE | TCIA | |
| [86] | Residual learning, skip connection | MSE | Thoracic CT | |
| [87] | Dense block, channel attention, residual learning, skip connection | MAE | DeepLesion | |
| [88] | Residual learning | MAE, MSE | AAPM Low Dose CT | |
| [89] | Skip connection | MSE | AAPM Low Dose CT | |
| [90] | Residual learning, skip connection, dual-channel fusion | MAE, SSIM loss, DIFF loss | AAPM Low Dose CT | |
| [91] | Residual learning, skip connection | | Small animal Xtrim PET | |
| [92] | Skip connection | MSE | AAPM Low Dose CT | |
| [93] | Skip connection | Cross-entropy loss | Xenopus kidney embryos | |
| [94] | Skip connection | MSE | real 9-view CT EDS | |
| [95] | Residual learning, skip connection | MSE | AAPM Low Dose CT | |
| [96] | Residual learning | MSE, GAN loss, perceptual loss | Data Science Bowl 2017 | Limitations: dual CNN structure is simple; dual GANs further increase training cost and convergence difficulty |
| [97] | Skip connection | MAE, GAN loss | Heart craniocaudally CT | |
| [98] | Skip connection, cosine similarity, Softmax attention | Hole_L1 loss, perceptual loss, CycleGAN loss | DeepLesion, LDCT and Projection data | |

Table 6. Summary of dual-domain reconstruction networks based on CNN and GAN
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [99] | Swin Transformer | MSE | AAPM Low Dose CT | Advantages: long-range dependency modeling; extracts global feature information. Limitations: large parameter count of the self-attention mechanism |
| [100] | Transformer | MSE | LIDC-IDRI | |
| [101] | Swin Transformer | MSE, Charbonnier loss | LDCT and Projection data | |
| [102] | Swin Transformer, Sobel operator | MSE | AAPM Low Dose CT | |

Table 7. Summary of dual-domain reconstruction networks based on Transformer
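The long-range dependency advantage listed in Table 7 comes from self-attention, where every patch token attends to every other token in a single step; the sketch below shows this with PyTorch's built-in multi-head attention on toy patch tokens.

```python
# A minimal sketch of self-attention over patch tokens (Table 7); sizes are toy.
import torch
import torch.nn as nn

tokens = torch.randn(1, 256, 64)                 # 256 patch tokens, 64-dim each
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
out, weights = attn(tokens, tokens, tokens)      # weights: (1, 256, 256), all token pairs
```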
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [103] | Residual learning | MSE | AAPM Low Dose CT | Advantages: avoids manual selection of regularization terms and balance parameters; reduces manual experiment costs and computational complexity. Limitations: many reconstruction iterations |
| [104] | Residual learning | MSE, perceptual loss | AAPM Low Dose CT | |
| [105] | 1D convolution | MSE | | |
| [106] | Residual learning | MSE | Ellipses, head phantom | |
| [107] | Residual learning, skip connection | MSE | TCIA | |
| [108] | Transformer | MSE | AAPM Low Dose CT | |

Table 8. Summary of optimization models of regularization terms and balance parameters based on CNN
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [109] | Convolution-based | MSE | AAPM Low Dose CT, Clinical Head | Advantages: maps solutions of non-convex problems; CNN accelerates reconstruction. Limitations: few trainable network parameters; many reconstruction iterations |
| [110] | Convolution-based | MSE, SSIM loss, semantic loss | AAPM Low Dose CT | |
| [111] | Convolution-based | MSE | AAPM Low Dose CT | |
| [112] | Residual learning | MSE | Simulated EMT | |
| [113] | Skip connection | MSE | Ellipses, AAPM Low Dose CT | |
| [114] | Skip connection | MSE | AAPM Low Dose CT | |
| [115] | Residual learning, skip connection | MSE, SSIM loss | AAPM Low Dose CT | |

Table 9. Summary of sub-problem iterative unrolling optimization models based on CNN
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [116] | Residual learning, skip connection | Wasserstein loss, MSE | NBIA | Advantages: attention mechanisms increase reconstruction accuracy; unsupervised training reduces dependence on labeled data and generalizes better. Limitations: attention adds computation and slows reconstruction; unsupervised networks are difficult to optimize |
| [117] | Dense block, residual learning, channel and spatial attention | MSE | AAPM Low Dose CT, DeepLesion | |
| [118] | Residual learning | MSE | Chest and abdomen CT | |
| [119] | Fully connected | MSE, TV loss | AAPM Low Dose CT | |
| [120] | Residual learning | MSE | Ellipses, Chest CT | |
| [121] | Convolutional analysis operator learning | MSE | XCAT | |

Table 10. Summary of other CNN iterative unrolling and unsupervised iterative reconstruction models
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [122] | Fully connected | MSE | Human FDG PET | Advantages: simple algorithm design; requires no CT reconstruction expertise. Limitations: low reconstruction accuracy; large parameter count in the fully connected layers |
| [123] | Fully connected | MSE | | |
| [124-125] | Fully connected | MSE | AAPM Low Dose CT | |
| [126] | Fully connected, residual learning, multi-channel fusion | MSE | TCGA-ESCA | |
| [127] | Skip connection | MSE | Shepp-Logan phantom, Forbild phantom | |
| [128] | Skip connection | MSE | | |

Table 11. Summary of full-learning reconstruction models based on neural networks
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [129] | Fully connected | MSE | AAPM Low Dose CT | Advantages: incorporates the physical reconstruction process; fewer model parameters. Limitations: reconstruction accuracy and network structure need further optimization |
| [130] | Fully connected, residual learning | MSE | AAPM Low Dose CT | |
| [131] | Residual learning, upsampling and downsampling blocks | MSE | AAPM Low Dose CT | |
| [132] | Hard shrinkage operator, multi-channel fusion | MSE | Coronary artery, abdomen CT | |
| [133] | Skip connection | SSIM loss, MAE, Wasserstein loss | Breast CT | |

Table 12. Summary of reconstruction models based on learnable physical analytic algorithms
| Reference | Network detail | Loss function | Dataset | Feature |
|---|---|---|---|---|
| [134] | Convolution-based | MSE | Multi-grain structures CT | Advantages: training does not depend on labels; self-supervised networks generalize better. Limitations: self-supervised reconstruction requires per-case weight optimization, so reconstruction is slow; the accuracy of untrained network reconstruction is still relatively low |
| [135] | LSTM, residual learning, skip connection | MSE, profile loss, GAN loss | AAPM Low Dose CT | |
| [137] | Fourier feature projection layer, fully connected | MSE | Logan phantom, ATLAS, Covid-19, SL and LoDoPaB-CT, Pepper, Rose | |
| [138] | Fourier feature projection layer, fully connected | MSE | XCAT, AAPM Low Dose CT | |
| [139] | Convolution-based | MSE, TV loss | Shepp-Logan phantom, LIDC-IDRI, random ellipses | |

Table 13. Summary of unsupervised or self-supervised end-to-end reconstruction models
| Application problem | Network structure | Input-Output | Advantage | Limitation |
|---|---|---|---|---|
| Image post-processing | FCN, GAN, U-net, Transformer, DDPM | CT to CT | Adaptive artifact removal; simple and practical | Lacks fidelity to the sinogram; MSE loss leads to structural ambiguity |
| Sinogram pre-processing | FCN, GAN, U-net | Sinogram to sinogram | Adaptive interpolation; simple and practical | Lacks fidelity to CT images; may introduce small false structures |
| Dual-domain data processing | FCN, GAN, U-net, Transformer | Sinogram to CT | End-to-end reconstruction; dual-domain data fidelity | Existing model structures are relatively simple; increased computation |
| Iterative reconstruction | FCN, GAN, U-net, Transformer | Sinogram/CT to CT | Reduces computational complexity and manual experiment costs | Multiple iterations cannot be avoided; reconstruction remains slow |
| End-to-end mapping reconstruction | MLP, FCN, GAN, U-net | Sinogram to CT | MLP or CNN mapping is simple to design; learnable analytical reconstruction is guided by the physical process | MLP or CNN mapping lacks the physical reconstruction process; reconstruction accuracy is not high |

Table 14. Applications of deep-learning-based sparse-view or limited-angle CT reconstruction