• Acta Optica Sinica
  • Vol. 44, Issue 7, 0734002 (2024)
Chuanjiang Liu1,2, Ao Wang2, Genyuan Zhang2, Wei Yuan2, and Fenglin Liu1,2,*
Author Affiliations
  • 1College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China
  • 2Engineering Research Center of Industrial Computed Tomography Non-Destructive Testing, Ministry of Education, Chongqing University, Chongqing 400044, China
    DOI: 10.3788/AOS231855
    Chuanjiang Liu, Ao Wang, Genyuan Zhang, Wei Yuan, Fenglin Liu. Source Blur Elimination in Micro-CT Using Self-Attention-Based U-Net[J]. Acta Optica Sinica, 2024, 44(7): 0734002

    Abstract

    Objective

    The spatial resolution of an X-ray imaging system is crucial for studying microstructured objects because the features of interest are small. In micro-computed tomography (micro-CT), the focal spot size of the X-ray source is a main factor limiting spatial resolution: a finite focal spot produces penumbra blur on the detector, which blurs the reconstructed images and reduces spatial resolution. Reducing the focal spot size by lowering the X-ray tube power is a straightforward remedy, but it prolongs the scan. We therefore aim to develop a deep learning-based strategy that learns the inverse of the finite focal spot model to mitigate penumbra blur, so that CT images with high spatial resolution can be obtained even with a non-ideal X-ray source.
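
    For intuition, the penumbra grows with the geometric magnification (a standard cone-beam relation; the numbers below are hypothetical and not taken from the paper). With focal spot width f = 5 μm, source-object distance SOD = 10 mm, and source-detector distance SDD = 100 mm, the magnification is M = SDD/SOD = 10, and

        penumbra on the detector      ≈ f (M − 1)   = 5 μm × 9      = 45 μm
        penumbra referred to object   ≈ f (M − 1)/M = 45 μm / 10    = 4.5 μm

    so shrinking the focal spot directly shrinks the blur, which is why the focal spot size limits the achievable spatial resolution.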

    Methods

    First, we derive the finite focal spot model, which relates the ideal point-source projection to the finite focal spot projection. Based on this model, we numerically compute a paired projection dataset. Second, we combine the U-net with an attention mechanism, the convolutional modulation block, to build a self-attention-based U-net (SU-net) that learns the inverse of the finite focal spot model; the goal is to estimate the ideal point-source projection from the actual non-ideal focal spot projection. The proposed SU-net (Fig. 1) introduces convolutional modulation blocks into the contracting path of the U-net to boost its performance, as sketched below. Finally, the standard filtered back-projection (FBP) algorithm is used to reconstruct the image from the estimated ideal point-source projection.
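
    The abstract does not give layer-level details of the convolutional modulation block, so the following PyTorch sketch only illustrates the general convolutional-modulation idea (a large-kernel depthwise branch gating a 1×1 "value" branch, in the style of Conv2Former) and how such a block could sit in a contracting-path stage; all layer choices, kernel sizes, and channel counts are assumptions rather than the authors' configuration.

        import torch
        import torch.nn as nn

        class ConvMod(nn.Module):
            """Illustrative convolutional modulation block: a depthwise
            large-kernel branch produces attention-like weights that gate a
            1x1 value branch, followed by a 1x1 projection and a residual."""
            def __init__(self, channels, kernel_size=11):
                super().__init__()
                self.norm = nn.BatchNorm2d(channels)
                self.attn = nn.Sequential(
                    nn.Conv2d(channels, channels, 1),
                    nn.GELU(),
                    nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels),  # depthwise
                )
                self.value = nn.Conv2d(channels, channels, 1)
                self.proj = nn.Conv2d(channels, channels, 1)

            def forward(self, x):
                y = self.norm(x)
                y = self.attn(y) * self.value(y)   # element-wise modulation
                return x + self.proj(y)            # residual connection

        class EncoderStage(nn.Module):
            """One contracting-path stage: double convolution, then a
            ConvMod block, then 2x downsampling; the pre-pool features are
            returned as the skip connection for the expanding path."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.block = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                    ConvMod(out_ch),
                )
                self.down = nn.MaxPool2d(2)

            def forward(self, x):
                skip = self.block(x)
                return self.down(skip), skip

        # quick shape check on a dummy projection patch
        stage = EncoderStage(1, 32)
        down, skip = stage(torch.randn(1, 1, 128, 128))
        print(down.shape, skip.shape)  # (1, 32, 64, 64) and (1, 32, 128, 128)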

    Results and Discussions

    Simulation experiments are performed on the public 2DeteCT dataset, which contains a wide variety of dried fruits, nuts, and different types of rocks, to verify the effectiveness of the SU-net. Two groups of results are randomly selected from the test dataset for visualization (Fig. 2), and quantitative indicators are computed over the whole test dataset (Fig. 3). The results show that the proposed SU-net can estimate the ideal point-source projection from the non-ideal focal spot projection. To verify its robustness, we test the SU-net on data outside the simulation dataset (Fig. 4); the results show that it generalizes better than the end-to-end enhanced super-resolution generative adversarial network (ESRGAN). An ablation experiment, conducted with the same dataset and experimental parameters as the simulation experiment, confirms the validity of the added convolutional modulation (CM) block and gradient deviation loss, with quantitative indicators measured (Table 1); both additions improve network performance. Practical experiments evaluate the SU-net algorithm on real data (Fig. 5). Since label data are difficult to obtain in the actual experiment, we select three evaluation indicators that require no label data (Table 2): the perception-based image quality evaluator (PIQE), the natural image quality evaluator (NIQE), and a discrete cosine transform (DCT) based image sharpness evaluation function. The results show that the proposed SU-net algorithm achieves the best results among the compared methods.
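
    PIQE and NIQE are standard no-reference metrics; the exact form of the DCT sharpness function used in the paper is not specified in the abstract, so the sketch below shows only one common variant (the fraction of spectral energy outside the lowest DCT coefficients) purely for illustration; the cutoff value is an assumption.

        import numpy as np
        from scipy.fft import dctn
        from scipy.ndimage import gaussian_filter

        def dct_sharpness(image, cutoff=8):
            """No-reference sharpness score: fraction of DCT energy lying
            outside the lowest `cutoff` x `cutoff` coefficients. Higher
            values indicate more high-frequency content, i.e. a sharper image."""
            img = np.asarray(image, dtype=np.float64)
            coeffs = dctn(img, norm="ortho")        # 2-D DCT of the whole image
            energy = coeffs ** 2
            total = energy.sum()
            low = energy[:cutoff, :cutoff].sum()    # low-frequency (smooth) part
            return (total - low) / total if total > 0 else 0.0

        # example: a blurred copy should score lower than the original
        rng = np.random.default_rng(0)
        img = rng.random((256, 256))
        print(dct_sharpness(img), dct_sharpness(gaussian_filter(img, sigma=2)))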

    Conclusions

    In micro-CT imaging, the focal spot of an actual X-ray source has a finite size. With a relatively large focal spot, the projection images are blurred, and reconstructing the measured projections directly with a CT algorithm based on the point-source model yields blurred images. We propose a U-net based on the self-attention mechanism to estimate the ideal point-source projection from the actually measured non-ideal focal spot projection, and we build a training dataset from the relationship between the non-ideal focal spot projection and the ideal point-source projection to optimize the network. Simulation and practical experiments show that the method can effectively estimate clear projections from blurred ones. One advantage of the proposed method is that the dataset can be constructed from the relationship between the finite focal spot projection model and the ideal point-source projection model (a simulated pairing is sketched below), without collecting data pairs of non-ideal focal spot projections and ideal point-source projections, which greatly reduces the difficulty of constructing datasets. Second, because the network is built directly on the relationship between the finite focal spot projection model and the ideal point-source projection model, it is strongly interpretable: the network learns the inverse mapping from the finite focal spot model to the ideal point-source model. The method therefore generalizes better than the end-to-end ESRGAN, especially for CT images requiring high fidelity of image details. A limitation is that training is conducted for a specific focal spot size and a specific scanning geometry, without considering the influence of noise. Subsequent studies will train networks for different focal spot sizes and geometric parameters and will consider noisy situations.
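
    The exact form of the finite focal spot model is not given in the abstract, so the following sketch only illustrates the general idea of generating training pairs: under a common simplification, the finite focal spot projection is approximated by convolving the ideal point-source projection with the focal spot intensity distribution scaled by (SDD − SOD)/SOD onto the detector. The Gaussian kernel shape, sizes, and geometry values below are all assumptions, not the authors' model.

        import numpy as np
        from scipy.ndimage import convolve

        def focal_spot_kernel(width_pixels):
            """Gaussian stand-in for the focal spot intensity distribution,
            already scaled to detector pixels (illustrative choice)."""
            radius = max(1, int(3 * width_pixels))
            x = np.arange(-radius, radius + 1)
            xx, yy = np.meshgrid(x, x)
            k = np.exp(-(xx**2 + yy**2) / (2 * width_pixels**2))
            return k / k.sum()

        def make_training_pair(ideal_projection, spot_width_mm, pixel_mm, sod, sdd):
            """Return a (blurred, ideal) pair: the ideal point-source projection
            is blurred by the focal spot footprint magnified onto the detector."""
            footprint_mm = spot_width_mm * (sdd - sod) / sod   # penumbra scaling
            kernel = focal_spot_kernel(footprint_mm / pixel_mm)
            blurred = convolve(ideal_projection, kernel, mode="nearest")
            return blurred, ideal_projection

        # hypothetical geometry: 5 um spot, 50 um detector pixels, SOD 10 mm, SDD 100 mm
        ideal = np.random.default_rng(1).random((256, 256))
        blurred, label = make_training_pair(ideal, 0.005, 0.05, 10.0, 100.0)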
