• Journal of Infrared and Millimeter Waves
  • Vol. 43, Issue 2, 288 (2024)
Yuan-Yuan LI1,2,3, Yong-Chun YUAN1, Li-Hua RUAN1,3, Qing-Qing ZHAO1, and Tao ZHANG1,2,3,*
Author Affiliations
  • 1Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
  • 2School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  • 3University of Chinese Academy of Sciences, Beijing 100049, China
    DOI: 10.11972/j.issn.1001-9014.2024.02.019
    Yuan-Yuan LI, Yong-Chun YUAN, Li-Hua RUAN, Qing-Qing ZHAO, Tao ZHANG. A microscopic image enhancement method for cell experiments in space[J]. Journal of Infrared and Millimeter Waves, 2024, 43(2): 288

    Abstract

    High image quality is crucial for cell experiments in space, where remote monitoring is required to track the progress and direction of the experiments. However, due to space limitations and environmental factors, the performance of the imaging equipment is strongly constrained, which directly affects the imaging quality and the observation of cultivated targets. Moreover, experimental analysis on the ground involves tasks such as feature extraction and cell counting, and uneven lighting can seriously hinder such computer processing. Therefore, a method called STAR-ADF is proposed. Experimental results show that the proposed method effectively removes noise, equalizes illumination, and increases the enhancement evaluation index by 12.5% relative to the original images, demonstrating a degree of robustness.

    Introduction

    Numerous cell experiments conducted in space aim to study the rhythms of cell growth and differentiation, the biological effects of the space environment, and the impact of microgravity on cellular tissues1. These experiments contribute to a better understanding of the effects of microgravity on living organisms. During in situ cell culture, the assessment of experimental progress and subsequent strategic adjustments rely heavily on the cell morphology observed at distinct periods2. Hence, it is imperative for the camera apparatus to offer high resolution, high sensitivity, and low noise to ensure the clarity and accuracy of captured images.

    In 2022, Shanshan He et al.3 showcased the use of CosMx™ SMI for spatial molecular imaging, demonstrating its exceptional sensitivity and an impressively low rate of cell recognition errors. Space cell culture has also been venturing into 3D organ cultivation: the University of Zurich successfully transported a dedicated space vehicle to the International Space Station (ISS) in March 2020. However, achieving high-quality 3D imaging in the space environment continues to pose challenges for current applications.

    Imaging devices and detectors are subject to the unique conditions of the space environment, including strong radiation, vacuum, and temperature fluctuations, which differ markedly from conditions on Earth. These conditions can introduce intricate noise patterns and uneven illumination in the grayscale output, thereby diminishing the quality and accuracy of the images. Such degradation has far-reaching effects on scientific observation and analysis, so it is imperative to employ suitable techniques to enhance the quality of detector output data.

    For the two image tasks of highlight removal and denoising, state-of-the-art algorithms fall mainly into traditional methods and machine-learning-based methods. Soleimani et al.4 removed the effects of light with adaptive threshold techniques and denoised with BM3D filtering to achieve better cell segmentation; however, this approach may lose target details. Park et al.5 proposed a dual autoencoder network based on the Retinex theory for enhancing and denoising low-light images. Based on the U-net model, Ai et al.6 employed a machine learning approach, using short-exposure and long-exposure images as input and ground truth, respectively, for training. The results demonstrate the network's ability to achieve balanced illumination while enhancing details in dark regions. Recently, Generative Adversarial Networks (GANs) have also been explored for unsupervised image enhancement. Ying et al.7 proposed the LE-GAN method, which incorporates an illumination-aware module into the GAN framework, effectively addressing noise and overexposure in real-scene datasets. Nonetheless, these network models still require standardized datasets, which poses a challenging problem in resource-limited space science experiments.

    Based on the Retinex theory, we present a method called STAR-ADF for enhancing microscopic images obtained from space life science experiments. The method preserves image details while effectively addressing brightness and noise, providing a valuable and practical tool for scientists analyzing space experiment data. Compared with other methods, STAR-ADF demonstrates superior contrast enhancement, making image features clearer and more prominent. By applying STAR-ADF, we obtain more accurate and reliable image results, supporting the research and analysis of space life science experiments.

    1 Data acquisition

    This study utilized a custom-designed visible-light microscopy camera equipped with a detector with a resolution of 1 294×1 024 pixels. The experimental data were derived from life science experiments conducted in space. The datasets consist of brightfield microscopic images of mesenchymal stem cells, osteoblasts, pluripotent stem cells, liver stem cells, germ cells, human embryonic stem cells, and mouse embryonic stem cells under diverse experimental conditions. To illustrate the uneven illumination of the detector output, we show the relationship between the light source of the brightfield microscope and the cell culture region in Fig. 1.

    Figure 1.Schematic diagram of the spatial microscope camera

    2 Methods

    We propose a novel method for targeted improvement of image quality through the integration of the structure and texture aware Retinex (STAR) model8 and nonlinear anisotropic diffusion filtering, as shown in Fig. 2. Two input/output examples in the pipeline are displayed below each module: the first row shows the illumination being made noticeably uniform, while the second row shows notable noise removal. Initially, we employ the STAR model to equalize the brightness of the images while minimizing the loss of detail. By using an exponential total variation weighting matrix, the STAR model effectively incorporates image texture information and thereby preserves detail. Secondly, anisotropic filtering is employed to reduce the impact of noise on image quality; it adjusts the filter response according to the degree of difference between pixels in the image. The principle is to use local image features to preserve edges and details while suppressing noise in flat regions, ultimately improving the overall visual quality of the image. By combining the advantages of the STAR model and anisotropic filtering, our approach enhances image quality with improved sharpness, reduced noise, and better visual appearance.

    Figure 2.Proposed methodological framework diagram

    2.1 Theoretical model

    In 1971, Land et al.9 proposed the Retinex mechanism. The light information received by the detector is determined by the light intensity and the reflection coefficient of the target surface. Suppose that L is the illumination information, R is the reflection coefficient, and I is the data received by the detector; the relationship between the three is:

    $$I(i,j) = L(i,j)\,R(i,j),$$

    where (i,j) indexes the pixels of the image, and I, L, and R represent the original image, the extracted light layer, and the reflection layer, respectively. Bright regions in the image result from uneven illumination, i.e., the non-uniform brightness values expressed in the light layer10. For the extraction of illumination components, this paper casts the problem as an optimization, alternately and iteratively minimizing the difference between the input image and the product of the estimated light and reflection layers. To prevent overfitting and ensure that the estimated light and reflection layers are meaningful, regularization terms are needed to constrain the solution space. The total variation (TV) algorithm11 is a denoising algorithm widely used in image processing: by minimizing the ℓ1 norm of the image gradient, the image is smoothed while edge information is maintained12-13. Building on this foundation, the optimization problem is reformulated with the total variation regularization constraint, yielding the following expression:

    $$\min_{L,R}\ \left\| I(i,j) - L(i,j)R(i,j) \right\| + \mathrm{TV}(L) + \mathrm{TV}(R),$$

    where TV is the total variation of the image, with $\mathrm{TV}(L)=\sum_{i,j}\left|\nabla L(i,j)\right|$ and $\mathrm{TV}(R)=\sum_{i,j}\left|\nabla R(i,j)\right|$.
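For intuition, the discrete ℓ1 total variation above can be computed in a few lines of NumPy. This is an illustrative sketch, not the paper's MATLAB implementation:

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Discrete l1 total variation: sum of absolute horizontal
    and vertical finite differences of the image."""
    dx = np.abs(np.diff(img, axis=1))  # horizontal gradient magnitudes
    dy = np.abs(np.diff(img, axis=0))  # vertical gradient magnitudes
    return float(dx.sum() + dy.sum())

flat = np.ones((4, 4))                       # constant image: zero TV
step = np.zeros((4, 4)); step[:, 2:] = 1.0   # one vertical step edge
print(total_variation(flat))  # 0.0
print(total_variation(step))  # 4.0 — the unit edge is counted once per row
```

A flat region contributes nothing, while each edge crossing contributes its jump height, which is why TV smooths flat areas without erasing edges.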

    However, in detail-rich cell microscopic images, total variation filtering may excessively suppress high-frequency details19. To improve its performance, an exponential total variation method is proposed (see Eq. (3)), in which values are assigned to the exponential parameters α and β to adjust the filtering effect in a targeted manner. Since the edge gradient of a target is usually larger than the detail gradient, edges are amplified when the exponent is greater than 1, and details are amplified when the exponent is less than 1. Our experiments show that the exponential total variation method works well for cell microscopic image enhancement, balancing strong-light removal, edge enhancement, and detail preservation to improve image quality while maintaining texture richness.

    $$\min_{L,R}\ \left\| I(i,j) - L(i,j)R(i,j) \right\| + \lambda\,\mathrm{ETV}(L) + \mu\,\mathrm{ETV}(R),$$

    where ETV denotes the exponential total variation of the image, obtained by adding the weight matrices $W_L = 1/\left(\left|\nabla L(i,j)\right|^{\alpha}+\varepsilon\right)$ and $W_R = 1/\left(\left|\nabla R(i,j)\right|^{\beta}+\varepsilon\right)$, so that $\mathrm{ETV}(L)=\sum_{i,j} W_L\left|\nabla L(i,j)\right|$ and $\mathrm{ETV}(R)=\sum_{i,j} W_R\left|\nabla R(i,j)\right|$. The illumination component and the uniform illumination result are then solved iteratively in vector form:

    $$l = \arg\min_{l}\ \left\| i - \mathrm{diag}(r)\,l \right\|_2^2 + \lambda \left\| \mathrm{diag}(W_L)\,v_l \right\|_2^2,$$
    $$r = \arg\min_{r}\ \left\| i - \mathrm{diag}(l)\,r \right\|_2^2 + \mu \left\| \mathrm{diag}(W_R)\,v_r \right\|_2^2,$$

    where $i=\mathrm{vec}(I)$, $l=\mathrm{vec}(L)$, $r=\mathrm{vec}(R)$, $v_l=\mathrm{vec}(\nabla L)$, $v_r=\mathrm{vec}(\nabla R)$, and $\mathrm{diag}(\cdot)$ converts a vector into a diagonal matrix. The estimated components are recovered by reshaping: $\hat{L}=\mathrm{resize}(l)$, $\hat{R}=\mathrm{resize}(r)$.
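A toy version of this alternating scheme can be sketched in NumPy. This is a loose illustration only: the exact weighted linear solves of the two vectorized sub-problems above are replaced by a simple local-mean smoothing prior on L, and all names are our own:

```python
import numpy as np

def decompose(I, iters=30, lam=0.5, eps=1e-4):
    """Alternately estimate a smooth illumination layer L and a
    reflectance layer R such that I ≈ L * R (elementwise)."""
    L = I.astype(float).copy()
    for _ in range(iters):
        # crude smoothness prior on L: blend with the 4-neighbour mean
        # (a stand-in for the exact weighted-TV linear solve)
        p = np.pad(L, 1, mode="edge")
        mean4 = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        L = (1.0 - lam) * L + lam * mean4
        # data-fit update: R absorbs the residual so that L * R ≈ I
        R = I / (L + eps)
    return L, R

rng = np.random.default_rng(0)
I = 0.5 + 0.4 * rng.random((16, 16))   # synthetic positive image
L, R = decompose(I)                    # L is smooth; L * R reconstructs I
```

The alternation mirrors the paper's scheme: one sub-step enforces the data fit, the other enforces smoothness of the illumination layer.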

    The result is normalized (see Eq. (6)), where std indicates the standard deviation of the image:

    $$R_n = \frac{\hat{R}}{\mathrm{std}(\hat{R})}.$$

    The principle of anisotropic filtering14 is to treat the image as a heat field: the distribution of pixel values is the heat distribution in space, heat diffuses from high temperature to low temperature, and conduction stops at contour boundaries in the image15. By computing the gradient between a pixel and its four neighbors, the thermal conductivity of the pixel in each spatial direction can be obtained. $A_D(R_n(i,j))$ is the anisotropy measure at $(i,j)$:

    $$A_D\left(R_n(i,j)\right) = e^{-\left(D R_n(i,j)\right)^2 / t},$$

    where $R_n(i,j)$ is the gray value at pixel $(i,j)$ of the image to be denoised, $D$ represents the direction, taken over the four neighbors to the east, west, south, and north, and $D R_n(i,j)$ is the gradient of the point in that direction. The filter weight of each pixel (denoted $A(\cdot)$) is calculated from the anisotropy metric. The filtering direction of anisotropic filtering is determined from the gradient information and the noise level of the image; in general, filtering along the edges better preserves edge details. The image is then filtered along the determined direction:

    $$O(i,j) = \sum_{d=1}^{4} A_{D(d)}\left(R_n(i,j)\right)\, D_d R_n(i,j).$$
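The anisotropy measure and four-direction update above correspond to the classical Perona-Malik diffusion scheme. A minimal NumPy sketch, with parameter names and the step size being our own choices:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, k=0.1, step=0.2):
    """Four-neighbour anisotropic diffusion: the conductance
    exp(-(grad/k)^2) vanishes across strong edges, so flat regions
    are smoothed while contours are preserved."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(out, 1, mode="edge")
        # directional differences to the four neighbours (N, S, E, W)
        dn = p[:-2, 1:-1] - out
        ds = p[2:, 1:-1] - out
        de = p[1:-1, 2:] - out
        dw = p[1:-1, :-2] - out
        # anisotropy measure per direction: near 1 in flat areas,
        # near 0 across strong edges
        cn, cs = np.exp(-(dn / k) ** 2), np.exp(-(ds / k) ** 2)
        ce, cw = np.exp(-(de / k) ** 2), np.exp(-(dw / k) ** 2)
        # weighted four-direction diffusion update
        out += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
```

With k well below the edge height, small-amplitude noise diffuses away while a unit step edge is essentially untouched.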

    The proposed method pursues a dual objective: smoothing image pixels while retaining maximal edge information. To this end, the filtering operation is applied iteratively to progressively diminish noise artifacts in the image. The output at this stage then undergoes further processing with contrast limited adaptive histogram equalization (CLAHE) to enhance the image's contrast16.
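CLAHE itself tiles the image and interpolates between tile histograms; the contrast-limiting idea alone can be sketched as a global clipped histogram equalization. This is an illustrative simplification, not the CLAHE implementation used in the paper:

```python
import numpy as np

def clipped_hist_equalize(img_u8, clip_frac=0.02):
    """Histogram equalization with a clip limit: per-bin counts above
    clip_frac * npixels are clipped and the excess is redistributed
    uniformly, which caps the slope of the grey-level mapping."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    limit = clip_frac * img_u8.size
    excess = np.clip(hist - limit, 0.0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img_u8]
```

A low-contrast image with values in, say, [100, 150] is stretched toward the full [0, 255] range, with the clip limit damping the stretch in over-populated bins.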

    In addition, for objective evaluation, three key indicators are used to assess how well the various methods meet the task requirements: image entropy, contrast ratio (EME), and image similarity (SSIM). Firstly, entropy is used to quantify the information loss before and after image processing (Eq. (9) and Eq. (10)). Higher entropy values indicate a more uniform and complex distribution of pixel values, signifying more detail and information within the image17,20. Conversely, lower entropy values suggest a more concentrated and simpler distribution, indicating relatively less information. Let $k(v)$ denote the number of pixels with value $v$; the frequency of that pixel value is then:

    $$p(v) = \frac{k(v)}{H \times W},$$

    where H and W represent the height and width of the image, respectively. The image entropy is calculated using the following formula:

    $$\mathrm{Entropy} = -\sum_{v} p(v)\log_2 p(v).$$
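Eqs. (9)-(10) translate directly into code; a sketch for an 8-bit image:

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy of the grey-level distribution:
    -sum p(v) * log2 p(v), with p(v) = k(v) / (H * W)."""
    counts = np.bincount(img_u8.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]              # empty bins contribute nothing
    return float(-(p * np.log2(p)).sum())

two_tone = np.array([[0, 255], [0, 255]], dtype=np.uint8)
print(image_entropy(two_tone))  # 1.0 — two equiprobable levels = 1 bit
```

A constant image scores 0 bits; a uniform distribution over all 256 levels scores the maximum of 8 bits.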

    Secondly, we use the Enhancement Measure Evaluation (EME) as an index to quantify the improvement in image contrast after processing. The image is divided into M×N small areas, the ratio of the largest to the smallest gray value is computed in each area, and the evaluation result is the logarithmic mean over all areas:

    $$\mathrm{EME} = \frac{1}{M \times N}\sum_{m=1}^{M}\sum_{n=1}^{N} 20\log\frac{\max I_{m,n}}{\min I_{m,n}},$$

    where $\max I_{m,n}$ and $\min I_{m,n}$ are the maximum and minimum gray values within image block $(m,n)$.
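The EME definition above can be sketched in NumPy; the block counts and the ε guard against division by zero are our own choices:

```python
import numpy as np

def eme(img, blocks=(4, 4), eps=1e-6):
    """Enhancement Measure Evaluation: mean over M x N blocks of
    20 * log10(max / min) of the grey values in each block."""
    M, N = blocks
    vals = []
    for rows in np.array_split(img.astype(float), M, axis=0):
        for block in np.array_split(rows, N, axis=1):
            vals.append(20.0 * np.log10((block.max() + eps) /
                                        (block.min() + eps)))
    return float(np.mean(vals))
```

A constant image scores 0; a block whose grey values span a factor of 10 contributes roughly 20 dB.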

    Lastly, we adopt the Structural Similarity (SSIM) as the metric quantifying the similarity between images before and after processing. Let $I_o$ and $I_e$ denote the original image and the evaluated image, respectively; SSIM then quantifies their degree of similarity as:

    $$\mathrm{SSIM} = \frac{\left(2\,\mathrm{mean}(I_o)\,\mathrm{mean}(I_e)+c_1\right)\left(2\,\mathrm{Cov}(I_o,I_e)+c_2\right)}{\left(\mathrm{mean}(I_o)^2+\mathrm{mean}(I_e)^2+c_1\right)\left(\mathrm{sigma}(I_o)^2+\mathrm{sigma}(I_e)^2+c_2\right)},$$

    where mean is the mean value of the image, sigma is its standard deviation, and Cov denotes the covariance between the two images. The constants $c_1$ and $c_2$ are introduced to avoid division by zero, with $c_x = (k_x \max(I))^2$, $x = 1$ or $2$. For 8-bit grayscale images, $\max(I) = 255$.
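A single-window (global) version of the formula above in NumPy — standard SSIM implementations average this statistic over local windows, so this sketch computes only the global score:

```python
import numpy as np

def ssim_global(a, b, k1=0.01, k2=0.03, max_i=255.0):
    """Global SSIM: means, variances and covariance of the two
    images, with stabilizers c_x = (k_x * max(I))^2."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (k1 * max_i) ** 2, (k2 * max_i) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)))
```

Identical images score exactly 1; the score falls as the structure of the two images diverges.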

    2.2 STAR-ADF method flow

    The STAR-ADF method is specifically designed to improve the quality and contrast of microscopic images for better visualization of cells and microstructures. The flowchart of the STAR-ADF method is illustrated in Fig. 3. For the uniform illumination task, light-layer separation based on the Retinex theory finds wide application. Given our emphasis on detail preservation, we address this challenge by creating separate maps for the light layer and the reflection layer, using the spatial properties of the image. To maintain rich texture information in microscopic images, we explored filtering methods in both the frequency and spatial domains. We initially attempted high-frequency component enhancement filtering on the frequency-domain image after a Fourier or wavelet transform, but observed significant distortion in the resulting images. For microscopic images, we want the target area to be as smooth as possible while keeping contours and details unsmoothed. We therefore select a total variation model and optimize it by gradient descent to obtain the optimal solution. Total variation models are widely used in image processing; their advantage lies in smoothing flat areas while preserving edges and texture details, effectively improving image quality and retaining important information. Our method extends the total variation optimization model with exponential parameters, allowing separate extraction of the illumination structure and reflection texture of the image. These results generate a structure map and a reflection map, which serve as weight matrices providing measurements of regional structure and texture within the image. Satisfactory smoothness and the light layer can be obtained with this approach, but the scattered noise spots in the image still require further treatment.

    Figure 3.Flow chart of the STAR-ADF method

    Compared with other denoising methods, and based on analysis of our image characteristics and preliminary trials, anisotropic filtering is the most suitable for our data. Anisotropic filtering, a nonlinear method commonly employed for image denoising, proceeds as follows. First, the gradient magnitude and orientation of each pixel in the grayscale image are calculated to capture edge and texture information. Based on this gradient information, a weight coefficient is computed that reflects the structural features around each pixel. During filtering, this coefficient guides the adjustment of smoothing strength in different directions. A sliding window then computes a weighted combination of the pixel values in a local neighborhood of each pixel. The weight coefficients make the filter more sensitive to pixel-value variations along edges and texture directions while ensuring smoother transitions in flat regions. This preservation of edge and texture information enhances the overall image quality, and the filtered image attains a higher level of visual fidelity.

    3 Analysis of experimental results

    In this section, we present a comprehensive performance analysis of the proposed method, evaluating its effectiveness from both subjective and objective perspectives. The test dataset consists of image data derived from the various life science experiments discussed above. The experiments are performed in MATLAB on a PC with a 3.70 GHz CPU and 16.00 GB RAM.

    Firstly, our method decomposes the image into the illumination layer and reflection layer, shown in Figs. 4(a) and 4(b), respectively. The extracted weighted matrix map is shown in Fig. 4(c).

    Figure 4.Decomposition images:(a) reflection layer;(b) illumination layer;(c) weighted matrix map

    Methods     Entropy    Eme       Ssim
    Original    6.9414     7.2372    1.0000
    MSR         5.3114     2.1022    0.9947
    LIME        6.2259     6.7345    0.9934
    Jiep        4.9959     2.2590    0.9925
    Proposed    6.6730     8.1416    0.9962

    Table 2. Comparison results of objective evaluation index methods

    Entropy of L    α=0.1     α=0.2     α=0.5
    β=1.5           6.8121    6.8121    6.8121
    β=2             6.8158    6.8158    6.8158
    β=4             6.8255    6.8255    6.8255

    Entropy of R    α=0.1     α=0.2     α=0.5
    β=1.5           5.9143    5.9140    5.9124
    β=2             5.9320    5.9317    5.9304
    β=4             5.9281    5.9279    5.9265

    Table 1. Entropy of different exponential parameters

    The exponential parameters are determined through a series of experiments. A parameter range is first defined, followed by iterative testing and evaluation of entropy values to identify the optimal parameters. The entropy outcomes for various parameter choices are presented in Table 1. Favorable outcomes are characterized by higher entropy for R and lower entropy for L. Consequently, α=0.1 and β=2 are deemed most fitting for our datasets.

    The visual outcomes of our experiments are illustrated in Fig. 5. Notably, when α=0.1 and β=2, the reflection layer exhibits more pronounced detail, while the illumination layer appears smoother.

    Figure 5.Comparison of exponential parameter results:(a) reflection layer results;(b) illumination layer results

    The histogram serves as a statistical representation of the grayscale values in the image, facilitating an evaluation of the effect of illumination uniformization, as illustrated in Figs. 6(a) and 6(b). Two densely distributed subplots from the same image are selected for histogram analysis. In the original image (Fig. 6(a)), the blue box delineates a specific area, while the orange-yellow box indicates the magnified section of the image; the histogram information for these two regions is presented on the rightmost side. Notably, the distribution of luminance values in the immediate vicinity of the light source (blue box area) is relatively narrow, resulting in a smaller contrast measurement EME value18 than in the surrounding boundary area (orange-yellow box area). The original image (Fig. 6(c)) exhibits fringe noise artifacts, shown with local area magnification, and Fig. 6(d) presents the result obtained after applying the STAR-ADF treatment.

    Figure 6.Illumination uniformization and denoising results:(a) the original image and the local indicator evaluation;(b) the STAR-ADF enhanced image and the local indicator evaluation;(c) the original image and the extraction;(d) the STAR-ADF enhanced image and the extraction

    To rigorously assess the efficacy of our proposed method, we conduct a comprehensive comparison against three distinct lighting decomposition approaches, as illustrated in Fig. 7. To ensure comparability, consistent contrast enhancement operations are applied to all datasets. The existing methods show some adaptability to different target characteristics, but noteworthy disparities emerge when they are confronted with varying degrees and positions of uneven lighting.

    Figure 7.Results of four different cell experiments by different methods

    Specifically, the MSR method21-22 may over-enhance regions with darker edges, producing noticeably bright edge artifacts in the image. Both the MSR method and the LIME method24 show limited adaptability to regions with high brightness, making it difficult to achieve uniform illumination in areas directly exposed to the light source. The Jiep method23 achieves desirable brightness uniformity but compromises detail preservation, falling short of our proposed method in this respect. Consequently, our approach excels in both illumination uniformity and detail preservation, affording significant advantages over the compared methods.

    We evaluate each method on 20 test images; the results are shown in Table 2. The objective evaluation indices show that our proposed method performs well on entropy, EME, and SSIM. This indicates that, compared with the other three lighting-decomposition enhancement methods, our method achieves higher information retention and a stronger enhancement effect while maintaining the structural and detail similarity of the image.

    4 Conclusions

    In this paper, we present a novel image enhancement method termed STAR-ADF, which demonstrates excellent application results on images from spatial cell tissue experiments. The proposed method addresses the challenges of uneven illumination and fringe noise that commonly arise in the space environment, thereby improving the image quality required for cell tissue experiments. The method enables enhanced target identification, allowing scientists to observe images more clearly and facilitating subsequent computer-based recognition. Furthermore, objective evaluation indices substantiate the effectiveness of the proposed method: the results show a significant 12.5% improvement in the contrast evaluation index compared with the original image, while preserving essential image details. These findings provide empirical evidence of the efficacy and utility of the STAR-ADF method for enhancing the image quality of spatial cell tissue experiments.

    References

    [1] Ying-Hui LI, Ye-Qing SUN, Hui-Qiong ZHENG et al. Recent Review and Prospect of Space Life Science in China for 40 Years. Chinese Journal of Space Science, 41, 46-67(2021).

    [2] I Julien, A C Mazen, K Bartosz et al. Artificial-Intelligence-Based Imaging Analysis of Stem Cells: A Systematic Scoping Review. Biology, 11, 1412(2022).

    [3] Shan-Shan He, B Ruchir, B Carl et al. High-Plex Imaging of RNA and Proteins at Subcellular Resolution in Fixed Tissue by Spatial Molecular Imaging. Nature Biotechnology, 40, 1794-1806(2022).

    [4] S Soleimani, M Mirzaei, D C Toncu. A New Method of SC Image Processing for Confluence Estimation. Micron, 101, 206-212(2017).

    [5] P Seonhee, Y Soohwan, K Minseo et al. Dual Autoencoder Network for Retinex-Based Low-Light Image Enhancement. IEEE Access, 6, 22084-22093(2018).

    [6] S Ai, J Kwon. Extreme Low-Light Image Enhancement for Surveillance Cameras Using Attention U-Net. Sensors, 20, 495(2020).

    [7] Ying FU, Yang HONG, Lin-Wei CHEN et al. LE-GAN: Unsupervised Low-Light Image Enhancement Network Using Attention Module and Identity Invariant Loss. Knowledge-Based Systems, 240, 108010(2022).

    [8] Jun XU, Ying-Kun HOU, Dong-Wei REN et al. STAR: A Structure and Texture Aware Retinex Model. IEEE Transactions on Image Processing, 29, 5022-5037(2020).

    [9] E H Land, J J McCann. Lightness and Retinex Theory. Journal of the Optical Society of America, 61, 1-11(1971).

    [10] J J McCann. Do Humans Discount the Illuminant?. International Society for Optics and Photonics, 5666, 9-16(2005).

    [11] M K Ng, Wei Wang. A Total Variation Model for Retinex. SIAM J. Imag. Sci., 4, 345-365(2011).

    [12] M Song, M Kim. Gradient-Based Cell Localization for Automated Stem Cell Counting in Non-Fluorescent Images. Tissue Engineering and Regenerative Medicine, 11, 149-154(2014).

    [13] G D Finlayson, M S Drew, Cheng Lu. Entropy Minimization for Shadow Removal. International Journal of Computer Vision, 85, 35-57(2009).

    [14] S M Chao, D M Tsai. An Improved Anisotropic Diffusion Model for Detail- and Edge-Preserving Smoothing. Pattern Recognition Letters, 31, 2012-2023(2010).

    [15] B Riya Gupta, S S Lamba. An Efficient Anisotropic Diffusion Model for Image Denoising with Edge Preservation. Computers & Mathematics with Applications, 93, 106-119(2021).

    [16] L Shyam, C Mahesh. Efficient Algorithm for Contrast Enhancement of Natural Images. International Arab Journal of Information Technology, 11, 95-102(2014).

    [17] Meng-Qiu ZHU, Ling-Jie YU, Zong-Biao WANG et al. Review: A Survey on Objective Evaluation of Image Sharpness. Applied Sciences, 13, 2652(2023).

    [18] Wen-Cheng WANG, Xiao-Jin WU, Xiao-Hui YUAN et al. An Experiment-Based Review of Low-Light Image Enhancement Methods. IEEE Access, 8, 87884-87917(2020).

    [19] Xue-Yang FU, Pei-Xian ZHUANG, Yue HUANG et al. A Retinex-Based Enhancing Approach for Single Underwater Image, 4572-4576(2014).

    [20] T Celik, T Tjahjadi. Automatic Image Equalization and Contrast Enhancement Using Gaussian Mixture Modeling. IEEE Transactions on Image Processing, 21, 145-156(2012).

    [21] D J Jobson, Z Rahman, G A Woodell. A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes. IEEE Transactions on Image Processing, 6, 965-976(2002).

    [22] Kai-Qiang XU, C Jung. Retinex-Based Perceptual Contrast Enhancement in Images Using Luminance Adaptation(2017).

    [23] Bo-Lun CAI, Xiang-Min XU, Kai-Ling GUO et al. A Joint Intrinsic-Extrinsic Prior Model for Retinex, 4020-4029(2017).

    [24] Xiao-Jie GUO, Yu LI, Hai-Bin LING. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Transactions on Image Processing, 26, 982-993(2017).
