Yuan-Yuan LI, Yong-Chun YUAN, Li-Hua RUAN, Qing-Qing ZHAO, Tao ZHANG. A microscopic image enhancement method for cell experiments in space[J]. Journal of Infrared and Millimeter Waves, 2024, 43(2): 288
High image quality is crucial for cell experiments in space, where remote monitoring is required to grasp the progress and direction of the experiments. However, due to space limitations and environmental factors, the imaging equipment is strongly constrained in performance, which directly affects imaging quality and the observation of cultivated targets. Moreover, experimental analysis on the ground involves tasks such as feature extraction and cell counting, and uneven lighting can seriously hinder such computer processing. Therefore, a method called STAR-ADF is proposed. Experimental results show that the proposed method effectively removes noise, equalizes illumination, and increases the enhancement evaluation index by 12.5% compared with the original images, demonstrating a degree of robustness.
Numerous cell experiments conducted in space aim to study the rhythms of cell growth and differentiation, the biological effects of the space environment, and the impact of microgravity on cellular tissues[1]. These experiments contribute to a better understanding of the effects of microgravity on living organisms. During in situ cell culture, the assessment of experimental progress and subsequent strategic adjustments relies heavily upon the cell morphology observed at distinct periods[2]. Hence, it is imperative for the camera apparatus to possess high resolution, high sensitivity, and low noise levels to ensure the clarity and accuracy of captured images.
In 2022, Shanshan He et al.[3] showcased the utilization of CosMx™ SMI for spatial molecular imaging, revealing its exceptional sensitivity and an impressively low rate of cell recognition errors. Space cell culture has also been venturing into 3D organ cultivation, and the University of Zurich successfully transported a dedicated space vehicle to the International Space Station (ISS) in March 2020. However, achieving high-quality 3D imaging in the space environment continues to pose challenges for current applications.
Imaging devices and detectors are subject to the unique conditions of the space environment, including strong radiation, vacuum, and temperature fluctuations, which are quite different from the environment on Earth. These conditions can introduce intricate noise patterns and uneven illumination in the grayscale output, thereby diminishing the quality and accuracy of the images. Such degradation has far-reaching effects on scientific observation and analysis, so it is imperative to employ suitable techniques to enhance the quality of detector output data.
For the two image-processing tasks of highlight removal and denoising, state-of-the-art algorithms are mainly divided into traditional methods and machine-learning-based methods. Soleimani et al.[4] removed the effects of light with adaptive threshold techniques and denoised with BM3D filtering to achieve better cell segmentation; however, this method may lead to a loss of target details. Park et al.[5] proposed a dual autoencoder network model based on the Retinex theory for enhancing and denoising low-light images. Based on the U-net model, Ai et al.[6] employed a machine learning approach, utilizing short-exposure and long-exposure images as input and ground truth, respectively, for training. The results demonstrate the network's ability to achieve balanced illumination while enhancing details in dark regions. Recently, Generative Adversarial Networks (GANs) have also been explored as unsupervised image enhancement methods. Ying et al.[7] proposed the LE-GAN method, which incorporates an illumination-aware module into the GAN framework, effectively addressing noise and overexposure issues in real-scene datasets. Nonetheless, these network models still require standardized datasets, posing a challenging problem in resource-limited space science experiments.
Based on the Retinex theory, we present a method called STAR-ADF for enhancing microscopic images obtained from space life science experiments. This method not only preserves image details but also effectively addresses challenges associated with brightness and noise, providing a valuable and practical tool for scientists analyzing space experiment data. Compared with other methods, STAR-ADF demonstrates superior performance in contrast enhancement, making image features clearer and more prominent. By applying the STAR-ADF method, we obtain more accurate and reliable image results, which supports the research and analysis of space life science experiments.
1 Data acquisition
This study utilized a custom-designed visible-light microscopy camera equipped with a detector featuring a resolution of 1 294×1 024 pixels. The experimental data were derived from life science experiments conducted in space. The image datasets involved in the experiment are brightfield microscopic images, including mesenchymal stem cells, osteoblasts, pluripotent stem cells, liver stem cells, germ cells, human embryonic stem cells, and mouse embryonic stem cells under diverse experimental conditions. In order to show the uneven illumination of the detector output, we illustrate the relationship between the light source of the brightfield microscope and the cell culture region, as shown in Fig. 1.
Figure 1. Schematic diagram of the spatial microscope camera
We propose a novel method for targeted improvement of image quality, accomplished through the integration of the structure and texture aware Retinex (STAR) model[8] and nonlinear anisotropic diffusion filtering, as shown in Fig. 2. Two input and output examples of the pipeline are displayed below each module: the first row shows significantly more uniform illumination, while the second row shows notable noise removal. Initially, we employ the STAR model to adjust the brightness of the images uniformly and minimize the loss of image details. By utilizing an exponential total variation weighting matrix, the STAR model effectively incorporates image texture information, thereby preserving image details. Secondly, anisotropic filtering is employed to reduce the impact of noise on image quality; it adjusts the filter response based on the degree of difference between neighboring pixels. The principle is to use local image features to preserve the edges and details in the image while suppressing noise in flat regions, ultimately improving the overall visual quality of the image. By combining the advantages of the STAR model and anisotropic filters, our proposed approach enhances image quality, including improved sharpness, noise reduction, and enhanced visual appearance.
In 1971, Land et al.[9] proposed the Retinex mechanism. The light information received by the detector is determined by the light intensity and the reflection coefficient of the target surface. Suppose that L(x,y) is the illumination information, R(x,y) is the reflection coefficient, and correspondingly S(x,y) is the data received by the detector; the relationship between the three can be shown as follows:

S(x,y) = L(x,y) · R(x,y) ,  (1)
where the pixels of the image are represented as (x,y), and S, L, and R represent the original image, the extracted light layer image, and the reflection information layer, respectively. The generation of bright regions in the image results from the unevenness of illumination, that is, the non-uniform brightness values expressed in the light layer[10]. For the extraction of illumination components, this paper transforms the task into an optimization problem, alternately and iteratively minimizing the difference between the input image and the product of the estimated light and reflection layers. At the same time, in order to prevent overfitting and ensure that the estimated light and reflection layers are meaningful, it is necessary to use a regularization function to constrain the solution space. The total variation (TV) algorithm[11] is a denoising algorithm widely used in the field of image processing. By minimizing the ℓ1 norm of the image gradient, the image is made smoother while edge information is maintained[12-13]. Building upon this foundation, the optimization problem is reformulated after incorporating the total variational regularization constraint, yielding the following expression:
min_{L,R} ‖S − L∘R‖²_F + α‖∇L‖₁ + β‖∇R‖₁ ,  (2)

where ‖∇·‖₁ is the total variation of the image, ∇ = (∇_h, ∇_v) collects the horizontal and vertical gradients, and α and β are positive regularization weights for the illumination and reflection terms.
However, in detail-rich cell microscopic images, applying total variational filtering may result in excessive suppression of high-frequency details[19]. In order to improve the performance of the total variational algorithm, an exponential total variation method is proposed (see Eq. (3)). By assigning values to the exponential parameters γ_l and γ_r, the filter effect can be adjusted in a targeted manner. Since the edge gradient of the target is usually larger than the detail gradient, edges are emphasized when the exponent is greater than 1, and details are emphasized when the exponent is less than 1. Our experiments show that the exponential total variation method has a good effect in cell microscopic image enhancement, balancing strong-light removal, edge enhancement, and detail preservation to improve image quality while maintaining texture richness.
min_{L,R} ‖S − L∘R‖²_F + α‖U∘∇L‖₁ + β‖V∘∇R‖₁ ,  (3)

where |∇S|^γ denotes the exponentiated total variation of the image, and the weight matrices U and V are added. Therefore, we can derive that U = 1/(|∇S|^{γ_l} + ε) and V = 1/(|∇S|^{γ_r} + ε), with ε a small positive constant avoiding division by zero. Then the iterative solutions of the illumination component and the uniform illumination result are obtained in vector form:

l^{(t+1)} = (Diag(r^{(t)})² + α Dᵀ Diag(ũ^{(t)}) D)⁻¹ Diag(r^{(t)}) s ,  (4)

r^{(t+1)} = (Diag(l^{(t+1)})² + β Dᵀ Diag(ṽ^{(t)}) D)⁻¹ Diag(l^{(t+1)}) s ,  (5)

where we denote s = vec(S), l = vec(L), r = vec(R), ũ^{(t)} = vec(U) ⊘ (|D l^{(t)}| + ε), ṽ^{(t)} = vec(V) ⊘ (|D r^{(t)}| + ε), D is the matrix form of the gradient operator, and Diag(·) is the conversion of vectors into diagonal matrices. The estimated components can be expressed as L = vec⁻¹(l), R = vec⁻¹(r).
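The alternating scheme above can be sketched in Python. This is a toy dense NumPy implementation written purely for illustration: the function names, the default parameter values (α, β, γ_l, γ_r, ε), and the reweighted-ℓ1 approximation of the ‖·‖₁ terms are our assumptions, and a practical implementation would use sparse linear algebra as in the original STAR solver[8].

```python
import numpy as np

def gradient_matrix(h, w):
    """Dense forward-difference operator D (horizontal stacked on vertical)."""
    n = h * w
    dh = np.zeros((n, n))
    dv = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            p = i * w + j
            if j + 1 < w:              # horizontal gradient
                dh[p, p], dh[p, p + 1] = -1.0, 1.0
            if i + 1 < h:              # vertical gradient
                dv[p, p], dv[p, p + w] = -1.0, 1.0
    return np.vstack([dh, dv])

def star_decompose(S, alpha=1e-3, beta=1e-4, gamma_l=1.5, gamma_r=0.5,
                   iters=3, eps=1e-3):
    """Alternating estimation of light layer L and reflection layer R."""
    h, w = S.shape
    s = S.ravel()
    D = gradient_matrix(h, w)
    g = np.abs(D @ s)
    u = 1.0 / (g ** gamma_l + eps)     # structure-aware weight (gamma_l > 1)
    v = 1.0 / (g ** gamma_r + eps)     # texture-aware weight (gamma_r < 1)
    l, r = s.copy(), np.ones_like(s)
    for _ in range(iters):
        # reweighted l1: ||Diag(u) D l||_1 ~ l^T D^T Diag(u/(|Dl|+eps)) D l
        ut = u / (np.abs(D @ l) + eps)
        A = np.diag(r ** 2) + alpha * D.T @ np.diag(ut) @ D
        l = np.linalg.solve(A, r * s)                     # cf. Eq. (4)
        vt = v / (np.abs(D @ r) + eps)
        B = np.diag(l ** 2) + beta * D.T @ np.diag(vt) @ D
        r = np.linalg.solve(B, l * s)                     # cf. Eq. (5)
    return l.reshape(h, w), r.reshape(h, w)
```

The dense systems above are only practical for tiny images; for detector-sized frames the matrices D, Diag(·) are stored sparse and the systems are solved with a sparse or preconditioned conjugate-gradient solver.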
Normalize the results (see Eq. (6)), where μ(·) and σ(·) indicate the mean and the standard deviation of the image:

R̃ = (R − μ(R)) / σ(R) .  (6)
The principle of anisotropic filtering[14] is to treat the image as a heat field: the distribution of pixel values at each point is the heat distribution in space, heat diffuses from high temperature to low temperature, and heat conduction stops at contour boundaries in the image[15]. By calculating the gradients between a pixel and its four neighbors, the thermal conductivity of the pixel in each spatial direction can be solved. c_d is the anisotropy measure of the directional gradient ∇_d I:

c_d(i,j) = exp( −( |∇_d I(i,j)| / κ )² ) ,  (7)
where I(i,j) is the gray value of the (i,j)-th pixel of the image to be denoised, d represents the direction, with the four-neighborhood solution referring to east, west, south, and north (d ∈ {E, W, S, N}). ∇_d I(i,j) is the gradient of the point in direction d, κ is the conduction parameter, and the filter weight of each pixel (denoted as c_d) is calculated according to the anisotropy metric. According to the gradient information and noise level of the image, the filtering direction of anisotropic filtering is determined. In general, filtering along the edges of the image better preserves edge details. According to the determined filtering direction, the image is filtered iteratively:

I^{t+1}(i,j) = I^{t}(i,j) + λ Σ_{d∈{E,W,S,N}} c_d(i,j) ∇_d I^{t}(i,j) ,  (8)

where λ controls the diffusion rate.
The proposed method aims to achieve a dual objective: smoothing the image pixels while retaining the maximal edge information. To this end, multiple iterative filtering operations are applied to progressively diminish the noise artifacts present in the image. Subsequently, the output at this stage undergoes further processing, utilizing contrast limited adaptive histogram equalization (CLAHE) to enhance the image's contrast[16].
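The diffusion scheme of Eqs. (7)-(8) can be sketched with NumPy. The function name and the default values of κ (edge threshold) and λ (diffusion rate) are illustrative choices, not the parameters used in our experiments:

```python
import numpy as np

def anisotropic_diffusion(img, iters=10, k=15.0, lam=0.2):
    """Perona-Malik-style diffusion: smooth flat regions, stop at edges.

    lam must satisfy lam <= 0.25 for a stable explicit 4-neighbor update.
    """
    out = img.astype(float).copy()
    for _ in range(iters):
        # four-neighbor gradients (borders replicated, i.e. zero gradient)
        n = np.roll(out, -1, axis=0); n[-1] = out[-1]
        s = np.roll(out, 1, axis=0);  s[0] = out[0]
        e = np.roll(out, -1, axis=1); e[:, -1] = out[:, -1]
        w = np.roll(out, 1, axis=1);  w[:, 0] = out[:, 0]
        gn, gs, ge, gw = n - out, s - out, e - out, w - out
        # conduction coefficients c_d = exp(-(|grad_d| / k)^2), Eq. (7)
        cn, cs, ce, cw = (np.exp(-(g / k) ** 2) for g in (gn, gs, ge, gw))
        # explicit update, Eq. (8)
        out += lam * (cn * gn + cs * gs + ce * ge + cw * gw)
    return out
```

After several iterations, a CLAHE step (e.g. OpenCV's `cv2.createCLAHE` or scikit-image's `exposure.equalize_adapthist`) would be applied to the diffused result for contrast enhancement.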
In addition, for objective evaluation, three key evaluation indicators are used to assess the performance of the various methods in meeting the task requirements: image entropy, contrast ratio (EME), and image similarity (SSIM). Firstly, entropy is utilized to quantify the information loss before and after image processing (Eq. (9) and Eq. (10)). Higher image entropy values indicate a more uniform and complex distribution of pixel values, signifying a greater presence of details and information within the image[17,20]. Conversely, lower image entropy values suggest a more concentrated and simpler distribution of pixel values, indicating a relatively reduced amount of information in the image. Assuming that the number of occurrences of pixel value k is denoted as n_k, the frequency of that pixel value can be expressed as:

p_k = n_k / (H × W) ,  (9)
where H and W represent the height and width of the image, respectively. The image entropy is calculated using the following formula:

E = − Σ_k p_k log₂ p_k .  (10)
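Eqs. (9)-(10) amount to a histogram followed by a Shannon-entropy sum, sketched below for 8-bit images (the function name is ours):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # n_k for k = 0..255
    p = hist / img.size                              # p_k = n_k / (H * W)
    p = p[p > 0]                                     # 0 * log(0) := 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image yields entropy 0, while an image using all 256 gray levels equally often reaches the maximum of 8 bits.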
Secondly, we utilize Enhancement Measure Evaluation (EME) as an assessment index to quantify the improvement in image contrast achieved after processing. The calculation method involves dividing the image into M×N small areas, calculating the ratio of the largest gray value to the smallest gray value in each small area, and taking the logarithmic mean as the evaluation result:

EME = (1/(M·N)) Σ_{k=1}^{M} Σ_{l=1}^{N} 20 log( I_max^{k,l} / I_min^{k,l} ) ,  (11)
where I_max^{k,l} represents the maximum gray value within image block (k,l) and I_min^{k,l} is the minimum gray value within the block.
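A minimal implementation of the block-wise EME of Eq. (11); the grid size and the small ε guarding against zero-valued blocks are illustrative choices:

```python
import numpy as np

def eme(img, m=4, n=4, eps=1e-4):
    """Enhancement Measure Evaluation over an m x n grid of blocks."""
    h, w = img.shape
    bh, bw = h // m, w // n          # block size (remainder pixels ignored)
    total = 0.0
    for i in range(m):
        for j in range(n):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            # 20 * log10 of the max/min gray ratio in this block
            total += 20.0 * np.log10((block.max() + eps) / (block.min() + eps))
    return total / (m * n)
```

A perfectly flat image scores 0; higher local contrast raises the score, which is why EME serves as the contrast-enhancement index in Table 2.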
Lastly, we adopt the Structural Similarity (SSIM) as the evaluation metric to quantify the similarity between images before and after processing. Given the representations of the original image and the evaluated image as x and y, respectively, SSIM quantifies the degree of similarity between them as the following formula:

SSIM(x,y) = [(2 μ_x μ_y + C₁)(2 σ_{xy} + C₂)] / [(μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂)] ,  (12)
where μ_x and μ_y represent the mean values of the images, σ_x and σ_y represent the standard deviations, and σ_{xy} denotes the covariance between the two images. The constants C₁ and C₂ are introduced to avoid dividing by zero, with C₁ = (K₁ L_d)² and C₂ = (K₂ L_d)², where K₁ = 0.01 and K₂ = 0.03 by convention and L_d is the dynamic range. For 8-bit grayscale images, L_d = 255.
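Eq. (12) evaluated over the whole image can be sketched as follows. Note that library implementations (e.g. scikit-image's `structural_similarity`) compute SSIM over sliding windows and average, so their values differ from this single-window sketch:

```python
import numpy as np

def ssim_global(x, y, dyn_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM of Eq. (12) over two equally sized images."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (k1 * dyn_range) ** 2, (k2 * dyn_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()          # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1; any luminance, contrast, or structure difference pulls the value below 1.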
2.2 STAR-ADF method flow
The STAR-ADF method is specifically designed to improve the quality and contrast of microscopic images for better visualization of cells and microstructures. The flowchart of the STAR-ADF method is illustrated in Fig. 3. For the uniform illumination task, the light-layer separation method based on the Retinex theory finds a wide range of applications. Given our emphasis on detail preservation, we address this challenge by creating separate maps for the light layer and the reflection layer, utilizing the spatial properties of the image. To maintain rich texture information in microscopic images, we explored the application of multiple filtering methods in both the frequency and spatial domains. During the experiment, we initially attempted high-frequency component enhancement filtering on the frequency-domain image after a Fourier or wavelet transform; however, we observed significant distortion in the resulting images. For the content of microscopic images, we want the target area to be as smooth as possible while keeping contours and details as unsmoothed as possible. For this reason, we select a total variational model and optimize it by gradient descent to obtain the optimal solution. Total variational models are widely used in image processing; their advantage lies in the ability to smooth flat areas of the image while preserving edges and texture details, effectively improving image quality and keeping important information. Our method extends the optimization model of the total variational method by incorporating exponential parameters, allowing separate extraction of the illumination structure and reflection texture of the image. These results generate a structure map and a reflection map, which serve as weight matrices, providing measurements of regional structure and texture information within the image.
A satisfactorily smooth light layer can be obtained using this approach, but the scattered noise spots within the image still require further treatment.
Compared with other denoising methods, anisotropic filters are more suitable for our image data, according to analysis of the image characteristics and preliminary trials. Anisotropic filtering, a nonlinear filtering method commonly employed for image denoising, proceeds as follows. First, the gradient intensity and orientation of each pixel in the grayscale image are calculated to capture edge and texture information. Based on the gradient information of the pixels, we calculate a weight coefficient that reflects the structural features surrounding each pixel. During the filtering process, this weight coefficient guides the adjustment of smoothness in different directions. A sliding window is then employed to weight the pixel values of a local neighborhood for each pixel in the image. By introducing weight coefficients, the filter becomes more sensitive to pixel value variations along edges and texture directions while ensuring smoother pixel value transitions in flat regions. This preservation of edge and texture information enhances the overall image quality. Ultimately, the filtered image attains a higher level of visual fidelity, representing the improved outcome of this denoising procedure.
3 Analysis of experimental results
In this section, we present a comprehensive performance analysis of the proposed method, evaluating its effectiveness from both subjective and objective perspectives. The test dataset employed in the experiment consists of image data derived from the various life science experiments discussed in the data acquisition section. The experiments were performed in MATLAB on a PC with a 3.70 GHz CPU and 16.00 GB RAM.
Firstly, our method decomposes the image to obtain the illumination layer and reflection layer, as shown in Figs. 4(a) and 4(b), respectively. The extraction of the weighted matrix map is shown in Fig. 4(c).
The exponential parameters are determined through a series of experiments. Initially, a parameter range is defined, followed by iterative testing and evaluation of entropy values to identify the optimal parameter values. The entropy outcomes for various parameter choices are presented in Table 1. Favorable outcomes are characterized by higher entropy values for the reflection layer and lower entropy values for the illumination layer. Consequently, the selected exponent values are deemed more fitting for our datasets.
The visual outcomes of our experiments are illustrated in Fig. 5. Notably, with the selected exponents, the reflection layer exhibits more pronounced detail, while the illumination layer appears smoother.
The histogram functions as a statistical representation of the grayscale values present within the image, facilitating an evaluation of the influence of uniform illumination in the experiment, as illustrated in Figs. 6(a) and 6(b). Two densely distributed subplots from the same image are selected for histogram analysis. In the original image (Fig. 6(a)), the blue box delineates a specific area, while the orange-yellow box indicates the magnified section of the image. The histogram information for these two regions is presented on the rightmost side. Notably, the distribution of image luminance values within the immediate vicinity of the light source (blue box area) is relatively narrow, resulting in a smaller contrast measurement EME value[18] when compared to the surrounding boundary area (orange-yellow box area). The original image (Fig. 6(c)) exhibits fringe noise artifacts, shown with local area magnification. Subsequently, Fig. 6(d) presents the result obtained after applying the STAR-ADF treatment.
Figure 6. Illumination uniformization and denoising results: (a) the original image and the local indicator evaluation; (b) the STAR-ADF enhanced image and the local indicator evaluation; (c) the original image and the extraction; (d) the STAR-ADF enhanced image and the extraction
To rigorously assess the efficacy of our proposed method, we conduct a comprehensive cross-sectional comparison, as illustrated in Fig. 7, against three distinct lighting decomposition approaches. To ensure comparability, consistent contrast enhancement operations are applied to all datasets. These existing methods show some adaptability to different target characteristics, but noteworthy disparities emerge when they are confronted with varying degrees and positions of uneven lighting.
Figure 7. Results of four different cell experiments by different methods
Specifically, the MSR method[21-22] may exhibit excessive enhancement in regions featuring darker edges, resulting in noticeably bright edge artifacts within the image. Both the MSR method and the LIME method show limited adaptability to regions with high brightness, which makes it difficult to achieve uniform illumination in areas directly exposed to the light source. On the other hand, the JieP method[23] achieves desirable brightness uniformity but compromises on detail preservation, and still falls short of our proposed method. Consequently, our approach excels in terms of illumination uniformity and detail preservation, affording significant advantages over the compared methods.
We evaluate each method on 20 test images; the results are shown in Table 2. Judging from the objective evaluation indices of the experiment, the proposed method performs well on entropy, EME, and SSIM. This shows that, compared with the other three illumination-decomposition enhancement methods, our method achieves higher information retention and a stronger enhancement effect while maintaining the structural and detail similarity of the image.
4 Conclusions
In this paper, we present a novel image enhancement method termed STAR-ADF, which demonstrates excellent application results on images from spatial cell tissue experiments. Through the proposed method, we address the challenges of uneven illumination and fringe noise that commonly arise in the space environment, consequently improving the image quality requisite for cell tissue experiments. The method enhances target identification, enabling scientists to observe images more clearly and facilitating subsequent computer-based recognition. Furthermore, objective evaluation indices substantiate the effectiveness of the proposed method. The results show a significant 12.5% improvement in the contrast evaluation index compared with the original image, while essential image details are preserved. These findings provide empirical evidence of the efficacy and utility of the STAR-ADF method for enhancing the image quality of spatial cell tissue experiments.
[16] S Lal, M Chandra. Efficient Algorithm for Contrast Enhancement of Natural Images. International Arab Journal of Information Technology, 11, 95-102(2014).
[21] D J Jobson, Z Rahman, G A Woodell. A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes. IEEE Transactions on Image Processing, 6, 965-976(1997).