• Chinese Optics Letters
  • Vol. 20, Issue 3, 031701 (2022)
Zhengfen Jiang1, Boyi Li2, Tho N. H. T. Tran2, Jiehui Jiang1, Xin Liu2,3,*, and Dean Ta2,4,**
Author Affiliations
  • 1School of Communication & Information Engineering, Shanghai University, Shanghai 200444, China
  • 2Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
  • 3State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200433, China
  • 4Center for Biomedical Engineering, Fudan University, Shanghai 200433, China
    DOI: 10.3788/COL202220.031701
    Zhengfen Jiang, Boyi Li, Tho N. H. T. Tran, Jiehui Jiang, Xin Liu, Dean Ta. Fluo-Fluo translation based on deep learning[J]. Chinese Optics Letters, 2022, 20(3): 031701

    Abstract

    Fluorescence microscopy uses fluorescent dyes to provide highly specific visualization of cell components and plays an important role in understanding subcellular structure. However, it has limitations, such as the risk of non-specific cross labeling in multi-label fluorescent staining and the limited number of usable fluorescence labels due to spectral overlap. This paper proposes a deep learning-based fluorescence-to-fluorescence (Fluo-Fluo) translation method that uses a conditional generative adversarial network to predict one fluorescence image from another, thereby realizing multi-label fluorescent staining. The cell types used include human motor neurons, human breast cancer cells, rat cortical neurons, and rat cardiomyocytes. The effectiveness of the method is verified by generating virtual fluorescence images that are highly similar to the true fluorescence images. This study shows that a deep neural network can implement Fluo-Fluo translation and describe the localization relationship between subcellular structures labeled with different fluorescent markers. The proposed Fluo-Fluo method avoids non-specific cross labeling in multi-label fluorescence staining and is free from spectral overlap. In theory, an unlimited number of fluorescence images can be predicted from a single fluorescence image to characterize cells.
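    At inference time, Fluo-Fluo translation amounts to passing an image acquired with one fluorescent label through the trained cGAN generator G to obtain a virtual image of another label. The following is a minimal PyTorch sketch of that step; the tiny convolutional generator and the tensor shapes are illustrative assumptions, since the paper does not release code.

```python
# Minimal Fluo-Fluo inference sketch (assumption: a PyTorch generator;
# TinyGenerator is an illustrative placeholder, not the authors' model).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder stand-in for the cGAN generator G: source stain -> target stain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

G = TinyGenerator().eval()
x = torch.rand(1, 1, 256, 256)   # source fluorescence image, normalized to [0, 1]
with torch.no_grad():
    y_hat = G(x)                 # virtual fluorescence image for another label
print(y_hat.shape)               # torch.Size([1, 1, 256, 256])
```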
    $\mathcal{L}_{\mathrm{cGAN}}(G,D)=\mathbb{E}_{x,y\sim p_{\mathrm{data}}(x,y)}[\log D(x,y)]+\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\{\log\{1-D[x,G(x)]\}\},$

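    The conditional adversarial term above can be realized, for example, with a binary cross-entropy formulation in which the discriminator scores the concatenated (input, output) pair. The sketch below assumes PyTorch and a PatchGAN-style discriminator D returning logits; these implementation choices are assumptions, not details taken from the paper.

```python
# Sketch of the conditional adversarial loss L_cGAN(G, D); D scores (x, y) pairs.
import torch
import torch.nn.functional as F

def d_loss(D, x, y, y_fake):
    """Discriminator objective: maximize log D(x, y) + log(1 - D(x, G(x)))."""
    real_logits = D(torch.cat([x, y], dim=1))
    fake_logits = D(torch.cat([x, y_fake.detach()], dim=1))
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def g_adv_loss(D, x, y_fake):
    """Generator's adversarial term: make D score (x, G(x)) as real."""
    fake_logits = D(torch.cat([x, y_fake], dim=1))
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```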

    $\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y\sim p_{\mathrm{data}}(x,y)}[\|y-G(x)\|_1].$


    $\mathcal{L}_{\mathrm{MS\text{-}SSIM}}(G)=\mathbb{E}_{x,y\sim p_{\mathrm{data}}(x,y)}\{1-\mathrm{MS\text{-}SSIM}[y,G(x)]\}.$

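    The two reconstruction terms, the pixel-wise L1 loss and the multi-scale structural similarity (MS-SSIM) loss, can be sketched as follows. Using the third-party pytorch_msssim package for MS-SSIM is an assumption about tooling; the paper defines the loss only mathematically.

```python
# Sketch of the reconstruction terms L_L1 and L_MS-SSIM.
import torch
from pytorch_msssim import ms_ssim  # third-party package, assumed tooling

def l1_loss(y, y_fake):
    """L_L1(G) = E[ ||y - G(x)||_1 ]"""
    return torch.mean(torch.abs(y - y_fake))

def ms_ssim_loss(y, y_fake):
    """L_MS-SSIM(G) = E[ 1 - MS-SSIM(y, G(x)) ].

    Images are assumed to be in [0, 1] and at least ~161 px per side,
    as required by the default 5-scale, 11-pixel-window setting.
    """
    return 1.0 - ms_ssim(y_fake, y, data_range=1.0)
```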

    $G^{*}=\arg\min_{G}\max_{D}\mathcal{L}_{\mathrm{cGAN}}(G,D)+\lambda_{1}\mathcal{L}_{L1}(G)+\lambda_{2}\mathcal{L}_{\mathrm{MS\text{-}SSIM}}(G).$

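    Putting the three terms together, one generator update under the combined objective could look like the sketch below, which reuses the loss helpers from the preceding sketches. The weights lambda1 and lambda2 and the optimizer handling are illustrative assumptions, not the values reported in the paper.

```python
# One generator step for G* = argmin_G max_D L_cGAN + λ1·L_L1 + λ2·L_MS-SSIM.
# g_adv_loss, l1_loss, and ms_ssim_loss are the helpers sketched above.
lambda1, lambda2 = 100.0, 10.0   # assumed weights, not the paper's values

def generator_step(G, D, x, y, opt_G):
    y_fake = G(x)
    loss_G = (g_adv_loss(D, x, y_fake)
              + lambda1 * l1_loss(y, y_fake)
              + lambda2 * ms_ssim_loss(y, y_fake))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```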

    $\mathrm{SSIM}(x,y)=\dfrac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})},$

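    As a concrete reference, the SSIM formula above can be evaluated with global image statistics, as in the NumPy sketch below. The standard metric averages SSIM over local Gaussian windows, so this single-window version is a simplification.

```python
# Global (single-window) SSIM following the formula above.
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```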

    $\mathrm{MSE}=\dfrac{1}{N}\sum_{i=1}^{N}(x_{i}-y_{i})^{2},$


    $\mathrm{PSNR}=10\log_{10}\!\left[\dfrac{(2^{n}-1)^{2}}{\mathrm{MSE}}\right].$


    $\mathrm{MAE}=\dfrac{1}{N}\sum_{i=1}^{N}|x_{i}-y_{i}|.$

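    The pixel-wise evaluation metrics (MSE, PSNR, and MAE) follow directly from the definitions above. In the sketch below, the bit depth n is assumed to be 8 (8-bit grayscale images); adjust it to match the actual data.

```python
# Pixel-wise evaluation metrics from the formulas above.
import numpy as np

def mse(x, y):
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, n_bits=8):
    """PSNR = 10 * log10[(2^n - 1)^2 / MSE]; n_bits=8 is an assumed bit depth."""
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse(x, y))

def mae(x, y):
    return np.mean(np.abs(x.astype(np.float64) - y.astype(np.float64)))
```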
