• Photonics Research
  • Vol. 8, Issue 6, 1011 (2020)
Zhe Zhang1,†, Dongzhou Gou2,†, Fan Feng3, Ruyi Zheng2, Ke Du2, Hongrun Yang2,4, Guangyi Zhang1, Huitao Zhang5, Louis Tao3, Liangyi Chen2,6,*, and Heng Mao1,5,7,*
Author Affiliations
  • 1LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, China
  • 2State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, Peking University, Beijing 100871, China
  • 3Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
  • 4School of Software and Microelectronics, Peking University, Beijing 100871, China
  • 5Beijing Advanced Innovation Center for Imaging Theory and Technology, Capital Normal University, Beijing 100871, China
  • 6e-mail: lychen@pku.edu.cn
  • 7e-mail: heng.mao@pku.edu.cn
    DOI: 10.1364/PRJ.388651
    Zhe Zhang, Dongzhou Gou, Fan Feng, Ruyi Zheng, Ke Du, Hongrun Yang, Guangyi Zhang, Huitao Zhang, Louis Tao, Liangyi Chen, Heng Mao. 3D Hessian deconvolution of thick light-sheet z-stacks for high-contrast and high-SNR volumetric imaging[J]. Photonics Research, 2020, 8(6): 1011

    Abstract

    Due to its ability of optical sectioning and low phototoxicity, z-stacking light-sheet microscopy has been the tool of choice for in vivo imaging of the zebrafish brain. To image the zebrafish brain with a large field of view, the thickness of the Gaussian beam inevitably becomes several times greater than the system depth of field (DOF), where the fluorescence distributions outside the DOF will also be collected, blurring the image. In this paper, we propose a 3D deblurring method, aiming to redistribute the measured intensity of each pixel in a light-sheet image to in situ voxels by 3D deconvolution. By introducing a Hessian regularization term to maintain the continuity of the neuron distribution and using a modified stripe-removal algorithm, the reconstructed z-stack images exhibit high contrast and a high signal-to-noise ratio. These performance characteristics can facilitate subsequent processing, such as 3D neuron registration, segmentation, and recognition.

    1. INTRODUCTION

    Fluorescence microscopy (FM) provides both in vitro and in vivo imaging of biological tissues and their functional dynamics with high spatial and temporal resolution [1–3]. Moreover, benefiting from the wide range of fluorescent indicators of neuronal activity, FM has been widely used for in vivo brain imaging [4–9]. However, because a large number of neurons are spread across the brain region [10], brain imaging must record neuronal activities over a large field of view (FOV) simultaneously [11–13]. For example, three-dimensional imaging of the zebrafish brain should cover a volume of 800 μm × 600 μm × 300 μm with subneuron resolution.

    Unlike confocal [14], spinning disk confocal [15], multiphoton [16], and light-field [17] FMs, selective plane illumination microscopy [18–20] (also called light-sheet microscopy) has recently emerged as the preferred method for volumetric imaging of the zebrafish brain. Based on z-scanning light-sheet illumination, the brain region is excited slice by slice, and the side-sliced fluorescent images are assembled into a volumetric stack. This form of microscopy, originally designed for highly efficient optical sectioning, effectively enhances image contrast and reduces phototoxicity in deep tissues [20].


    Figure 1.Configuration of the Gaussian, Bessel, and Airy beams with the same light-sheet FOV.

    According to Gaussian beam propagation [14], the relationship between the light-sheet thickness (2w0) and the light-sheet nominal width (2z0) is given as

    $$2w_0 = 2\sqrt{\frac{\lambda z_0}{\pi n}},$$

    where λ is the wavelength in vacuum, n is the refractive index of the medium on the object side, w0 is the waist radius of the Gaussian beam, and z0 is the Rayleigh length of the Gaussian beam. As illustrated, imaging with a wider FOV requires a thicker Gaussian beam.
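    A quick numeric check of this relation (a sketch; the 488 nm excitation wavelength and water refractive index n = 1.33 are assumed values, not taken from the paper):

```python
import math

def sheet_thickness(fov_width_um, wavelength_um=0.488, n=1.33):
    """Gaussian-beam light-sheet thickness 2*w0 when the nominal width
    2*z0 (twice the Rayleigh length) must span the given FOV width."""
    z0 = fov_width_um / 2.0                             # Rayleigh length z0
    w0 = math.sqrt(wavelength_um * z0 / (math.pi * n))  # waist radius w0
    return 2.0 * w0

# A 300 um wide sheet: the thickness comes out slightly above 8 um,
# consistent with the value quoted later in the paper.
print(round(sheet_thickness(300.0), 2))  # prints 8.37
```

    Under these assumed parameters the thickness grows with the square root of the FOV width, which is why wide-field light sheets are disproportionately thick.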

    Generally, thin light-sheet z-stack images have the desired optical sectioning and z-axis spatial resolution and do not require 3D deconvolution, since the thickness of the light-sheet illumination is very close to the system depth of field (DOF) [20]. However, for zebrafish brain-wide imaging, the light sheet is thicker, and the fluorescence distributions outside the DOF are also collected, blurring the image. Therefore, we resort to 3D deconvolution to redistribute the measured intensity in the z-stack images. Meanwhile, the nonuniform illumination of a Gaussian beam introduces additional intensity variation across each light-sheet image. In addition, melanogenesis tissues located at the surface of the fish head locally and significantly absorb the excitation energy, which also leads to dark stripes in the light-sheet image [24].

    To address the abovementioned problems in zebrafish larva brain-wide imaging, we implement a thick Gaussian beam light-sheet microscope and expand the width of the light sheet to greater than 300 μm, which makes the thickness of the light sheet more than 8 μm, nearly 9.6-fold the system DOF. In this paper, we first derive an approximated 3D convolution forward model for thick light-sheet z-stacking imaging and propose a 3D deblurring method by introducing a Hessian regularization term to maintain the continuity of the neuron distribution. Employing this 3D reconstruction algorithm and a modified stripe-removal algorithm, the reconstructed z-stack images can exhibit high contrast and a high signal-to-noise ratio (SNR).

    This paper is organized as follows. Section 2 introduces the forward model and the 3D deblurring algorithm. Section 3 presents our experimental setup and preparation. Sections 4 and 5 present the results of a numerical simulation and of imaging experiments on fluorescent beads and the zebrafish larva brain. Section 6 concludes the paper with a discussion.

    2. THEORY

    A. Forward Model


    Figure 2.Schematic of Gaussian beam light-sheet z-stacking imaging. (a) z-stacking imaging by moving light-sheet and objective. (b) Illumination region and system DOF under the large FOV imaging.

    Assume that the illumination distribution ILS(x,y,z) is uniform in the xy plane and symmetric with respect to the z-axis, i.e., ILS(x,y,z) = ILS(z) = ILS(−z). Assume further that the system 3D point spread function (PSF) h(x,y,z) is space-invariant in the brain area. The imaging forward model of thick Gaussian beam light-sheet microscopy can then be approximated as follows (see Appendix A):

    $$g(x, y, z=z_j) = \iiint_{w} f(p,q,r)\, I_{LS}(z-r)\, h(x-p,\, y-q,\, z-r)\, \mathrm{d}w = (f * h_{LS})(x, y, z), \tag{1}$$

    $$h_{LS}(x,y,z) = I_{LS}(z)\, h(x,y,z), \tag{2}$$

    where f(x,y,z) is the true volumetric image, g(x,y,z=zj) is the 2D blurred image when the illumination and detection are located at the z=zj plane, hLS(x,y,z) is the overall 3D PSF affected by the illumination, and w=(p,q,r).

    B. 3D Reconstruction Algorithm

    Here, we define a loss function as the sum of a fidelity term and a Hessian regularization term, a combination that is also popular in other imaging modalities [25,26]. The Hessian regularization term comes from a priori knowledge of the zebrafish brain (see Section 6). The fidelity term constrains the imaging forward model through the mean square error, and the Hessian regularization term constrains the continuity of the neuron distribution. Such a loss function can be written as

    $$\min_f \left\{ \frac{\alpha}{2}\,\|h_{LS} * f - g\|_2^2 + R_{\mathrm{Hessian}}(f) \right\}, \tag{3}$$

    where α is the penalty parameter of the fidelity term. The Hessian regularization is defined as

    $$R_{\mathrm{Hessian}}(f) = \alpha_h\|f_{xx}\|_1 + \alpha_h\|f_{yy}\|_1 + \alpha_z\|f_{zz}\|_1 + 2\alpha_h\|f_{xy}\|_1 + 2\alpha_z\|f_{xz}\|_1 + 2\alpha_z\|f_{yz}\|_1, \tag{4}$$

    where αh and αz are the penalty parameters of continuity along the xy plane and the z-axis, respectively. The second-order partial derivatives of f in the different directions are abbreviated as fi, i = xx, yy, zz, xy, xz, yz.
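    The loss above can be evaluated directly, with finite differences standing in for the second-order partial derivatives (a minimal NumPy sketch; the discrete boundary handling via edge replication is an assumption, not the paper's implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def hessian_loss(f, g, h_ls, alpha, a_h, a_z):
    """Fidelity plus Hessian penalty:
    alpha/2 * ||h_LS * f - g||_2^2
    + a_h (||f_xx||_1 + ||f_yy||_1) + a_z ||f_zz||_1
    + 2 a_h ||f_xy||_1 + 2 a_z (||f_xz||_1 + ||f_yz||_1)."""
    residual = fftconvolve(f, h_ls, mode="same") - g
    fidelity = 0.5 * alpha * np.sum(residual ** 2)

    def dd(u, ax):
        # forward difference with replicated edge (boundary derivative = 0)
        return np.diff(u, axis=ax, append=np.take(u, [-1], axis=ax))

    fx, fy, fz = dd(f, 0), dd(f, 1), dd(f, 2)
    penalty = (a_h * (np.abs(dd(fx, 0)).sum() + np.abs(dd(fy, 1)).sum())
               + a_z * np.abs(dd(fz, 2)).sum()
               + 2 * a_h * np.abs(dd(fx, 1)).sum()
               + 2 * a_z * (np.abs(dd(fx, 2)).sum() + np.abs(dd(fy, 2)).sum()))
    return fidelity + penalty
```

    A constant (perfectly continuous) volume incurs zero Hessian penalty, which is the sense in which the term favors continuity of the neuron distribution.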

    In this paper, we use the alternating direction method of multipliers (ADMM) [27] to minimize this loss function, based on its fast convergence in L1-regularized optimization problems. The key to the algorithm is to decouple the L1 and L2 portions of the loss function [28]. Hence, we rewrite Eq. (3) by introducing auxiliary variables d and using the augmented Lagrangian method (see Appendix B):

    $$(f^{k+1}, d^{k+1}) = \arg\min_{f,d} \Big\{ \frac{\alpha}{2}\|h_{LS} * f - g\|_2^2 + \varphi(d) + \frac{\rho}{2}\sum_i \|d_i - c_i f_i - b_i^k\|_2^2 \Big\},$$

    $$b_i^{k+1} = b_i^k + \delta\,(c_i f_i^{k+1} - d_i^{k+1}), \quad i = xx, yy, zz, xy, xz, yz,$$

    where di, bi (i = xx, yy, zz, xy, xz, yz) are the auxiliary and dual variables, φ(d) is the sum of the L1 norms of the di, ci is the regularization coefficient attached to fi in Eq. (4), ρ is the penalty parameter of the augmented Lagrangian term, δ is the step size, and k is the iteration counter.

    Consequently, the framework of the ADMM algorithm is presented in Algorithm 1.

    Table 1. Evaluation Indicators of Different 3D Images in Fig. 5

                     PSNR    SNR    SSIM    R
    Blurred Image    16.60   1.99   0.044   0.62
    2D RL            16.30   1.69   0.072   0.57
    3D Wiener        16.83   2.22   0.149   0.64
    3D RL            16.56   1.95   0.158   0.68
    Our 3D Method    18.46   3.86   0.179   0.84

    C. Full Procedure

    In real experiments, light-sheet images are corrupted by photon shot noise and camera readout noise. We use the side window filtering (SWF) technique [29] to eliminate the mixed Poisson–Gaussian noise in the fluorescence image. In contrast to conventional local window-based filtering methods, the SWF method aligns the edges and/or corners of the window with the pixel being processed, which better preserves image boundaries during denoising.

    Furthermore, nonneuron melanogenesis tissues located at the surface of the fish head can absorb the excitation energy dramatically, which results in dark stripes along the excitation path. The widely used wavelet-FFT algorithm can remove the stripe artifacts [30], but at the cost of deteriorating the image in regions devoid of stripes. Therefore, we introduce a prelocation method based on the output of the wavelet-FFT algorithm. Specifically, the positions and widths of all stripes are determined from the difference map between the original image and the wavelet-FFT-processed image. Only the stripe areas in the original image are then multiplied by an adaptive Gaussian stripe function to correct the artifacts. This modified algorithm avoids information loss outside the stripe areas and provides consistent contrast across different regions of the image.
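    The prelocation step can be sketched as follows (assumptions: stripes show up as whole columns of the 2D image, and the 5% relative threshold is illustrative rather than a value from the paper):

```python
import numpy as np

def locate_stripes(original, destriped, thresh=0.05):
    """Prelocation sketch: columns where the wavelet-FFT output differs
    strongly from the original image are flagged as stripe positions.
    `thresh` is an assumed relative threshold, not from the paper."""
    diff = np.abs(original.astype(float) - destriped.astype(float))
    col_score = diff.mean(axis=0)                  # mean change per column
    mask = col_score > thresh * (original.mean() + 1e-12)
    return mask                                    # True where a stripe lies
```

    Restricting the subsequent Gaussian stripe correction to the masked columns is what keeps stripe-free regions untouched.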


    Figure 3.Flowchart of 3D image deblurring processing.

    3. EXPERIMENTAL SETUP AND PREPARATION


    Figure 4.Schematic of our light-sheet microscope setup. Galvo z and Galvo y are used to scan the beam along the z-axis and y-axis, respectively. IO and DO are the illumination and detection objectives, respectively.

    All larval zebrafish (elavl3:H2B-GCaMP6s, elavl3:GCaMP6s, and elavl3:EGFP) were raised in E3 embryo medium at 28.5°C according to the standard protocol. 1-Phenyl-2-thiourea (0.2 mmol/L) was added to the embryo medium to inhibit melanogenesis and allow optical imaging of the brain region. In our light-sheet imaging experiments, a 5–7 day post fertilization (dpf) zebrafish was anesthetized before being embedded in a 1.5% low-melting agarose cylinder with a diameter of 1 mm and immersed in E3 medium. The brain area was then electrically aligned to the center of the detection FOV, in a suitable posture for z-stacking imaging.

    The 3D PSF h(x,y,z) can be calibrated with sparse fluorescent beads statically embedded in the agarose cylinder, or calculated with the ImageJ PSF Generator plugin using the system parameters. The light-sheet illumination distribution ILS(z) can only be captured from the system itself, by imaging a medium mixed with high-density fluorescent beads. As shown in Eq. (2), the overall 3D PSF hLS(x,y,z) is the product of h(x,y,z) and ILS(z). In our imaging experiments, the calculated PSF performed better than the calibrated PSF during the reconstruction iterations (see Section 6).

    The experiments are implemented on an HP Z6 Workstation (Intel Xeon Gold 6128 CPU @ 3.40 GHz × 24), the graphics card model is GeForce RTX 2080 Ti, and the operating system is Ubuntu 19.04. The 3D deconvolution algorithm is implemented using MATLAB R2019b.

    4. SIMULATION

    Figure 5. Comparisons of different deconvolution methods. (a) Simulated 3D image (ground truth) in the x–y, x–z, and y–z sections, with three colored subregions enlarged for detailed observation at the bottom. (b) Blurred 3D image after forward 3D convolution and the addition of mixed Gaussian and Poisson noise. (c)–(f) Four reconstruction results using the 2D RL method, the 3D Wiener method, the 3D RL method, and our 3D method, respectively. The R value in each title is the correlation coefficient of the 3D distribution between the ground truth and the deblurred image.

    Then, the ground truth was convolved with the overall 3D PSF according to Eq. (2), followed by the addition of mixed Gaussian and Poisson noise, to generate a blurred 3D image stack [Fig. 5(b)]. These images were deconvolved with four methods: the 2D Richardson–Lucy (RL) method, the 3D RL method [31,32], the 3D Wiener method [33], and our 3D method (also called the Hessian method; see Section 2) [Figs. 5(c)–5(f)]. RL and Wiener are popular deconvolution algorithms for fluorescence microscopy images [34]. Many software packages, such as DeconvolutionLab2 [35], also include 3D RL and 3D Wiener algorithms, but for consistency of the computing platform and stability of performance, we chose the implementations provided by the MATLAB toolbox. We calculated four evaluation indicators, the peak signal-to-noise ratio (PSNR) [36], the SNR [33], the structural similarity (SSIM) index [37], and the correlation coefficient (R) [38], for the blurred image and the four deblurred 3D image stacks, as shown in Table 1. The higher the indicator value, the closer the reconstructed image is to the ground truth. From this comparison, it is evident that our method recovers more correct high-frequency components with much less noise.
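    Two of these indicators are straightforward to compute against the ground truth (a NumPy sketch, not the paper's exact implementation; PSNR here assumes intensities normalized to [0, 1]):

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB against a [0, 1] reference."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(1.0 / mse)

def corr_coef(ref, img):
    """Pearson correlation coefficient R of the flattened 3D stacks."""
    return np.corrcoef(ref.ravel(), img.ravel())[0, 1]
```

    SSIM and the modality-specific SNR used in the paper require more structure (local windows, a noise model) and are better taken from an image-processing library.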

    5. IMAGING EXPERIMENTS

    Figure 6. Contrast comparison of 3D deconvolution methods for imaging a 6 μm hollow fluorescence microsphere. (a) Middle x–y sections of the observed 3D image and three reconstructed images from the 3D RL method, the 3D Wiener method, and our 3D method. Scale bar: 3 μm. (b) Four normalized profiles corresponding to the colored dashed lines in (a).

    There are three typical zebrafish lines frequently used for in vivo brain imaging, Tg (elavl3:EGFP), Tg (elavl3:GCaMP6s), and Tg (elavl3:H2B-GCaMP6s), which have neuronal-specific expression of GFP, the calcium indicator GCaMP6s localized in the cytoplasm of neurons, and the calcium indicator GCaMP6s localized in the nuclei of neurons, respectively.

    Figure 7. Comparison of 2D and 3D deconvolution for imaging the rhombencephalon activity of 7 dpf Tg (elavl3:GCaMP6s) zebrafish larva, recorded by 1328 (x) × 1328 (y) × 81 (z) voxels. (a) Selected x–y, x–z, and y–z sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a), enlarged for a comparison between 2D (2D RL method) and 3D deconvolution (our 3D method).

    Figure 8. Comparison of different 3D deconvolution methods for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928 (x) × 1928 (y) × 81 (z) voxels. (a) Selected x–y, x–z, and y–z sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a), enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). (c) Power spectral distributions (8×8×1 binning). Three z-stacking movies corresponding to the three reconstruction results are provided in Visualization 1, Visualization 2, and Visualization 3.

    Figure 9. SNR comparison of the 3D deconvolution methods for imaging the mesencephalon activity of 7 dpf Tg (elavl3:H2B-GCaMP6s) zebrafish larva, recorded by 1448 (x) × 1448 (y) × 81 (z) voxels. (a) The 32nd x–y section of the raw image (observed image) and our image (our 3D method). The corresponding cyan subregion of the x–y section on the left was enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). Scale bar: 50 μm. (b) The 724th x–z section of the observed image (3D Wiener method, 3D RL method, and our 3D method), where the magenta and yellow subregions were enlarged for a clear observation. Scale bar: 50 μm. (c) Normalized distribution of the yellow profiles labeled in (a), where the blue bars mark all the regions of the suspected neuron boundary by manual identification. (d) Average modified signal-to-noise ratio (MSNR) of the fluorescence peaks along lines across the neuron from images reconstructed with the 3D Wiener method, the 3D RL method, and our 3D method (n=9). Centerline: medians. Limits: 75% and 25%. Whiskers: maximum and minimum. Three z-stacking movies corresponding to the three reconstruction results are provided in Visualization 4, Visualization 5, and Visualization 6.

    Figure 10. Destriping results for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928 (x) × 1928 (y) pixels. (a) Image after using the destriping algorithm. Scale bar: 50 μm. (b) The corresponding subregions before and after destriping in (a), enlarged for a comparison. (c) Two normalized profiles corresponding to the colored dashed lines in (b).

    6. CONCLUSION AND DISCUSSION

    For zebrafish larva brain-wide imaging, light-sheet illumination is required to cover a large FOV and possess concentrated energy in the focal plane. For this purpose, we implemented a thick Gaussian beam light-sheet microscope and expanded the width of the light sheet to more than 300 μm, making the thickness of the light sheet more than 8 μm, nearly 9.6-fold the system DOF. Our 3D deblurring method has been proposed to redistribute the measured intensity of each pixel in the light-sheet image to in situ voxels by 3D deconvolution. By introducing a Hessian regularization term to maintain the continuity of the neuron distribution and using a modified stripe-removal algorithm, the reconstructed z-stack images exhibit high contrast and a high SNR. These performance characteristics can facilitate subsequent processing, such as 3D neuron segmentation and recognition.

    Comparing Fig. 5(c) with Fig. 5(f) in the simulation section verifies that 3D deconvolution is more effective than 2D deconvolution for thick light-sheet imaging. In addition, the comparison among the different 3D deconvolution methods, particularly in the reconstructed movies, validates our full 3D deblurring pipeline (see Fig. 3), including the stripe-removal postprocessing.

    Here, we present some discussions concerning the deblurring methods and imaging settings.

    APPENDIX A

    The formation of light-sheet images is determined by the 3D PSF and the illumination distribution. A generalized 3D light-sheet imaging forward model is presented in this section.

    The coordinate system of our setup is shown in Fig. 4. We assume that the observed biological sample f(x,y,z) is located in a rectangular region V (lx × ly × lz). The light sheet ILS(x,y,z) is formed by rapidly scanning the Gaussian beam along the y-axis, so that

    $$I_{LS}(x,y,z) = I_{LS}(x, y-q, z), \quad \forall q \in \mathbb{R}.$$

    In addition, the light sheet is symmetric along the z-axis, i.e.,

    $$I_{LS}(x,y,z) = I_{LS}(x, y, -z).$$

    Let the region V satisfy $V = \bigcup_{i=1}^{N_i} A_i$ and $V = \bigcup_{j=1}^{N_j} B_j$, where the 3D PSF is invariant within each subregion $A_i$ and the light-sheet illumination distribution is uniform within each subregion $B_j$, i.e.,

    $$I_{LS}(x,y,z) = I_{LS}(x-p, y, z), \quad \forall p \in \mathbb{R}.$$

    In this case, the light-sheet images are obtained by the model

    $$g_{A_i}(x, y, z=z_j) = \iiint_{w} f(p,q,r)\, I_{LS}^{B_j}(p, q, r-z)\, h_{A_i}(x-p, y-q, z-r)\, \mathrm{d}w$$
    $$= \iiint_{w} f(p,q,r)\, I_{LS}^{B_j}(x-p, y-q, z-r)\, h_{A_i}(x-p, y-q, z-r)\, \mathrm{d}w$$
    $$= \iiint_{w} f(p,q,r)\, h_{LS}^{A_i B_j}(x-p, y-q, z-r)\, \mathrm{d}w = (f * h_{LS}^{A_i B_j})(x, y, z), \quad x, y, z \in \mathbb{R}.$$

    In our system, we consider the illumination distribution to be uniform and the 3D PSF to be invariant over the FOV of the zebrafish brain. Therefore, the light-sheet image is the convolution of the overall PSF and the observed biological sample.

    APPENDIX B

    In this section, the 3D deconvolution with Hessian regularization solved by the ADMM algorithm is presented. We rewrite Eq. (3) as

    $$\min_{f,d} \left\{ \frac{\alpha}{2}\|h_{LS} * f - g\|_2^2 + \varphi(d) \right\},$$

    where

    $$\varphi(d) = \|d_{xx}\|_1 + \|d_{yy}\|_1 + \|d_{zz}\|_1 + \|d_{xy}\|_1 + \|d_{xz}\|_1 + \|d_{yz}\|_1,$$

    subject to

    $$d_{xx} = \alpha_h f_{xx}, \quad d_{yy} = \alpha_h f_{yy}, \quad d_{zz} = \alpha_z f_{zz},$$
    $$d_{xy} = 2\alpha_h f_{xy}, \quad d_{xz} = 2\alpha_z f_{xz}, \quad d_{yz} = 2\alpha_z f_{yz}.$$

    According to the augmented Lagrangian and the method of multipliers, we compute the following update steps for each ADMM iteration:

    $$(f^{k+1}, d^{k+1}) = \arg\min_{f,d} \Big\{ \frac{\alpha}{2}\|h_{LS} * f - g\|_2^2 + \varphi(d) + \frac{\rho}{2}\big[ \|d_{xx} - \alpha_h f_{xx} - b_{xx}^k\|_2^2 + \|d_{yy} - \alpha_h f_{yy} - b_{yy}^k\|_2^2 + \|d_{zz} - \alpha_z f_{zz} - b_{zz}^k\|_2^2 + \|d_{xy} - 2\alpha_h f_{xy} - b_{xy}^k\|_2^2 + \|d_{xz} - 2\alpha_z f_{xz} - b_{xz}^k\|_2^2 + \|d_{yz} - 2\alpha_z f_{yz} - b_{yz}^k\|_2^2 \big] \Big\},$$

    $$b_i^{k+1} = b_i^k + \delta\,(c_i f_i^{k+1} - d_i^{k+1}), \quad i = xx, yy, zz, xy, xz, yz,$$

    where ρ is the penalty parameter of the augmented Lagrangian term and c_i denotes the coefficient attached to f_i in the constraints above (α_h, α_h, α_z, 2α_h, 2α_z, 2α_z). We alternately solve the joint optimization of each variable as follows. The f-minimization step is

    $$f^{k+1} = \arg\min_f \Big\{ \frac{\alpha}{2}\|h_{LS} * f - g\|_2^2 + \frac{\rho}{2}\big[ \|d_{xx}^k - \alpha_h f_{xx} - b_{xx}^k\|_2^2 + \|d_{yy}^k - \alpha_h f_{yy} - b_{yy}^k\|_2^2 + \|d_{zz}^k - \alpha_z f_{zz} - b_{zz}^k\|_2^2 + \|d_{xy}^k - 2\alpha_h f_{xy} - b_{xy}^k\|_2^2 + \|d_{xz}^k - 2\alpha_z f_{xz} - b_{xz}^k\|_2^2 + \|d_{yz}^k - 2\alpha_z f_{yz} - b_{yz}^k\|_2^2 \big] \Big\},$$

    and the d-minimization steps are

    $$d_{xx}^{k+1} = \arg\min_{d_{xx}} \left\{ \|d_{xx}\|_1 + \frac{\rho}{2}\|d_{xx} - \alpha_h f_{xx}^{k+1} - b_{xx}^k\|_2^2 \right\}, \quad d_{yy}^{k+1} = \arg\min_{d_{yy}} \left\{ \|d_{yy}\|_1 + \frac{\rho}{2}\|d_{yy} - \alpha_h f_{yy}^{k+1} - b_{yy}^k\|_2^2 \right\},$$
    $$d_{zz}^{k+1} = \arg\min_{d_{zz}} \left\{ \|d_{zz}\|_1 + \frac{\rho}{2}\|d_{zz} - \alpha_z f_{zz}^{k+1} - b_{zz}^k\|_2^2 \right\}, \quad d_{xy}^{k+1} = \arg\min_{d_{xy}} \left\{ \|d_{xy}\|_1 + \frac{\rho}{2}\|d_{xy} - 2\alpha_h f_{xy}^{k+1} - b_{xy}^k\|_2^2 \right\},$$
    $$d_{xz}^{k+1} = \arg\min_{d_{xz}} \left\{ \|d_{xz}\|_1 + \frac{\rho}{2}\|d_{xz} - 2\alpha_z f_{xz}^{k+1} - b_{xz}^k\|_2^2 \right\}, \quad d_{yz}^{k+1} = \arg\min_{d_{yz}} \left\{ \|d_{yz}\|_1 + \frac{\rho}{2}\|d_{yz} - 2\alpha_z f_{yz}^{k+1} - b_{yz}^k\|_2^2 \right\},$$

    and the dual variables b_i, i = xx, yy, zz, xy, xz, yz, are updated by

    $$b_{xx}^{k+1} = b_{xx}^k + \delta(\alpha_h f_{xx}^{k+1} - d_{xx}^{k+1}), \quad b_{yy}^{k+1} = b_{yy}^k + \delta(\alpha_h f_{yy}^{k+1} - d_{yy}^{k+1}), \quad b_{zz}^{k+1} = b_{zz}^k + \delta(\alpha_z f_{zz}^{k+1} - d_{zz}^{k+1}),$$
    $$b_{xy}^{k+1} = b_{xy}^k + \delta(2\alpha_h f_{xy}^{k+1} - d_{xy}^{k+1}), \quad b_{xz}^{k+1} = b_{xz}^k + \delta(2\alpha_z f_{xz}^{k+1} - d_{xz}^{k+1}), \quad b_{yz}^{k+1} = b_{yz}^k + \delta(2\alpha_z f_{yz}^{k+1} - d_{yz}^{k+1}),$$

    where δ is the step size and k is the iteration counter.

    The f-minimization step is a linear least-squares problem and has a closed-form solution in the Fourier domain:

    $$f^{k+1} = \mathrm{ifft}\left\{ \frac{\frac{\alpha}{\rho}\,\overline{\mathrm{fft}(h_{LS})}\,\mathrm{fft}(g) + \mathrm{fft}(L^k)}{\frac{\alpha}{\rho}\,|\mathrm{fft}(h_{LS})|^2 + |\mathrm{fft}(C)|^2} \right\},$$

    where

    $$C = \alpha_h \partial_{xx}^2 + \alpha_h \partial_{yy}^2 + \alpha_z \partial_{zz}^2 + 2\alpha_h \partial_{xy}^2 + 2\alpha_z \partial_{xz}^2 + 2\alpha_z \partial_{yz}^2,$$

    $$L^k = \alpha_h (\partial_{xx}^2)^T(d_{xx}^k - b_{xx}^k) + \alpha_h (\partial_{yy}^2)^T(d_{yy}^k - b_{yy}^k) + \alpha_z (\partial_{zz}^2)^T(d_{zz}^k - b_{zz}^k) + 2\alpha_h (\partial_{xy}^2)^T(d_{xy}^k - b_{xy}^k) + 2\alpha_z (\partial_{xz}^2)^T(d_{xz}^k - b_{xz}^k) + 2\alpha_z (\partial_{yz}^2)^T(d_{yz}^k - b_{yz}^k),$$

    fft denotes the fast Fourier transform and ifft its inverse. $\partial_{xx}^2$ is the second-order partial derivative operator in the x direction, i.e., $\partial_{xx}^2 = [1, -2, 1]$, and $\partial_{yy}^2$, $\partial_{zz}^2$, $\partial_{xy}^2$, $\partial_{xz}^2$, and $\partial_{yz}^2$ are defined similarly.
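    The Fourier-domain update can be written compactly with NumPy FFTs (a sketch; the small constant added to the denominator is a numerical-stability assumption, and the kernels h_LS and C are zero-padded to the image size before transforming):

```python
import numpy as np

def f_update(g, h_ls, L_k, C, alpha, rho):
    """Closed-form f-minimization in Fourier space:
    f = ifft{ [(alpha/rho) conj(FFT h_LS) FFT g + FFT L^k]
              / [(alpha/rho) |FFT h_LS|^2 + |FFT C|^2] }."""
    H = np.fft.fftn(h_ls, s=g.shape)           # zero-padded PSF spectrum
    Cf = np.fft.fftn(C, s=g.shape)             # spectrum of the Hessian stencil
    num = (alpha / rho) * np.conj(H) * np.fft.fftn(g) \
        + np.fft.fftn(L_k, s=g.shape)
    den = (alpha / rho) * np.abs(H) ** 2 + np.abs(Cf) ** 2 + 1e-12
    return np.real(np.fft.ifftn(num / den))
```

    Because all operators are diagonalized by the FFT, this step costs only a few FFTs per iteration instead of a large linear solve.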

    The d-minimization steps have closed-form solutions obtained via subdifferential calculus. The auxiliary variables d_i, i = xx, yy, zz, xy, xz, yz, are given by soft thresholding:

    $$d_{xx}^{k+1} = \begin{cases} \alpha_h f_{xx}^{k+1} + b_{xx}^k - \frac{1}{\rho}, & \alpha_h f_{xx}^{k+1} + b_{xx}^k \in \left(\frac{1}{\rho}, \infty\right) \\ 0, & \alpha_h f_{xx}^{k+1} + b_{xx}^k \in \left(-\frac{1}{\rho}, \frac{1}{\rho}\right) \\ \alpha_h f_{xx}^{k+1} + b_{xx}^k + \frac{1}{\rho}, & \alpha_h f_{xx}^{k+1} + b_{xx}^k \in \left(-\infty, -\frac{1}{\rho}\right) \end{cases} = S_{1/\rho}\!\left(\alpha_h f_{xx}^{k+1} + b_{xx}^k\right),$$

    $$d_{yy}^{k+1} = S_{1/\rho}(\alpha_h f_{yy}^{k+1} + b_{yy}^k), \quad d_{zz}^{k+1} = S_{1/\rho}(\alpha_z f_{zz}^{k+1} + b_{zz}^k),$$
    $$d_{xy}^{k+1} = S_{1/\rho}(2\alpha_h f_{xy}^{k+1} + b_{xy}^k), \quad d_{xz}^{k+1} = S_{1/\rho}(2\alpha_z f_{xz}^{k+1} + b_{xz}^k), \quad d_{yz}^{k+1} = S_{1/\rho}(2\alpha_z f_{yz}^{k+1} + b_{yz}^k).$$
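    The shrinkage operator S_t used above is a single elementwise expression in NumPy:

```python
import numpy as np

def soft_threshold(v, t):
    """S_t(v): elementwise soft thresholding, the closed-form solution
    of each L1-regularized d-minimization step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# e.g. a d_xx-style update: d_xx = soft_threshold(a_h * f_xx + b_xx, 1.0 / rho)
```

    Values inside the threshold band collapse to zero, which is what makes the Hessian penalty promote piecewise-smooth (continuous) reconstructions.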

    References

    [1] F. Balzarotti, Y. Eilers, K. C. Gwosch, A. H. Gynna, V. Westphal, F. D. Stefani, J. Elf, S. W. Hell. Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes. Science, 355, 606-612(2017).

    [2] S. J. Sahl, S. W. Hell, S. Jakobs. Fluorescence nanoscopy in cell biology. Nat. Rev. Mol. Cell Biol., 18, 685-701(2017).

    [3] L. Gao, L. Shao, C. D. Higgins, J. S. Poulton, M. Peifer, M. W. Davidson, X. Wu, B. Goldstein, E. Betzig. Noninvasive imaging beyond the diffraction limit of 3D dynamics in thickly fluorescent specimens. Cell, 151, 1370-1385(2012).

    [4] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, P. J. Keller. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nat. Methods, 10, 413-420(2013).

    [5] S. Wolf, W. Supatto, G. Debregeas, P. Mahou, S. G. Kruglik, J. M. Sintes, E. Beaurepaire, R. Candelier. Whole-brain functional imaging with two-photon light-sheet microscopy. Nat. Methods, 12, 379-380(2015).

    [6] W. Yang, R. Yuste. In vivo imaging of neural activity. Nat. Methods, 14, 349-359(2017).

    [7] S. Weisenburger, F. Tejera, J. Demas, B. Chen, J. Manley, F. T. Sparks, F. Martinez Traub, T. Daigle, H. Zeng, A. Losonczy, A. Vaziri. Volumetric Ca2+ imaging in the mouse brain using hybrid multiplexed sculpted light microscopy. Cell, 177, 1050-1066(2019).

    [8] Y. Mu, D. V. Bennett, M. Rubinov, S. Narayan, C. T. Yang, M. Tanimoto, B. D. Mensh, L. L. Looger, M. B. Ahrens. Glia accumulate evidence that actions are futile and suppress unsuccessful behavior. Cell, 178, 27-43(2019).

    [9] G. Sancataldo, L. Silvestri, A. L. Allegra Mascaro, L. Sacconi, F. S. Pavone. Advanced fluorescence microscopy for in vivo imaging of neuronal activity. Optica, 6, 758-765(2019).

    [10] M. Kunst, E. Laurell, N. Mokayes, A. Kramer, F. Kubo, A. Fernandes, D. Forster, M. Dal Maschio, H. Baier. A cellular-resolution atlas of the larval zebrafish brain. Neuron, 103, 21-38(2019).

    [11] H. Wang, Q. Zhu, L. Ding, Y. Shen, C. Yang, F. Xu, C. Shu, Y. Guo, Z. Xiong, Q. Shan, F. Jia, P. Su, Q. Yang, B. Li, Y. Cheng, X. He, X. Chen, F. Wu, J.-N. Zhou, F. Xu, H. Han, P. Lau, G. Bi. Scalable volumetric imaging for ultrahigh-speed brain mapping at synaptic resolution. Natl. Sci. Rev., 6, 982-992(2019).

    [12] X. Chen, Y. Mu, Y. Hu, A. T. Kuan, M. Nikitchenko, O. Randlett, A. B. Chen, J. P. Gavornik, H. Sompolinsky, F. Engert, M. B. Ahrens. Brain-wide organization of neuronal activity and convergent sensorimotor transformations in larval zebrafish. Neuron, 100, 876-890(2018).

    [13] L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, Q. Wen. Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio). eLife, 6, e28158(2017).

    [14] L. Novotny, B. Hecht. Principles of Nano-optics(2012).

    [15] F. Schueder, J. Lara-Gutierrez, B. J. Beliveau, S. K. Saka, H. M. Sasaki, J. B. Woehrstein, M. T. Strauss, H. Grabmayr, P. Yin, R. Jungmann. Multiplexed 3D super-resolution imaging of whole cells using spinning disk confocal microscopy and DNA-PAINT. Nat. Commun., 8, 2090(2017).

    [16] T. Wang, D. G. Ouzounov, C. Wu, N. G. Horton, B. Zhang, C. Wu, Y. Zhang, M. J. Schnitzer, C. Xu. Three-photon imaging of mouse brain structure and function through the intact skull. Nat. Methods, 15, 789-792(2018).

    [17] M. Levoy, R. Ng, A. Adams, M. Footer, M. Horowitz. Light field microscopy. ACM Trans. Graph., 25, 924-934(2006).

    [18] J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, E. H. K. Stelzer. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science, 305, 1007-1009(2004).

    [19] L. A. Royer, W. C. Lemon, R. K. Chhetri, Y. Wan, M. Coleman, E. W. Myers, P. J. Keller. Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat. Biotechnol., 34, 1267-1278(2016).

    [20] O. E. Olarte, J. Andilla, E. J. Gualda, P. Loza-Alvarez. Light-sheet microscopy: a tutorial. Adv. Opt. Photonics, 10, 111-179(2018).

    [21] L. Gao, L. Shao, B.-C. Chen, E. Betzig. 3D live fluorescence imaging of cellular dynamics using Bessel beam plane illumination microscopy. Nat. Protocols, 9, 1083-1101(2014).

    [22] T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Llado, D. E. K. Ferrier, T. Cizmar, F. J. Gunn-Moore, K. Dholakia. Light-sheet microscopy using an Airy beam. Nat. Methods, 11, 541-544(2014).

    [23] B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, I. Hammer, A. John, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Boehme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, E. Betzig. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science, 346, 439(2014).

    [24] Y. Liu, J. D. Lauderdale, P. Kner. Stripe artifact reduction for digital scanned structured illumination light sheet microscopy. Opt. Lett., 44, 2510-2513(2019).

    [25] X. S. Huang, J. C. Fan, L. J. Li, H. S. Liu, R. L. Wu, Y. Wu, L. S. Wei, H. Mao, A. Lal, P. Xi, L. Q. Tang, Y. F. Zhang, Y. M. Liu, S. Tan, L. Y. Chen. Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy. Nat. Biotechnol., 36, 451-459(2018).

    [26] H. Ikoma, M. Broxton, T. Kudo, G. Wetzstein. A convex 3D deconvolution algorithm for low photon count fluorescence imaging. Sci. Rep., 8, 11489(2018).

    [27] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers(2011).

    [28] T. Goldstein, S. Osher. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci., 2, 323-343(2009).

    [29] Y. H. Gong, I. F. Sbalzarini. Curvature filters efficiently reduce certain variational energies. IEEE Trans. Image Process., 26, 1786-1798(2017).

    [30] B. Munch, P. Trtik, F. Marone, M. Stampanoni. Stripe and ring artifact removal with combined wavelet–Fourier filtering. Opt. Express, 17, 8567-8591(2009).

    [31] W. H. Richardson. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am., 62, 55-59(1972).

    [32] L. B. Lucy. An iterative technique for the rectification of observed distributions. Astron. J., 79, 745(1974).

    [33] R. C. Gonzalez, R. E. Woods. Digital Image Processing(2018).

    [34] P. Sarder, A. Nehorai. Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Process. Mag., 23, 32-45(2006).

    [35] D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, M. Unser. DeconvolutionLab2: an open-source software for deconvolution microscopy. Methods, 115, 28-41(2017).

    [36] Q. Huynh-Thu, M. Ghanbari. The accuracy of PSNR in predicting video quality for different video scenes and frame rates. Telecommun. Syst., 49, 35-48(2012).

    [37] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13, 600-612(2004).

    [38] R. A. Fisher. Statistical Methods for Research Workers. Biological Monographs and Manuals(1954).

    [39] R. M. Power, J. Huisken. A guide to light-sheet fluorescence microscopy for multiscale imaging. Nat. Methods, 14, 360-373(2017).

    [40] P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, E. H. K. Stelzer. High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy. Nat. Methods, 4, 311-313(2007).

    [41] B. Schmid, J. Huisken. Real-time multi-view deconvolution. Bioinformatics, 31, 3398-3400(2015).

    [42] S. Preibisch, F. Amat, E. Stamataki, M. Sarov, R. H. Singer, E. Myers, P. Tomancak. Efficient Bayesian-based multiview deconvolution. Nat. Methods, 11, 645-648(2014).

    [43] M. Temerinac-Ott, O. Ronneberger, R. Nitschke, W. Driever, H. Burkhardt. Spatially-variant Lucy-Richardson deconvolution for multiview fusion of microscopical 3D images. 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 899-904(2011).
