Optica Publishing Group

3D Hessian deconvolution of thick light-sheet z-stacks for high-contrast and high-SNR volumetric imaging

Open Access

Abstract

Owing to its optical-sectioning capability and low phototoxicity, z-stacking light-sheet microscopy has become the tool of choice for in vivo imaging of the zebrafish brain. To image the zebrafish brain with a large field of view, the thickness of the Gaussian beam inevitably becomes several times greater than the system depth of field (DOF), so fluorescence distributions outside the DOF are also collected, blurring the image. In this paper, we propose a 3D deblurring method that redistributes the measured intensity of each pixel in a light-sheet image to its in situ voxels by 3D deconvolution. By introducing a Hessian regularization term to maintain the continuity of the neuron distribution and using a modified stripe-removal algorithm, the reconstructed z-stack images exhibit high contrast and a high signal-to-noise ratio. These performance characteristics facilitate subsequent processing, such as 3D neuron registration, segmentation, and recognition.

© 2020 Chinese Laser Press

1. INTRODUCTION

Fluorescence microscopy (FM) provides both in vitro and in vivo imaging of biological tissues and their functional dynamics with high spatial and temporal resolution [1–3]. Moreover, benefiting from the wide choice of fluorescent indicators of neuronal activity, FM has been widely used for in vivo brain imaging [4–9]. However, because a large number of neurons are spread across the brain region [10], brain imaging must record neuronal activities simultaneously over a large field of view (FOV) [11–13]. For example, three-dimensional imaging of the zebrafish brain should cover a volume of 800 μm × 600 μm × 300 μm with subneuron resolution.

Different from confocal [14], spinning-disk confocal [15], multiphoton [16], and light-field [17] FMs, selective plane illumination microscopy [18–20] (also called light-sheet microscopy) has recently emerged as the preferred method for volumetric imaging of the zebrafish brain. Based on z-scanning light-sheet illumination, the brain region is excited slice by slice, and the fluorescent images of these slices are assembled into a volumetric stack. This form of microscopy, originally designed for highly efficient optical sectioning, effectively enhances image contrast and reduces phototoxicity in deep tissue [20].

A high-speed beam-scanning mechanism has been combined with several different light-sheet formation configurations, including the Gaussian beam [20], Bessel beam [21], Airy beam [22], and lattice beam [23]. To illuminate an object with a light sheet of the same width, a Gaussian beam uses a smaller excitation numerical aperture (NA) than Bessel and Airy beams, as shown in Fig. 1. For instance, when the width of the light sheet is 300 μm, the NAs of the Gaussian, Bessel, and Airy beams are 0.09, 0.14, and 0.42, respectively (α=7, β=0.1, λ=0.488 μm, and n=4/3). In the presence of refractive-index anisotropy, absorption, and scattering in deep tissue, lower-NA excitation causes less perturbation, which helps maintain beam concentration and penetration over a large FOV.

Fig. 1. Configuration of the Gaussian, Bessel, and Airy beams with the same light-sheet FOV.

According to Gaussian beam propagation [14], the relationship between the light-sheet thickness ($2w_0$) and the nominal light-sheet width ($2z_0$) is given as

$$2z_0 = \frac{2\pi n w_0^2}{\lambda},$$
where $\lambda$ is the wavelength in vacuum, $n$ is the refractive index of the medium on the object side, $w_0$ is the waist radius of the Gaussian beam, and $z_0$ is the Rayleigh length of the Gaussian beam. As illustrated, imaging with a wider FOV requires a thicker Gaussian beam.
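As an illustrative numerical check of Eq. (1), the short Python sketch below reproduces the relation between sheet thickness and nominal width under the stated parameters λ = 0.488 μm and n = 4/3. The helper name `gaussian_sheet_width` is ours; the paper's own processing was done in MATLAB.

```python
import math

def gaussian_sheet_width(w0_um, wavelength_um=0.488, n=4/3):
    """Nominal light-sheet width 2*z0 for a Gaussian beam of waist radius w0,
    from 2*z0 = 2*pi*n*w0^2 / lambda (twice the Rayleigh length)."""
    z0 = math.pi * n * w0_um ** 2 / wavelength_um  # Rayleigh length z0
    return 2.0 * z0

# A sheet thickness of 2*w0 = 8.7 um (the setup in Section 3) yields a
# nominal width of roughly 325 um, consistent with the >300 um FOV.
width_um = gaussian_sheet_width(8.7 / 2)
```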

Generally, thin light-sheet z-stacking images exhibit the desired optical sectioning and z-axis spatial resolution and do not require 3D deconvolution, since the thickness of the light-sheet illumination is very close to the system depth of field (DOF) [20]. However, for zebrafish brain-wide imaging, the light sheet is thicker, and fluorescence distributions outside the DOF are also collected, blurring the image. We therefore resort to 3D deconvolution to redistribute the measured intensity across the z-stack images. Meanwhile, the nonuniform illumination of a Gaussian beam produces an additional difference between individual light-sheet images. In addition, melanogenesis tissues located at the surface of the fish head locally and significantly absorb the excitation energy, which also leads to dark stripes in the light-sheet image [24].

To address the abovementioned problems in zebrafish larva brain-wide imaging, we implement a thick Gaussian beam light-sheet microscope and expand the width of the light sheet to greater than 300 μm, which makes the thickness of the light sheet more than 8 μm, nearly 9.6-fold the system DOF. In this paper, we first derive an approximated 3D convolution forward model for thick light-sheet z-stacking imaging and propose a 3D deblurring method by introducing a Hessian regularization term to maintain the continuity of the neuron distribution. Employing this 3D reconstruction algorithm and a modified stripe-removal algorithm, the reconstructed z-stack images can exhibit high contrast and a high signal-to-noise ratio (SNR).

This paper is organized as follows. Section 2 introduces the forward model and the 3D deblurring algorithm. Section 3 presents our experimental setup and sample preparation. Sections 4 and 5 present the results of a numerical simulation and of imaging experiments on fluorescent beads and a zebrafish larva brain, respectively. Section 6 concludes the paper with a discussion.

2. THEORY

A. Forward Model

As illustrated in Fig. 2(a), a small-NA (0.04–0.06) Gaussian beam propagates along the x-axis and converges at the brain area of the zebrafish with a very long Rayleigh length. By high-speed beam scanning along the y-axis, dynamic xy-plane light-sheet illumination is generated, and a wide-field image is captured through the detection objective. By additionally translating both the light sheet and the objective step by step along the z-axis, volumetric imaging of the zebrafish brain is achieved. The relationship between the light-sheet illumination region and the detection DOF is shown in Fig. 2(b). As illustrated, the detected image collects fluorescence not only from the DOF but also from the out-of-focus planes of the detection objective.

Fig. 2. Schematic of Gaussian beam light-sheet z-stacking imaging. (a) z-stacking imaging by moving light-sheet and objective. (b) Illumination region and system DOF under the large FOV imaging.

Assume that the illumination distribution $I_{LS}(x,y,z)$ is uniform in the xy plane and symmetric with respect to the z-axis, i.e., $I_{LS}(x,y,z)=I_{LS}(z)=I_{LS}(-z)$. Assume that the system 3D point spread function (PSF) $h(x,y,z)$ is space-invariant in the brain area. The imaging forward model of thick Gaussian-beam light-sheet microscopy can then be approximated as follows (see Appendix A):

$$g(x,y,z=z_j)=\int_{w} f(p,q,r)\,I_{LS}(p,q,r-z)\,h(x-p,y-q,z-r)\,dw \approx \int_{w} f(p,q,r)\,I_{LS}(x-p,y-q,z-r)\,h(x-p,y-q,z-r)\,dw = (f*h_{LS})(x,y,z),\quad x,y,z\in\mathbb{R},$$
where $f(x,y,z)$ is the true volumetric image, $g(x,y,z=z_j)$ is the 2D blurred image when the illumination and detection are located at the $z=z_j$ plane, $h_{LS}(x,y,z)$ is the overall 3D PSF affected by the illumination, and $w=(p,q,r)$.
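To make Eq. (2) concrete, the following Python sketch evaluates the forward model numerically under our own assumptions: arrays are laid out as (z, y, x), the PSF shares the stack's grid and is centered, and circular FFT convolution stands in for the integral. `forward_model` is a hypothetical name; the paper's implementation is in MATLAB.

```python
import numpy as np

def forward_model(f, psf, i_ls_z):
    """g = f * h_LS, where h_LS(x,y,z) = h(x,y,z) * I_LS(z).
    f, psf: 3D arrays of identical shape (z, y, x), psf centered in its array;
    i_ls_z: 1D illumination profile along z with len == f.shape[0]."""
    h_ls = psf * i_ls_z[:, None, None]        # sheet profile modulates the detection PSF
    h_ls = h_ls / h_ls.sum()                  # normalize so total energy is preserved
    otf = np.fft.fftn(np.fft.ifftshift(h_ls)) # move the kernel center to the origin
    return np.real(np.fft.ifftn(np.fft.fftn(f) * otf))
```

For a point source, the output is simply the overall PSF centered on the source, which is the blurring that the deconvolution below seeks to invert.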

B. 3D Reconstruction Algorithm

Here, we define a loss function as the sum of a fidelity term and a Hessian regularization term, a construction that is also popular in other imaging modalities [25,26]. The Hessian regularization term derives from a priori knowledge of the zebrafish brain (see Section 6). The fidelity term constrains the imaging forward model via the mean square error, and the Hessian regularization term constrains the continuity of the neuron distribution. The loss function is written as

$$\min_f \ \frac{\alpha}{2}\left\|h_{LS}*f-g\right\|_2^2 + R_{\mathrm{Hessian}}(f),$$
where α is the penalty parameter of the fidelity term. The Hessian regularization is defined as
$$R_{\mathrm{Hessian}}(f)=\left\|\begin{matrix}\alpha_h f_{xx} & \alpha_h f_{xy} & \alpha_z f_{xz}\\ \alpha_h f_{yx} & \alpha_h f_{yy} & \alpha_z f_{yz}\\ \alpha_z f_{zx} & \alpha_z f_{zy} & \alpha_z f_{zz}\end{matrix}\right\|_1 = \alpha_h\|f_{xx}\|_1+\alpha_h\|f_{yy}\|_1+\alpha_z\|f_{zz}\|_1+2\alpha_h\|f_{xy}\|_1+2\alpha_z\|f_{xz}\|_1+2\alpha_z\|f_{yz}\|_1,$$
where $\alpha_h$ and $\alpha_z$ are the penalty parameters of continuity along the xy plane and the z-axis, respectively. The second-order partial derivatives of $f$ in the different directions are abbreviated as $f_i$, $i=xx,yy,zz,xy,xz,yz$.
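The regularizer above can be evaluated with simple finite differences. The sketch below is illustrative only, using the [1, −2, 1] stencil defined in Appendix B and a hypothetical helper name:

```python
import numpy as np

def hessian_reg(f, a_h, a_z):
    """Weighted L1 Hessian regularizer of a 3D stack f laid out as (z, y, x);
    cross derivatives carry weight 2, as in the regularization matrix."""
    fzz = np.diff(f, n=2, axis=0)                 # second differences along z
    fyy = np.diff(f, n=2, axis=1)                 # along y
    fxx = np.diff(f, n=2, axis=2)                 # along x
    fxy = np.diff(np.diff(f, axis=2), axis=1)     # mixed first differences
    fxz = np.diff(np.diff(f, axis=2), axis=0)
    fyz = np.diff(np.diff(f, axis=1), axis=0)
    l1 = lambda a: np.abs(a).sum()
    return (a_h * (l1(fxx) + l1(fyy)) + a_z * l1(fzz)
            + 2 * a_h * l1(fxy) + 2 * a_z * (l1(fxz) + l1(fyz)))
```

As expected of a second-order penalty, constant and linearly varying stacks incur zero cost, while isolated bright voxels (noise) are penalized, which is what drives the smoothing behavior seen in the reconstructions.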

In this paper, we use the alternating direction method of multipliers (ADMM) [27] to solve this loss function, based on its fast convergence for L1-regularized optimization problems. The key step of the algorithm is to decouple the L1 and L2 portions of the loss function [28]. Hence, we rewrite Eq. (3) by introducing auxiliary variables $d$ and using the augmented Lagrangian method (see Appendix B):

$$(f^{k+1},d^{k+1})=\min_{f,d}\Big\{\frac{\alpha}{2}\|h_{LS}*f-g\|_2^2+\varphi(d)+\frac{\rho}{2}\big[\|d_{xx}-\alpha_h f_{xx}-b_{xx}^k\|_2^2+\|d_{yy}-\alpha_h f_{yy}-b_{yy}^k\|_2^2+\|d_{zz}-\alpha_z f_{zz}-b_{zz}^k\|_2^2+\|d_{xy}-2\alpha_h f_{xy}-b_{xy}^k\|_2^2+\|d_{xz}-2\alpha_z f_{xz}-b_{xz}^k\|_2^2+\|d_{yz}-2\alpha_z f_{yz}-b_{yz}^k\|_2^2\big]\Big\},$$
$$b_i^{k+1}=b_i^k+\delta\,(c_i f_i^{k+1}-d_i^{k+1}),\quad i=xx,yy,zz,xy,xz,yz,$$
where
$$\varphi(d)=\|d_{xx}\|_1+\|d_{yy}\|_1+\|d_{zz}\|_1+\|d_{xy}\|_1+\|d_{xz}\|_1+\|d_{yz}\|_1,$$
$d_i$ and $b_i$ ($i=xx,yy,zz,xy,xz,yz$) are the auxiliary and dual variables, $c_i$ is the corresponding penalty coefficient ($\alpha_h$, $\alpha_z$, $2\alpha_h$, or $2\alpha_z$), $\delta$ is the step size, and $k$ is the iteration counter.

Consequently, the framework of the ADMM algorithm is presented in Algorithm 1.

Table 1. Evaluation Indicators of Different 3D Images in Fig. 5

Algorithm 1. Split Bregman (ADMM) Algorithm
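In outline, Algorithm 1 alternates the f-minimization, d-minimization, and dual update derived above. The Python sketch below shows only the loop structure; the helpers `f_update`, `diff2`, and `shrink` are hypothetical and supplied by the caller, and the actual implementation is in MATLAB.

```python
import numpy as np

def admm_deconv(g, n_iter, f_update, diff2, shrink, coeffs, rho, delta):
    """Split Bregman / ADMM loop for Hessian-regularized deconvolution.
    g: observed 3D stack; coeffs[i]: penalty coefficient c_i per direction i;
    f_update: closed-form FFT f-solve; diff2(f, i): second difference; shrink: S_t."""
    dirs = ("xx", "yy", "zz", "xy", "xz", "yz")
    f = g.copy()
    d = {i: np.zeros_like(g) for i in dirs}   # auxiliary variables
    b = {i: np.zeros_like(g) for i in dirs}   # dual (Bregman) variables
    for _ in range(n_iter):
        f = f_update(g, d, b)                 # f-minimization (Fourier-domain solve)
        for i in dirs:
            d[i] = shrink(coeffs[i] * diff2(f, i) + b[i], 1.0 / rho)  # d-minimization
            b[i] = b[i] + delta * (coeffs[i] * diff2(f, i) - d[i])    # dual update
    return f
```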

C. Full Procedure

In real experiments, light-sheet images are corrupted by photon shot noise and camera readout noise. We use the side window filtering (SWF) technique [29] to eliminate the mixed Poisson–Gaussian noise in the fluorescence images. In contrast to conventional local window-based filtering methods, the SWF method aligns the edges and/or corners of the window with the pixel being processed, which better preserves image boundaries during denoising.

Furthermore, nonneuron melanogenesis tissues located at the surface of the fish head can absorb the excitation energy dramatically, which results in dark stripes along the excitation path. The widely used wavelet-FFT algorithm can remove these stripe artifacts [30], but at the cost of deteriorating the image in regions devoid of stripes. Therefore, we introduce a prelocation step based on the output of the wavelet-FFT algorithm. Specifically, the positions and widths of all stripes are determined from the difference map between the original image and the wavelet-FFT-processed image. Only the stripe areas of the original image are then multiplied by an adaptive Gaussian stripe function to correct the artifacts. This modified algorithm avoids information loss outside the stripe areas and provides consistent contrast across different regions of the image.
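The prelocation step can be sketched as follows. This is a simplified illustration rather than the published implementation; the relative threshold and the assumption that stripes run along the x-axis of a (y, x) image are ours.

```python
import numpy as np

def locate_stripes(original, wavelet_fft_result, rel_thresh=0.05):
    """Flag rows touched by dark stripes, using the difference map between the
    original image and its wavelet-FFT destriped version. Returns a boolean
    mask over y; only masked rows would then receive the adaptive Gaussian
    stripe correction, leaving the rest of the image untouched."""
    diff = np.abs(original.astype(float) - wavelet_fft_result.astype(float))
    row_profile = diff.mean(axis=1)   # mean change per row (stripes run along x)
    return row_profile > rel_thresh * original.max()
```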

As illustrated in Fig. 3, the acquired volume (z-stacking images) originates from the convolution of the object with an overall 3D PSF. The whole 3D deblurring process can be divided into three steps: (1) slice-by-slice denoising; (2) 3D deconvolution; and (3) slice-by-slice stripe removal.

Fig. 3. Flowchart of 3D image deblurring processing.

3. EXPERIMENTAL SETUP AND PREPARATION

As shown in Fig. 4, we implemented a conventional single-side light-sheet microscope and customized the immersion chamber and six-axis motorized stage for in vivo imaging of zebrafish brain-wide neurons. The 488 nm laser beam (Coherent, OBIS 488LX) was expanded and scanned along the y-axis to generate the desired light-sheet illumination in the nominal object plane, with a width of 300 μm and a thickness of 8.7 μm (fitted from the actual beam distribution). In the system, the detection objective (DO; Zeiss, 40×/1.0) was mounted on a z-axis piezo stage (PI, P-721.SL2) for step-by-step motion at 1 μm intervals, while the light sheet was z-shifted in phase by Galvo z (Cambridge, 6215H) scanning. The illumination objective (Mitutoyo, 5×/0.14) has a long working distance, more than half of which lies within the chamber filled with E3 medium. A sensitive sCMOS camera (Hamamatsu, Flash 4.0 V3) was used to capture the z-stacking images. The coordinate system follows the right-hand rule.

Fig. 4. Schematic of our light-sheet microscope setup. Galvo z and Galvo y are used to scan the beam along the z-axis and y-axis, respectively. IO and DO are the illumination and detection objectives, respectively.

All larval zebrafish (elavl3:H2B-GCaMP6s, elavl3:GCaMP6s, and elavl3:EGFP) were raised in E3 embryo medium at 28.5°C according to the standard protocol. 1-phenyl-2-thiourea (0.2 mmol/L) was added to the embryo medium to inhibit melanogenesis and allow optical imaging of the brain region. In our light-sheet imaging experiments, a 5–7 day post fertilization (dpf) zebrafish was anesthetized before being embedded in a 1.5% low-melting agarose cylinder with a diameter of 1 mm and immersed in E3 medium. The brain area was aligned to the center of the detection FOV by the motorized stage, in a posture suitable for z-stacking imaging.

The 3D PSF $h(x,y,z)$ could be calibrated with sparse fluorescent beads statically embedded in the agarose cylinder, or calculated from the system parameters by using the ImageJ plugin PSF Generator. The light-sheet illumination distribution $I_{LS}(z)$ could only be captured from the system by imaging a medium mixed with high-density fluorescent beads. As shown in Eq. (2), the overall 3D PSF $h_{LS}(x,y,z)$ is given by the product of $h(x,y,z)$ and $I_{LS}(z)$. Our imaging experiments showed that the calculated PSF performs better than the calibrated PSF during the reconstruction iterations (see Section 6).

The experiments were run on an HP Z6 workstation (Intel Xeon Gold 6128 CPU @ 3.40 GHz × 24) with a GeForce RTX 2080 Ti graphics card under Ubuntu 19.04. The 3D deconvolution algorithm was implemented in MATLAB R2019b.

4. SIMULATION

In the numerical simulation, we designed a 3D image stack as the ground truth in Fig. 5(a), containing 512 (x) × 512 (y) × 64 (z) voxels and consisting of 16 line-structure objects with different spatial frequencies and line directions. Every line-structure object occupies 64×64×64 voxels, with sufficient null voxels between objects. Each x/y pixel represents a 0.1625 μm interval and each z pixel a 1 μm interval, the same parameters as those used in the actual imaging experiments (see Section 5). In addition, the 3D PSF was calculated with ImageJ software, using five distributed layers to cover the full width at half-maximum of the waist, and the illumination distribution was measured and fitted by an analytical expression. All of the above considerations were intended to make the simulation as close to the real imaging experiments as possible.

Fig. 5. Comparisons of different deconvolution methods. (a) Simulated 3D image (ground truth) in the xy, xz, and yz sections, and three colored subregions enlarged for a detailed observation at the bottom. (b) Blurred 3D image after forward 3D convolution and Gaussian and Poisson mixed noise addition. (c)–(f) Four reconstruction results using the 2D RL method, the 3D Wiener method, the 3D RL method, and our 3D method, respectively. The R value in the title represents the correlation coefficient of the 3D distribution between the ground truth and each deblurred image.

Then, the ground truth was convolved with the overall 3D PSF according to Eq. (2), followed by the addition of mixed Gaussian and Poisson noise, to generate a blurred 3D image stack [Fig. 5(b)]. These images were deconvolved with four methods: the 2D Richardson–Lucy (RL) method, the 3D RL method [31,32], the 3D Wiener method [33], and our 3D method (also called the Hessian method; see Section 2) [Figs. 5(c)–5(f)]. RL and Wiener are popular deconvolution algorithms for fluorescence microscopy images [34]. Many software packages, such as DeconvolutionLab2 [35], also include the 3D RL and 3D Wiener algorithms, but for consistency of the computing platform and stability of performance, we chose the code provided by the MATLAB toolbox. We calculated four evaluation indicators, including the peak signal-to-noise ratio (PSNR) [36], the SNR [33], the structural similarity (SSIM) index [37], and the correlation coefficient (R) [38], for the blurred image and the four deblurred 3D image stacks, as shown in Table 1. The higher the indicator value, the closer the reconstructed image is to the real situation. From this calculation, it is evident that our method recovers more correct high-frequency components with much less noise.
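For reference, the PSNR indicator can be computed as in the sketch below. This is the standard definition; the helper name is ours, with the peak taken from the ground-truth stack.

```python
import numpy as np

def psnr(ground_truth, reconstructed):
    """Peak signal-to-noise ratio in dB between two image stacks."""
    gt = ground_truth.astype(float)
    mse = np.mean((gt - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(gt.max() ** 2 / mse)
```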

5. IMAGING EXPERIMENTS

First, we used hollow fluorescent microspheres embedded in agarose, which have identical stereostructures with a diameter of 6 μm, to evaluate the reconstruction accuracy. All of the light-sheet parameters were given above (see Section 4). As shown in Fig. 6(a), we present the middle slice of a single microsphere obtained from the observed 3D image and from the 3D RL, 3D Wiener, and our Hessian reconstruction methods (76×76×26 voxels). All reconstructed 3D images in this section were normalized to the total gray-level sum of the observed 3D image. All deconvolution methods recover the correct x-position of the sphere surface, but our Hessian method provides the best contrast [Fig. 6(b)]. The peak gray level is increased and the gray level of the hollow area is lowered, which verifies that the deconvolution redistributes the measured energy. The asymmetry of the image distribution along the illumination axis (x-axis) is obvious, possibly because the illumination is refracted and scattered from left to right and more emission photons are collected from the center-right of the sphere surface.

Fig. 6. Contrast comparison of 3D deconvolution methods for imaging a 6 μm hollow fluorescence microsphere. (a) Middle xy sections of the observed 3D image and three reconstructed images from the 3D RL method, the 3D Wiener method, and our 3D method. Scale bar: 3 μm. (b) Four normalized profiles corresponding to the colored dashed lines in (a).

There are three typical zebrafish lines frequently used for in vivo brain imaging, Tg (elavl3:EGFP), Tg (elavl3:GCaMP6s), and Tg (elavl3:H2B-GCaMP6s), which have neuronal-specific expression of GFP, the calcium indicator GCaMP6s localized in the cytoplasm of neurons, and the calcium indicator GCaMP6s localized in the nuclei of neurons, respectively.

By focusing on the rhombencephalon structure of the Tg (elavl3:GCaMP6s) zebrafish larva, we compared the performance of the 2D and 3D deconvolution methods (Fig. 7). Zooming into the neuron regions of the xy, xz, and yz sections shows that 3D deconvolution clearly provides better contrast and SNR than 2D deconvolution [Fig. 7(b)]. Because 2D deconvolution overlooks 3D image information, it struggles to improve the contrast and also introduces artifacts and noise.

Fig. 7. Comparison of 2D and 3D deconvolution for imaging the rhombencephalon activity of 7 dpf Tg (elavl3:GCaMP6s) zebrafish larva, recorded by 1328(x)×1328(y)×81(z) voxels. (a) Selected xy, xz, and yz sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a) were enlarged for a comparison between 2D (2D RL method) and 3D deconvolution (our 3D method).

For the neuronal structure recorded within the Tg (elavl3:EGFP) zebrafish larva, we compared the spectral performance of different 3D deconvolution methods (Fig. 8). While all deconvolution methods improved the image contrast to some extent, only our Hessian method generated images with a more continuous cytoplasm and thus the most reliable image contrast [Fig. 8(b)]. Moreover, compared to the other two deconvolution methods, our Hessian method recovers more high-frequency signal components, as evaluated by the Fourier transforms of these images in Fig. 8(c).

Fig. 8. Comparison of different 3D deconvolution methods for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928(x)×1928(y)×81(z) voxels. (a) Selected xy, xz, and yz sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a) were enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). (c) Power spectral distributions (8×8×1 binning). Three z-stacking movies corresponding to three reconstruction results are provided in Visualization 1, Visualization 2, and Visualization 3.

For the neuronal activities recorded within the Tg (elavl3:H2B-GCaMP6s) zebrafish larva, we compared the SNRs of different 3D deconvolution methods in Fig. 9. Clearly, our Hessian method outperforms the other deconvolution methods in reducing noise [Fig. 9(c)]. We also calculated the MSNR, defined as the ratio of the peak value to the background variance, to quantitatively demonstrate the superiority of our Hessian method in noise suppression [Fig. 9(d)]; the higher the MSNR value, the better the SNR. The MSNR of the 3D RL method is the smallest because RL tends to amplify noise while sharpening object edges. Moreover, our Hessian method is significantly superior to the other methods in restoring z-axis continuity [Fig. 9(b)].
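The MSNR used here, the ratio of the peak value to the background variance, can be expressed as a small helper. This is a sketch with hypothetical names; the choice of peak index and background samples is left to the caller.

```python
import numpy as np

def msnr(profile, peak_index, background_indices):
    """Modified SNR of a fluorescence line profile: the peak value divided by
    the variance of the designated background samples."""
    bg = np.asarray(profile, dtype=float)[list(background_indices)]
    return float(profile[peak_index]) / np.var(bg)
```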

Fig. 9. SNR comparison of the 3D deconvolution methods for imaging the mesencephalon activity of 7 dpf Tg (elavl3:H2B-GCaMP6s) zebrafish larva, recorded by 1448(x)×1448(y)×81(z) voxels. (a) The 32nd xy section of the raw image (observed image) and our image (our 3D method). The corresponding cyan subregion of the xy section on the left was enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). Scale bar: 50 μm. (b) The 724th xz section of the observed image (3D Wiener method, 3D RL method, and our 3D method), where the magenta and yellow subregions were enlarged for a clear observation. Scale bar: 50 μm. (c) Normalized distribution of the yellow profiles labeled in (a), where the blue bars in (c) mark all the regions of the suspected neuron boundary by manual identification. (d) Average modified signal-to-noise ratio (MSNR) of the fluorescence peaks along lines across the neuron from images reconstructed with the 3D Wiener method, the 3D RL method, and our 3D method (n=9). Centerline: medians. Limits: 75% and 25%. Whiskers: maximum and minimum. Three z-stacking movies corresponding to three reconstruction results are provided in Visualization 4, Visualization 5, and Visualization 6.

Additionally, the result of our stripe-removal algorithm is given in Fig. 10. As shown in Figs. 10(a) and 10(b), the stripe artifacts are suppressed after processing, and the contrasts inside and outside the stripe area become closer. In addition, the processed image data outside the stripe area are nearly identical to the original data there, as shown in Fig. 10(c), so the nonstripe area is preserved without any deterioration.

Fig. 10. Destriping results for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928(x)×1928(y) pixels. (a) Image after using the destriping algorithm. Scale bar: 50 μm. (b) The corresponding subregions before and after destriping in (a) were enlarged for a comparison. (c) Two normalized profiles corresponding to the colored dashed lines in (b).

6. CONCLUSION AND DISCUSSION

For zebrafish larva brain-wide imaging, light-sheet illumination is required to cover a large FOV and possess concentrated energy in the focal plane. For this purpose, we implemented a thick Gaussian beam light-sheet microscope and expanded the width of the light sheet to more than 300 μm, making the thickness of the light sheet more than 8 μm, nearly 9.6-fold the system DOF. Our 3D deblurring method has been proposed to redistribute the measured intensity of each pixel in the light-sheet image to in situ voxels by 3D deconvolution. By introducing a Hessian regularization term to maintain the continuity of the neuron distribution and using a modified stripe-removal algorithm, the reconstructed z-stack images exhibit high contrast and a high SNR. These performance characteristics can facilitate subsequent processing, such as 3D neuron segmentation and recognition.

Comparing Fig. 5(c) with Fig. 5(f) in the simulation section verifies that 3D deconvolution is more effective than 2D deconvolution for thick light-sheet imaging. In addition, the comparison among the different 3D deconvolution methods, especially in the reconstructed movies, validates our 3D deblurring pipeline (see Fig. 3), including the stripe-removal postprocessing.

Here, we present some discussions concerning the deblurring methods and imaging settings.

  • 1. Owing to the characteristics of the objective lens and microscope system, fluorescence images are often deteriorated by out-of-focus fluorescence, tissue scattering, noise, etc. Deconvolution-based image reconstruction can effectively improve the resolution, contrast, and SNR of practical imaging [34]. Because different imaging modalities often have different imaging models, deconvolution algorithms still face challenges in achieving better performance. The Hessian regularization term is inherently a good prior for biological microscopic structures and has been widely applied in structured illumination microscopy [25] and wide-field fluorescence microscopy [26], on both 2D and 3D image stacks. Therefore, we apply the Hessian regularization term to the 3D deconvolution of light-sheet z-stacks of the zebrafish brain, improving not only the image contrast but also the SNR.
  • 2. Multiview light-sheet imaging can be achieved by rotating the sample between acquisitions in a traditional single-view light-sheet system [39], and high isotropic resolution can be obtained by multiview fusion or deconvolution algorithms [40]. Our Hessian deconvolution algorithm can also be applied to multiview imaging, especially for large-FOV imaging. Meanwhile, considering the high memory cost of multiview processing, the 3D image data need to be split into blocks of appropriate size; such a strategy has already been used in RL multiview deconvolution [41–43].
  • 3. The required deconvolution time strongly depends on the available hardware. For example, with the GPU (11 GB RAM), deconvolution of the simulation (512×512×64 voxels) shown in Fig. 5 required 0.22 min. With the CPU (93 GB RAM), deconvolution of the zebrafish experiment (2048×2048×100 voxels) shown in Fig. 8 required about 195 min.
  • 4. Because the original image sampling, 0.1625 μm laterally and 1 μm axially, is much finer than the cytoplasm (elavl3:EGFP, elavl3:GCaMP6s) or neuron (elavl3:H2B-GCaMP6s), it is appropriate in principle to use the Hessian regularization term to constrain the reconstructed distribution and suppress the noise, as shown in all of the reconstruction results (Figs. 6–9).
  • 5. The reconstruction result of our Hessian method depends on the parameters α, αh, and αz, which should be adjusted according to the structural features of the fluorescence images. Taking the zebrafish brain images as an example, the contribution of the Hessian term should be increased for images with more piecewise-smooth structure and decreased for hollow-ring-network images, where the continuity of the image distribution holds only on a small scale.
  • 6. It has been demonstrated that the 3D PSF calculated from the model and system parameters is more effective than the 3D PSF calibrated from a fluorescent bead in frozen agarose. The main reason might be that the aberrations and diffusion of the calibrated PSF, which depend on the position of the bead in the frozen agarose, are quite different from those experienced by the zebrafish neurons. In this interpretation, the Hessian term compensates better for random aberrations and diffusion when the PSF does not embed any unsubstantiated a priori knowledge.
  • 7. The z-stacking interval, light-sheet thickness, and DOF are three important parameters in our thick light-sheet z-stacking imaging system and in the 3D deblurring processing. The first two are determined by the imaging requirements, and the last is adjustable through the NA of the objective. In practice, if the ratio of the light-sheet thickness to the DOF becomes too large, the light-sheet images become overblurred; if the ratio becomes too small, the 3D PSF is compressed into a 2D distribution. In our experiments, the ratio was set to 9.6 to obtain a better 3D reconstruction.
  • 8. The z-stretching operation in Figs. 7–9 was employed only to maintain spatial proportions for observation in the xz and yz sections. All of the quantitative processing and evaluation steps of the 3D reconstruction still use the 3D discrete data format.

APPENDIX A

The formation of light-sheet images is governed by the 3D PSF and the illumination distribution. A generalized 3D light-sheet imaging forward model is presented in this section.

The coordinate system of our setup is shown in Fig. 4. We assume that the observed biological sample $f(x,y,z)$ is located in a rectangular region $V$ ($l_x\times l_y\times l_z$). The light sheet $I_{LS}(x,y,z)$ is formed by rapidly scanning the Gaussian beam along the y-axis, so that

$$I_{LS}(x,y,z)=I_{LS}(x,y-q,z),\quad \forall q\in\mathbb{R}.$$
In addition, the light sheet is symmetric along the z-axis, i.e.,
$$I_{LS}(x,y,z)=I_{LS}(x,y,-z).$$
Let the region $V$ satisfy $V=\bigcup_{i=1}^{N_i}A_i$ and $V=\bigcup_{j=1}^{N_j}B_j$, where the 3D PSF is invariant within each subregion $A_i$ and the light-sheet illumination distribution is uniform within each subregion $B_j$, i.e.,
$$I_{LS}(x,y,z)=I_{LS}(x-p,y,z),\quad \forall p\in\mathbb{R}.$$
In this case, the light-sheet images are obtained by the model
$$g^{A_i}(x,y,z=z_j)=\int_{w} f(p,q,r)\,I_{LS}^{B_j}(p,q,r-z)\,h^{A_i}(x-p,y-q,z-r)\,dw=\int_{w} f(p,q,r)\,I_{LS}^{B_j}(x-p,y-q,z-r)\,h^{A_i}(x-p,y-q,z-r)\,dw=\int_{w} f(p,q,r)\,h_{LS}^{A_iB_j}(x-p,y-q,z-r)\,dw=(f*h_{LS}^{A_iB_j})(x,y,z),\quad x,y,z\in\mathbb{R}.$$
In our system, the illumination distribution is considered uniform and the 3D PSF invariant over the FOV of the zebrafish brain. Therefore, the light-sheet image is the convolution of the observed biological sample with the overall PSF.

APPENDIX B

In this section, the 3D deconvolution with Hessian regularization is solved with the ADMM algorithm. We rewrite Eq. (3) as

$$\min_{f,d}\Big\{\frac{\alpha}{2}\|h_{LS}*f-g\|_2^2+\varphi(d)\Big\},$$
where
$$\varphi(d)=\|d_{xx}\|_1+\|d_{yy}\|_1+\|d_{zz}\|_1+\|d_{xy}\|_1+\|d_{xz}\|_1+\|d_{yz}\|_1,$$
subject to
$$d_{xx}=\alpha_h f_{xx},\quad d_{yy}=\alpha_h f_{yy},\quad d_{zz}=\alpha_z f_{zz},$$
$$d_{xy}=2\alpha_h f_{xy},\quad d_{xz}=2\alpha_z f_{xz},\quad d_{yz}=2\alpha_z f_{yz}.$$
According to the augmented Lagrangian and the method of multipliers, we compute the following update steps in each ADMM iteration:
$$(f^{k+1},d^{k+1})=\min_{f,d}\Big\{\frac{\alpha}{2}\|h_{LS}*f-g\|_2^2+\varphi(d)+\frac{\rho}{2}\big[\|d_{xx}-\alpha_h f_{xx}-b_{xx}^k\|_2^2+\|d_{yy}-\alpha_h f_{yy}-b_{yy}^k\|_2^2+\|d_{zz}-\alpha_z f_{zz}-b_{zz}^k\|_2^2+\|d_{xy}-2\alpha_h f_{xy}-b_{xy}^k\|_2^2+\|d_{xz}-2\alpha_z f_{xz}-b_{xz}^k\|_2^2+\|d_{yz}-2\alpha_z f_{yz}-b_{yz}^k\|_2^2\big]\Big\},$$
$$b_i^{k+1}=b_i^k+\delta\,(c_i f_i^{k+1}-d_i^{k+1}),\quad i=xx,yy,zz,xy,xz,yz,$$
where $c_i$ is the penalty coefficient of direction $i$ ($\alpha_h$, $\alpha_z$, $2\alpha_h$, or $2\alpha_z$).
The joint optimization is solved by alternating minimization over each variable. The f-minimization step is expressed as
$$f^{k+1}=\min_f\Big\{\frac{\alpha}{2}\|h_{LS}*f-g\|_2^2+\frac{\rho}{2}\big[\|d_{xx}^k-\alpha_h f_{xx}-b_{xx}^k\|_2^2+\|d_{yy}^k-\alpha_h f_{yy}-b_{yy}^k\|_2^2+\|d_{zz}^k-\alpha_z f_{zz}-b_{zz}^k\|_2^2+\|d_{xy}^k-2\alpha_h f_{xy}-b_{xy}^k\|_2^2+\|d_{xz}^k-2\alpha_z f_{xz}-b_{xz}^k\|_2^2+\|d_{yz}^k-2\alpha_z f_{yz}-b_{yz}^k\|_2^2\big]\Big\},$$
and the d-minimization step is expressed as
$$d_{xx}^{k+1}=\min_{d_{xx}}\Big\{\|d_{xx}\|_1+\frac{\rho}{2}\|d_{xx}-\alpha_h f_{xx}^{k+1}-b_{xx}^k\|_2^2\Big\},\quad d_{yy}^{k+1}=\min_{d_{yy}}\Big\{\|d_{yy}\|_1+\frac{\rho}{2}\|d_{yy}-\alpha_h f_{yy}^{k+1}-b_{yy}^k\|_2^2\Big\},$$
$$d_{zz}^{k+1}=\min_{d_{zz}}\Big\{\|d_{zz}\|_1+\frac{\rho}{2}\|d_{zz}-\alpha_z f_{zz}^{k+1}-b_{zz}^k\|_2^2\Big\},\quad d_{xy}^{k+1}=\min_{d_{xy}}\Big\{\|d_{xy}\|_1+\frac{\rho}{2}\|d_{xy}-2\alpha_h f_{xy}^{k+1}-b_{xy}^k\|_2^2\Big\},$$
$$d_{xz}^{k+1}=\min_{d_{xz}}\Big\{\|d_{xz}\|_1+\frac{\rho}{2}\|d_{xz}-2\alpha_z f_{xz}^{k+1}-b_{xz}^k\|_2^2\Big\},\quad d_{yz}^{k+1}=\min_{d_{yz}}\Big\{\|d_{yz}\|_1+\frac{\rho}{2}\|d_{yz}-2\alpha_z f_{yz}^{k+1}-b_{yz}^k\|_2^2\Big\},$$
and the dual variables bi,i=xx,yy,zz,xy,xz,yz, are solved by
bxxk+1=bxxk+δ(αhfxxk+1dxxk+1),byyk+1=byyk+δ(αhfyyk+1dyyk+1),bzzk+1=bzzk+δ(αzfzzk+1dzzk+1),bxyk+1=bxyk+δ(2αhfxyk+1dxyk+1),bxzk+1=bxzk+δ(2αzfxzk+1dxzk+1),byzk+1=byzk+δ(2αzfyzk+1dyzk+1),
where δ is a step size and k is the iteration counter.
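The alternating structure of the update steps above can be sketched as a single ADMM sweep. This is a schematic outline, not the authors' implementation: the callables `f_step` and `d_step` and the weighted derivative operators `c` are hypothetical stand-ins for the closed-form solvers derived below, and the toy illustration uses placeholder solvers on a 1D signal:

```python
import numpy as np

def admm_sweep(d, b, c, f_step, d_step, delta):
    """One ADMM sweep: f-minimization, d-minimization, dual ascent.

    d, b   : dicts of auxiliary and dual variables, keyed by the
             derivative channels 'xx', 'yy', 'zz', 'xy', 'xz', 'yz'
    c      : dict of weighted derivative operators c_i(f) = c_i * f_i
    f_step : callable solving the f-minimization given (d, b)
    d_step : callable solving each d-minimization (soft threshold)
    delta  : dual step size
    """
    f = f_step(d, b)                        # f-minimization
    for i in d:                             # per-channel d-minimization
        fi = c[i](f)                        # c_i * f_i^{k+1}
        d[i] = d_step(fi + b[i])
        b[i] = b[i] + delta * (fi - d[i])   # dual update
    return f, d, b

# Toy illustration with stand-in solvers on a tiny 1D signal
f_step = lambda d, b: np.zeros(4)                             # placeholder
d_step = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.5, 0.0)
d = {"xx": np.zeros(4)}
b = {"xx": np.ones(4)}
c = {"xx": lambda f: f}                                       # identity op
f, d, b = admm_sweep(d, b, c, f_step, d_step, delta=1.0)
```

In practice the sweep is repeated until the residuals $c_i f_i^{k+1}-d_i^{k+1}$ fall below a tolerance or an iteration budget is reached.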

The f-minimization is a minimum-Euclidean-norm (least-squares) problem, and f is solved in closed form in the frequency domain by

\[ f^{k+1}=\mathrm{ifft}\left\{\frac{\frac{\alpha}{\rho}\,\overline{\mathrm{fft}(h_{\mathrm{LS}})}\,\mathrm{fft}(g)+\mathrm{fft}(L^{k})}{\frac{\alpha}{\rho}\left|\mathrm{fft}(h_{\mathrm{LS}})\right|^2+\left|\mathrm{fft}(C)\right|^2}\right\}, \]

where

\[
\begin{aligned}
C={}&\alpha_h\nabla_{xx}^2+\alpha_h\nabla_{yy}^2+\alpha_z\nabla_{zz}^2+2\alpha_h\nabla_{xy}^2+2\alpha_z\nabla_{xz}^2+2\alpha_z\nabla_{yz}^2, \\
L^{k}={}&\alpha_h(\nabla_{xx}^2)^{T}(d_{xx}^{k}-b_{xx}^{k})+\alpha_h(\nabla_{yy}^2)^{T}(d_{yy}^{k}-b_{yy}^{k})+\alpha_z(\nabla_{zz}^2)^{T}(d_{zz}^{k}-b_{zz}^{k}) \\
&+2\alpha_h(\nabla_{xy}^2)^{T}(d_{xy}^{k}-b_{xy}^{k})+2\alpha_z(\nabla_{xz}^2)^{T}(d_{xz}^{k}-b_{xz}^{k})+2\alpha_z(\nabla_{yz}^2)^{T}(d_{yz}^{k}-b_{yz}^{k}),
\end{aligned}
\]

$\mathrm{fft}$ is the fast Fourier transform, and $\mathrm{ifft}$ is the inverse fast Fourier transform. $\nabla_{xx}^2$ is the second-order partial derivative operator in the x direction, i.e., $\nabla_{xx}^2=[1,-2,1]$, and $\nabla_{yy}^2$, $\nabla_{zz}^2$, $\nabla_{xy}^2$, $\nabla_{xz}^2$, and $\nabla_{yz}^2$ are defined similarly.
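The closed-form frequency-domain update above can be sketched directly in NumPy. This is a minimal sketch assuming the transforms of the PSF and of the combined operator $C$ are precomputed; the `eps` guard against a vanishing denominator is our addition, and the degenerate identity-PSF example at the end is only a consistency check:

```python
import numpy as np

def f_update(g, H, C_hat, L_k, alpha, rho, eps=1e-12):
    """Frequency-domain f-step (sketch of the closed-form update).

    g     : observed 3D stack
    H     : fft of the PSF h_LS (same shape as g)
    C_hat : fft of the combined second-derivative operator C
    L_k   : real-space term L^k assembled from the current d and b
    eps   : small constant guarding against a vanishing denominator
    """
    num = (alpha / rho) * np.conj(H) * np.fft.fftn(g) + np.fft.fftn(L_k)
    den = (alpha / rho) * np.abs(H) ** 2 + np.abs(C_hat) ** 2 + eps
    return np.real(np.fft.ifftn(num / den))

# Degenerate check: with H = 1 (identity PSF), |fft(C)| = 1, L^k = 0,
# and alpha = rho, the update reduces to f = g / 2.
g = np.random.default_rng(0).random((8, 8, 8))
ones = np.ones_like(g, dtype=complex)
f_new = f_update(g, ones, ones, np.zeros_like(g), alpha=1.0, rho=1.0)
```

Since the denominator is strictly positive wherever either the PSF or the regularizer has spectral support, the update is a stable Wiener-like filtering of the data term plus the Bregman feedback term $L^{k}$.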

The d-minimization admits a closed-form solution via subdifferential calculus. The auxiliary variables $d_i$, $i=xx,yy,zz,xy,xz,yz$, are given by the soft-thresholding (shrinkage) operator $S_{1/\rho}$:

\[
d_{xx}^{k+1}=\begin{cases}
\alpha_h f_{xx}^{k+1}+b_{xx}^{k}-\dfrac{1}{\rho}, & \alpha_h f_{xx}^{k+1}+b_{xx}^{k}\in\left(\dfrac{1}{\rho},+\infty\right) \\[4pt]
0, & \alpha_h f_{xx}^{k+1}+b_{xx}^{k}\in\left(-\dfrac{1}{\rho},\dfrac{1}{\rho}\right) \\[4pt]
\alpha_h f_{xx}^{k+1}+b_{xx}^{k}+\dfrac{1}{\rho}, & \alpha_h f_{xx}^{k+1}+b_{xx}^{k}\in\left(-\infty,-\dfrac{1}{\rho}\right)
\end{cases}
= S_{1/\rho}\left(\alpha_h f_{xx}^{k+1}+b_{xx}^{k}\right),
\]
\[
\begin{aligned}
d_{yy}^{k+1}&=S_{1/\rho}\left(\alpha_h f_{yy}^{k+1}+b_{yy}^{k}\right), &
d_{zz}^{k+1}&=S_{1/\rho}\left(\alpha_z f_{zz}^{k+1}+b_{zz}^{k}\right), \\
d_{xy}^{k+1}&=S_{1/\rho}\left(2\alpha_h f_{xy}^{k+1}+b_{xy}^{k}\right), &
d_{xz}^{k+1}&=S_{1/\rho}\left(2\alpha_z f_{xz}^{k+1}+b_{xz}^{k}\right), \\
d_{yz}^{k+1}&=S_{1/\rho}\left(2\alpha_z f_{yz}^{k+1}+b_{yz}^{k}\right).
\end{aligned}
\]
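The shrinkage operator $S_t$ used in these updates is a one-liner in NumPy; the variable names in the commented usage line are illustrative, not the authors' code:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage S_t(v): shifts |v| toward zero by t,
    clamping values within (-t, t) to exactly zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# e.g. the d_xx update would read (illustrative names):
#   d_xx = soft_threshold(alpha_h * f_xx + b_xx, 1.0 / rho)
v = np.array([2.0, -0.5, 0.2, -3.0])
d = soft_threshold(v, 1.0)
```

The dead zone $(-1/\rho, 1/\rho)$ is what sparsifies the Hessian channels: small second derivatives (noise) are set to zero, while strong edges are kept with a constant shrinkage.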

Funding

National Natural Science Foundation of China (21927813, 31570839, 31771147, 61520106004, 61671311, 81827809, 917502003, 91854112); Natural Science Foundation of Beijing Municipality (5194026, L172003); National Major Science and Technology Projects of China (2016YFA0500400).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. F. Balzarotti, Y. Eilers, K. C. Gwosch, A. H. Gynna, V. Westphal, F. D. Stefani, J. Elf, and S. W. Hell, “Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes,” Science 355, 606–612 (2017).

2. S. J. Sahl, S. W. Hell, and S. Jakobs, “Fluorescence nanoscopy in cell biology,” Nat. Rev. Mol. Cell Biol. 18, 685–701 (2017).

3. L. Gao, L. Shao, C. D. Higgins, J. S. Poulton, M. Peifer, M. W. Davidson, X. Wu, B. Goldstein, and E. Betzig, “Noninvasive imaging beyond the diffraction limit of 3D dynamics in thickly fluorescent specimens,” Cell 151, 1370–1385 (2012).

4. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10, 413–420 (2013).

5. S. Wolf, W. Supatto, G. Debregeas, P. Mahou, S. G. Kruglik, J. M. Sintes, E. Beaurepaire, and R. Candelier, “Whole-brain functional imaging with two-photon light-sheet microscopy,” Nat. Methods 12, 379–380 (2015).

6. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14, 349–359 (2017).

7. S. Weisenburger, F. Tejera, J. Demas, B. Chen, J. Manley, F. T. Sparks, F. Martinez Traub, T. Daigle, H. Zeng, A. Losonczy, and A. Vaziri, “Volumetric Ca2+ imaging in the mouse brain using hybrid multiplexed sculpted light microscopy,” Cell 177, 1050–1066 (2019).

8. Y. Mu, D. V. Bennett, M. Rubinov, S. Narayan, C. T. Yang, M. Tanimoto, B. D. Mensh, L. L. Looger, and M. B. Ahrens, “Glia accumulate evidence that actions are futile and suppress unsuccessful behavior,” Cell 178, 27–43 (2019).

9. G. Sancataldo, L. Silvestri, A. L. Allegra Mascaro, L. Sacconi, and F. S. Pavone, “Advanced fluorescence microscopy for in vivo imaging of neuronal activity,” Optica 6, 758–765 (2019).

10. M. Kunst, E. Laurell, N. Mokayes, A. Kramer, F. Kubo, A. Fernandes, D. Forster, M. Dal Maschio, and H. Baier, “A cellular-resolution atlas of the larval zebrafish brain,” Neuron 103, 21–38 (2019).

11. H. Wang, Q. Zhu, L. Ding, Y. Shen, C. Yang, F. Xu, C. Shu, Y. Guo, Z. Xiong, Q. Shan, F. Jia, P. Su, Q. Yang, B. Li, Y. Cheng, X. He, X. Chen, F. Wu, J.-N. Zhou, F. Xu, H. Han, P. Lau, and G. Bi, “Scalable volumetric imaging for ultrahigh-speed brain mapping at synaptic resolution,” Natl. Sci. Rev. 6, 982–992 (2019).

12. X. Chen, Y. Mu, Y. Hu, A. T. Kuan, M. Nikitchenko, O. Randlett, A. B. Chen, J. P. Gavornik, H. Sompolinsky, F. Engert, and M. B. Ahrens, “Brain-wide organization of neuronal activity and convergent sensorimotor transformations in larval zebrafish,” Neuron 100, 876–890 (2018).

13. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017).

14. L. Novotny and B. Hecht, Principles of Nano-optics, 2nd ed. (Cambridge University, 2012).

15. F. Schueder, J. Lara-Gutierrez, B. J. Beliveau, S. K. Saka, H. M. Sasaki, J. B. Woehrstein, M. T. Strauss, H. Grabmayr, P. Yin, and R. Jungmann, “Multiplexed 3D super-resolution imaging of whole cells using spinning disk confocal microscopy and DNA-PAINT,” Nat. Commun. 8, 2090 (2017).

16. T. Wang, D. G. Ouzounov, C. Wu, N. G. Horton, B. Zhang, C. Wu, Y. Zhang, M. J. Schnitzer, and C. Xu, “Three-photon imaging of mouse brain structure and function through the intact skull,” Nat. Methods 15, 789–792 (2018).

17. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006).

18. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305, 1007–1009 (2004).

19. L. A. Royer, W. C. Lemon, R. K. Chhetri, Y. Wan, M. Coleman, E. W. Myers, and P. J. Keller, “Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms,” Nat. Biotechnol. 34, 1267–1278 (2016).

20. O. E. Olarte, J. Andilla, E. J. Gualda, and P. Loza-Alvarez, “Light-sheet microscopy: a tutorial,” Adv. Opt. Photonics 10, 111–179 (2018).

21. L. Gao, L. Shao, B.-C. Chen, and E. Betzig, “3D live fluorescence imaging of cellular dynamics using Bessel beam plane illumination microscopy,” Nat. Protocols 9, 1083–1101 (2014).

22. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Llado, D. E. K. Ferrier, T. Cizmar, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11, 541–544 (2014).

23. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, I. Hammer, A. John, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Boehme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science 346, 439 (2014).

24. Y. Liu, J. D. Lauderdale, and P. Kner, “Stripe artifact reduction for digital scanned structured illumination light sheet microscopy,” Opt. Lett. 44, 2510–2513 (2019).

25. X. S. Huang, J. C. Fan, L. J. Li, H. S. Liu, R. L. Wu, Y. Wu, L. S. Wei, H. Mao, A. Lal, P. Xi, L. Q. Tang, Y. F. Zhang, Y. M. Liu, S. Tan, and L. Y. Chen, “Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy,” Nat. Biotechnol. 36, 451–459 (2018).

26. H. Ikoma, M. Broxton, T. Kudo, and G. Wetzstein, “A convex 3D deconvolution algorithm for low photon count fluorescence imaging,” Sci. Rep. 8, 11489 (2018).

27. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers (Now Publishers Inc., 2011).

28. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM J. Imaging Sci. 2, 323–343 (2009).

29. Y. H. Gong and I. F. Sbalzarini, “Curvature filters efficiently reduce certain variational energies,” IEEE Trans. Image Process. 26, 1786–1798 (2017).

30. B. Munch, P. Trtik, F. Marone, and M. Stampanoni, “Stripe and ring artifact removal with combined wavelet–Fourier filtering,” Opt. Express 17, 8567–8591 (2009).

31. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).

32. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745 (1974).

33. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson, 2018).

34. P. Sarder and A. Nehorai, “Deconvolution methods for 3-D fluorescence microscopy images,” IEEE Signal Process. Mag. 23, 32–45 (2006).

35. D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, and M. Unser, “DeconvolutionLab2: an open-source software for deconvolution microscopy,” Methods 115, 28–41 (2017).

36. Q. Huynh-Thu and M. Ghanbari, “The accuracy of PSNR in predicting video quality for different video scenes and frame rates,” Telecommun. Syst. 49, 35–48 (2012).

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

38. R. A. Fisher, “Statistical Methods for Research Workers,” in Biological Monographs and Manuals, 12th ed. (Oliver and Boyd, 1954).

39. R. M. Power and J. Huisken, “A guide to light-sheet fluorescence microscopy for multiscale imaging,” Nat. Methods 14, 360–373 (2017).

40. P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, and E. H. K. Stelzer, “High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy,” Nat. Methods 4, 311–313 (2007).

41. B. Schmid and J. Huisken, “Real-time multi-view deconvolution,” Bioinformatics 31, 3398–3400 (2015).

42. S. Preibisch, F. Amat, E. Stamataki, M. Sarov, R. H. Singer, E. Myers, and P. Tomancak, “Efficient Bayesian-based multiview deconvolution,” Nat. Methods 11, 645–648 (2014).

43. M. Temerinac-Ott, O. Ronneberger, R. Nitschke, W. Driever, and H. Burkhardt, “Spatially-variant Lucy-Richardson deconvolution for multiview fusion of microscopical 3D images,” in 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro (2011), pp. 899–904.

Supplementary Material (6)

Visualization 1. Sample: the rhombencephalon structure of Tg(elavl3:EGFP) zebrafish larva (6 dpf). Image size: 1928 (x) × 1928 (y) × 81 (z) pixels. Pixel size: 0.1625 × 0.1625 × 1 µm³. Scale bar: 50 µm. Video frame rate: 10 fps. Total frames: 81.
Visualization 2. Same sample and parameters as Visualization 1.
Visualization 3. Same sample and parameters as Visualization 1.
Visualization 4. Sample: the mesencephalon activity of Tg(elavl3:H2B-GCaMP6s) zebrafish larva (7 dpf). Image size: 1928 (x) × 1928 (y) × 81 (z) pixels. Pixel size: 0.1625 × 0.1625 × 1 µm³. Scale bar: 50 µm. Video frame rate: 10 fps. Total frames: 81.
Visualization 5. Same sample and parameters as Visualization 4.
Visualization 6. Same sample and parameters as Visualization 4.



Figures (10)

Fig. 1. Configuration of the Gaussian, Bessel, and Airy beams with the same light-sheet FOV.
Fig. 2. Schematic of Gaussian beam light-sheet z-stacking imaging. (a) z-stacking imaging by moving light-sheet and objective. (b) Illumination region and system DOF under the large FOV imaging.
Fig. 3. Flowchart of 3D image deblurring processing.
Fig. 4. Schematic of our light-sheet microscope setup. Galvo z and Galvo y are used to scan the beam along the z-axis and y-axis, respectively. IO and DO are the illumination and detection objectives, respectively.
Fig. 5. Comparisons of different deconvolution methods. (a) Simulated 3D image (ground truth) in the xy, xz, and yz sections, and three colored subregions enlarged for a detailed observation at the bottom. (b) Blurred 3D image after forward 3D convolution and Gaussian and Poisson mixed noise addition. (c)–(f) Four reconstruction results using the 2D RL method, the 3D Wiener method, the 3D RL method, and our 3D method, respectively. The R value in the title represents the correlation coefficient of the 3D distribution between the ground truth and each deblurred image.
Fig. 6. Contrast comparison of 3D deconvolution methods for imaging a 6 μm hollow fluorescence microsphere. (a) Middle xy sections of the observed 3D image and three reconstructed images from the 3D RL method, the 3D Wiener method, and our 3D method. Scale bar: 3 μm. (b) Four normalized profiles corresponding to the colored dashed lines in (a).
Fig. 7. Comparison of 2D and 3D deconvolution for imaging the rhombencephalon activity of 7 dpf Tg (elavl3:GCaMP6s) zebrafish larva, recorded by 1328 (x) × 1328 (y) × 81 (z) voxels. (a) Selected xy, xz, and yz sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a) were enlarged for a comparison between 2D (2D RL method) and 3D deconvolution (our 3D method).
Fig. 8. Comparison of different 3D deconvolution methods for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928 (x) × 1928 (y) × 81 (z) voxels. (a) Selected xy, xz, and yz sections of the raw image (observed image) and our image (our 3D method). Scale bar: 50 μm. (b) The corresponding cyan, yellow, and magenta subregions in (a) were enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). (c) Power spectral distributions (8×8×1 binning). Three z-stacking movies corresponding to three reconstruction results are provided in Visualization 1, Visualization 2, and Visualization 3.
Fig. 9. SNR comparison of the 3D deconvolution methods for imaging the mesencephalon activity of 7 dpf Tg (elavl3:H2B-GCaMP6s) zebrafish larva, recorded by 1448 (x) × 1448 (y) × 81 (z) voxels. (a) The 32nd xy section of the raw image (observed image) and our image (our 3D method). The corresponding cyan subregion of the xy section on the left was enlarged for a comparison of three reconstruction results (3D Wiener method, 3D RL method, and our 3D method). Scale bar: 50 μm. (b) The 724th xz section of the observed image (3D Wiener method, 3D RL method, and our 3D method), where the magenta and yellow subregions were enlarged for a clear observation. Scale bar: 50 μm. (c) Normalized distribution of the yellow profiles labeled in (a), where the blue bars in (c) mark all the regions of the suspected neuron boundary by manual identification. (d) Average modified signal-to-noise ratio (MSNR) of the fluorescence peaks along lines across the neuron from images reconstructed with the 3D Wiener method, the 3D RL method, and our 3D method (n=9). Centerline: medians. Limits: 75% and 25%. Whiskers: maximum and minimum. Three z-stacking movies corresponding to three reconstruction results are provided in Visualization 4, Visualization 5, and Visualization 6.
Fig. 10. Destriping results for imaging the rhombencephalon structure of 6 dpf Tg (elavl3:EGFP) zebrafish larva, recorded by 1928 (x) × 1928 (y) pixels. (a) Image after using the destriping algorithm. Scale bar: 50 μm. (b) The corresponding subregions before and after destriping in (a) were enlarged for a comparison. (c) Two normalized profiles corresponding to the colored dashed lines in (b).

Tables (2)

Table 1. Evaluation Indicators of Different 3D Images in Fig. 5

Algorithm 1. Split Bregman (ADMM) Algorithm
