
Accurate 3D morphological computational model reconstruction of suspended cells imaged through stratified media by the precise depth-varying point spread function method

Open Access

Abstract

Accurate three-dimensional (3D) morphological computational models of cells are important in a number of biological studies. This study proposes a precise depth-varying point spread function (PDV-PSF) method for reconstructing 3D computational models of suspended cells from two-dimensional (2D) confocal image stacks. Our approach deblurs the 2D images in the horizontal plane and corrects the deformation in the vertical direction to overcome the refractive index mismatch problem caused by imaging suspended cells through stratified media. Standard fluorescent polystyrene spheres and Jurkat T-lymphocytes are selected to evaluate the validity and accuracy of the PDV-PSF method. Qualitative and quantitative results demonstrate that our approach has superior performance in the reconstruction of 3D morphological computational models of suspended cells.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cell morphology has a strong correlation with the pathophysiology of many diseases and plays an important role in the diagnosis of diseases and in drug development research [1,2]. Changes in the morphology of human peripheral blood lymphocytes, for example, can be used to detect malignant tumors [3] and autoimmune and immune deficiency diseases [4]. Because cell morphology is closely related to a cell's optical characteristics, light scattered from its internal and external structures has been regarded as the cell's "fingerprint" [5], which can provide valuable information about the cell's pathological and physiological states. Therefore, detecting morphology changes of cells, especially label-free living cells, is of great significance for the early diagnosis of critical diseases. In order to invert the pathological and physiological states of a cell from its optical characteristics, it is necessary to clarify which structures, compositions, and sub-organelle distributions inside a cell scatter light. Thus, accurate three-dimensional (3D) morphological computational models of real cells must be obtained. Beyond the inversion of morphological and optical characteristics, 3D morphological computational models are also important in other studies, such as computational imaging [6-8], cellular dynamics [9], mechanical responses [10], and so on. However, the morphological complexity of single living cells in suspension is considered a major hurdle in developing 3D computational modeling methods. In common modeling methods [11-13], the morphologies of external membranes and internal organelles are simplified as spheres or ellipsoids, which are quite different from real cell morphologies.

Recently, laser scanning confocal microscopy (LSCM) has been proposed for reconstructing 3D computational models of cells [14,15]. A stack of two-dimensional (2D) image layers is obtained by LSCM, and 3D cell morphological computational models can then be reconstructed by stitching these 2D image stacks. 3D morphological computational models reconstructed using LSCM have also been applied in micromorphology studies of plant organs [16], insect tissues [17], and even bovine scleral fibroblasts [9].

However, when imaging suspended living cells with LSCM, aberration problems always exist due to the refractive index mismatch among the objective immersion medium, the coverslip, and the sample medium. There are two main deleterious effects induced by this aberration problem [18-21]. First, a focal shift leads to deformation along the axial direction ($z$-direction), resulting in elongated or compressed 3D computational models. Second, images in each layer of the LSCM image stack become dim and blurred because the point spread function (PSF) broadens in the image plane ($x$-$y$ plane) and deteriorates with penetration depth into the sample. The aberration problem therefore causes a considerable loss of lateral and axial resolution, which may introduce large errors, or even mistakes, into 3D morphological computational models used in the inversion study of cell morphological and optical characteristics. To solve these problems, researchers have made some attempts to correct 3D morphological computational models along the $z$-direction and in the $x$-$y$ plane, respectively.

For $z$-direction correction, a mounting medium with a refractive index matching the immersion medium has been put forward [18]. By embedding inactivated and fixed samples in such a mounting medium, the axial resolution of LSCM imaging is improved effectively. Unfortunately, it is impossible to put living single cells in such a mounting medium, because these samples must be suspended in a survival liquid medium such as phosphate buffered saline. In addition, it is common to use a correction coefficient for $z$-direction correction, which can be obtained by experimental calibration [19] or theoretical analysis [20,22,23]. The experimental calibration method can obtain a more accurate correction coefficient under real imaging conditions, but its calibration process is very complicated, and once the imaging conditions change, the correction coefficient needs to be recalibrated. For theoretical analysis, the geometrical optics approach was first applied to calculate the correction coefficient by Carlsson [22]. Although this method does not consider the numerical aperture (NA) of the objective, it provides a good approximation for low-NA objectives; for high-NA objectives, however, it obviously increases the axial distortion. Visser et al. [23] then developed a method to calculate the correction coefficient considering the NA of the objective, but this method was later shown to overestimate the correction coefficient for high-NA objectives. After that, Diel et al. [20] improved Visser's method and obtained two simple formulas for calculating the correction coefficient. However, these studies focused only on the $z$-direction correction, which is not enough to reconstruct accurate 3D morphological computational models of suspended living cells.

For $x$-$y$ plane correction, blind deconvolution methods are commonly used when the PSF is unknown [24]. In fact, realistic and precise knowledge of the PSF is necessary to obtain 3D morphological computational models with high accuracy. For wide-field microscopy, the Gibson-Lanni PSF model [25] is commonly used because of its simplicity and because it relies only on the objective's standard parameters and the sample's optical properties. However, the main limitation of the Gibson-Lanni PSF model is that it requires computing Kirchhoff's diffraction integral. To overcome this limitation, Li et al. [26,27] carried out two lines of work: they first proposed a fast and accurate approximation to the Gibson-Lanni model, and then put forward a method to estimate the PSF directly from observed samples. Unlike that of wide-field microscopy, the PSF of LSCM has a completely different shape. Compared with the Gibson-Lanni model, the vectorial diffraction theory, which provides a rigorous treatment of diffraction, is in our view more suitable for LSCM. Although the vectorial diffraction theory for the PSF has been studied by some researchers in principle analysis [28-30], it is seldom applied to correct 3D morphological computational models. In fact, our previous work also considered the problem of missing image information in the $x$-$y$ plane and proposed an "area and shape constraint method" (ASCM) to restore the defective image layers [15].

Previous studies have addressed only one aspect, either the $z$-direction or the $x$-$y$ plane; however, to reconstruct accurate 3D morphological computational models of suspended cells, both need to be taken into account. In this paper, a precise depth-varying point spread function (PDV-PSF) method is proposed to reconstruct the 3D morphological computational models of suspended cells imaged through stratified media, considering both the $z$-direction and the $x$-$y$ plane. First, a ray-tracing model is developed to obtain the $z$-direction correction coefficient. Then, based on the vectorial diffraction theory, the PDV-PSFs are calculated using the corrected $z$-direction distance and applied for the $x$-$y$ plane correction. Finally, accurate 3D morphological computational models of cells are obtained by stitching the 2D image layers corrected by the PDV-PSF with the $z$-direction correction distance. To the best of our knowledge, this is the first time the 3D morphology deformation has been corrected both along the $z$-direction and in the $x$-$y$ plane, covering both aspects of the refractive index mismatch problem. Our experimental results for fluorescent polystyrene spheres and Jurkat T-lymphocytes in suspension show that the 3D morphological computational models reconstructed by the PDV-PSF method have superior performance.

2. Methods

2.1 Ray-tracing model

During LSCM imaging through stratified media, the reconstructed 3D model is normally elongated or compressed due to the discrepancy between the LSCM scanning step size and the distance between adjacent 2D image layers. Therefore, a ray-tracing model is proposed to analyze the behavior of different light rays propagating through the three-layer stratified media consisting of the immersion medium ($n_1$, oil or air), the coverslip ($n_2$, glass), and the sample medium ($n_3$, air or aqueous medium). In Fig. 1(a), light rays from the green, red, and blue circles on the LSCM objective lens are displayed. Taking the green one as an example, light rays from the green circle have the same incident angle $\theta _1$ on interface $S_1$. The green dotted lines show the propagation of light rays when the refractive indexes $n_1$, $n_2$, and $n_3$ of the stratified media are equal. In this case, light rays with different incident angles from the green, red, and blue circles are all focused at the same position, named the nominal focus position (NFP, labeled $O$). The distance from the NFP to interface $S_2$ is $z_0$. When $n_1$, $n_2$, and $n_3$ are unequal, refraction occurs (see the solid lines in Fig. 1(a)). Light rays are refracted twice, on $S_1$ and $S_2$ with the incident angles $\theta _1$ and $\theta _2$, respectively, and then focused at the point $o_g$ with the angle $\theta _3$. According to Snell's law, the angles $\theta _1$, $\theta _2$, and $\theta _3$ satisfy:

$$n_1\sin\theta_1=n_2\sin\theta_2=n_3\sin\theta_3$$


Fig. 1. Schematic diagram of the ray-tracing model. (a) Rays at all angles without total internal reflection (represented by three rays) are refracted at the interfaces $S_1$ and $S_2$ and then focused at different positions on the optical axis, resulting in a deviation from the nominal focus position (NFP). Therefore, the equivalent focus position (EFP) is introduced to represent this deviation. (b) The relationship between the moving distance of the NFP and that of the EFP during the LSCM scanning process.


If the thickness of the coverslip is set to $h$, the distance $z$ ($z_g$, $z_r$, and $z_b$ in Fig. 1(a)) between any solid line's focus position $o$ ($o_g$, $o_r$, and $o_b$ in Fig. 1(a)) and the interface $S_2$ along the optical axis can be obtained from:

$$z=\frac{\tan\theta_1z_0+\left(\tan\theta_1-\tan\theta_2\right)h}{\tan\theta_3}$$

After being refracted by $S_1$ and $S_2$, light rays with different incident angles are focused at different positions on the optical axis. If all the light rays refracted by the stratified media had to be considered for every 2D image layer, 3D morphology reconstruction would be impractical because of the huge computational cost and time. Therefore, a method based on an equivalent focus position (EFP) is proposed, in which the contributions of light rays with different incident angles to the total imaging efficiency are assessed by a weighting scheme.

Light rays from the same circle (such as the green one) have the same incident angle $\theta _1$ on $S_1$ and the same focus distance $z$. The circumference of the circle can be expressed as $l = 2\pi f\tan \theta _1$, where $f$ is the nominal focal length of the objective lens. A circle with a larger $l$ emits more light rays, implying that it contributes more to the total imaging process. Meanwhile, light rays from the same circle focus at the same position $z$ and are therefore considered to contribute the same weight to the EFP. Since $l$ is proportional to $\tan \theta _1$, $\tan \theta _1$ is selected as the weight coefficient. The equivalent focus position $z_{\mathit {EFP}}$ contributed by all the circles on the objective lens can be defined as:

$$z_{\mathit{EFP}}=\int_0^{\theta_{max}}\frac{\tan\theta_1}{\int_0^{\theta_{max}}\tan\theta_1\,d\theta_1} z\,d\theta_1$$

In Eq. (3), $\theta _{max}$ is the maximum incident angle on $S_1$ without total internal reflection at the stratified interfaces. It can be calculated as $\sin ^{-1}\left [ \min \left (\text {NA},n_1,n_2,n_3\right )/n_1 \right ]$, where NA is the numerical aperture of the objective lens. $z_{\mathit {EFP}}$ can then be obtained by substituting Eq. (2) into Eq. (3):

$$z_{\mathit{EFP}}=k_1z_0+k_2h$$
where $k_1$ and $k_2$ are:
$$k_1=\int_{0}^{\theta_{max}}\frac{\tan\theta_1}{-\ln\cos\theta_{max}}\frac{\tan\theta_1}{\tan\theta_3}\,d\theta_1$$
$$k_2=\int_{0}^{\theta_{max}}\frac{\tan\theta_1}{-\ln\cos\theta_{max}}\frac{\tan\theta_1-\tan\theta_2}{\tan\theta_3}\,d\theta_1$$
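Equations (5) and (6) reduce to one-dimensional quadratures once $\theta_2$ and $\theta_3$ are eliminated via Snell's law (Eq. (1)). A minimal numerical sketch follows; this is our own illustrative code, not the authors' implementation, and the function name and argument order are assumptions:

```python
import numpy as np
from scipy.integrate import quad

def correction_coefficients(NA, n1, n2, n3):
    """Evaluate k1 and k2 of Eqs. (5)-(6) by numerical quadrature.

    theta2 and theta3 follow from theta1 via Snell's law (Eq. (1));
    the weight normalization is -ln(cos(theta_max)).
    """
    theta_max = np.arcsin(min(NA, n1, n2, n3) / n1)
    norm = -np.log(np.cos(theta_max))

    def k1_integrand(t1):
        t3 = np.arcsin(n1 * np.sin(t1) / n3)
        return np.tan(t1) / norm * np.tan(t1) / np.tan(t3)

    def k2_integrand(t1):
        t2 = np.arcsin(n1 * np.sin(t1) / n2)
        t3 = np.arcsin(n1 * np.sin(t1) / n3)
        return np.tan(t1) / norm * (np.tan(t1) - np.tan(t2)) / np.tan(t3)

    k1, _ = quad(k1_integrand, 0, theta_max)
    k2, _ = quad(k2_integrand, 0, theta_max)
    return k1, k2
```

With matched indices ($n_1=n_2=n_3$), the sketch returns $k_1=1$ and $k_2=0$, so Eq. (7) gives $\delta=\delta_0$, as expected.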

During the detection of a stack of 2D image layers by LSCM, the objective lens scans from the blue position to the red one along the $z$-direction, as shown in Fig. 1(b). When the objective lens moves by a distance $\delta _0$, the NFP of the blue position changes to that of the red one, which means $\delta _0$ is equal to $z_{\mathit {NFP}}^{\prime }-z_{\mathit {NFP}}$.

From Eq. (4), the relationship between $z_{\mathit {EFP}}$ and $z_{\mathit {NFP}}$ of the blue position in Fig. 1(b) can be obtained as $z_{\mathit {EFP}}=k_1 z_{\mathit {NFP}}+k_2 h$. Similarly, the relationship between $z_{\mathit {EFP}}^{\prime }$ and $z_{\mathit {NFP}}^{\prime }$ of the red position is $z_{\mathit {EFP}}^{\prime }=k_1 z_{\mathit {NFP}}^{\prime }+k_2 h$. The distance $\delta$ between their EFPs, which is also the distance between adjacent 2D image layers, can be calculated by:

$$\delta=z_{\mathit{EFP}}^{\prime}-z_{\mathit{EFP}}=k_1 \delta_0$$

From the analysis above, it is obvious that the LSCM scanning step size $\delta _0$ differs from the distance $\delta$ between adjacent 2D image layers, which is mainly caused by the refractive index mismatch between the immersion medium ($n_1$) and the sample medium ($n_3$). If the 2D image layers are stitched with the spacing $\delta _0$, ignoring the refractive index mismatch, the reconstructed 3D morphology is elongated when $n_1>n_3$ or compressed when $n_1<n_3$. Instead, $\delta$ calculated by Eq. (7) should be applied to stitch the 2D image layers in the axial direction.

On the other hand, the refractive index mismatch also induces aberration, which blurs the 2D image layers. Combined with Eq. (7), the PDV-PSF can be obtained to deblur the 2D image layers in the horizontal direction.

2.2 Precise depth-varying point spread function (PDV-PSF) of stratified media

The point spread function (PSF) describes the response of an imaging system to a point light source. Due to optical diffraction, a point light source is not imaged as a point, but as a more or less extended, complicated 3D light spot. Generally, the imaging system's response to a point light source is assumed to be linear. As shown in Fig. 2, the object $f(x,y)$ is imaged by a linear system with PSF $h(x,y)$ and noise $n(x,y)$. The image generation process yields the final image $g(x,y)$, which satisfies the following relationship:

$$g(x,y)=f(x,y)\otimes h(x,y)+n(x,y)$$
where $\otimes$ denotes the convolution operation. $n(x,y)$ is assumed to be additive Gaussian noise [31,32] in this paper; it could instead be modeled as mixed Poisson-Gaussian noise when imaging in adverse conditions such as poorly illuminated environments or short exposure times [33]. If a high-precision recovered object $f'(x,y)$ needs to be obtained from the final image $g(x,y)$, the recovery algorithm can be established accurately with the help of a known PSF. Thus, it is necessary to calculate the PSF of the image generation process.
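As an illustration of Eq. (8), the forward imaging model can be simulated by convolving a toy object with a placeholder PSF and adding Gaussian noise. The disk object and the isotropic Gaussian PSF below are purely illustrative stand-ins, not the confocal PSF derived in Section 2.2:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Toy object f(x, y): a bright disk on a dark background.
y, x = np.mgrid[-64:64, -64:64]
obj = (x**2 + y**2 < 20**2).astype(float)

# Placeholder PSF h(x, y): an isotropic Gaussian, normalized to unit sum.
sigma = 3.0
psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
psf /= psf.sum()

# Eq. (8): g = f (convolved with) h + n, with additive Gaussian noise n(x, y).
image = fftconvolve(obj, psf, mode="same") + rng.normal(0.0, 0.01, obj.shape)
```

Since convolution with a unit-sum PSF preserves total flux, the blurred noisy image retains approximately the same integrated intensity as the object.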


Fig. 2. The image obtained by an imaging system is the object convolved with the point spread function (PSF) of the system, plus noise. In order to make the recovered object more accurate, it is necessary to analyze the PSF of the imaging system.


For LSCM imaging, there are two processes: the illumination process, in which the labeled fluorophores are excited, and the detection process, in which the excited fluorescence is imaged. The PSFs of these two processes should therefore be analyzed separately. Based on the vectorial diffraction theory, the illumination process is analyzed first to deduce the illumination PSF through stratified media. The detection PSF for the detection process is then calculated from the illumination PSF.

2.2.1 Illumination PSF

The illumination light with wavelength $\lambda _{\mathit {ill}}$ passing through the three-layer stratified media in the illumination process is shown in Fig. 3(a). The illumination light passing through the objective lens of the LSCM is focused inside the sample. The intersection point $O$ of the red dashed lines is the NFP of the objective lens. A right-handed coordinate system with origin $O$ is then established as shown in Fig. 3(a). The $z$ coordinates of the interfaces $S_1$ and $S_2$ are $-h_1$ and $-h_2$, respectively. The thickness of the coverslip can therefore be expressed as $h=h_1-h_2$, and $h_2$ is also the detection depth. An arbitrary point $M$ in the sample medium is selected, with Cartesian coordinates $\left ( x_m,y_m,z_m\right )$ and corresponding cylindrical coordinates $\left ( \rho _m,\phi _m,z_m\right )$, in which $\rho _m$ $\left ( \rho _m\geqslant 0\right )$ is the distance between $M$ and the $z$-axis and $\phi _m$ $\left ( 0 \leqslant \phi _m<2\pi \right )$ is the azimuth. In the illumination process, the electric field vector $\textbf {e}_{\mathit {ill}}$ at the point $M$ is expressed as [28]:

$$\begin{aligned}\textbf{e}_{\mathit{ill}}\left(\rho_m,\phi_m,z_m\right)=&-\cfrac{ik_1}{2\pi}\int_0^{\theta_{max}}\int_{0}^{2\pi}\boldsymbol{\varepsilon}_3\exp\left(ik_0\Psi\right)\exp\left[ik_1\rho_m\sin\theta_1\cos\left(\phi-\phi_m\right)\right] \\ &\times\exp\left(ik_3z_m\cos\theta_3\right)\sin\theta_1\,d\phi\,d\theta_1\end{aligned}$$

The magnitudes of the wave vectors $k_j$ in the different media $j\left (=1,2,3\right )$ with refractive indexes $n_j$ are given by $k_j=2\pi n_j/ \lambda _{\mathit {ill}}$, and $k_0$ is the magnitude in vacuum. $\theta _{max}$ has been defined in Section 2.1. $\Psi$ is the initial aberration function with the expression $h_2n_3\cos \theta _3-h_1n_1\cos \theta _1$. $\boldsymbol {\varepsilon }_0=( \varepsilon _{0,x},\varepsilon _{0,y},0)$ is the electric strength vector of the illumination light, which can represent linearly, elliptically, or circularly polarized light by changing $\varepsilon _{0,x}$ and $\varepsilon _{0,y}$. When the illumination light is in the immersion medium $(j=1)$, its electric strength vector $\boldsymbol {\varepsilon }_1$ can be obtained by:

$$\boldsymbol{\varepsilon}_1=A_{\mathit{ill}}\left(\theta_1\right)\textbf{R}^{{-}1}\textbf{P}_{1}\textbf{L}\textbf{R}\boldsymbol{\varepsilon}_0$$
where $A_{\mathit {ill}}\left ( \theta _1\right )$ is the apodization function [29] for the illumination process, which equals $\cos ^{1/2}\theta _1$. The bold letters $\textbf {R}$, $\textbf {P}_j$, and $\textbf {L}$ are $3\times 3$ tensor matrices: $\textbf {R}$ describes the coordinate transformation for rotation around the $z$-axis; $\textbf {L}$ describes the change in the electric field as it traverses the lens; and $\textbf {P}_{j}$ describes the coordinate system rotation that generates the $s$- and $p$-polarized vector components in medium $j$. The matrices $\textbf {R}$, $\textbf {P}_j$, and $\textbf {L}$ are given by [34]:
$$\textbf{R}=\begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix} \hspace{0.5cm} \textbf{L}=\begin{bmatrix} \cos\theta_1 & 0 & \sin\theta_1 \\ 0 & 1 & 0 \\ -\sin\theta_1 & 0 & \cos\theta_1 \end{bmatrix} \hspace{0.5cm} \textbf{P}_{j}=\begin{bmatrix} \cos\theta_j & 0 & -\sin\theta_j \\ 0 & 1 & 0 \\ \sin\theta_j & 0 & \cos\theta_j \end{bmatrix}$$


Fig. 3. (a) The illumination light is focused through the three-layer stratified media. Cartesian and cylindrical coordinate systems are established with origin $O$ at the NFP. (b) The dipole wave radiated from $M$ is focused onto the detection plane through the stratified media, and the coordinate system is established with the point $O'$ on the detection plane as the origin.


In the sample medium $(j=3)$, the electric strength vector $\boldsymbol {\varepsilon }_3$ can be derived from $\boldsymbol {\varepsilon }_1$ in Eq. (10) by generalized Jones matrices:

$$\boldsymbol{\varepsilon}_3=\textbf{R}^{{-}1}\left(\textbf{P}_{3}\right)^{{-}1}\textbf{I}_{\mathit{ill}}\textbf{R}\boldsymbol{\varepsilon}_1$$
where the $3\times 3$ diagonal matrix $\textbf {I}_{\mathit {ill}}=\mathit {diag}\left ( T_p,T_s,T_p\right )$ describes the influence of the stratified media. $T_{s}$ and $T_{p}$ are the transmission coefficients of the $s$- and $p$-polarized vector components through the three layers of media, respectively. They have the same mathematical expression $T_{\mathit {pol}}$, where the subscript "pol" denotes $s$- or $p$-polarization:
$$T_{\mathit{pol}}=\cfrac{t_{\mathit{pol}}^{\left(1,2\right)} t_{\mathit{pol}}^{\left(2,3\right)}\exp\left( i\beta_2\right)}{1+r_{\mathit{pol}}^{\left(1,2\right)} r_{\mathit{pol}}^{\left(2,3\right)}\exp\left( 2i\beta_2\right)}$$
where $\beta _2=k_2\left ( h_1-h_2\right ) \cos \theta _2$. When the illumination light propagates from medium $i$ to medium $j$, $t_{\mathit {pol}}^{\left (i,j\right )}$ and $r_{\mathit {pol}}^{\left (i,j\right )}$ are the Fresnel transmission and reflection coefficients for $s$- and $p$-polarized light, respectively:
$$\begin{aligned}t_s^{\left(i,j\right)}&=\cfrac{2n_i\cos\theta_i}{n_i\cos\theta_i+n_{j}\cos\theta_{j}} \hspace{1cm} t_p^{\left(i,j\right)}=\cfrac{2n_i\cos\theta_i}{n_{j}\cos\theta_i+n_{i}\cos\theta_{j}}\\ r_s^{\left(i,j\right)}&=\cfrac{n_i\cos\theta_i-n_{j}\cos\theta_{j}}{n_{i}\cos\theta_i+n_{j}\cos\theta_{j}} \hspace{1cm} r_p^{\left(i,j\right)}=\cfrac{n_{j}\cos\theta_i-n_{i}\cos\theta_{j}}{n_{j}\cos\theta_i+n_{i}\cos\theta_{j}}\end{aligned}$$
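The layered transmission factor of Eq. (13) is built from the single-interface Fresnel coefficients of Eq. (14). A self-contained numerical sketch (the function names and the parameter bundling are our own, not from the paper):

```python
import numpy as np

def fresnel_t_r(pol, ni, nj, ti, tj):
    """Fresnel transmission/reflection coefficients of Eq. (14) for one interface."""
    ci, cj = np.cos(ti), np.cos(tj)
    if pol == "s":
        t = 2 * ni * ci / (ni * ci + nj * cj)
        r = (ni * ci - nj * cj) / (ni * ci + nj * cj)
    else:  # "p"
        t = 2 * ni * ci / (nj * ci + ni * cj)
        r = (nj * ci - ni * cj) / (nj * ci + ni * cj)
    return t, r

def T_stratified(pol, n, theta, k2, h1, h2):
    """Overall transmission T_pol of Eq. (13) through the three-layer stack.

    n = (n1, n2, n3) and theta = (theta1, theta2, theta3) are linked by
    Snell's law; beta2 = k2 * (h1 - h2) * cos(theta2) is the coverslip phase.
    """
    t12, r12 = fresnel_t_r(pol, n[0], n[1], theta[0], theta[1])
    t23, r23 = fresnel_t_r(pol, n[1], n[2], theta[1], theta[2])
    beta2 = k2 * (h1 - h2) * np.cos(theta[1])
    return t12 * t23 * np.exp(1j * beta2) / (1 + r12 * r23 * np.exp(2j * beta2))
```

In the index-matched limit every interface is transparent ($t=1$, $r=0$), so $T_{\mathit{pol}}$ reduces to the pure coverslip phase $e^{i\beta_2}$ for both polarizations.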

The components of the electric strength vector $\boldsymbol {\varepsilon }_3$ can now be calculated by combining Eqs. (10)-(12):

$$\begin{aligned}\varepsilon_{3,x}&=A_{\mathit{ill}}\left( \theta_1\right) \left[\left( T_{p} \cos\theta_3-T_{s}\right)\left( \varepsilon_{0,x}\cos^{2}\phi + \varepsilon_{0,y}\sin\phi\cos\phi \right) +\varepsilon_{0,x}T_{s} \right] \\ \varepsilon_{3,y}&=A_{\mathit{ill}}\left( \theta_1\right) \left[\left( T_{p} \cos\theta_3-T_{s}\right)\left( \varepsilon_{0,x}\sin\phi\cos\phi - \varepsilon_{0,y}\cos^{2}\phi \right) +\varepsilon_{0,y}T_{p}\cos\theta_3 \right] \\ \varepsilon_{3,z}&=A_{\mathit{ill}}\left( \theta_1\right) \left[{-}T_{p}\sin\theta_3\left( \varepsilon_{0,x}\cos\phi+\varepsilon_{0,y}\sin\phi\right) \right]\end{aligned}$$
$\varepsilon _{3,x}$, $\varepsilon _{3,y}$, and $\varepsilon _{3,z}$ in Eq. (15) can then be applied to solve Eq. (9). The double integral, however, is time-consuming to compute, so Bessel functions are used to reduce the solution to a single integral. The integral over $\phi$ in Eq. (9) is replaced using the following Bessel functions $J_n\left ( \rho \right )$ of the first kind of order $n$ [29]:
$$\begin{aligned}\int_0^{2\pi}\sin\left( n\phi\right) \exp\left[ i\rho\cos\left( \phi-\gamma\right) \right] d\phi&=2\pi i^{n}J_n\left( \rho\right) \sin\left( n\gamma\right) \\ \int_0^{2\pi}\cos\left( n\phi\right) \exp\left[ i\rho\cos\left( \phi-\gamma\right) \right] d\phi&=2\pi i^{n}J_n\left( \rho\right) \cos\left( n\gamma\right)\end{aligned}$$
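These identities can be checked numerically by comparing direct quadrature of the left-hand side against the Bessel-function right-hand side; this is an illustrative check with `scipy`, and the helper name is ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def angular_integral(n, rho, gamma, trig):
    """Left-hand side of Eq. (16): integral over phi in [0, 2*pi] of
    trig(n*phi) * exp[i*rho*cos(phi - gamma)], with trig = np.sin or np.cos."""
    re, _ = quad(lambda p: trig(n * p) * np.cos(rho * np.cos(p - gamma)), 0, 2 * np.pi)
    im, _ = quad(lambda p: trig(n * p) * np.sin(rho * np.cos(p - gamma)), 0, 2 * np.pi)
    return re + 1j * im
```

For any test values of $n$, $\rho$, and $\gamma$, the quadrature agrees with $2\pi i^{n}J_n(\rho)\sin(n\gamma)$ and $2\pi i^{n}J_n(\rho)\cos(n\gamma)$ to numerical precision.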

Finally, the electric field vector $\textbf {e}_{\mathit {ill}}$ on arbitrary point $M$ in Eq. (9) can be solved as:

$$\begin{aligned}\textbf{e}_{\mathit{ill},x}&={-}\cfrac{ik_1}{2}\left[ I_{\mathit{ill}}^{\left( 2\right) }\cos\left( 2\phi_m\right) \varepsilon_{0,x}+I_{\mathit{ill}}^{\left( 2\right) }\sin\left( 2\phi_m\right) \varepsilon_{0,y}+I_{\mathit{ill}}^{\left( 0\right) }\varepsilon_{0,x}\right] \\ \textbf{e}_{\mathit{ill},y}&={-}\cfrac{ik_1}{2}\left[ I_{\mathit{ill}}^{\left( 2\right) }\sin\left( 2\phi_m\right) \varepsilon_{0,x}-I_{\mathit{ill}}^{\left( 2\right) }\cos\left( 2\phi_m\right) \varepsilon_{0,y}+I_{\mathit{ill}}^{\left( 0\right) }\varepsilon_{0,y}\right] \\ \textbf{e}_{\mathit{ill},z}&={-}k_1\left[ I_{\mathit{ill}}^{\left( 1\right) }\cos \phi_m \varepsilon_{0,x}+I_{\mathit{ill}}^{\left( 1\right) }\sin\phi_m \varepsilon_{0,y}\right]\end{aligned}$$
where $I_{\mathit {ill}}^{\left ( 0\right ) }$, $I_{\mathit {ill}}^{\left ( 1\right ) }$, and $I_{\mathit {ill}}^{\left ( 2\right ) }$ are the integral forms defined as:
$$\begin{aligned}I_{\mathit{ill}}^{\left( 0\right) }&=\int_0^{\theta_{max}} A_{\mathit{ill}}\left( \theta_1\right) J_0\left( k_1 \rho_m\sin\theta_1\right) \left( T_{s}+T_{p}\cos\theta_3 \right) \exp\left( ik_0\Psi\right) \exp\left( ik_3z_m\cos\theta_3\right) \sin\theta_1\,d\theta_1 \\ I_{\mathit{ill}}^{\left( 1\right) }&=\int_0^{\theta_{max}} A_{\mathit{ill}}\left( \theta_1\right) J_1\left( k_1 \rho_m\sin\theta_1\right) \left( T_{p}\sin\theta_3\right) \exp\left( ik_0\Psi\right) \exp\left( ik_3 z_m\cos\theta_3\right) \sin\theta_1\,d\theta_1 \\ I_{\mathit{ill}}^{\left( 2\right) }&=\int_0^{\theta_{max}} A_{\mathit{ill}}\left( \theta_1\right) J_2\left( k_1 \rho_m\sin\theta_1\right) \left( T_{s}-T_{p}\cos\theta_3\right) \exp\left( ik_0\Psi\right) \exp\left( ik_3 z_m\cos\theta_3\right) \sin\theta_1\,d\theta_1\end{aligned}$$
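Each $I_{\mathit{ill}}^{(n)}$ is a one-dimensional complex integral over $\theta_1$ and can be evaluated by ordinary quadrature. A simplified sketch of our own follows; it sets $T_s=T_p=1$ (the index-matched limit) purely to stay self-contained, whereas the real coefficients come from Eqs. (13)-(14):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def I_ill(order, rho_m, z_m, params):
    """Numerically evaluate I_ill^(n) of Eq. (18) for n = 0, 1, 2.

    params = (NA, n1, n2, n3, lam_ill, h1, h2), lengths in micrometers.
    T_s = T_p = 1 here (index-matched placeholder transmission).
    """
    NA, n1, n2, n3, lam, h1, h2 = params
    k0 = 2 * np.pi / lam
    k1, k3 = k0 * n1, k0 * n3
    theta_max = np.arcsin(min(NA, n1, n2, n3) / n1)
    Ts = Tp = 1.0

    def integrand(t1):
        t3 = np.arcsin(n1 * np.sin(t1) / n3)          # Snell's law
        A = np.sqrt(np.cos(t1))                       # apodization cos^(1/2)
        psi = h2 * n3 * np.cos(t3) - h1 * n1 * np.cos(t1)  # aberration function
        pol = {0: Ts + Tp * np.cos(t3),
               1: Tp * np.sin(t3),
               2: Ts - Tp * np.cos(t3)}[order]
        return (A * jv(order, k1 * rho_m * np.sin(t1)) * pol
                * np.exp(1j * k0 * psi) * np.exp(1j * k3 * z_m * np.cos(t3))
                * np.sin(t1))

    re, _ = quad(lambda t: integrand(t).real, 0, theta_max)
    im, _ = quad(lambda t: integrand(t).imag, 0, theta_max)
    return re + 1j * im
```

On the optical axis ($\rho_m=0$), $J_1(0)=0$ forces $I_{\mathit{ill}}^{(1)}$ to vanish, which provides a quick sanity check of the quadrature.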

In the previous work by Nasse et al. [29], we found what may be a mistake: the coefficient "$-1/2$" in Eq. (17) above is missing. Without it, the results of Eq. (17) are not equal to the solutions obtained by substituting Eq. (15) directly into Eq. (9) without using Bessel functions.

2.2.2 Detection PSF

Because the electric field emitted by a fluorescent molecule in the sample is identical to the dipole wave emitted by a harmonically oscillating electric dipole, dipole waves are often used to describe the imaging of fluorescent molecules. The dipole waves with wavelength $\lambda _{\mathit {det}}$ from $M$ in the sample medium propagate through the stratified media, as shown in Fig. 3(b). After being collected by the objective lens, they are finally focused on the point $O'$ of the detector plane by the detector lens in the medium $d$ with refractive index $n_d$. A Cartesian coordinate system and the corresponding cylindrical coordinate system, with origins at the point $O'$, are established. The Cartesian coordinates of a point $S$ on the detection plane are defined as $\left ( x_s,y_s,z_s\right )$, and its cylindrical coordinates are $\left ( \rho _s,\phi _s,z_s\right )$; on the detection plane, $z_s$ equals zero. In fact, considering diffraction, the image of point $M$ is a 3D light spot, not a perfect point on the detection plane. However, only the detection PSF for the electric field $\textbf {e}_{\mathit {det}}\left (\rho _s,\phi _s,z_s\right )$ on the detection plane $(z_s=0)$ is discussed in this study, because stray light from non-focal planes is excellently suppressed by the illumination and detection pinholes located at conjugate positions. On the detection plane, the electric field $\textbf {e}_{\mathit {det}}\left ( \rho _s,\phi _s\right )$ at point $S$ is given by:

$$\textbf{e}_{\mathit{det}}\left(\rho_s,\phi_s\right)={-}\cfrac{ik_1'}{2\pi}\int_0^{\theta_{d,max}}\int_{0}^{2\pi}\textbf{e}_d\exp\left(ik_0'\Psi\right)\exp\left[ik_d\rho_s\sin\theta_d\cos\left(\phi-\phi_s\right)\right]\sin\theta_d\,d\phi\,d\theta_d$$
where $k_j'=2\pi n_j/ \lambda _{\mathit {det}}$ is the magnitude of the wave vector of the detected light in the different media $j(=1,2,3,d)$ with refractive indexes $n_j$, as shown in Fig. 3(b), and $k_0'$ is the magnitude in vacuum. After the electric field vector $\textbf {e}_m$ emitted by point $M$ passes through the stratified media, the detected electric field vector $\textbf {e}_d$ in medium $d$ in Eq. (19) can be represented as:
$$\textbf{e}_d=A_{\mathit{det}}\left( \theta\right) \textbf{R}^{{-}1}\textbf{L}_{d}\textbf{L}^{{-}1} \left( \textbf{P}_1\right) ^{{-}1}\textbf{I}_{\mathit{det}}\textbf{P}_3\textbf{R} \textbf{e}_m$$
where $A_{\mathit {det}}\left ( \theta \right )=\left ( {\cos \theta _d}/{\cos \theta _1}\right )^{1/2}$ is the apodization function accounting for the two lenses in Fig. 3(b). The electric field vector $\textbf {e}_m$ during the detection process and the electric field $\textbf {e}_\mathit {ill}$ during the illumination process at point $M$ have the same polarization state, and their amplitudes are proportional for each Cartesian component [30]. The $3\times 3$ matrix $\textbf {I}_{\mathit {det}}=\mathit {diag}\left ( T_p',T_s',T_p'\right )$ describes the effect of the dipole wave passing through the stratified media. $T_{\mathit{pol}}'$ is the transmission coefficient for $s$- or $p$-polarized light during the detection process, which satisfies:
$$T_{\mathit{pol}}'=\cfrac{t_{\mathit{pol}}^{\left(3,2\right)} t_{\mathit{pol}}^{\left(2,1\right)}\exp\left( i\beta_2\right)}{1+r_{\mathit{pol}}^{\left(3,2\right)} r_{\mathit{pol}}^{\left(2,1\right)}\exp\left( 2i\beta_2\right)}$$
where $t_{\mathit {pol}}^{\left (i,j\right )}$ and $r_{\mathit {pol}}^{\left (i,j\right )}$ have the same meaning as in Eq. (14). Then, substituting Eq. (11) into Eq. (20), $\textbf {e}_{d}$ can be written as:
$$\begin{aligned}\textbf{e}_{d,x}=&A_{\mathit{det}}\left( \theta\right) \left\lbrace \dfrac{1}{2}\cos 2\phi\left[ T_p'\cos\left( \theta_3-\theta_d\right)-T_s'\right] \textbf{e}_{m,x}\ +\frac{1}{2}\left[ T_p'\cos\left( \theta_3-\theta_d\right)+T_s' \right] \textbf{e}_{m,x}\right. \\ &\left. +\frac{1}{2}\sin 2\phi\left[ T_p'\cos\left(\theta_3-\theta_d \right)-T_s' \right] \textbf{e}_{m,y}+\cos\phi T_p'\sin\left( \theta_d-\theta_3\right) \textbf{e}_{m, z}\right\rbrace \\ \textbf{e}_{d,y}=&A_{\mathit{det}}\left( \theta\right)\left\lbrace \frac{1}{2}\sin 2\phi\left[ T_p'\cos\left(\theta_3-\theta_d \right)-T_s' \right] \textbf{e}_{m,x} +\frac{1}{2}\left[ T_p'\cos\left( \theta_3-\theta_d\right)+T_s' \right]\textbf{e}_{m,y}\right. \\ &\left. -\dfrac{1}{2}\cos 2\phi\left[ T_p'\cos\left( \theta_3-\theta_d\right)-T_s'\right] \textbf{e}_{m,y}+\sin\phi T_p'\sin\left( \theta_d-\theta_3\right) \textbf{e}_{m,z}\right\rbrace \\ \textbf{e}_{d,z}=&A_{\mathit{det}}\left( \theta\right)\left[ \cos\phi T_p'\sin\left( \theta_3-\theta_d\right) \textbf{e}_{m,x}+\sin\phi T_p'\sin\left( \theta_3-\theta_d\right)\textbf{e}_{m,y}+T_p'\cos\left( \theta_3-\theta_d\right)\textbf{e}_{m,z} \right]\end{aligned}$$

In order to calculate $\textbf {e}_{\mathit {det}}$ at an arbitrary point $S$ on the detection plane, Eq. (19) can finally be evaluated with Bessel functions after substituting Eq. (22):

$$\begin{aligned}\textbf{e}_{\mathit{det},x}=&\dfrac{ik_1}{2}\left[ I_{\mathit{det}}^{\left( 3\right) }\cos 2\phi_s\textbf{e}_{\mathit{m},x}-\left(I_{\mathit{det}}^{\left( 0\right) }+I_{\mathit{det}}^{\left( 1\right) } \right)\textbf{e}_{\mathit{m},x}\ +I_{\mathit{det}}^{\left( 3\right) }\sin 2\phi_s\textbf{e}_{\mathit{m},y}\right] -k_1I_{\mathit{det}}^{\left( 2\right) }\cos\phi_s\textbf{e}_{\mathit{m},z} \\ \textbf{e}_{\mathit{det},y}=&\dfrac{ik_1}{2}\left[ I_{\mathit{det}}^{\left( 3\right) }\sin 2\phi_s\textbf{e}_{\mathit{m},x}-\left(I_{\mathit{det}}^{\left( 0\right) }+I_{\mathit{det}}^{\left( 1\right) } \right)\textbf{e}_{\mathit{m},y}\ -I_{\mathit{det}}^{\left( 3\right) }\cos 2\phi_s\textbf{e}_{\mathit{m},y}\right] -k_1I_{\mathit{det}}^{\left( 2\right) }\sin\phi_s\textbf{e}_{\mathit{m},z} \\ \textbf{e}_{\mathit{det},z}=&k_1I_{\mathit{det}}^{\left( 2\right) }\cos\phi_s\textbf{e}_{\mathit{m},x}+k_1I_{\mathit{det}}^{\left( 2\right) }\sin\phi_s\textbf{e}_{\mathit{m},y}-ik_1I_{\mathit{det}}^{\left( 0\right) }\textbf{e}_{\mathit{m},z}\end{aligned}$$
where $I_{\mathit {det}}^{\left ( 0\right ) }$, $I_{\mathit {det}}^{\left ( 1\right ) }$, $I_{\mathit {det}}^{\left ( 2\right ) }$ and $I_{\mathit {det}}^{\left ( 3\right ) }$ are defined as:
$$\begin{aligned}I_{\mathit{det}}^{\left( 0\right) }&=\int_0^{\theta_{d,max}} A_{\mathit{det}}\left( \theta\right) J_0\left( k_d \rho_s\sin\theta_d\right)\left[ T_{p}'\cos\left( \theta_3-\theta_d\right) \right] \exp\left( ik_0'\Psi\right) \sin\theta_d\,d\theta_d \\ I_{\mathit{det}}^{\left( 1\right) }&=\int_0^{\theta_{d,max}} A_{\mathit{det}}\left( \theta\right) J_0\left( k_d \rho_s\sin\theta_d\right) T_{s}' \exp\left( ik_0'\Psi\right) \sin\theta_d\,d\theta_d \\ I_{\mathit{det}}^{\left( 2\right) }&=\int_0^{\theta_{d,max}} A_{\mathit{det}}\left( \theta\right) J_1\left( k_d \rho_s\sin\theta_d\right) \left[ T_{p}'\sin\left( \theta_3-\theta_d\right) \right] \exp\left( ik_0'\Psi\right) \sin\theta_d\,d\theta_d \\ I_{\mathit{det}}^{\left( 3\right) }&=\int_0^{\theta_{d,max}} A_{\mathit{det}}\left( \theta\right) J_2\left( k_d \rho_s\sin\theta_d\right) \left[ T_{p}'\cos\left( \theta_3-\theta_d\right)-T_s' \right] \exp\left( ik_0'\Psi\right) \sin\theta_d\,d\theta_d \\\end{aligned}$$

Because the point $M$ is illuminated during the illumination process and its emitted light is then detected during the detection process, the electric field vector $\textbf {e}_m$ in Eq. (23) of Section 2.2.2 has the same polarization state as, and an amplitude proportional to, the electric field $\textbf {e}_{\mathit {ill}}$ in Eq. (17) of Section 2.2.1. Therefore, the relationship between $\textbf {e}_{\mathit {det}}$ in Eq. (23) during the detection process and $\textbf {e}_{\mathit {ill}}$ in Eq. (17) during the illumination process can be established. Since the PSF is the squared magnitude of the electric field vector, the illumination PSF and the detection PSF can finally be calculated from $\textbf {e}_{\mathit {ill}}$ in Eq. (17) and $\textbf {e}_{\mathit {det}}$ in Eq. (23), respectively.
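In the notation above, the squared-magnitude relation stated in the text can be written out explicitly for both fields:

$$\mathrm{PSF}_{\mathit{ill}}=\left|\textbf{e}_{\mathit{ill}}\right|^{2}=\left|\textbf{e}_{\mathit{ill},x}\right|^{2}+\left|\textbf{e}_{\mathit{ill},y}\right|^{2}+\left|\textbf{e}_{\mathit{ill},z}\right|^{2},\qquad \mathrm{PSF}_{\mathit{det}}=\left|\textbf{e}_{\mathit{det}}\right|^{2}=\left|\textbf{e}_{\mathit{det},x}\right|^{2}+\left|\textbf{e}_{\mathit{det},y}\right|^{2}+\left|\textbf{e}_{\mathit{det},z}\right|^{2}$$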

2.3 3D morphological computational model reconstruction by the PDV-PSF method

In order to reconstruct accurate 3D morphological computational models of suspended cells, we now combine the above analysis and summarize the detailed procedure of the PDV-PSF method as follows:

  • Step 1: Obtaining 2D LSCM image stacks of suspended cells. Different cell structures (such as the nucleus and membrane) are labeled, and LSCM is used to collect their fluorescence images. The different color channels of the 2D LSCM cell images represent the corresponding cell structures.
  • Step 2: Obtaining the distance $\delta$ between image layers by Eq. (7), based on the ray-tracing model in Section 2.1.
  • Step 3: Calculating the illumination PSFs at different depths with the distance $\delta$, according to Section 2.2.1.
  • Step 4: Calculating the depth-varying detection PSFs from the illumination PSF results, according to Section 2.2.2. The PDV-PSFs obtained by Steps 2-4 are used for the subsequent processing of the cell images. Although calculating the PDV-PSFs may take time, these three steps only need to be performed once after the imaging parameters of the LSCM cell images are determined; the results can then be reused for reconstructing 3D morphological computational models of other cells in suspension.
  • Step 5: Deconvolving the corresponding cell images with the depth-varying detection PSFs. The different color channels are separated and transformed into grayscale images, and the corresponding PDV-PSFs are used to deconvolve the images of each color channel.
  • Step 6: Image processing operations (median filtering, adaptive threshold segmentation, hole filling, maximum connected region extraction). The purpose of these operations is to eliminate image noise and determine the contour of the cell structure in each image layer.
  • Step 7: Restoring the defective image layers at the top and bottom using our former ASCM method [15].
  • Step 8: Reconstructing the 3D morphological computational models of cells in suspension by stitching the processed image layers with the spacing $\delta$, and finally storing the model as a point cloud file.
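As an illustration, Steps 5 and 8 can be sketched in Python. This is a minimal sketch under simplifying assumptions (FFT-based circular convolution without boundary handling, and binary masks assumed to come from Step 6); the helper names are ours, not part of the method:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Step 5: Lucy-Richardson deconvolution of one layer with its depth's PSF."""
    psf = psf / psf.sum()
    otf = np.fft.fft2(psf, image.shape)          # zero-padded transfer function
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        ratio = image / (reblurred + eps)
        # correlate with the PSF (conjugate OTF) to update the estimate
        estimate = estimate * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return estimate

def stack_to_point_cloud(masks, pixel_pitch, delta):
    """Step 8: stitch binary layer masks into an (N, 3) point cloud,
    spacing the layers by the corrected distance delta from Step 2."""
    points = []
    for k, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)
        zs = np.full(xs.shape, k * delta)
        points.append(np.stack([xs * pixel_pitch, ys * pixel_pitch, zs], axis=1))
    return np.concatenate(points, axis=0)

# Toy usage: two 3x3 masks, pixel pitch 0.04 um, corrected spacing 0.132 um
m0 = np.zeros((3, 3), bool); m0[1, 1] = True
m1 = np.zeros((3, 3), bool); m1[1, 2] = True
cloud = stack_to_point_cloud([m0, m1], pixel_pitch=0.04, delta=0.132)
print(cloud.shape)  # (2, 3)
```

In a real pipeline each layer would be deconvolved with the PSF computed for its own depth before masking, which is exactly what distinguishes the PDV-PSF approach from a single, depth-independent PSF.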

3. Simulation experiments for PDV-PSF

3.1 Illumination PSF simulation

The MATLAB program for the illumination PSF in Section 2.2.1 was developed in-house. The illumination light is set as left-handed circularly polarized light with a wavelength of $488nm$. As shown in Fig. 3(a), the objective lens with $\text {NA}=1.4$ is placed in the immersion medium whose refractive index is $n_1$; the refractive indexes of the coverslip and the sample medium are $n_2$ and $n_3$, and the thickness $h$ of the coverslip is set to $170\mu m$. Considering the refractive indexes of the stratified media in LSCM experiments, two different conditions are simulated numerically. In the first condition, in which the refractive index mismatch problem does not occur, the refractive indexes $n_1$, $n_2$, and $n_3$ are equal and assumed to be $1.515$. The normalized intensity distributions of the illumination PSF, which equals the squared magnitude of the electric field $\textbf {e}_{\mathit {ill}}$ in Eq. (17), are given in Fig. 4(a1) and (c1). The other is the real working condition for LSCM, in which the refractive indexes $n_1$, $n_2$, and $n_3$ are mismatched. In this case, the objective lens is immersed in oil $(n_1=1.515)$ with the same refractive index as the coverslip $(n_2=1.515)$, and the aqueous sample medium $(n_3=1.33)$ is on the coverslip for inverted LSCM. For the refractive index mismatch condition, the normalized illumination PSFs are shown in Fig. 4(b1) and (d1). The scanning direction of LSCM is along the $z$-axis of the $x$-$z$ plane $(y=0)$ in Fig. 4(a1)–(d1). The detection depths $z_d=2\mu m$ in Fig. 4(a1)–(b1) and $z_d=8\mu m$ in Fig. 4(c1)–(d1) are labeled by the dotted lines. Their corresponding normalized illumination PSFs in $x$-$y$ cross sections are given in Fig. 4(a2)–(d2).


Fig. 4. Normalized illumination PSF under different imaging conditions. (a1) Refractive index matching $(n_1=n_2=n_3=1.515)$, detection depth $z_d=2\mu m$; (b1) Refractive index mismatching $(n_1=n_2=1.515,n_3=1.33)$, $z_d=2\mu m$; (c1) Refractive index matching $(n_1=n_2=n_3=1.515)$, $z_d=8\mu m$; (d1) Refractive index mismatching $(n_1=n_2=1.515,n_3=1.33)$, $z_d=8\mu m$; (a2)-(d2) Corresponding illumination PSF ($x$-$y$ section) at the detection depth (dotted lines) in (a1)-(d1).


As can be seen from Fig. 4(a1)-(a2) and (c1)-(c2), in the case of refractive index matching, no additional aberrations are introduced, and the illumination PSF is symmetric and does not change with increasing detection depth. The line profiles of the normalized illumination PSFs at $x=0$ in Fig. 4(a1) and (c1) are plotted in black and blue, respectively, in Fig. 5(a). Meanwhile, the line profiles at $y=0$ in Fig. 4(a2) and (c2) are plotted in Fig. 5(b1) and (b3), respectively. At the detection depth $z_d=2\mu m$, the full width at half maximum (FWHM) of the illumination PSF is $0.476\mu m$ along the $z$-direction and $0.197\mu m$ along the $x$-direction. Similarly, at $z_d=8\mu m$ the FWHM is $0.476\mu m$ along the $z$-direction and $0.197\mu m$ along the $x$-direction, consistent with the result at $z_d=2\mu m$. However, when $n_1$, $n_2$, and $n_3$ are not equal, the refractive index mismatch introduces aberration. The illumination PSF becomes asymmetrical along the $z$-axis and more complicated as the detection depth increases, as shown in Fig. 4(b1)–(b2) and (d1)–(d2). Likewise, the line profiles of the normalized illumination PSFs at $x=0$ in Fig. 4(b1) and (d1) are plotted in red and green, respectively, in Fig. 5(a), and the line profiles at $y=0$ in Fig. 4(b2) and (d2) are plotted in Fig. 5(b2) and (b4), respectively. At the detection depth $z_d=2\mu m$, the FWHM is $0.558\mu m$ along the $z$-direction and $0.344\mu m$ along the $x$-direction; at $z_d=8\mu m$, it is $0.879\mu m$ along the $z$-direction and $0.947\mu m$ along the $x$-direction. The above FWHM analysis shows that the refractive index mismatch makes the illumination PSF more complex and asymmetrical in the $z$-direction. Furthermore, as the detection depth increases, so do this complexity and asymmetry, and the illumination PSF becomes more diffuse.
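The FWHM values quoted above can be extracted from the plotted line profiles; the following is a minimal sketch (our own helper, not the paper's code) that locates the half-maximum crossings by linear interpolation and is verified against a Gaussian, whose FWHM is $2\sqrt{2\ln 2}\,\sigma$:

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1D profile via linear interpolation."""
    p = np.asarray(profile, float)
    half = p.max() / 2.0
    above = np.nonzero(p >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i_out, i_in):
        # interpolate the half-max crossing between a point below and above half
        x0, x1 = x[i_out], x[i_in]
        y0, y1 = p[i_out], p[i_in]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = cross(i0 - 1, i0) if i0 > 0 else x[i0]
    right = cross(i1 + 1, i1) if i1 < len(p) - 1 else x[i1]
    return right - left

# Sanity check: Gaussian with sigma = 0.2 um -> FWHM = 2*sqrt(2 ln 2)*0.2
x = np.linspace(-2, 2, 4001)
g = np.exp(-x**2 / (2 * 0.2**2))
print(round(fwhm(x, g), 3))  # 0.471
```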


Fig. 5. Line profiles of normalized illumination PSF in Fig. 4. (a) Line profiles at $x=0$ in Fig. 4 (a1)-(d1) are plotted in black, red, blue, and green, respectively. (b1)-(b4) Line profiles at $y=0$ in Fig. 4 (a2)-(d2).


Considering the scanning process of LSCM, under the real imaging condition of $n_1=n_2=1.515$, $n_3 =1.33$, and $\text {NA}=1.4$, the correction coefficient calculated by Eq. (5) is about 0.66. Thus, when the scanning step of LSCM is set to $0.2\mu m$, the distance between image layers is $0.132\mu m$. The illumination PSFs at 96 different detection depths are then calculated layer by layer with a spacing of $0.132\mu m$. The illumination PSFs of the $x$-$y$ cross section at detection depth $z_d$ are normalized and shown in Fig. 6. Subsequently, the depth-varying detection PSF can be calculated from these illumination PSF results.
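The axial correction above amounts to a one-line rescaling of the nominal scan step; for instance:

```python
# Corrected layer spacing under refractive index mismatch
# (correction coefficient ~0.66 from Eq. (5) for n1 = n2 = 1.515, n3 = 1.33, NA = 1.4)
scan_step = 0.2                          # nominal LSCM scanning step along z (um)
correction = 0.66                        # value reported in the text
delta = correction * scan_step           # corrected distance between image layers (um)
depths = [k * delta for k in range(96)]  # detection depths of the 96 PSF layers
print(round(delta, 3))       # 0.132
print(round(depths[-1], 2))  # 12.54
```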


Fig. 6. Normalized illumination PSFs at $z=z_d$ for 96 different detection depths with a layer spacing of $0.132\mu m$.


3.2 Detection PSF simulation

The illumination PSF is a 3D spot structure formed by the excitation light of a point source passing through the stratified media. The detection PSF is caused by a dipole wave excited at the detection point, and is therefore also 3D. Because the pinhole is at the conjugate position of the illumination path and the detection path in the LSCM imaging process, the detector only accepts the light intensity on the detection plane, which means that only the detection PSF on the detection plane affects the final image quality. According to the illumination PSF results in Section 3.1, the depth-varying detection PSF under the real imaging condition can be calculated through the analysis in Section 2.2.2. The imaging parameters are the same as those in Section 3.1, except that the emission wavelength is $525nm$. The depth-varying detection PSFs at the detection plane are normalized to $0$-$255$ and shown as grayscale images in Fig. 7. It can also be seen that the detection PSFs become more diffuse as the detection depth increases. The image obtained through LSCM is the convolution of the object and the detection PSF, and such a depth-varying PSF makes the image quality of the object decrease as the detection depth increases. These depth-varying PSFs can be used to improve image quality and thus to reconstruct more accurate 3D morphological computational models of cells in suspension.
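The qualitative effect of a broadening detection PSF can be demonstrated with a toy model, where isotropic Gaussian PSFs stand in for the true vectorial PSFs of Fig. 7 and the $\sigma$ values are arbitrary:

```python
import numpy as np

def gaussian_psf(sigma, size=33):
    """Toy Gaussian stand-in for a detection PSF (not the paper's vectorial model)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def image_of(obj, psf):
    """Imaging model from the text: image = object convolved with the detection PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, obj.shape)))

obj = np.zeros((64, 64)); obj[32, 32] = 1.0       # point-like object
shallow = image_of(obj, gaussian_psf(sigma=1.0))  # narrow PSF near the coverslip
deep = image_of(obj, gaussian_psf(sigma=3.0))     # broader PSF at larger depth
print(shallow.max() > deep.max())  # True: the deeper image is more diffuse
```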


Fig. 7. The 96 depth-varying detection PSFs on the detection plane, normalized to $0$-$255$.


4. 3D morphological computational model reconstruction experiments

4.1 Standard fluorescent polystyrene spheres in suspension

In order to evaluate our PDV-PSF method for 3D morphological computational model reconstruction of suspended objects imaged through stratified media, standard fluorescent polystyrene spheres (FPS) with diameters of $10.0\pm 0.3\mu m$ (Shanghai Huge Biotechnology Co., Ltd., Shanghai, China) are selected, whose fluorescence excitation wavelength is $488nm$ and emission wavelength is $525nm$. Fluorescent polystyrene sphere suspension diluted with deionized water $(n_3=1.33)$ is dropped on a glass-bottom cell culture dish ($n_2=1.515$, NEST Biotechnology Co., Ltd., Wuxi, China). A stack of image layers with an image size of $512\times 512$ pixels (pixel pitch: $0.04\mu m$) for each standard FPS is detected by LSCM (Nikon A1plus, Nikon Instrument Co., Ltd., Tokyo, Japan). In the experiments, the $60\times$ objective lens ($\text {NA}=1.4$) of the LSCM is immersed in oil $(n_1=1.515)$, and the scanning step of the LSCM is set to $0.2\mu m$ along the $z$-direction. The 2D image stack of a standard FPS with 86 image layers is shown in Fig. 8(a), and its 3D imaging result by the commercial software NIS-Elements Viewer (Nikon Instrument Co., Ltd., Tokyo, Japan) is given in Fig. 8(b). Without considering the refractive index mismatch problem, the 3D morphological computational model reconstructed by processing the 2D original image layers is rendered in Fig. 8(c), in which the distance between image layers along the $z$-direction equals the scanning step size of $0.2\mu m$.


Fig. 8. (a) The 2D LSCM image stack of one standard fluorescent polystyrene sphere with 86 layers. (b) 3D imaging result by the commercial software NIS-Elements Viewer. (c) The 3D morphological computational model rendered by processing the 2D original image layers, without considering the refractive index mismatch problem.


Some problems can be found in Fig. 8. Firstly, the ideal 2D image results for a solid FPS from its top layer to its bottom layer should be a series of green fluorescent circles with different diameters, such as subgraphs $\#12\sim \#22$ of Fig. 8(a). However, hollow green circles ($\#23\sim \#63$ in Fig. 8(a)) can be observed in the middle section of the FPS. The reason is that during the FPS manufacturing process by the swelling method, the content of fluorescent material in the middle of the FPS is less than that near its surface. When the FPS is excited in LSCM experiments, a hollow green circle with less fluorescence inside (a dark area) appears. This problem can be solved by the hole filling operation in Step 6 of Section 2.3. Secondly, with increasing scanning depth along the $z$-axis, the images become so blurred that their boundaries are difficult to determine, such as $\#56\sim \#86$ in Fig. 8(a). The reason is that the divergence of the detection PSF increases continuously with scanning depth (see Fig. 7). Therefore, as the convolution of the detection PSF and the detected sample, the images become more and more blurred as the LSCM scanning depth increases. Thirdly, the LSCM scanning step size is not equal to the distance between image layers for 3D morphology reconstruction. As a result, the elongated ellipsoid-like results in Fig. 8(b) and (c) can be found. Because of the last two problems caused by the refractive index mismatch, the reconstructed FPS no longer resembles a sphere, and its bottom reconstructed section is very different from the top one. Therefore, it is necessary to consider the refractive index mismatch problem in stratified media for 3D morphology reconstruction.

Our PDV-PSF method is applied to calculate the detection PSFs at different scanning depths along the $z$-axis. The LSCM images in Fig. 8(a) are deconvolved layer by layer with the corresponding detection PSFs by the Lucy-Richardson algorithm. In order to evaluate the effect of our PDV-PSF method, the commonly used blind deconvolution method with a Gaussian function as the initial-input PSF is also applied to Fig. 8(a). The blind deconvolution method has the advantage of dealing with images whose PSF is unknown. Based on the initial-input PSF and the maximum likelihood algorithm, it updates the PSFs iteratively in the process of deconvolution. In fact, the updated PSFs are obtained only from the final images in different layers, without considering the physical process (in Fig. 3). The Tenengrad function is applied to analyze the quality of the original images in Fig. 8(a) and of the images processed by blind deconvolution and by our PDV-PSF method. The Tenengrad function is a commonly used gradient-based method to evaluate image definition. The Sobel operator is adopted to extract image gradients in the horizontal and vertical directions. At point $(x,y)$ in the image $g$, the gradient magnitude $S(x,y)$ satisfies:

$$S(x,y)=\sqrt{\left[ G_x*g(x,y)\right]^{2}+\left[ G_y*g(x,y)\right]^{2}}$$
where $G_x$ and $G_y$ are the Sobel convolution kernels. The Tenengrad function $T$ for an image with pixel size $n\times n$ is then defined as:
$$T=\frac{1}{n^{2}} \sum_{x=1}^{n}\sum_{y=1}^{n}S(x,y)^{2}$$

Generally, high-quality images with clear edges have larger gradients, which increases their Tenengrad function values. Therefore, Tenengrad function values can be applied to evaluate image definition. The normalized Tenengrad function values for the three kinds of images (original images, blind deconvolution, and the PDV-PSF method) in each layer $(\#1\sim \#86)$ are shown in Fig. 9(a). The Tenengrad function values are small for images at the top and bottom of a sample. This is not because of the image quality, but because of the number of effective pixels. For example, in layers $\#1\sim \#8$ at the top and layers $\#80\sim \#86$ at the bottom in Fig. 8(a), there are fewer effective pixels of emitted green fluorescence from the sample, which makes a large number of $g(x,y)$ values in Eq. (25) equal to zero. For these image layers, the Tenengrad function values are decreased. As a result, images with similar numbers of effective pixels in the same layers should be selected to evaluate their definition by the Tenengrad function. Image layer #40 in Fig. 9(a) is selected as an example. The original image of #40 is shown in Fig. 9(b1) with a Tenengrad function value of 0.21. The region in the dashed box is zoomed in and shown in Fig. 9(c1). It can be seen that the original image is very blurred, especially at the edge of the FPS. This original image is then processed by the blind deconvolution and PDV-PSF methods, respectively. The results are given in Fig. 9(b2) and (b3) with Tenengrad function values of 0.34 and 0.98, respectively. In fact, not only for image layer #40 but for all the layers, the Tenengrad function values of the images processed by the PDV-PSF method (the red line) are the largest in Fig. 9(a), which means the improvement of the blue line (blind deconvolution) over the black line (original images) is far smaller than that of the red line. Corresponding to the dashed box in Fig. 9(c1), the processed images are zoomed in and shown in Fig. 9(c2) for the blind deconvolution and Fig. 9(c3) for the PDV-PSF method. Obviously, the best image definition is obtained by the PDV-PSF method, whose image is the clearest with the highest-contrast FPS edge.
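The metric of Eqs. (24)-(25) can be sketched without any image-processing dependencies. In this sketch the Sobel step is implemented as a plain correlation, which is sign-equivalent to convolution here because the responses are squared in Eq. (25):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def sobel_response(img, k):
    """'Same'-size 3x3 correlation with zero padding."""
    H, W = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * padded[i:i + H, j:j + W]
    return out

def tenengrad(img):
    """Mean squared Sobel gradient magnitude, Eqs. (24)-(25)."""
    gx = sobel_response(img, SOBEL_X)
    gy = sobel_response(img, SOBEL_Y)
    return float(np.mean(gx**2 + gy**2))

# A sharp edge scores higher than a blurred ramp version of the same edge
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
ramp = np.tile(np.clip((np.arange(32) - 13) / 6.0, 0.0, 1.0), (32, 1))
print(tenengrad(sharp) > tenengrad(ramp))  # True
```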


Fig. 9. (a) Tenengrad function values for original images and images processed by blind deconvolution and our PDV-PSF method in all the layers. (b1)-(b3) The original image, blind deconvolution image, and PDV-PSF method image of layer #40. (c1)-(c3) The zoomed areas at the FPS edge in (b1)-(b3).


The purpose of this paper is to reconstruct precise 3D morphologies for suspended samples in stratified media. Consequently, the reconstructed 3D FPS models are analyzed. According to the reconstruction method in Section 2.3, Steps 5-7 are used to process the 2D images in the horizontal imaging plane. Along the vertical $z$-direction, the corrected distance between image layers computed by the ray-tracing model in Section 2.1 is used to stitch the image layers and thereby reconstruct the 3D morphology of the FPS. 86 image layers are collected by LSCM in the FPS experiments. 74 effective original image layers (listed in Table 1) are obtained without the deconvolution processing of Section 2.3 and used to reconstruct the 3D morphology of the FPS, as shown in Fig. 10(a1). The FPS model reconstructed from the blind deconvolution images is obtained by the method in Section 2.3 with Steps 3-5 replaced by blind deconvolution, as shown in Fig. 10(a2); its effective image layer number is 76, as given in Table 1. Finally, Fig. 10(a3) shows the FPS model reconstructed by our PDV-PSF method, whose effective image layer number is 77. These reconstructed FPS 3D computational models can be seen in Visualization 1.


Fig. 10. (a1)-(a3) Reconstructed FPS models from original images, blind deconvolution images, and PDV-PSF method images. (b) Comparison of the areas of the original images, blind deconvolution images, and PDV-PSF method images with the theoretical cross-sectional area of an ideal sphere in all the layers. (c) The relative error of each layer and the mean relative error over all layers calculated from (b).



Table 1. Reconstructed FPS models in $z$-direction

Comparing the three reconstructed results in Fig. 10(a1)-(a3), the top sections are all consistent with the morphology of an ideal sphere. However, a cone shape can be found on the bottom section in Fig. 10(a1), which makes its overall morphology no longer resemble a sphere. This problem is alleviated by the blind deconvolution method in Fig. 10(a2), but the best reconstructed result is Fig. 10(a3). For a quantitative analysis of the reconstruction accuracy along the vertical $z$-direction, the heights of the three reconstructed FPS models are compared with the theoretical height of $10.00\mu m$ (the FPS diameter), as shown in Table 1. Without the correction along the $z$-direction, the LSCM scanning distance $(0.2\mu m)$ is set as the distance between image layers, so the heights of the three reconstructed FPS models are $14.60\mu m$, $15.00\mu m$, and $15.20\mu m$ with errors of 46.0%, 50.0%, and 52.0%, respectively (labeled as error $\delta _1$ in Table 1). By multiplying by the factor of 0.66 obtained from the ray-tracing model in Section 2.1, the heights of the three reconstructed FPS models are corrected to $9.63\mu m$, $9.90\mu m$, and $10.03\mu m$, respectively. Obviously, the corrected heights are closer to the theoretical value, with the errors decreasing to 3.10%, 1.00%, and 0.30% (labeled as error $\delta _2$ in Table 1), among which the FPS reconstructed by our PDV-PSF method has the best accuracy along the vertical $z$-direction.

On the other hand, in the horizontal image plane, the area (i.e., pixel number) of each image layer of the three reconstructed FPS models is calculated as shown in Fig. 10(b), in which the black line is for the FPS reconstructed from the original images, the blue line for that from blind deconvolution, and the red line for that from the PDV-PSF method. Considering that the cross-section of an ideal sphere is a circle, the pixel areas of circles with different diameters in every image layer are calculated as the theoretical values, shown as the green line in Fig. 10(b). The relative errors of the pixel areas of the three reconstructed FPS models are shown in Fig. 10(c). The mean error for the PDV-PSF method is 9.75%, which is the minimum among the three reconstructed FPS models. From Fig. 10(b) it can also be seen that on the top section of the reconstructed FPS, the errors of the three reconstructed models are all small and their pixel areas are very close to the theoretical values. But on the bottom section, the cone-shape deformation shown in Fig. 10(a1)-(a3) makes the error increase. Therefore, on the bottom section, image layers #40, #55, and #65 are specially analyzed, and the results are given in Table 2.
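The theoretical (green-line) values can be reproduced per layer from elementary geometry; the following is a sketch assuming the $10\mu m$ FPS and $0.04\mu m$ pixel pitch of Section 4.1:

```python
import math

def layer_area_pixels(z, radius=5.0, pixel_pitch=0.04):
    """Pixel count of an ideal sphere's cross-section at axial offset z (um)
    from the sphere center: a circle of radius sqrt(R^2 - z^2)."""
    if abs(z) >= radius:
        return 0.0
    r = math.sqrt(radius**2 - z**2)
    return math.pi * r**2 / pixel_pitch**2

print(round(layer_area_pixels(0.0)))  # 49087 pixels at the equatorial layer
print(round(layer_area_pixels(4.0)))  # 17671 near the pole
```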


Table 2. Reconstructed FPS models in $x-y$ plane

Image layer #40 is almost in the middle of all the layers and has the largest cross-section and the most effective pixels, so the pixel areas of the three reconstructed FPS models are close to the theoretical area of image layer #40. In Table 2, error $\delta _3$ of #40 is quite small for all three reconstructed FPS models. Combined with Fig. 10(b), error $\delta _3$ increases with the scanning depth. For the FPS reconstructed from the original images, when the scanning depth changes from #55 to #60, the error increases from 18.07% to 25.10%, just as the cone-like deformation on the bottom is shown in Fig. 10(a1). The blind deconvolution algorithm improves the deformation and decreases $\delta _3$ to 10.54% for #55 and 14.04% for #60. But this error is still larger than 10%, and the deformation on the bottom in Fig. 10(a2) can also be observed. However, by our PDV-PSF method, $\delta _3$ is reduced by about a factor of two compared with the blind deconvolution algorithm, to 5.29% for #55 and 6.29% for #60, which well proves the effectiveness of the PDV-PSF method in this paper for 3D morphology reconstruction of suspended samples in stratified media.

4.2 Jurkat T-lymphocytes in suspension

Jurkat cells (China Infrastructure of Cell Line Resource), derived from human acute leukemia T-lymphocytes, are selected and cultured for 3 days in RPMI 1640 medium (Hyclone, South Logan, UT, USA) with 10% fetal bovine serum (Gibco, Grand Island, NY, USA) and 1% penicillin-streptomycin solution (Hyclone, South Logan, UT, USA) in a culture incubator at $37^{\circ }C$ with 5% carbon dioxide. For the LSCM experiments, cultured T-lymphocytes are taken out and diluted with phosphate buffered saline (PBS, pH = 7.4) to a concentration of $10^{5}$ to $10^{6}$ cells/mL, and then fixed with 4% paraformaldehyde. CM-DiI V22888 (Thermo Fisher Scientific, Waltham, MA, USA) for membranes and DAPI S36968 (Thermo Fisher Scientific, Waltham, MA, USA) for nuclei, whose excitation wavelengths are $553nm$ and $364nm$ and emission wavelengths are $570nm$ and $454nm$, respectively, are adopted to label the T-lymphocytes. $5\mu L$ of CM-DiI V22888 and $10\mu L$ of DAPI S36968 are added to $1mL$ of the diluted sample, and finally the sample is put in a water bath for 30 min at $37^{\circ }C$. A drop of labeled T-lymphocytes in suspension is detected by LSCM. For each cell, RGB 2D images of $512\times 512$ pixels with a pixel pitch of $0.04\mu m$ are obtained to form an image stack as the LSCM scans along the $z$-direction with a step of $0.2\mu m$. One of the T-lymphocyte image stacks is shown in Fig. 11(a), where the red color represents the cell membrane and the blue color represents its nucleus. Fig. 11(b) is its 3D imaging result by the commercial software NIS-Elements Viewer. Fig. 11(c) is the 3D morphological computational model reconstructed by directly stitching the 2D image layers, with the inter-layer distance equal to the scanning step of $0.2\mu m$ along the $z$-direction, without considering the refractive index mismatch problem. Obviously, the same image blurring and elongation problems as analyzed in Section 4.1 for the FPS occur here.


Fig. 11. (a) The 2D LSCM image stack of one Jurkat T-lymphocyte. (b) The cell’s 3D imaging result by the commercial software. (c) The cell’s 3D morphological computational model rendered by processing the 2D original image layers, without considering the refractive index mismatch problem.


Considering the refractive index mismatch problem, the reconstructed results for T-lymphocytes from the original images, blind deconvolution, and our PDV-PSF method are shown in Fig. 12 (the reconstructed 3D computational models are available in Visualization 1). Note that the excitation and emission wavelengths for the membrane (red color) and nucleus (blue color) are different, so the membrane and nucleus images are deconvolved separately. In particular, for the PDV-PSF method the illumination and detection PSFs of the membrane and nucleus also need to be calculated separately. The normalized Tenengrad function values for the membrane and nucleus are shown in Fig. 12(a1) and (a2), in which the black, blue, and red lines represent the results of each image layer (65 layers in total) for the original images, blind deconvolution, and the PDV-PSF method, respectively. The 2D images of these three methods in layers #30, #35, #40, and #45 are displayed in Fig. 12(b1)–(b3), (c1)–(c3), (e1)-(e3), and (f1)–(f3), respectively. It can be observed clearly from the zoomed-in images of layer #35 in Fig. 12(d1)–(d3) that our PDV-PSF method makes the 2D images much clearer. In order to quantitatively analyze the effect of the PDV-PSF method, the Tenengrad function values of the 2D images for the membrane (labeled in red) and nucleus (labeled in blue) are listed at the bottom of those images. For image layer #35, the Tenengrad function value of the membrane is improved from 0.11 for the original image to 0.96 by PDV-PSF; for the nucleus, it is improved from 0.15 to 0.89. Over all the image layers in Fig. 12(a1) and (a2), PDV-PSF increases the Tenengrad function values by $6\sim 8$ times on average compared with the original images for both the membrane and nucleus, and by $2\sim 4$ times compared with the blind deconvolution method. Finally, the reconstructed 3D morphological computational models of Jurkat T-lymphocytes by these three methods are shown in Fig. 12(g1)–(g3). In addition, we also reconstructed the 3D morphological computational models of other cells, which can be found in Visualization 1.


Fig. 12. (a1) Tenengrad function values for the original membrane channel images and those processed by blind deconvolution and our PDV-PSF method. (a2) Tenengrad function values for the original nucleus channel images and those processed by blind deconvolution and our PDV-PSF method. (b1)-(b3) The original image, blind deconvolution image, and PDV-PSF method image of layer #30. (c1)-(c3) Three images of layer #35. (d1)-(d3) The zoomed areas at the cell edge in (c1)-(c3). (e1)-(e3) Three images of layer #40. (f1)-(f3) Three images of layer #45. (g1)-(g3) Reconstructed cell models from original images, blind deconvolution images, and PDV-PSF method images.


5. Conclusion

We proposed a PDV-PSF method for reconstructing the 3D morphological computational models of suspended cells imaged through stratified media, considering the deformation both in the $z$-direction and in the $x$-$y$ plane. According to the experimental results of model reconstruction for the $10\mu m$ standard FPS and Jurkat T-lymphocytes, our method is shown to obtain accurate 3D morphological computational models of suspended cells. These models can then be used in studies such as cell dynamics and light-cell interactions. Although our experiments demonstrate the 3D computational model reconstruction of the nucleus-membrane morphology of suspended cells, the method can also be extended to other subcellular structures (e.g., mitochondria) with appropriate treatment of the cells. Our method takes into account the refractive index mismatch of the three-layer stratified media in the LSCM imaging process but ignores the effect of the refractive index of the biological sample itself, so it is effective for single cells and thin biological samples. For thick biological samples, our method may need some changes, such as adding prior knowledge of the sample refractive index.

Funding

National Natural Science Foundation of China (61875160).

Acknowledgments

The authors would like to thank the three anonymous reviewers and the associate editor for their comments that contributed to meaningful improvements in this manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. E. Pennisi, “‘Cell painting’ highlights responses to drugs and toxins,” Science 352(6288), 877–878 (2016). [CrossRef]  

2. D. Wang and S. Bodovitz, “Single cell analysis: the new frontier in ‘omics’,” Trends Biotechnol. 28(6), 281–290 (2010). [CrossRef]  

3. K. Shanta, K. Nakayama, M. Ishikawa, T. Ishibashi, H. Yamashita, S. Sato, H. Sasamori, K. Sawada, S. Kurose, H. M. Mahmud, S. Razia, K. Iida, N. Ishikawa, and S. Kyo, “Prognostic value of peripheral blood lymphocyte telomere length in gynecologic malignant tumors,” Cancers 12(6), 1469 (2020). [CrossRef]  

4. S. G. Tangye, S. J. Pelham, E. K. Deenick, and C. S. Ma, “Cytokine-mediated regulation of human lymphocyte development and function: insights from primary immunodeficiencies,” J. Immunol. 199(6), 1949–1958 (2017). [CrossRef]  

5. V. P. Maltsev, “Scanning flow cytometry for individual particle analysis,” Rev. Sci. Instrum. 71(1), 243–255 (2000). [CrossRef]  

6. T.-A. Pham, E. Soubies, A. Goy, J. Lim, F. Soulez, D. Psaltis, and M. Unser, “Versatile reconstruction framework for diffraction tomography with intensity measurements and multiple scattering,” Opt. Express 26(3), 2749–2763 (2018). [CrossRef]  

7. H. Zhang, F. Yang, and L. Cao, “Phase recovery technology of a dual-frame phase-shifting interferogram based on first-order norm vector normalization,” Appl. Opt. 61(5), B200–B205 (2022). [CrossRef]  

8. S. Smith-Dryden, S. Fan, B. E. Saleh, and G. Li, “Optical diffraction tomography by use of optimization and phase-based fidelity criterion,” IEEE J. Sel. Top. Quantum Electron. 27(4), 1–9 (2020). [CrossRef]  

9. P. Markov, A. J. Hayes, H. Zhu, C. Boote, and E. J. Blain, “3d immuno-confocal image reconstruction of fibroblast cytoskeleton and nucleus architecture,” J. Biophotonics 14(1), e202000202 (2021). [CrossRef]  

10. S. Barreto, C. M. Perrault, and D. Lacroix, “Structural finite element analysis to explain cell mechanics variability,” J. Mech. Behav. Biomed. Mater. 38, 219–231 (2014). [CrossRef]  

11. D. I. Strokotov, M. A. Yurkin, K. V. Gilev, D. R. Van Bockstaele, A. G. Hoekstra, N. Rubtsov, and V. P. Maltsev, “Is there a difference between T- and B-lymphocyte morphology?” J. Biomed. Opt. 14(6), 064036 (2009). [CrossRef]  

12. H. Shahin, M. Gupta, A. Janowska-Wieczorek, W. Rozmus, and Y. Y. Tsui, “Physical characterization of hematopoietic stem cells using multidirectional label-free light scatterings,” Opt. Express 24(25), 28877–28888 (2016). [CrossRef]  

13. L. Bi and P. Yang, “Modeling of light scattering by biconcave and deformed red blood cells with the invariant imbedding t-matrix method,” J. Biomed. Opt. 18(5), 055001 (2013). [CrossRef]  

14. J. Zhang, Y. Feng, W. Jiang, J. Q. Lu, Y. Sa, J. Ding, and X.-H. Hu, “Realistic optical cell modeling and diffraction imaging simulation for study of optical and morphological parameters of nucleus,” Opt. Express 24(1), 366–377 (2016). [CrossRef]  

15. L. Zhang, Y. Xie, Y. Tu, L. Luo, K. Li, L. Yuan, W. Chen, H. Zhao, and Z. Zhang, “Clinical lymphocytes construction for light scattering inversion study: a three-dimensional morphology constructed method from defective confocal images,” J. Biomed. Opt. 23(08), 1 (2018). [CrossRef]  

16. G. Zellnig, J. Peters, M. Jiménez, D. Morales, D. Grill, and A. Perktold, “Three-dimensional reconstruction of the stomatal complex in pinus canariensis needles using serial sections,” Plant Biol. (Berlin, Ger.) 4(1), 70–76 (2002). [CrossRef]  

17. A. Klaus, V. Kulasekera, and V. Schawaroch, “Three-dimensional visualization of insect morphology using confocal laser scanning microscopy,” J. Microsc. 212(2), 107–121 (2003). [CrossRef]  

18. C. Fouquet, J.-F. Gilles, N. Heck, M. Dos Santos, R. Schwartzmann, V. Cannaya, M.-P. Morel, R. S. Davidson, A. Trembleau, and S. Bolte, “Improving axial resolution in confocal microscopy with new high refractive index mounting media,” PLoS One 10(3), e0121096 (2015). [CrossRef]  

19. T. H. Besseling, J. Jose, and A. V. Blaaderen, “Methods to calibrate and scale axial distances in confocal microscopy as a function of refractive index,” J. Microsc. 257(2), 142–150 (2015). [CrossRef]  

20. E. E. Diel, J. W. Lichtman, and D. S. Richardson, “Tutorial: avoiding and correcting sample-induced spherical aberration artifacts in 3d fluorescence microscopy,” Nat. Protoc. 15(9), 2773–2784 (2020). [CrossRef]  

21. A. Katiyar, J. D. Antani, B. P. McKee, R. Gupta, P. P. Lele, and T. P. Lele, “A method for direct imaging of x–z cross-sections of fluorescent samples,” J. Microsc. 281(3), 224–230 (2021). [CrossRef]  

22. K. Carlsson, “The influence of specimen refractive index, detector signal integration, and non-uniform scan speed on the imaging properties in confocal microscopy,” J. Microsc. 163(2), 167–178 (1991). [CrossRef]  

23. T. Visser, J. Oud, and G. Brakenhoff, “Refractive index and axial distance measurements in 3-d microscopy,” Optik 90, 17–19 (1992).

24. P. Sarder and A. Nehorai, “Deconvolution methods for 3-d fluorescence microscopy images,” IEEE Signal Process. Mag. 23(3), 32–45 (2006). [CrossRef]  

25. S. F. Gibson and F. Lanni, “Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy,” J. Opt. Soc. Am. A 8(10), 1601–1613 (1991). [CrossRef]  

26. J. Li, F. Xue, and T. Blu, “Fast and accurate three-dimensional point spread function computation for fluorescence microscopy,” J. Opt. Soc. Am. A 34(6), 1029–1034 (2017). [CrossRef]  

27. J. Li, F. Xue, F. Qu, Y.-P. Ho, and T. Blu, “On-the-fly estimation of a microscopy point spread function,” Opt. Express 26(20), 26120–26133 (2018). [CrossRef]  

28. P. Török, “Propagation of electromagnetic dipole waves through dielectric interfaces,” Opt. Lett. 25(19), 1463–1465 (2000). [CrossRef]  

29. M. J. Nasse and J. C. Woehl, “Realistic modeling of the illumination point spread function in confocal scanning optical microscopy,” J. Opt. Soc. Am. A 27(2), 295–302 (2010). [CrossRef]  

30. O. Haeberlé, M. Ammar, H. Furukawa, K. Tenjimbayashi, and P. Török, “Point spread function of optical microscopes imaging through stratified media,” Opt. Express 11(22), 2964–2969 (2003). [CrossRef]  

31. F. Xue, F. Luisier, and T. Blu, “Multi-wiener sure-let deconvolution,” IEEE Trans. on Image Process. 22(5), 1954–1968 (2013). [CrossRef]  

32. H. Takeda, S. Farsiu, and P. Milanfar, “Deblurring using regularized locally adaptive kernel regression,” IEEE Trans. on Image Process. 17(4), 550–563 (2008). [CrossRef]  

33. J. Li, F. Luisier, and T. Blu, “Pure-let image deconvolution,” IEEE Trans. on Image Process. 27(1), 92–105 (2017). [CrossRef]  

34. P. Török, P. Higdon, and T. Wilson, “On the general properties of polarised light conventional and confocal microscopes,” Opt. Commun. 148(4-6), 300–315 (1998). [CrossRef]  

Supplementary Material (1)

Visualization 1: 3D display of morphological computational models of standard fluorescent polystyrene spheres and Jurkat T-lymphocytes.




Figures (12)

Fig. 1. Schematic diagram of the ray-tracing model. (a) Rays at all angles without total internal reflection (represented by three rays) are refracted at the interfaces $S_1$ and $S_2$ and then focused to different positions on the optical axis, causing a deviation from the nominal focusing position (NFP). The equivalent focus position (EFP) is therefore introduced to represent this deviation. (b) The relationship between the moving distance of the NFP and that of the EFP during the LSCM scanning process.
Fig. 2. The image obtained by an imaging system is the object convolved with the point spread function (PSF) of the system, plus noise. To recover the object accurately, the PSF of the imaging system must be analyzed.
Fig. 3. (a) The illumination light is focused through three layers of stratified media. Cartesian and cylindrical coordinates are established with origin $O$ at the NFP. (b) The dipole wave radiated from $M$ is focused onto the detection plane through the stratified media, and the coordinate system is established with origin $O'$ on the detection plane.
Fig. 4. Normalized illumination PSF under different imaging conditions. (a1) Refractive index matching $(n_1=n_2=n_3=1.515)$, detection depth $z_d=2\mu m$; (b1) Refractive index mismatching $(n_1=n_2=1.515,n_3=1.33)$, $z_d=2\mu m$; (c1) Refractive index matching $(n_1=n_2=n_3=1.515)$, $z_d=8\mu m$; (d1) Refractive index mismatching $(n_1=n_2=1.515,n_3=1.33)$, $z_d=8\mu m$; (a2)-(d2) Corresponding illumination PSF ($x$-$y$ section) at the detection depth (dotted lines) in (a1)-(d1).
Fig. 5. Line profiles of normalized illumination PSF in Fig. 4. (a) Line profiles at $x=0$ in Fig. 4 (a1)-(d1) are plotted in black, red, blue, and green, respectively. (b1)-(b4) Line profiles at $y=0$ in Fig. 4 (a2)-(d2).
Fig. 6. Normalized illumination PSFs at $z=z_d$ for 96 detection depths spaced $0.132\mu m$ apart.
Fig. 7. The 96 depth-varying detection PSFs on the detection plane, normalized to $0$-$255$.
Fig. 8. (a) The 2D LSCM image stack of one standard fluorescent polystyrene sphere with 86 layers. (b) 3D imaging result rendered by the commercial software NIS-Elements Viewer. (c) The 3D morphological computational model rendered from the original 2D image layers without considering the refractive index mismatch problem.
Fig. 9. (a) Tenengrad function values for original images and images processed by blind deconvolution and our PDV-PSF method in all the layers. (b1)-(b3) The original image, blind deconvolution image, and PDV-PSF method image of layer #40. (c1)-(c3) The zoomed areas at the FPS edge in (b1)-(b3).
Fig. 10. (a1)-(a3) Reconstructed FPS models by original images, blind deconvolution images, and PDV-PSF method images. (b) Comparing the area of original images, blind deconvolution images, and PDV-PSF method images with the area of an ideal sphere (as a conic) in all the layers. (c) The relative error of each layer and mean relative error for all layers calculated from (b).
Fig. 11. (a) The 2D LSCM image stack of one Jurkat T-lymphocyte. (b) The cell’s 3D imaging result rendered by the commercial software. (c) The cell’s 3D morphological computational model rendered from the original 2D image layers without considering the refractive index mismatch problem.
Fig. 12. (a1) Tenengrad function values for original membrane channel images and those images processed by blind deconvolution and our PDV-PSF method. (a2) Tenengrad function values for original nucleus channel images and those images processed by blind deconvolution and our PDV-PSF method. (b1)-(b3) The original image, blind deconvolution image, and PDV-PSF method image of layer #30. (c1)-(c3) Three images of layer #35. (d1)-(d3) The zoomed areas at the cell edge in (c1)-(c3). (e1)-(e3) Three images of layer #40. (f1)-(f3) Three images of layer #45. (g1)-(g3) Reconstructed cell models by original images, blind deconvolution images, and PDV-PSF method images.

Tables (2)

Table 1. Reconstructed FPS models in the $z$-direction

Table 2. Reconstructed FPS models in the $x$-$y$ plane

Equations (26)

Equations on this page are rendered with MathJax.

$$n_1 \sin\theta_1 = n_2 \sin\theta_2 = n_3 \sin\theta_3 \tag{1}$$

$$z = \frac{\tan\theta_1\, z_0 + (\tan\theta_1 - \tan\theta_2)\, h}{\tan\theta_3} \tag{2}$$

$$z_{EFP} = \int_0^{\theta_{max}} \frac{\tan\theta_1}{\int_0^{\theta_{max}} \tan\theta_1\, d\theta_1}\, z\, d\theta_1 \tag{3}$$

$$z_{EFP} = k_1 z_0 + k_2 h \tag{4}$$

$$k_1 = \int_0^{\theta_{max}} \frac{\tan\theta_1}{-\ln\cos\theta_{max}} \cdot \frac{\tan\theta_1}{\tan\theta_3}\, d\theta_1 \tag{5}$$

$$k_2 = \int_0^{\theta_{max}} \frac{\tan\theta_1}{-\ln\cos\theta_{max}} \cdot \frac{\tan\theta_1 - \tan\theta_2}{\tan\theta_3}\, d\theta_1 \tag{6}$$

$$\delta = z_{EFP}' - z_{EFP} = k_1 \delta_0 \tag{7}$$
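The ray-tracing relations above can be evaluated numerically. The sketch below (not the authors' code) computes the EFP coefficients $k_1$ and $k_2$ by weighted averaging over the aperture angles; the refractive indices and NA are assumed example values (immersion oil $n_1=1.515$, coverslip $n_2=1.522$, aqueous medium $n_3=1.33$, NA 1.4), and rays beyond the critical angle are excluded, as in Fig. 1.

```python
import numpy as np

# Sketch: evaluate k1 and k2 by numerical integration. The indices and NA below
# are assumed example values, not parameters taken from the paper.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))  # numpy 2.x rename

n1, n2, n3 = 1.515, 1.522, 1.33
NA = 1.4
# Rays beyond the critical angle are totally reflected and never reach medium 3.
theta_max = min(np.arcsin(NA / n1), np.arcsin(n3 / n1))

theta1 = np.linspace(1e-6, theta_max, 20_000)
theta2 = np.arcsin(n1 * np.sin(theta1) / n2)                     # Snell's law
theta3 = np.arcsin(np.clip(n1 * np.sin(theta1) / n3, 0.0, 1.0))

# tan(theta1) weight normalized by its integral, -ln(cos(theta_max)).
w = np.tan(theta1) / (-np.log(np.cos(theta_max)))
k1 = trapezoid(w * np.tan(theta1) / np.tan(theta3), theta1)
k2 = trapezoid(w * (np.tan(theta1) - np.tan(theta2)) / np.tan(theta3), theta1)

# An axial stage step dz0 of the NFP moves the EFP by k1 * dz0.
print(k1, k2)
```

With these assumed indices $k_1 < 1$, reflecting the familiar axial compression when focusing from a high-index immersion medium into a lower-index mounting medium.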
$$g(x,y) = f(x,y) \ast h(x,y) + n(x,y) \tag{8}$$
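The imaging model above, $g = f \ast h + n$, can be illustrated with a numpy-only toy example. The Gaussian kernel here is a stand-in for the depth-varying PSF (not the paper's PSF model), and the Wiener filter is one generic way to invert the model:

```python
import numpy as np

# Toy illustration of g = f * h + n and a Wiener-type inversion.
# The Gaussian h is an assumed stand-in PSF, not the paper's stratified-media PSF.
rng = np.random.default_rng(0)

N = 64
f = np.zeros((N, N))
f[24:40, 24:40] = 1.0                                  # synthetic object
x = np.arange(N) - N // 2
h = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
h /= h.sum()                                           # normalized PSF

H = np.fft.fft2(np.fft.ifftshift(h))                   # PSF centered at the origin
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
g += 0.002 * rng.standard_normal((N, N))               # additive noise n(x, y)

# Wiener inverse filter: conj(H) G / (|H|^2 + kappa); kappa regularizes
# frequencies where H is nearly zero.
kappa = 1e-3
f_hat = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(g) / (np.abs(H) ** 2 + kappa)))

err_blur = np.sqrt(np.mean((g - f) ** 2))
err_dec = np.sqrt(np.mean((f_hat - f) ** 2))
print(err_blur, err_dec)                               # inversion should reduce the error
```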
$$e_{ill}(\rho_m,\phi_m,z_m) = \frac{ik_1}{2\pi}\int_0^{\theta_{max}}\!\!\int_0^{2\pi} \varepsilon_3 \exp(ik_0\Psi)\exp[ik_1\rho_m\sin\theta_1\cos(\phi-\phi_m)] \times \exp(ik_3 z_m\cos\theta_3)\sin\theta_1\, d\phi\, d\theta_1 \tag{9}$$

$$\varepsilon_1 = A_{ill}(\theta_1)\, R^{-1} P_1 L R\, \varepsilon_0 \tag{10}$$

$$R = \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}\quad L = \begin{bmatrix} \cos\theta_1 & 0 & \sin\theta_1 \\ 0 & 1 & 0 \\ -\sin\theta_1 & 0 & \cos\theta_1 \end{bmatrix}\quad P_j = \begin{bmatrix} \cos\theta_j & 0 & \sin\theta_j \\ 0 & 1 & 0 \\ -\sin\theta_j & 0 & \cos\theta_j \end{bmatrix} \tag{11}$$

$$\varepsilon_3 = R^{-1}(P_3)^{-1} I_{ill} R\, \varepsilon_1 \tag{12}$$

$$T_{pol} = \frac{t_{pol}^{(1,2)}\, t_{pol}^{(2,3)} \exp(i\beta_2)}{1 + r_{pol}^{(1,2)}\, r_{pol}^{(2,3)} \exp(2i\beta_2)} \tag{13}$$

$$t_s^{(i,j)} = \frac{2n_i\cos\theta_i}{n_i\cos\theta_i + n_j\cos\theta_j}\quad t_p^{(i,j)} = \frac{2n_i\cos\theta_i}{n_j\cos\theta_i + n_i\cos\theta_j}\quad r_s^{(i,j)} = \frac{n_i\cos\theta_i - n_j\cos\theta_j}{n_i\cos\theta_i + n_j\cos\theta_j}\quad r_p^{(i,j)} = \frac{n_j\cos\theta_i - n_i\cos\theta_j}{n_j\cos\theta_i + n_i\cos\theta_j} \tag{14}$$

$$\begin{aligned} \varepsilon_{3,x} &= A_{ill}(\theta_1)\left[(T_p\cos\theta_3 - T_s)(\varepsilon_{0,x}\cos^2\phi + \varepsilon_{0,y}\sin\phi\cos\phi) + \varepsilon_{0,x}T_s\right] \\ \varepsilon_{3,y} &= A_{ill}(\theta_1)\left[(T_p\cos\theta_3 - T_s)(\varepsilon_{0,x}\sin\phi\cos\phi - \varepsilon_{0,y}\cos^2\phi) + \varepsilon_{0,y}T_p\cos\theta_3\right] \\ \varepsilon_{3,z} &= A_{ill}(\theta_1)\left[T_p\sin\theta_3(\varepsilon_{0,x}\cos\phi + \varepsilon_{0,y}\sin\phi)\right] \end{aligned} \tag{15}$$

$$\begin{aligned} \int_0^{2\pi} \sin(n\phi)\exp[i\rho\cos(\phi-\gamma)]\,d\phi &= 2\pi i^n J_n(\rho)\sin(n\gamma) \\ \int_0^{2\pi} \cos(n\phi)\exp[i\rho\cos(\phi-\gamma)]\,d\phi &= 2\pi i^n J_n(\rho)\cos(n\gamma) \end{aligned} \tag{16}$$

$$\begin{aligned} e_{ill,x} &= \frac{ik_1}{2}\left[I_{ill}^{(2)}\cos(2\phi_m)\,\varepsilon_{0,x} + I_{ill}^{(2)}\sin(2\phi_m)\,\varepsilon_{0,y} + I_{ill}^{(0)}\varepsilon_{0,x}\right] \\ e_{ill,y} &= \frac{ik_1}{2}\left[I_{ill}^{(2)}\sin(2\phi_m)\,\varepsilon_{0,x} - I_{ill}^{(2)}\cos(2\phi_m)\,\varepsilon_{0,y} + I_{ill}^{(0)}\varepsilon_{0,y}\right] \\ e_{ill,z} &= k_1\left[I_{ill}^{(1)}\cos\phi_m\,\varepsilon_{0,x} + I_{ill}^{(1)}\sin\phi_m\,\varepsilon_{0,y}\right] \end{aligned} \tag{17}$$

$$\begin{aligned} I_{ill}^{(0)} &= \int_0^{\theta_{max}} A_{ill}(\theta_1)\, J_0(k_1\rho_m\sin\theta_1)(T_s + T_p\cos\theta_3)\exp(ik_0\Psi)\exp(ik_3 z_m\cos\theta_3)\sin\theta_1\,d\theta_1 \\ I_{ill}^{(1)} &= \int_0^{\theta_{max}} A_{ill}(\theta_1)\, J_1(k_1\rho_m\sin\theta_1)(T_p\sin\theta_3)\exp(ik_0\Psi)\exp(ik_3 z_m\cos\theta_3)\sin\theta_1\,d\theta_1 \\ I_{ill}^{(2)} &= \int_0^{\theta_{max}} A_{ill}(\theta_1)\, J_2(k_1\rho_m\sin\theta_1)(T_s - T_p\cos\theta_3)\exp(ik_0\Psi)\exp(ik_3 z_m\cos\theta_3)\sin\theta_1\,d\theta_1 \end{aligned} \tag{18}$$
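As a sanity check, the $I^{(0)}$ illumination integral above can be evaluated on the optical axis ($\rho_m=0$, so $J_0=1$) for the index-matched case, where $T_s=T_p=1$ and the stratified-media phase $\Psi$ vanishes. The aplanatic apodization $A_{ill}=\sqrt{\cos\theta}$, the wavelength, and the NA below are assumptions for illustration:

```python
import numpy as np

# Sketch: on-axis I^(0) for the index-matched case (Ts = Tp = 1, Psi = 0).
# Wavelength 0.488 um and NA 1.4 are assumed illustration values.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))  # numpy 2.x rename

n = 1.515
wavelength = 0.488                                   # microns
k = 2 * np.pi * n / wavelength
theta_max = np.arcsin(1.4 / n)

theta = np.linspace(0.0, theta_max, 4000)
z = np.linspace(-2.0, 2.0, 201)                      # axial position, microns

# On axis the integrand reduces to A(theta)(Ts + Tp cos theta) e^{ikz cos theta} sin theta.
A = np.sqrt(np.cos(theta))
amp = A * (1.0 + np.cos(theta)) * np.sin(theta)
phase = np.exp(1j * k * z[:, None] * np.cos(theta)[None, :])
I0 = trapezoid(amp[None, :] * phase, theta, axis=1)

psf_axial = np.abs(I0) ** 2
psf_axial /= psf_axial.max()
z_peak = z[np.argmax(psf_axial)]
print(z_peak)                                        # matched case peaks at the focus
```

In the matched case the axial intensity peaks at the nominal focus; introducing a nonzero $\Psi$ for mismatched indices shifts and broadens this peak, which is the effect the PDV-PSF method corrects.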
$$e_{det}(\rho_s,\phi_s) = \frac{ik_1}{2\pi}\int_0^{\theta_{d,max}}\!\!\int_0^{2\pi} e_d \exp(ik_0\Psi)\exp[ik_d\rho_s\sin\theta_d\cos(\phi-\phi_s)]\sin\theta_d\, d\phi\, d\theta_d \tag{19}$$

$$e_d = A_{det}(\theta)\, R^{-1} L_d L^{-1} (P_1)^{-1} I_{det} P_3 R\, e_m \tag{20}$$

$$T_{pol} = \frac{t_{pol}^{(3,2)}\, t_{pol}^{(2,1)}\exp(i\beta_2)}{1 + r_{pol}^{(3,2)}\, r_{pol}^{(2,1)}\exp(2i\beta_2)} \tag{21}$$

$$\begin{aligned} e_{d,x} &= A_{det}(\theta)\Big\{\tfrac{1}{2}\cos 2\phi\,[T_p\cos(\theta_3-\theta_d) - T_s]\,e_{m,x} + \tfrac{1}{2}[T_p\cos(\theta_3-\theta_d) + T_s]\,e_{m,x} \\ &\quad + \tfrac{1}{2}\sin 2\phi\,[T_p\cos(\theta_3-\theta_d) - T_s]\,e_{m,y} + \cos\phi\, T_p\sin(\theta_d-\theta_3)\,e_{m,z}\Big\} \\ e_{d,y} &= A_{det}(\theta)\Big\{\tfrac{1}{2}\sin 2\phi\,[T_p\cos(\theta_3-\theta_d) - T_s]\,e_{m,x} + \tfrac{1}{2}[T_p\cos(\theta_3-\theta_d) + T_s]\,e_{m,y} \\ &\quad - \tfrac{1}{2}\cos 2\phi\,[T_p\cos(\theta_3-\theta_d) - T_s]\,e_{m,y} + \sin\phi\, T_p\sin(\theta_d-\theta_3)\,e_{m,z}\Big\} \\ e_{d,z} &= A_{det}(\theta)\big[\cos\phi\, T_p\sin(\theta_3-\theta_d)\,e_{m,x} + \sin\phi\, T_p\sin(\theta_3-\theta_d)\,e_{m,y} + T_p\cos(\theta_3-\theta_d)\,e_{m,z}\big] \end{aligned} \tag{22}$$

$$\begin{aligned} e_{det,x} &= \frac{ik_1}{2}\left[I_{det}^{(3)}\cos 2\phi_s\,e_{m,x} - \big(I_{det}^{(0)} + I_{det}^{(1)}\big)e_{m,x} + I_{det}^{(3)}\sin 2\phi_s\,e_{m,y}\right] - k_1 I_{det}^{(2)}\cos\phi_s\,e_{m,z} \\ e_{det,y} &= \frac{ik_1}{2}\left[I_{det}^{(3)}\sin 2\phi_s\,e_{m,x} - \big(I_{det}^{(0)} + I_{det}^{(1)}\big)e_{m,y} - I_{det}^{(3)}\cos 2\phi_s\,e_{m,y}\right] - k_1 I_{det}^{(2)}\sin\phi_s\,e_{m,z} \\ e_{det,z} &= k_1 I_{det}^{(2)}\cos\phi_s\,e_{m,x} + k_1 I_{det}^{(2)}\sin\phi_s\,e_{m,y} - ik_1 I_{det}^{(0)}\,e_{m,z} \end{aligned} \tag{23}$$

$$\begin{aligned} I_{det}^{(0)} &= \int_0^{\theta_{d,max}} A_{det}(\theta)\, J_0(k_d\rho_s\sin\theta_d)\,[T_p\cos(\theta_3-\theta_d)]\exp(ik_0\Psi)\sin\theta_d\,d\theta_d \\ I_{det}^{(1)} &= \int_0^{\theta_{d,max}} A_{det}(\theta)\, J_0(k_d\rho_s\sin\theta_d)\, T_s\exp(ik_0\Psi)\sin\theta_d\,d\theta_d \\ I_{det}^{(2)} &= \int_0^{\theta_{d,max}} A_{det}(\theta)\, J_1(k_d\rho_s\sin\theta_d)\,[T_p\sin(\theta_3-\theta_d)]\exp(ik_0\Psi)\sin\theta_d\,d\theta_d \\ I_{det}^{(3)} &= \int_0^{\theta_{d,max}} A_{det}(\theta)\, J_2(k_d\rho_s\sin\theta_d)\,[T_p\cos(\theta_3-\theta_d) - T_s]\exp(ik_0\Psi)\sin\theta_d\,d\theta_d \end{aligned} \tag{24}$$
$$S(x,y) = \left|G_x \ast g(x,y)\right| + \left|G_y \ast g(x,y)\right| \tag{25}$$

$$T = \frac{1}{n^2}\sum_{x=1}^{n}\sum_{y=1}^{n} S(x,y)^2 \tag{26}$$
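The Tenengrad sharpness measure above can be sketched in a few lines of numpy: filter the image with the Sobel kernels $G_x$, $G_y$, combine the absolute responses, and average the square. The Sobel kernels and test images below are illustrative:

```python
import numpy as np

# Sketch of the Tenengrad focus measure: S from the Sobel responses, then T = mean(S^2).
def tenengrad(img: np.ndarray) -> float:
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    # "Valid"-region filtering via sliding windows (numpy-only, no scipy needed).
    win = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
    sx = np.einsum("ijkl,kl->ij", win, gx)
    sy = np.einsum("ijkl,kl->ij", win, gy)
    s = np.abs(sx) + np.abs(sy)                      # S(x, y)
    return float(np.mean(s ** 2))                    # T

sharp = np.zeros((32, 32))
sharp[8:24, 8:24] = 1.0
# Crude 5-point average as a stand-in for a defocused layer:
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0

t_sharp, t_blur = tenengrad(sharp), tenengrad(blurred)
print(t_sharp > t_blur)                              # the sharper image scores higher
```

This is why the metric serves as the per-layer focus score in Figs. 9 and 12: deblurring a layer concentrates its gradients and raises $T$.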