Optica Publishing Group

3D fluorescence imaging through scattering medium using transport of intensity equation and iterative phase retrieval

Open Access

Abstract

In this paper, we propose a method for three-dimensional (3D) fluorescence imaging through a scattering medium. The proposed method combines numerical digital phase conjugation propagation, after measurement of the complex amplitude distribution of the scattered light wave by the transport of intensity equation (TIE), with subsequent iterative phase retrieval to achieve 3D fluorescence imaging through a scattering medium. In the experiment, we present a quantitative evaluation of the depth position of fluorescent beads. In addition, as a time-lapse measurement, cell division of tobacco-cultured cells was observed. Numerical results show the effective range of phase modulation in the scattering medium. These results demonstrate that the proposed method can recover images degraded by a thin scattering phase object beyond the small-phase-change approximation.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical bioimaging provides structural and functional information in vivo in a non-destructive and minimally invasive manner [1,2]. For example, fluorescence observation using calcium-sensitive fluorescent proteins enables live observation of cellular activity [3], and two-photon excitation has enabled Ca2+ imaging from the surface to a depth of 0.5 mm in the mouse brain [4,5]. In mice or rats, scattering is much stronger than absorption [6,7], and therefore fluorescence images are degraded. In the strong scattering regime, speckle deconvolution [8], cross-correlation of the scattered light intensity distribution with phase retrieval [9–11], and the transmission matrix [12–14] have been proposed as methods to recover two-dimensional fluorescence images on the observation plane. Using the memory effect, strong cross-correlations hold under expansion and contraction of the point-spread function of the scattering process [15]. However, few studies have successfully demonstrated 3D fluorescence image recovery through a scattering medium.

We summarize the 3D fluorescence imaging methods that have the potential to solve scattering problems. The transport of intensity equation (TIE) [16,17] can measure the complex amplitude distribution at an arbitrary depth position from two or more intensity images taken along the propagation direction. Even with a low-coherence light source such as fluorescence, it can be applied to phase measurement of a fluorescent light wave, because partial spatial coherence appears as the wave spreads during propagation. Similarly, holography can measure a partially coherent light wave by forming an interference pattern of the fluorescent light using a spatial light modulator or a specialized optical component [18–20], although its low light-energy efficiency is a problem for bioimaging. With both methods, to the best of our knowledge, no 3D fluorescence imaging through a scattering medium has been reported except our preliminary proposal using TIE [21]. Single-pixel imaging, such as scanning holography, is one of the promising methods for 3D fluorescence imaging [22]. However, it cannot be used when the illumination pattern is disturbed by the scattering medium. Light-field imaging is also a 3D fluorescence imaging technique, based on ray tracing. Because scattering redirects light into many directions, reconstructing a 3D image through a scattering medium with this approach is very challenging. Speckle correlation reconstructs object images using autocorrelation and a phase retrieval method in the Fourier domain. It can be applied to 3D imaging by using the memory effect of the scattering medium [15]. Along the depth direction, it uses the expansion and contraction of the point spread function of the scattered light [23]. Although there are studies that have extended the field-of-view to a region three times larger than the memory effect, the range of observation is still limited [2].

In this paper, we propose a method that combines complex amplitude measurement of scattered light based on TIE with subsequent 3D fluorescence imaging based on numerical digital phase conjugation propagation. Furthermore, iterative phase retrieval in the Fourier domain is applied to improve the quality of the reconstructed images. In scattering imaging, the characteristics of the scattering medium determine the quality of the reconstructed image. To quantitatively evaluate the effective range of the proposed method, we investigate the effect of the amount of phase modulation of a thin scattering medium on the reconstruction results. The results show that the proposed method is effective even when the amount of phase modulation is large, beyond the weak-phase-object approximation. In the experiment, a diffuser is used as the scattering medium, and the 3D imaging properties, especially depth-position measurement, are demonstrated using fluorescent beads and plant cells.

2. Principle of the TIE-based 3D fluorescence imaging with iterative phase retrieval through a scattering medium

Figure 1 shows a conceptual diagram of the proposed 3D fluorescence imaging, where the object light passes through a scattering medium. We assume three planes: the object plane, the plane of the thin scattering medium, and the recording plane. The object is assumed to be fluorescent beads or cells stained with fluorescent proteins, with a size of about 10 µm. Because we use the fluorescence signal, the object is a non-negative real-valued amplitude distribution, denoted by ${g_1}({{u_1},{v_1}} )$. The fluorescence light propagates to the scattering medium placed at a distance d1 from the object. The light wave incident on the scattering medium is represented by ${g_2}({{u_2},{v_2}} )$, and the output wave just after the scattering medium is represented by ${g_3}({{u_2},{v_2}} )$. Here we assume that the scattering medium is thin, so the amplitude distribution is unchanged after passing through it. After further propagation by a distance d2 to the recording plane, the complex amplitude distribution is described by ${g_4}({{u_3},{v_3}} )$. Using TIE, the complex amplitude distribution $g_4^{\prime}({{u_3},{v_3}} )$ is reconstructed by calculating the phase distribution from two or more intensity distributions recorded along the propagation direction. When there is no scattering medium, the original object can be reconstructed by back-propagating the complex amplitude distribution obtained by TIE over a distance of (d1 + d2) [16]. This reconstruction is identical to digital phase conjugation propagation; hence, we call this phase-conjugate reconstruction the conventional method. When there is a scattering medium, back-propagating $g_4^{\prime}({{u_3},{v_3}} )$ from the recording plane to the plane where the original object was located yields a reconstructed image distorted by the scattering medium.
When the scattering medium can be treated as a weak phase modulator, both non-modulated and modulated light waves exist in the reconstructed image, and noise components due to the wave modulated by the scattering medium are superimposed on it. To remove this noise component, iterative phase retrieval in the Fourier (spatial frequency) domain is applied, under the assumption that the amplitude spectrum of the object is little affected by the scattering, whereas the phase spectrum is strongly influenced. The validity of this assumption is discussed in Sec. 4. Image recovery can be achieved even when the amount of phase modulation exceeds the weak-phase-modulation condition.

Fig. 1. Schematic of the proposed 3D fluorescence imaging. (a) is the concept of TIE-based 3D fluorescence imaging through a scattering medium and (b) is the schematic of iterative phase retrieval.


The principle of the proposed method is explained mathematically by the following equations. The propagation of ${g_1}({{u_1},{v_1}} )$ to ${g_2}({{u_2},{v_2}} )$ in air is expressed by Eq. (1).

$$\begin{array}{l} {g_2}({{u_2},{v_2}} )= \mathop \smallint \limits_{ - \infty }^\infty \mathop \smallint \limits_{ - \infty }^\infty {g_1}({{u_1},{v_1}} )\textrm{exp}\left\{ {i\frac{\pi }{{\lambda {d_1}}}({{{({{u_2} - {u_1}} )}^2} + {{({{v_2} - {v_1}} )}^2}} )} \right\}d{u_1}d{v_1}\\ = \textrm{}{g_1}({{u_2},{v_2}} )\otimes Q\left( {{u_2},{v_2};\textrm{}\frac{1}{{{d_1}}}} \right) \end{array}$$
where $Q\left( {{u_2},{v_2};\textrm{}\frac{1}{{{d_1}}}} \right)$ is given by
$$Q\left( {{u_2},{v_2};\textrm{}\frac{1}{{{d_1}}}} \right) = \textrm{exp}\left\{ {i\frac{\pi }{{\lambda {d_1}}}({u_2^2 + v_2^2} )} \right\}$$
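As a concrete illustration, the free-space propagation of Eq. (1) can be computed numerically with FFTs using the Fresnel transfer function, which is equivalent to convolution with the kernel Q up to a constant phase factor. The following is a minimal numpy sketch; the function name `fresnel_propagate` and all parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def fresnel_propagate(field, wavelength, distance, dx):
    """Fresnel propagation of a sampled 2D field, equivalent (up to a
    constant phase) to convolution with the kernel Q of Eq. (2)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function in the spatial frequency domain
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a Gaussian spot by d1, then back-propagate by -d1
# (digital phase conjugation); the round trip recovers the input field.
n, dx, wl, d1 = 256, 0.5e-6, 560e-9, 50e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
g1 = np.exp(-(X**2 + Y**2) / (5e-6)**2)
g2 = fresnel_propagate(g1, wl, d1, dx)
g1_back = fresnel_propagate(g2, wl, -d1, dx)
```

A negative distance implements the back-propagation (digital phase conjugation) used throughout this section.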

Assuming that the modulation by the scattering medium is ${\phi _2}({{u_2},{v_2}} )$ and that the medium is weakly scattering, the output wave can be expanded by a Taylor series under the weak-phase-modulation condition as follows.

$$\begin{aligned}& g_3\left( {u_2,v_2} \right) = g_2\left( {u_2,v_2} \right)\exp \left( {i\phi _2\left( {u_2,v_2} \right)} \right)\\ &\quad = g_2\left( {u_2,v_2} \right)\left( {1 + i\phi _2\left( {u_2,v_2} \right) - \frac{1}{2}\phi _2^2 \left( {u_2,v_2} \right) + \cdots } \right)\\ &\quad \approx g_2\left( {u_2,v_2} \right)\left( {1 + i\phi _2\left( {u_2,v_2} \right)} \right)\end{aligned}$$

Propagating further by d2, we obtain ${g_4}({{u_3},{v_3}} )$, which is recorded by the image sensor and processed by TIE to obtain the phase distribution.

$$\begin{aligned} g_4\left( {u_3,v_3} \right) &= g_3\left( {u_3,v_3} \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_2}}} \right)\\ &= g_2\left( {u_3,v_3} \right)\left( {1 + i\phi _2\left( {u_3,v_3} \right) - \frac{1}{2}\phi _2^2\left( {u_3,v_3} \right) + \cdots } \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_2}}} \right)\\ &= g_2\left( {u_3,v_3} \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_2}}} \right) + ig_2\left( {u_3,v_3} \right)\phi _2\left( {u_3,v_3} \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_2}}} \right) + \mathrm{O}\left( {\phi _2^2\left( {u_3,v_3} \right)} \right)\\ &= g_1\left( {u_3,v_3} \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_1 + d_2}}} \right) + ig_2\left( {u_3,v_3} \right)\phi _2\left( {u_3,v_3} \right)\otimes Q\left( {u_3,v_3;\frac{1}{{d_2}}} \right) + \mathrm{O}\left( {\phi _2^2\left( {u_3,v_3} \right)} \right)\end{aligned}$$
where $\textrm{O}({\phi_2^2({{u_3},{v_3}} )} )$ is given by
$$\mathrm{O}({\phi_2^2({{u_3},{v_3}} )} )= g_2({{u_3},{v_3}} )\left( { - \frac{1}{2}\phi_2^2({{u_3},{v_3}} )+ \cdots } \right) \otimes Q\left( {{u_3},{v_3};\frac{1}{{{d_2}}}} \right)$$

If the complex amplitude distribution measured by TIE is close to ${g_4}({{u_3},{v_3}} )$, the original object image is recovered by digital phase conjugation to the object plane, i.e., by the back-propagation calculation of the first term on the right-hand side of Eq. (4). Due to the scattering medium, however, the second- and higher-order terms on the right-hand side are superimposed as noise in the reconstructed image and degrade its quality. Therefore, further improvement of the reconstructed image quality is required. Here, we use iterative phase retrieval in the spatial frequency domain of the reconstructed object wave [11]. A schematic of the iterative phase retrieval is shown in Fig. 1(b). When the phase modulation by the diffuser is sufficiently small, the amplitude spectrum distribution is less influenced than the phase spectrum distribution; this property is confirmed in Sec. 4.2. We denote the reconstructed amplitude and phase distributions of the light wave in the object plane after back propagation by |$g_1^{\prime}({{u_1},{v_1}} )$| and ∠$g_1^{\prime}({{u_1},{v_1}} )$, respectively, and the amplitude and phase spectrum distributions of the light wave in the Fourier domain by |$\widetilde {G_1^{\prime}}({{f_x},{f_y}} )$| and ∠$\widetilde {G_1^{\prime}}({{f_x},{f_y}} )$, respectively. Then, $g_1^{\prime}({{u_1},{v_1}} )$ and $\widetilde {G_1^{\prime}}({{f_x},{f_y}} )$ in the object plane and Fourier plane are expressed by Eqs. (6) and (7), where FT denotes the Fourier transform operator.

$$g_1^{\prime}({{u_1},{v_1}} )= |{g_1^{\prime}({{u_1},{v_1}} )} |\textrm{exp}({i\angle g_1^{\prime}({{u_1},{v_1}} )} )$$
$$\widetilde {G_1^{\prime}}({{f_x},{f_y}} )= |{\widetilde {G_1^{\prime}}({{f_x},{f_y}} )} |\textrm{exp} ({i\angle \widetilde {G_1^{\prime}}({{f_x},{f_y}} )} )= FT[{g_1^{\prime}({{u_1},{v_1}} )} ]$$

As the second constraint, we keep the amplitude spectrum distribution $|{\widetilde {G_{1,0}^{\prime}}({{f_x},{f_y}} )} |$, i.e., the amplitude spectrum of the light wave obtained by the Fourier transform of the wave reconstructed by back propagation from the recording plane. In the iterative phase retrieval, the initial phase spectrum distribution $\angle \widetilde {G_1^{\prime}}({{f_x},{f_y}} )$ is taken as the phase spectrum of the same back-propagated wave. Then, $g_1^{\prime}({{u_1},{v_1}} )$ is calculated by the inverse Fourier transform of Eq. (7). The first constraint, applied in the object plane, is that the object is a non-negative real number, so Eq. (8) is applied.

$$g_1^{\prime}({{u_1},{v_1}} )= R\quad \textrm{where}\quad R = \begin{cases} \textrm{real}({g_1^{\prime}({{u_1},{v_1}} )} ) & \textrm{if}\ \textrm{real}({g_1^{\prime}({{u_1},{v_1}} )} )> 0\\ 0 & \textrm{if}\ \textrm{real}({g_1^{\prime}({{u_1},{v_1}} )} )\le 0 \end{cases}$$

By taking the Fourier transform of Eq. (8), the amplitude and phase spectrum distributions are obtained as in Eq. (7). The amplitude spectrum distribution is replaced by $|{\widetilde {G_{1,0}^{\prime}}({{f_x},{f_y}} )} |$, while the phase spectrum distribution is kept as it is. The iterative process continues until the number of iterations reaches a maximum value or the update amount falls below a set threshold.
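The iterative loop described above (Fig. 1(b) and Eqs. (6)–(8)) can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' code: the fixed iteration count, the function name, and the synthetic test object are our assumptions, and the demo degrades only the phase spectrum, mirroring the assumption that the amplitude spectrum is less affected by scattering.

```python
import numpy as np

def iterative_phase_retrieval(g1_prime, n_iter=200):
    """Fourier-domain iterative phase retrieval of Fig. 1(b).
    Object-plane constraint: non-negative real amplitude (Eq. (8)).
    Fourier-plane constraint: keep the initial amplitude spectrum,
    assumed to be little affected by the scattering."""
    G0_amp = np.abs(np.fft.fft2(g1_prime))   # fixed amplitude spectrum
    g = g1_prime
    for _ in range(n_iter):
        g = np.clip(g.real, 0.0, None)                # Eq. (8)
        G = np.fft.fft2(g)
        G = G0_amp * np.exp(1j * np.angle(G))         # replace amplitude only
        g = np.fft.ifft2(G)
    return np.clip(g.real, 0.0, None)

# Demo: corrupt only the phase spectrum of a non-negative object,
# mirroring the assumption that scattering mainly disturbs the phase.
rng = np.random.default_rng(0)
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X**2 + Y**2) / 8.0**2)
OBJ = np.fft.fft2(obj)
degraded = np.fft.ifft2(np.abs(OBJ) *
                        np.exp(1j * (np.angle(OBJ) +
                                     0.4 * rng.standard_normal((n, n)))))
recovered = iterative_phase_retrieval(degraded)
```

Because the object is non-negative and the amplitude spectrum constraint is exact in this toy setting, the loop pulls the degraded field back toward the original object.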

3. Experiment

3.1 Experimental system

Figure 2 shows the experimental system. The light from an LED source (Thorlabs, M470L4) with a central wavelength of 470 nm passes through a bandpass filter (BPF1) with a 39 nm bandwidth, is expanded and collimated by a lens (L1), and then passes through a focusing lens (L2). The light transmitted through L2 is reflected by a dichroic mirror (DM, Thorlabs, DMLP490R) and focused onto the pupil of the microscope objective lens (MO, Nikon, LU Plan 20x, NA = 0.45, working distance of 4.5 mm). The light passing through the MO becomes a collimated plane wave and illuminates the sample through a diffuser (D) with a diffusion angle of 1 degree. The relationship between the focal plane of the MO, the diffuser, and the object is shown in the figure. Fluorescent beads and tobacco-cultured cells were used as the objects to be measured. The fluorescent beads (Nile red) were 10 µm in diameter with a central fluorescence wavelength of 560 nm. The fluorescence from the object is magnified by the MO and a tube lens (TL), then passes through a 4f imaging system consisting of two lenses (L3 and L4) and the bandpass filter BPF2, and the image is formed on the image sensor. By changing the focal length of the electrically tunable lens (ETL, Optotune, EL-16-40-TC) placed at the Fourier plane of the 4f imaging system, the depth position observed by the image sensor can be shifted without changing the image magnification. The image sensor (Hamamatsu Photonics, C13440-20CU) has 2048 × 2048 pixels with a pixel size of 6.5 µm, used with 2 × 2 binning. When acquiring fluorescence from the fluorescent beads or the plant cells, BPF2 is a bandpass filter of 560 nm ± 12 nm or 525 nm ± 39 nm, respectively. In the following experiments, five images were recorded at 2 µm intervals by controlling the focal length of the ETL. From the five images, the phase distribution ${\phi _z}({\boldsymbol x} )$ was calculated by a Fourier-transform-based TIE solver [24], shown in Eq. (9).
FT denotes the Fourier transform and FT−1 the inverse Fourier transform, k is the wavenumber, and u is the coordinate in the Fourier plane. $\varepsilon > 0$ is a small regularization constant. ${I_0}({\boldsymbol x} )$ is the central image of the five images, and $\frac{{\partial I({\boldsymbol x} )}}{{\partial z}}$ is calculated from the five images by polynomial fitting, as described in Ref. [25]. The interval between adjacent images is 2 µm. For single-shot TIE, a method using multiple image sensors can be used, as in Ref. [26]. For the time-lapse imaging of plant cell phenomena addressed in this study, a single image sensor was used because image recording at intervals of a few minutes is sufficient.

$${\phi _z}({\boldsymbol x} )={-} kF{T^{ - 1}}\left\{ {\frac{{j2\pi {\boldsymbol u}}}{{4{\pi^2}{{|{\boldsymbol u} |}^2} + \varepsilon }}FT\left[ {\frac{1}{{{I_0}({\boldsymbol x} )}}F{T^{ - 1}}\left\{ {\frac{{j2\pi {\boldsymbol u}}}{{4{\pi^2}{{|{\boldsymbol u} |}^2} + \varepsilon }}FT\left[ {\frac{{\partial I({\boldsymbol x} )}}{{\partial z}}} \right]} \right\}} \right]} \right\}$$
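A compact numpy implementation of the solver of Eq. (9), together with an axial derivative estimated from a five-image focal stack by a quadratic fit in the spirit of Ref. [25], might look as follows. This is a sketch under our own assumptions: the function names, the default regularization constant, and the fit order are illustrative.

```python
import numpy as np

def tie_phase(I0, dIdz, wavelength, dx, eps=1e-9):
    """FFT-based TIE solver of Eq. (9) (after Ref. [24]).
    I0: in-focus intensity image, dIdz: axial intensity derivative."""
    k = 2.0 * np.pi / wavelength
    ny, nx = I0.shape
    UX, UY = np.meshgrid(np.fft.fftfreq(nx, d=dx),
                         np.fft.fftfreq(ny, d=dx))
    denom = 4.0 * np.pi**2 * (UX**2 + UY**2) + eps

    def filt(f, U):  # FT^-1{ j 2 pi u / (4 pi^2 |u|^2 + eps) FT[f] }
        return np.fft.ifft2(1j * 2.0 * np.pi * U / denom * np.fft.fft2(f)).real

    gx, gy = filt(dIdz, UX), filt(dIdz, UY)
    return -k * (filt(gx / I0, UX) + filt(gy / I0, UY))

def axial_derivative(stack, dz):
    """dI/dz at the central plane of an odd-length focal stack via a
    quadratic least-squares fit along z (in the spirit of Ref. [25])."""
    z = (np.arange(stack.shape[0]) - stack.shape[0] // 2) * dz
    coeffs = np.polyfit(z, stack.reshape(stack.shape[0], -1), 2)
    return coeffs[1].reshape(stack.shape[1:])  # linear term = slope at z = 0

# Self-check: for uniform intensity the TIE reduces to a Poisson equation,
# so a sinusoidal phase on an exact FFT grid frequency is recovered exactly.
n, dx, wl = 128, 1e-6, 560e-9
x = np.arange(n) * dx
Xg, Yg = np.meshgrid(x, x)
f0 = 8 / (n * dx)
phi_true = 0.5 * np.cos(2 * np.pi * f0 * Xg)
dIdz = (2 * np.pi * f0)**2 * phi_true / (2 * np.pi / wl)
phi_rec = tie_phase(np.ones((n, n)), dIdz, wl, dx)
```

The two nested filters correspond directly to the inner and outer brackets of Eq. (9).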


Fig. 2. Experimental system. BPF1 and BPF2: bandpass filter, L1, L2, L3, and L4: lens, M: mirror, MO: microscope objective lens, D: diffuser, DM: dichroic mirror, TL: tube lens, and ETL: electrically tunable lens.


3.2 Evaluation of depth measurement using fluorescent beads

Figure 3 shows the intensity image of the fluorescent beads without a diffuser when the beads are located at the focal plane of the microscope objective lens. We investigated depth measurement using the proposed method with fluorescent beads that can be shifted along the depth direction by a movable stage. Specifically, we investigated whether the depth position of the beads could be determined by fixing the position of the diffuser and varying the distance between the fluorescent beads and the diffuser. For the evaluation of depth measurement for 3D object reconstruction, Fig. 4 shows the defocused images obtained by shifting the depth position of the fluorescent beads by 5 µm, 10 µm, and 15 µm. As the shift increases, the spread of the fluorescent bead image is enlarged. In Fig. 4, the brightness is adjusted for better visibility. Five intensity images at shifted depth positions, used to solve the TIE, were acquired with the electrically tunable lens (ETL). Figure 5 shows the images reconstructed by the proposed method. Comparing Figs. 5(b-1) and 4(b-1), 5(b-2) and 4(b-2), and 5(b-3) and 4(b-3), the two beads are clearly resolved by the proposed method. Figures 5(c-1), (c-2), and (c-3) show the images reconstructed by the conventional method. Comparing Figs. 5(b) and (c), the proposed method significantly suppresses the noise around the beads. For quantitative evaluation, the full width at half maximum (FWHM) of the reconstructed intensity distribution of the fluorescent beads was calculated; the results are shown in Fig. 6, where the vertical axis represents the FWHM of the reconstructed intensity distribution, the horizontal axis represents the distance used in the propagation calculation, the legend indicates the position shifted from the focal plane, and the solid black line represents the FWHM of the fluorescent beads without the diffuser in the imaging plane.
As reference data, we use the conventional method, i.e., complex amplitude measurement by TIE followed by digital phase conjugation [16]. The FWHM was calculated by approximating the intensity distribution of the fluorescent beads with a Gaussian distribution, focusing on the single bead indicated by the arrow in Fig. 5(b-1); we confirmed that similar results were obtained for beads at other locations. Figures 6(a) and (b) show the FWHMs of the reconstructed intensity distributions of the fluorescent beads for the conventional method and the proposed method, respectively. The FWHM reconstructed by the conventional method in Fig. 6(a) is larger than the solid black line (ideal case), which means that the conventional method could not retrieve the original beads effectively. On the other hand, Fig. 6(b) shows that, for all three bead positions, the curves cross the solid black line at a propagation distance equal to the shifted distance of the beads. This verifies that the depth of the fluorescent beads through the diffuser can be measured effectively by the proposed method. The FWHM of the bead images in Fig. 6(b) becomes smaller than the bead size as the propagation distance increases. We attribute this to the fact that the beads are not thin phase objects but spheres, which act as lenses during the propagation calculation and thus shrink the reconstructed spot. Regarding the reconstruction range along the depth direction, as the diffusion angle increases, the light energy is spread by scattering and the detected intensity decreases. The quality of the reconstructed image is then degraded because it is experimentally more difficult to detect the scattered light. Furthermore, as the depth position moves further away, the angular aperture with which the image sensor captures the object light becomes smaller, which widens the spot diameter of the reconstructed image. Therefore, the reconstructable depth range is expected to become narrower.
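The FWHM extraction used for Fig. 6 can be sketched as a Gaussian fit to a one-dimensional bead profile. Since a Gaussian is a parabola in log-intensity, a numpy-only polynomial fit suffices; the window size, the background-free assumption, and the function name are ours, not the paper's.

```python
import numpy as np

def fwhm_gaussian_fit(profile, dx, halfwin=5):
    """FWHM of a background-subtracted bead profile via a Gaussian fit,
    performed as a parabola fit to the log-intensity around the peak."""
    i0 = int(np.argmax(profile))
    idx = np.arange(max(i0 - halfwin, 0), min(i0 + halfwin + 1, profile.size))
    c = np.polyfit(idx.astype(float), np.log(profile[idx]), 2)
    sigma_px = np.sqrt(-1.0 / (2.0 * c[0]))  # log-Gaussian: c[0] = -1/(2 sigma^2)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_px * dx

# Synthetic bead profile with a known width as a sanity check
x = np.arange(64)
sigma_px_true = 3.0
profile = np.exp(-(x - 30.0)**2 / (2.0 * sigma_px_true**2))
fwhm = fwhm_gaussian_fit(profile, dx=0.65e-6)
```

For a true Gaussian, FWHM = 2√(2 ln 2) σ ≈ 2.355 σ, which the fit reproduces.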


Fig. 3. Fluorescent beads without a diffuser in the imaging plane. (a) is the acquired image and (b) is a cropped image of (a) as indicated by the red box.



Fig. 4. Intensity images of fluorescent beads with diffusers that are displaced from the focal plane. (a) are acquired images and (b) are cropped images of (a). -1,2,3) are images where the sample depth positions are displaced by 5 µm, 10 µm, and 15 µm, respectively.



Fig. 5. Reconstructed images of fluorescent beads by the proposed method. (a) are reconstructed images and (b) are cropped images of (a). (c) are the reconstructed images by the conventional method. -1,2,3) are the images where the sample depth positions are displaced by 5 µm, 10 µm, and 15 µm, respectively.



Fig. 6. FWHM of reconstructed fluorescent beads by applying (a) the conventional method and (b) the proposed method. The black solid line indicates the FWHM of the recorded fluorescent bead. The horizontal axis shows the propagation distances used in the propagation calculations during phase conjugate reconstruction.


3.3 Applications to plant cell observation

We applied the proposed method to the observation of living plant cells, using tobacco suspension culture cells expressing EGFP-β-tubulin. Microtubules are the major component of the mitotic apparatus; therefore, the cell division process is easily observed via GFP-tubulin. The results for tobacco-cultured cells without a diffuser are shown in Fig. 7. Figures 7(a) and (b) are images acquired at different locations in the field of view and show different shapes of tubulin before cell division. In Fig. 7, it can be observed that the cells are located at different depths and that reconstruction based on TIE followed by the digital phase conjugation calculation can retrieve the cells at the appropriate depth positions. We then investigated the possibility of recovering the scattered image while maintaining the depth information. Figure 8 shows fluorescence images of tobacco-cultured cells in which the object is visually most in focus when observed through a diffuser; this position is taken as 0 µm. After obtaining the complex amplitude distribution by TIE, propagation calculations of -5 µm, 0 µm, 5 µm, and 10 µm were performed, and iterative phase retrieval was used to improve the image quality. The images reconstructed by the proposed method are shown in Fig. 9, where the blurring of the cell nuclei changes with the propagation distance in both Figs. 9(a) and 9(b). In Fig. 9(a-1), the mitotic apparatus in the upper position is focused and that in the bottom position is defocused, and vice versa in Fig. 9(a-4). In Fig. 9(b-4), the cell nuclei in the upper left and bottom right are focused, whereas in Fig. 9(b-1) they are defocused. These results indicate that the depth-position information can be recovered even for living plant cells through a diffuser, with a significant improvement in image quality.
Nonetheless, a comparison of Fig. 9(b) with Fig. 7(b) shows that further improvement of the image quality through the diffuser is still necessary.

Fig. 7. Images of tobacco-cultured cells without a diffuser. (a) and (b) are cropped images of a portion of the image sensor image, respectively. -1,2,3,4) are the positions displaced by -5 µm, 0 µm, 5 µm, and 10 µm, respectively.



Fig. 8. Fluorescent images of tobacco-cultured cells where the object is visually most in focus when observed through a diffuser. (a) and (b) are cropped images of a part of the image acquired by the image sensor that corresponds to Figs. 7(a) and (b), respectively.



Fig. 9. Images of tobacco-cultured cells reconstructed by the proposed method. (a) and (b) are cropped images of a part of the image acquired by the image sensor. -1,2,3,4) are the positions displaced by -5 µm, 0 µm, 5 µm, and 10 µm, respectively.


3.4 Time-lapse measurement using tobacco-cultured cells

The behavior of microtubules in tobacco-cultured cells changes over time: the microtubules elongate and separate, resulting in cell division. The applicability of the proposed method to time-lapse measurement is demonstrated using images of tobacco-cultured cells taken every 10 min for 50 min. Figure 10 shows the fluorescence images of tobacco-cultured cells recorded by the image sensor and numerically reconstructed by the proposed method. Figure 10(a) is a fluorescence image of a tobacco-cultured cell acquired through a diffuser, and Figs. 10(b) and (c) are images reconstructed by the proposed method after ±50 µm propagation, respectively. Focused cell nuclei can be seen in the blue and red boxes. Figures 10(b) and (c) indicate that the proposed method can reconstruct appropriate images of nuclei at different depth positions.

Fig. 10. Fluorescence images of tobacco-cultured cells recorded by the image sensor and numerically reconstructed. (a) is the fluorescence image of a tobacco-cultured cell acquired through a diffuser. (b) and (c) are reconstructed images after ±50 µm propagation followed by iterative phase retrieval. The focused nuclei appear in the blue and red boxes.


Figure 11 shows the results of the time-lapse measurement of the scattered images of the plant cells lying in the region of the blue box in Fig. 10(a), without numerical reconstruction. These were obtained with a propagation distance of 50 µm. Figure 11(a) depicts the initial image acquisition (t = 0 min), and Figs. 11(b), (c), (d), (e), and (f) show the images recorded after 10 min, 20 min, 30 min, 40 min, and 50 min, respectively. Due to the scattering, it is difficult to observe the cell division process clearly (see Visualization 1).

Fig. 11. Time-lapse measurement results of the scattered images obtained in the blue box in Fig. 10(a). (a), (b), (c), (d), (e), and (f) are the fluorescent images at (a) the initial time of 0 min, and after (b) 10 min, (c) 20 min, (d) 30 min, (e) 40 min, and (f) 50 min, respectively (see Visualization 1).


Figure 12(a) shows the image recovered by the proposed method at the beginning of the acquisition, and Figs. 12(b), (c), (d), (e), and (f) show the recovered time-lapse images at 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 2). In Figs. 12(d) and (e), the extension of the microtubules can be observed, which is confirmed by the structure of the mitotic reserve zone. Figure 12(f) shows the cell structure transforming into a circular shape after cell division.


Fig. 12. Time-lapse measurement results of numerically reconstructed images in the blue box by the proposed method. (a), (b), (c), (d), (e), and (f) are the images after 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 2).


Figure 13 shows the results of the time-lapse measurement of the scattered images in the red box of Fig. 10. Figures 13(a), (b), (c), (d), (e), and (f) show the images recorded at 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 3).

Fig. 13. Time-lapse measurement results of the scattered images obtained in the red box in Fig. 10(a). (a), (b), (c), (d), (e), and (f) are the fluorescent images at (a) the initial time of 0 min, and after (b) 10 min, (c) 20 min, (d) 30 min, (e) 40 min, and (f) 50 min, respectively (see Visualization 3).


The reconstructed images are recovered by the proposed method, i.e., propagation by +50 µm followed by iterative phase retrieval. The experimental results are shown in Fig. 14. Figures 14(a), (b), (c), (d), (e), and (f) show the time-lapse reconstructed images at 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 4). In Fig. 14(b), the microtubules are clustered in a band (phragmoplast) and the structure of the preparatory zone can be seen. The microtubules extend as shown in Figs. 14(c) and (d), and microtubules spreading throughout the cell can be observed in Figs. 14(e) and (f).


Fig. 14. Time-lapse measurement results of the reconstructed image in the red box of Fig. 10 by the proposed method. (a), (b), (c), (d), (e), and (f) are the images after 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 4).


4. Discussion

4.1 Evaluation of the effectiveness of the proposed method with respect to the phase modulation range

In Sec. 2, we treated the scattering medium as a weakly phase-modulating object. In this section, we investigate the phase modulation range of the scattering medium over which the proposed method is applicable. The phase modulation range of the diffuser used in this experiment was measured by off-axis digital holography for quantitative phase measurement. The experimental system and the measurement results are shown in Figs. 15(a) and (b), respectively. In Fig. 15(a), a Mach-Zehnder interferometer is used, and the beam splitter BS2 is slightly tilted to create the fringe pattern for off-axis digital holography. There is no imaging system between the diffuser, which has a diffusion angle of 1 degree, and the image sensor; the distance between them is 200 mm. Figure 15(b) shows the cross-sectional phase distribution, where the horizontal and vertical axes indicate the pixel number and the phase, respectively. The results show that, for the diffuser used in this experiment, the phase change ranges from -1 to 1 rad. The phase modulation range of the diffuser was measured at different locations, and the standard deviation of the phase distribution dataset was found to be 0.0307 rad. The sub-image in Fig. 15(b) shows that the phase change range over 30 pixels is about 0.9 rad.
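The off-axis phase measurement used here follows standard Fourier-domain demodulation: isolate the +1 diffraction order around the carrier frequency, shift it to the center, and take the angle of the inverse transform. Below is a minimal sketch, with an illustrative carrier frequency and window size of our own choosing.

```python
import numpy as np

def offaxis_phase(hologram, carrier_px, halfwidth):
    """Recover the object phase from an off-axis hologram by isolating
    the +1 diffraction order in the Fourier domain and demodulating it."""
    n = hologram.shape[0]
    H = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = n // 2, n // 2 + carrier_px      # +1 order sits at the carrier
    win = np.zeros_like(H)
    win[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth] = \
        H[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth]
    win = np.roll(win, -carrier_px, axis=1)   # shift the order to the center
    return np.angle(np.fft.ifft2(np.fft.ifftshift(win)))

# Demo: a smooth phase object interfering with a tilted reference wave
n, m = 256, 40
x = np.arange(n)
X, Y = np.meshgrid(x, x)
phi = 0.5 * np.cos(2 * np.pi * 4 * X / n)
holo = np.abs(np.exp(-1j * 2 * np.pi * m * X / n) + np.exp(1j * phi))**2
rec = offaxis_phase(holo, m, 20)
```

The window must be wide enough to contain the object's sidebands but narrow enough to exclude the zero and -1 orders.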

Fig. 15. Measured phase amount range of the diffuser used in the experiment. (a) is the experimental system of digital holography and (b) is the measured phase profile. BS1 and BS2: beam splitter, M: mirror, D: diffuser.

Next, the relationship between the phase-change range of the scattering medium and the reconstruction accuracy was evaluated numerically. As shown in Eq. (4) and in Fig. 16(a), the phase distribution of the diffuser is given by φ2(u2, v2), and the field just behind it is g3(u2, v2) = g2(u2, v2)exp(iφ2(u2, v2)). φ2(u2, v2) is generated from random numbers, and its range is varied within 0 to 2π [rad]. The amplitude distribution of the input object is a Gaussian with a 1/e width of 10 µm at a center wavelength of 560 nm, the same as in the experiment, as shown in Fig. 16(b). Figure 16(c) shows the image of g3(u2, v2) propagated by a distance of -d1 before iterative phase retrieval, and Fig. 16(d) is the phase-retrieved image of Fig. 16(c). In Fig. 16(d), the central peak of the Gaussian appears more pronounced than in Fig. 16(c).

Fig. 16. Numerical simulation for quantitative evaluation of the effect of the phase amount range on the reconstructed image. (a) is a conceptual diagram of the simulation. (b) is the intensity distribution of the input object. (c) and (d) are the reconstructed images before and after iterative phase retrieval, respectively.
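The simulation described above can be sketched in Python with NumPy. This is a minimal illustration, not the authors' code: the grid size, pixel pitch, and function names are assumptions, while the 560 nm wavelength, the 10 µm 1/e-width Gaussian object, and the thin-phase-screen model come from the text.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Fresnel transfer-function propagation of a 2D complex field over dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx = 128, 0.5e-6                 # grid size and pixel pitch (assumed)
wl, d1 = 560e-9, 50e-6              # wavelength from the text; d1 = 50 um
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
g1 = np.exp(-(X ** 2 + Y ** 2) / (5e-6) ** 2)  # Gaussian object, 10 um 1/e width

rng = np.random.default_rng(0)
phase_range = 1.0                   # phase range of the screen [rad]
phi2 = phase_range * (rng.random((n, n)) - 0.5)

g2 = fresnel_propagate(g1, wl, d1, dx)        # object plane -> diffuser plane
g3 = g2 * np.exp(1j * phi2)                   # thin phase screen as in Eq. (4)
g_rec = fresnel_propagate(g3, wl, -d1, dx)    # digital phase-conjugation back-propagation
```

Because the transfer function has unit modulus, the propagation is unitary and the total intensity of `g_rec` matches that of the object field, which is a convenient sanity check for the implementation.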

The reconstructed images were evaluated by the structural similarity index measure (SSIM). Each condition was evaluated over 100 trials with different initial random phase distributions. SSIM was used to compare the quality of the reconstructed images before and after iterative phase retrieval when g3(u2, v2) was propagated by -d1. The results are shown in Fig. 17, where the red lines represent SSIMs after iterative phase retrieval and the blue lines represent those before it. The solid lines represent the SSIM, and the chain lines represent its standard deviation. Focusing on the area enclosed by the orange dotted line in Fig. 17, the SSIMs of the proposed method are larger than those of the conventional method, i.e., digital phase-conjugation reconstruction without iterative phase retrieval, when the phase range is less than 1.3 [rad], and the standard deviation is sufficiently small. This phase range is beyond the small phase modulation assumed in Eq. (4), which indicates that the proposed method is effective beyond the small-phase-modulation regime.

Fig. 17. Evaluation of the reconstructed image by SSIM with respect to the phase range. (a), (b), and (c) are the results for d1 = 50 µm, 100 µm, 150 µm, respectively.
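For reference, a global (single-window) SSIM can be computed with NumPy alone; the constants C1 and C2 follow the standard SSIM definition. This is a sketch, not the authors' evaluation code, which would typically use a locally windowed SSIM averaged over the image.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Single-window SSIM over the whole image (NumPy-only sketch)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
s_same = ssim_global(ref, ref)     # identical images give SSIM = 1
s_noisy = ssim_global(ref, noisy)  # noise lowers the SSIM
```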

4.2 Evaluation of the effectiveness of the iterative phase retrieval

For the iterative phase retrieval to improve image quality, the required condition is that the amplitude spectrum distribution after digital phase conjugation be close to that obtained without the scattering medium, that is, the original amplitude spectrum distribution of the object. We discuss the effect of the amount of phase change of the scattering medium on the amplitude spectrum using experimental and simulation results.

Firstly, the experimental results are used to examine the influence of the scattering medium on the amplitude spectrum. As shown in Fig. 18(a), the image of the fluorescent beads back-propagated to the object plane is masked so that everything except one fluorescent bead is removed. The masked area is moved to the center of the image and then Fourier transformed to obtain the amplitude and phase spectrum distributions. The resulting amplitude spectrum distribution is shown in Fig. 18(b). Its similarity to the amplitude spectrum distribution of the reconstructed light without the scattering medium was evaluated using SSIM. In this process, images shifted by 5 µm, 10 µm, and 15 µm were used, and the measured SSIM was 0.969 for all of them. The SSIM at the position displaced by 20 µm was 0.813, and no depth measurement was possible there. These results suggest that the effect of the iterative phase retrieval method appears when the amplitude spectrum distribution is preserved. The phase change of the diffuser used in the experiment was 0.9 [rad], as shown in Fig. 15.

Fig. 18. Amplitude spectrum. (a) is the masked image after back propagation. (b) is the amplitude spectrum distribution of (a).
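The masking-and-recentering step can be sketched as follows. The helper name, bead position, and mask radius are hypothetical, and a synthetic Gaussian bead stands in for the experimental data; for a centered, circularly symmetric, non-negative bead, the resulting phase spectrum is (numerically) zero, as the text argues.

```python
import numpy as np

def bead_amplitude_spectrum(field, center, radius):
    """Mask out everything but one bead, shift it to the array center,
    and return the amplitude and phase spectra of the masked field."""
    n = field.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    masked = np.where(mask, field, 0)
    # move the masked bead to the array center before the FFT
    masked = np.roll(masked, (n // 2 - center[0], n // 2 - center[1]), axis=(0, 1))
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(masked)))
    return np.abs(spec), np.angle(spec)

# synthetic example: one Gaussian "bead" away from the image center
n = 128
yy, xx = np.mgrid[0:n, 0:n]
bead = np.exp(-((yy - 40.0) ** 2 + (xx - 90.0) ** 2) / 4.0 ** 2)
amp, ph = bead_amplitude_spectrum(bead, (40, 90), 12)
```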

Secondly, the difference between the amplitude spectrum distributions with and without the scattering medium was evaluated by the mean squared error (MSE) in simulation. The amplitude spectrum was obtained by Fourier transform of the reconstructed light wave with and without the scattering medium. The results are shown in Fig. 19. The solid lines represent the MSE, and the chain lines represent the standard deviation obtained over 50 trials with different initial phase distributions. For an image of 128 × 128 pixels with intensities of 0 to 1 per pixel, the MSE is approximately 50, i.e., about 0.003 per pixel, for a phase range of 1.3 [rad]. This result suggests that for phase changes of 1.3 [rad] or less, the quality of the reconstructed image is improved by applying the iterative phase retrieval, which is consistent with the experimental results. The phase change of 1.3 [rad] indicates that the proposed method is effective even in the region where the weak-phase approximation does not hold, and the standard deviation is sufficiently small. Comparing Figs. 19(a) to (c), there are only minor differences, and the three plots are almost identical. This is because the amplitude spectrum is preserved under the Fresnel approximation for propagation over the distance d1.

Fig. 19. Evaluation of the amplitude spectrum of the reconstructed light wave as a function of the phase range of the scattering medium. (a), (b), and (c) are MSEs for d1 = 50 µm, 100 µm, and 150 µm, respectively.
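The spectral-MSE metric itself is simple to reproduce. The sketch below, with an assumed Gaussian object and uniform random phase screens, only illustrates the qualitative trend that a wider phase range distorts the amplitude spectrum more; the values in Fig. 19 come from the authors' full propagation simulation.

```python
import numpy as np

def amp_spectrum_mse(obj, phi):
    """MSE between the amplitude spectra of a field with and without a
    thin phase screen phi (sketch of the simulation metric)."""
    a_clean = np.abs(np.fft.fft2(obj))
    a_scat = np.abs(np.fft.fft2(obj * np.exp(1j * phi)))
    return np.mean((a_clean - a_scat) ** 2)

rng = np.random.default_rng(2)
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X ** 2 + Y ** 2) / 10.0 ** 2)

# a wider phase range distorts the amplitude spectrum more strongly
mse_small = amp_spectrum_mse(obj, 0.5 * (rng.random((n, n)) - 0.5))
mse_large = amp_spectrum_mse(obj, 4.0 * (rng.random((n, n)) - 0.5))
```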

Finally, we evaluate the phase spectrum distribution recovered by the iterative phase retrieval. Phase spectrum distributions before and after iterative phase retrieval were compared, using fluorescent beads as objects. The cross-section of the phase spectrum along the horizontal spatial frequency at a vertical spatial frequency of 0 is shown in Fig. 20. Here, only the 40-pixel portion of the phase profile where the amplitude spectrum has appreciable values is shown. The vertical and horizontal axes represent the phase and the pixel number, respectively. The solid lines (labeled “Before” in the legend of Fig. 20) show the phase distribution before iterative phase retrieval, and the dotted lines (labeled “After”) show the phase distribution after it. Red, blue, and green indicate the results for images taken at 5 µm, 10 µm, and 15 µm displacement, respectively. The fluorescent bead image is circular in shape and takes non-negative real values; therefore, ideally, the imaginary part of its Fourier transform should be zero and the phase spectrum should be zero. From Fig. 20, after the iterative phase retrieval, the phase profile is close to zero. Therefore, the iterative phase retrieval works well to remove the residual noise caused by the scattering.

Fig. 20. Phase-spectrum profiles of a fluorescent bead before and after the iterative phase retrieval. “Before 5,” “Before 10,” and “Before 15” show the phase distributions before the iterative phase retrieval for images obtained at shifts of 5 µm, 10 µm, and 15 µm, respectively; “After 5,” “After 10,” and “After 15” show the corresponding distributions after the iterative phase retrieval.
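The two constraints discussed in this section, a known amplitude spectrum in the Fourier domain and a real, non-negative object in the spatial domain, as in Eqs. (6)-(8), can be combined into an error-reduction loop. The sketch below uses an assumed toy Gaussian object, iteration count, and function name; it is an illustration of the constraint structure, not the authors' implementation.

```python
import numpy as np

def iterative_phase_retrieval(g, amp_spec, n_iter=50):
    """Error-reduction loop: enforce the measured amplitude spectrum in the
    Fourier domain, then a real, non-negative constraint in the object domain."""
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = amp_spec * np.exp(1j * np.angle(G))  # keep only the phase estimate
        g = np.fft.ifft2(G)
        g = np.maximum(g.real, 0.0)              # real, non-negative object
    return g

# toy check: a non-negative Gaussian object and a noisy starting estimate
n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X ** 2 + Y ** 2) / 6.0 ** 2)
amp = np.abs(np.fft.fft2(obj))

rng = np.random.default_rng(3)
start = obj + 0.3 * rng.random((n, n))
rec = iterative_phase_retrieval(start, amp)
```

Starting from a noisy estimate, the loop pulls the field back toward an object that satisfies both constraints, which is exactly the mechanism that drives the phase spectrum in Fig. 20 toward zero.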

5. Conclusions

To reconstruct 3D information of an object observed through a scattering medium, we proposed a method combining TIE-based object-wave reconstruction with subsequent iterative phase retrieval. We verified the effectiveness of the proposed method by experiments and simulations. The wavefront of the fluorescent light passing through the scattering medium was measured by TIE, the depth information was recovered by digital phase-conjugation propagation, and the recovered image was further improved by removing the residual phase noise in the spatial frequency domain with the iterative phase retrieval. Simulations demonstrate the effectiveness of the proposed method in the range where the phase change of the scattering medium is smaller than 1.3 [rad]. From these results, it can be said that the proposed method can recover images degraded by scattering beyond the small-phase approximation.

Funding

Adaptable and Seamless Technology Transfer Program through Target-driven R&D (JPMJTR204C); Core Research for Evolutional Science and Technology (JPMJCR22P6); Japan Society for the Promotion of Science (20H05886, 21H04663, 21K18724, 23KJ1570).

Acknowledgments

We would like to thank Dr. Xiangyu Quan and Dr. Kaoru Ohta for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Yoon, M. Kim, M. Jang, et al., “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020). [CrossRef]  

2. L. Zhu, F. Soldevila, C. Moretti, et al., “Large field-of-view non-invasive imaging through scattering layers using fluctuating random illumination,” Nat. Commun. 13(1), 1447 (2022). [CrossRef]  

3. R. Yuste, “Fluorescence microscopy today,” Nat. Methods 2(12), 902–904 (2005). [CrossRef]  

4. D. Kato, H. Wake, P. R. Lee, et al., “Motor learning requires myelination to reduce asynchrony and spontaneity in neural activity,” Glia 68(1), 193–210 (2020). [CrossRef]  

5. Y. Masamizu, Y. R. Tanaka, Y. H. Tanaka, et al., “Two distinct layer-specific dynamics of cortical ensembles during learning of a motor task,” Nat. Neurosci. 17(7), 987–994 (2014). [CrossRef]  

6. E. A. Genina, A. N. Bashkatov, D. K. Tuchina, et al., “Optical properties of brain tissues at the different stages of glioma development in rats: pilot study,” Biomed. Opt. Express 10(10), 5182–5197 (2019). [CrossRef]  

7. S. Gigan, O. Katz, H. B. de Aguiar, et al., “Roadmap on wavefront shaping and deep imaging in complex media,” J. Phys. Photonics 4(4), 042501 (2022). [CrossRef]  

8. X. Wang, D. Li, Z. Liu, et al., “Spectra-separated depth-of-field extended fluorescence imaging through scattering media using speckle deconvolution,” Opt. Lasers Eng. 161, 107393 (2023). [CrossRef]  

9. J. Bertolotti, E. G. van Putten, C. Blum, et al., “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

10. O. Katz, P. Heidmann, M. Fink, et al., “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

11. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758 (1982). [CrossRef]  

12. S. Popoff, G. Lerosey, M. Fink, et al., “Image transmission through an opaque material,” Nat. Commun. 1(1), 81 (2010). [CrossRef]  

13. H. Zhang, B. Zhang, Q. Feng, et al., “Self-reference method for measuring the transmission matrices of scattering media,” Appl. Opt. 59(25), 7547–7552 (2020). [CrossRef]  

14. A. Boniface, J. Dong, and S. Gigan, “Non-invasive focusing and imaging in scattering media with a fluorescence-based transmission matrix,” Nat. Commun. 11(1), 6154 (2020). [CrossRef]  

15. Y. Okamoto, R. Horisaki, and J. Tanida, “Noninvasive three-dimensional imaging through scattering media by three-dimensional speckle correlation,” Opt. Lett. 44(10), 2526–2529 (2019). [CrossRef]  

16. S. K. Rajput, M. Kumar, X. Quan, et al., “Three-dimensional fluorescence imaging using the transport of intensity equation,” J. Biomed. Opt. 25(03), 1 (2019). [CrossRef]  

17. S. K. Rajput, O. Matoba, M. Kumar, et al., “Multi-physical parameter cross-sectional imaging of quantitative phase and fluorescence by integrated multimodal microscopy,” IEEE J. Sel. Top. Quantum Electron. 27(4), 1–9 (2021). [CrossRef]  

18. J. Rosen, A. Vijayakumar, M. Kumar, et al., “Recent advances in self-interference incoherent digital holography,” Adv. Opt. Photonics 11(1), 1–66 (2019). [CrossRef]  

19. X. Quan, K. Nitta, O. Matoba, et al., “Phase and fluorescence imaging by combination of digital holographic microscopy and fluorescence microscopy,” Opt. Rev. 22(2), 349–353 (2015). [CrossRef]  

20. L. T. Bang, H. Y. Wu, Y. Zhao, et al., “Depth enhancement of 3D microscopic living-cell image using incoherent fluorescent digital holography,” J. Microsc. 265(3), 372–385 (2017). [CrossRef]  

21. M. Shoda, X. Quan, T. Murata, et al., “Measurement of Scattered Fluorescence Light by TIE-based 3D Fluorescence Imaging Technique,” in Proceedings of the 2022 Conference on Lasers and Electro-Optics Pacific Rim, Technical Digest Series (Optica Publishing Group, 2022), paper P_CM15_04. [CrossRef]  

22. N. Yoneda, Y. Saita, and T. Nomura, “Three-dimensional fluorescence imaging through dynamic scattering media by motionless optical scanning holography,” Appl. Phys. Lett. 119(16), 161101 (2021). [CrossRef]  

23. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22(25), 31426–31433 (2014). [CrossRef]  

24. C. Zuo, J. Li, J. Sun, et al., “Transport of intensity equation: a tutorial,” Opt. Lasers Eng. 135, 106187 (2020). [CrossRef]  

25. N. Yoneda, A. Onishi, Y. Saita, et al., “Single-shot higher-order transport-of-intensity quantitative phase imaging based on computer-generated holography,” Opt. Express 29(4), 4783–4801 (2021). [CrossRef]  

26. X. Tian, W. Yu, X. Meng, et al., “Real-time quantitative phase imaging based on transport of intensity equation with dual simultaneously recorded field of view,” Opt. Lett. 41(7), 1427–1430 (2016). [CrossRef]  

Supplementary Material (4)

Name	Description
Visualization 1       Fig. 11. Time-lapse measurement results of the scattered images obtained in the blue box in Fig. 10
Visualization 2       Fig. 12. Time-lapse measurement results of numerically reconstructed images in the blue box by the proposed method. (a), (b), (c), (d), (e), and (f) are the images after 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively.
Visualization 3       Fig. 13. Time-lapse measurement results of the scattered images obtained in the red box in Fig. 10
Visualization 4       Fig. 14. Time-lapse measurement results of the reconstructed image in the red box of Fig. 10 by proposed method. (a), (b), (c), (d), (e), and (f) are the images after 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively.




Figures (20)

Fig. 1.
Fig. 1. Schematic of the proposed 3D fluorescence imaging. (a) is the concept of TIE-based 3D fluorescence imaging through a scattering medium and (b) is the schematic of iterative phase retrieval
Fig. 2.
Fig. 2. Experimental system. BPF1 and BPF2: bandpass filter, L1, L2, L3, and L4: lens, M: mirror, MO: microscope objective lens, D: diffuser, DM: dichroic mirror, TL: tube lens, and ETL: electrically tunable lens.
Fig. 3.
Fig. 3. Fluorescent beads without a diffuser in the imaging plane. (a) is the acquired image and (b) is a cropped image of (a) as indicated by the red box.
Fig. 4.
Fig. 4. Intensity images of fluorescent beads with diffusers that are displaced from the focal plane. (a) are acquired images and (b) are cropped images of (a). -1,2,3) are images where the sample depth positions are displaced by 5 µm, 10 µm, and 15 µm, respectively.
Fig. 5.
Fig. 5. Reconstructed images of fluorescent beads by the proposed method. (a) are reconstructed images and (b) are cropped images of (a). (c) are the reconstructed images by the conventional method. -1,2,3) are the images where the sample depth positions are displaced by 5 µm, 10 µm, and 15 µm, respectively.
Fig. 6.
Fig. 6. FWHM of reconstructed fluorescent beads by applying (a) the conventional method and (b) the proposed method. The black solid line indicates the FWHM of the recorded fluorescent bead. The horizontal axis shows the propagation distances used in the propagation calculations during phase conjugate reconstruction.
Fig. 7.
Fig. 7. Images of tobacco-cultured cells without a diffuser. (a) and (b) are cropped images of a portion of the image sensor image, respectively. -1,2,3,4) are the positions displaced by -5 µm, 0 µm, 5 µm, and 10 µm, respectively.
Fig. 8.
Fig. 8. Fluorescent images of tobacco-cultured cells where the object is visually most in focus when observed through a diffuser. (a) and (b) are cropped images of a part of the image acquired by the image sensor that corresponds to Figs. 7(a) and (b), respectively.
Fig. 9.
Fig. 9. Images of tobacco-cultured cells reconstructed by the proposed method. (a) and (b) are cropped images of a part of the image acquired by the image sensor. -1,2,3,4) are the positions displaced by -5 µm, 0 µm, 5 µm, and 10 µm, respectively.
Fig. 10.
Fig. 10. Fluorescent images of tobacco-cultured cells recorded by the image sensor and numerically reconstructed. (a) is the fluorescent image of a tobacco-cultured cell acquired through a diffuser. (b) and (c) are the images reconstructed by iterative phase retrieval after ±50 µm propagation. The focused nuclei appear in the blue and red boxes.
Fig. 11.
Fig. 11. Time-lapse measurement results of the scattered images obtained in the blue box in Fig. 10(a). (a), (b), (c), (d), (e), and (f) are the fluorescent images at 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 1).
Fig. 12.
Fig. 12. Time-lapse measurement results of numerically reconstructed images in the blue box by the proposed method. (a), (b), (c), (d), (e), and (f) are the images after 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 2).
Fig. 13.
Fig. 13. Time-lapse measurement results of the scattered images obtained in the red box in Fig. 10(a). (a), (b), (c), (d), (e), and (f) are the fluorescent images at 0 min, 10 min, 20 min, 30 min, 40 min, and 50 min, respectively (see Visualization 3).

Equations (9)


$$g_2(u_2,v_2)=\iint g_1(u_1,v_1)\exp\left\{\frac{i\pi}{\lambda d_1}\left[(u_2-u_1)^2+(v_2-v_1)^2\right]\right\}du_1\,dv_1=g_1(u_2,v_2)\ast Q\!\left(u_2,v_2;\frac{1}{d_1}\right)\tag{1}$$

$$Q\!\left(u_2,v_2;\frac{1}{d_1}\right)=\exp\left\{\frac{i\pi}{\lambda d_1}\left(u_2^2+v_2^2\right)\right\}\tag{2}$$

$$g_3(u_2,v_2)=g_2(u_2,v_2)\exp\!\left(i\phi_2(u_2,v_2)\right)=g_2(u_2,v_2)\left(1+i\phi_2(u_2,v_2)-\frac{1}{2}\phi_2^2(u_2,v_2)+\cdots\right)\approx g_2(u_2,v_2)\left(1+i\phi_2(u_2,v_2)\right)\tag{3}$$

$$\begin{aligned}g_4(u_3,v_3)&=g_3(u_3,v_3)\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)\\&=\left[g_2(u_3,v_3)\left(1+i\phi_2(u_3,v_3)-\frac{1}{2}\phi_2^2(u_3,v_3)+\cdots\right)\right]\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)\\&=g_2(u_3,v_3)\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)+i\left[g_2(u_3,v_3)\phi_2(u_3,v_3)\right]\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)+\left[g_2(u_3,v_3)\left(-\frac{1}{2}\phi_2^2(u_3,v_3)+\cdots\right)\right]\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)\\&=g_1(u_3,v_3)\ast Q\!\left(u_3,v_3;\frac{1}{d_1+d_2}\right)+i\left[g_2(u_3,v_3)\phi_2(u_3,v_3)\right]\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)+O\!\left(\phi_2^2(u_3,v_3)\right)\end{aligned}\tag{4}$$

$$O\!\left(\phi_2^2(u_3,v_3)\right)=\left[g_2(u_3,v_3)\left(-\frac{1}{2}\phi_2^2(u_3,v_3)+\cdots\right)\right]\ast Q\!\left(u_3,v_3;\frac{1}{d_2}\right)\tag{5}$$

$$g_1(u_1,v_1)=|g_1(u_1,v_1)|\exp\!\left(i\arg g_1(u_1,v_1)\right)\tag{6}$$

$$\tilde{G}_1(f_x,f_y)=|\tilde{G}_1(f_x,f_y)|\exp\!\left(i\arg\tilde{G}_1(f_x,f_y)\right)=\mathrm{FT}\!\left[g_1(u_1,v_1)\right]\tag{7}$$

$$g_1(u_1,v_1)=R,\qquad R=\begin{cases}\mathrm{real}\!\left(g_1(u_1,v_1)\right)&\text{if }\mathrm{real}\!\left(g_1(u_1,v_1)\right)>0\\0&\text{if }\mathrm{real}\!\left(g_1(u_1,v_1)\right)\le 0\end{cases}\tag{8}$$

$$\phi_z(\mathbf{x})=k\,\mathrm{FT}^{-1}\!\left\{\frac{j2\pi\mathbf{u}}{4\pi^2|\mathbf{u}|^2+\varepsilon}\cdot\mathrm{FT}\!\left[\frac{1}{I_0(\mathbf{x})}\,\mathrm{FT}^{-1}\!\left\{\frac{j2\pi\mathbf{u}}{4\pi^2|\mathbf{u}|^2+\varepsilon}\,\mathrm{FT}\!\left[\frac{\partial I(\mathbf{x})}{\partial z}\right]\right\}\right]\right\}\tag{9}$$
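The regularized TIE inversion of Eq. (9) maps directly onto FFTs. Below is a minimal NumPy sketch, not the authors' implementation: the function name and the test phase are assumptions, and the kernel is written exactly as in Eq. (9), since the overall sign of ∂I/∂z depends on the propagation convention.

```python
import numpy as np

def tie_phase(dI_dz, I0, k, dx, eps=1e-9):
    """FFT solver in the form of Eq. (9): a regularized inverse-gradient step,
    division by the in-focus intensity I0, and a second regularized
    inverse-gradient step summed over both components (the divergence)."""
    n = dI_dz.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    denom = 4 * np.pi ** 2 * (FX ** 2 + FY ** 2) + eps
    kx = 1j * 2 * np.pi * FX / denom   # j 2*pi*u / (4*pi^2 |u|^2 + eps), x part
    ky = 1j * 2 * np.pi * FY / denom   # y part

    A = np.fft.fft2(dI_dz)
    gx = np.fft.ifft2(kx * A)          # inner FT^{-1}{ ... FT[dI/dz] }
    gy = np.fft.ifft2(ky * A)
    phi = k * np.fft.ifft2(kx * np.fft.fft2(gx / I0) + ky * np.fft.fft2(gy / I0))
    return phi.real
```

A quick consistency check: for uniform intensity I0 = 1, the operator reduces to an inverse Laplacian, so feeding it (1/k) times the Laplacian of a known, zero-mean phase should return that phase (up to the lost DC term and the small bias introduced by eps).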