Optica Publishing Group

Fast compressive lens-free tomography for 3D biological cell culture imaging

Open Access

Abstract

We present a compressive lens-free technique that performs tomographic imaging across a cubic millimeter-scale volume from highly sparse data. Compared with existing lens-free 3D microscopy systems, our method requires an order of magnitude fewer multi-angle illuminations for tomographic reconstruction, leading to a compact, cost-effective and scanning-free setup with a reduced data acquisition time to enable high-throughput 3D imaging of dynamic biological processes. We apply a fast proximal gradient algorithm with composite regularization to address the ill-posed tomographic inverse problem. Using simulated data, we show that the proposed method can achieve a reconstruction speed ∼10× faster than the state-of-the-art inverse problem approach in 3D lens-free microscopy. We experimentally validate the effectiveness of our method by imaging a resolution test chart and polystyrene beads, demonstrating its capability to resolve micron-size features in both lateral and axial directions. Furthermore, tomographic reconstruction results of neuronspheres and intestinal organoids reveal the potential of this 3D imaging technique for high-resolution and high-throughput biological applications.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, imaging and analysis of three-dimensional (3D) cell cultures have led to new opportunities in stem cell research, personalized medicine for cancer or hereditary disorders, and drug discovery [1–4]. Consequently, there is an increasing need for high-speed, high-resolution optical systems that can image microscopic structures of biological samples in 3D over a large field-of-view (FOV). Confocal fluorescence microscopy is currently one of the main tools for 3D cell imaging [5,6]. However, it requires pointwise scanning of the sample with a focused laser beam, leading to photobleaching and phototoxicity, which prevents this technique from being used in long-term live-cell imaging [7]. More advanced fluorescence microscopy techniques such as multi-photon and light-sheet microscopy have been developed to overcome this issue [8,9]. Unfortunately, the use of extrinsic fluorophores in these methods may alter normal cellular physiology, making them unsuitable for many real-world applications [10,11]. In contrast, optical diffraction tomography (ODT) offers a label-free solution to the 3D imaging problem by illuminating the sample from different directions and reconstructing its 3D refractive index (RI) from the resulting scattered optical fields [12–16]. By exploiting the RI contrast of intrinsic cellular or tissue structures, ODT has shown its potential in cell pathophysiology [17–21]. Nevertheless, most ODT approaches rely on interferometric imaging to obtain both the amplitude and the phase of the scattered field, resulting in a complicated system that demands careful calibration and suffers from coherent noise and phase instability. To this end, several intensity diffraction tomography (IDT) methods have been proposed to infer the 3D RI distribution of the object from multi-angle phaseless measurements [22–25]. Despite its success, IDT still requires a standard light microscope to collect light, limiting its FOV and scalability.

Lens-free holographic microscopy (LFHM) is an emerging technique that has opened new opportunities for compact and cost-effective imaging platforms with submicron resolution [26–30]. In LFHM, the image of the sample is formed by a numerical reconstruction algorithm based on the principle of digital in-line holography, without using any lenses [31]. This lensless feature, together with unit optical magnification, allows LFHM to achieve a large and scalable FOV limited only by the size of the image sensor. However, conventional LFHM techniques lack axial resolution, making them poor candidates for studying the 3D characteristics of a given sample. To expand the utility of LFHM from 2D imaging to 3D, Isikman et al. combined the basic LFHM concept with multi-angle illumination and reconstructed the volumetric image of the object using a filtered back-projection (FBP) algorithm [32]. The main drawback of this approach is that it assumes straight-line light propagation, whereas for most biological samples the diffraction of light cannot be ignored in the visible spectrum [33]. Zuo et al. tackled this issue by replacing FBP with ODT in the tomographic reconstruction process and implementing a multi-wavelength light source at each illumination angle to retrieve the phase of the hologram [34]. In addition, a non-negativity constraint was applied iteratively to mitigate the well-known “missing cone” problem, which is caused by the limited angular coverage of the illumination and is responsible for the axial elongation of the reconstructed shape in ODT [35]. As opposed to the work of Zuo et al., where the phase retrieval, tomographic reconstruction, and a priori constraints on the object are treated separately, Berdeu et al. formulated 3D LFHM as an inverse problem that conducts these procedures in a unified framework [36]. The superior performance of this inverse problem approach has been demonstrated. However, it comes at the cost of a drastically longer reconstruction time, which may be further worsened by the use of smooth approximations of the non-differentiable terms in the reconstruction algorithm [37], rendering the approach impractical for high-throughput 3D imaging. Moreover, an important disadvantage of existing 3D LFHM systems lies in their demand for a large number of multi-angle measurements (e.g., >50). Although this strategy can alleviate the ill-posedness of the tomographic reconstruction induced by the above-mentioned “missing cone” problem and the lack of phase information in the holograms, it also leads to a prolonged data acquisition time and is therefore ill-suited to imaging fast dynamic biological processes. Additionally, scanning the sample with many frames produces large datasets, which can be cumbersome to store and analyze in long-term time-lapse experiments.

Here we report a fast 3D LFHM approach that achieves label-free 3D imaging over a large FOV based on highly sparse data, i.e., using single-wavelength illumination from only 4 angular directions. To address the ill-posed tomographic inverse problem, a proximal gradient algorithm is devised in which Wirtinger derivatives are deployed to perform phase retrieval, while a primal-dual splitting method provides a versatile and effective implementation of commonly used a priori constraints. In numerical simulation, we observe a 10× improvement in reconstruction speed compared with the state-of-the-art inverse problem approach in 3D lens-free microscopy [36]. Furthermore, experimental results on a US Air Force (USAF) resolution test chart, polystyrene beads, neuronspheres and intestinal organoids demonstrate the performance of our approach in optical depth sectioning and shape distortion suppression. The proposed method, with its reduced requirement for multi-angle measurements and increased analysis speed, can offer a practical solution to compact and cost-effective tomographic microscopy for applications requiring high-throughput, label-free 3D imaging of dynamic processes such as cardiac contractility [38] and endothelial cell invasion [39].

2. Methods

2.1 Experimental setup

Figure 1 shows the experimental setup of our 3D LFHM system, in which four laser diodes (Egismos, D6-5-635-5) are mounted on a 3D-printed support for scanning-free multi-angle illumination. Each laser diode is located ∼10 cm away from the sample; with the sample at the origin of the spherical coordinate system, one diode provides normal-incidence illumination while the other three are placed at a polar angle of 45° and azimuthal angles of i × 120° (i = 0, 1, 2). During data acquisition, the laser diodes are turned on sequentially, and the four holograms formed by the interference between the incident light and the scattered light are captured by a CMOS image sensor (TOSHIBA TC358743XBG, monochrome, 13-megapixel, pixel size: 1.12 µm) beneath the sample. Since the sensitivity of the sensor decreases with a larger incident angle of the light, a calibration procedure for the laser diodes is applied before image acquisition to compensate for this variation: the illumination intensity of each laser diode is adjusted so that all captured images have similar brightness at a given frame exposure time.


Fig. 1. The proposed lens-free tomographic microscopy. (a) The experimental setup. (b) The schematic diagram.


2.2 Forward model

The forward model in 3D LFHM maps the 3D scattering potential of the sample to the amplitude of the total electric field in the imager plane. Let $n(\boldsymbol{r})$ and ${n_m}$ denote the RI of the sample and of its surrounding medium, respectively; the scattering potential is then defined as:

$$f(\boldsymbol{r}) = k_m^2\left[ {{{\left( {\frac{{n(\boldsymbol{r})}}{{{n_m}}}} \right)}^2} - 1} \right]$$
where ${k_m}$ is the wavenumber of the incident light in the medium. When the sample is illuminated by the incident light ${u^{inc}}(\boldsymbol{r})$, the resulting total electric field $u(\boldsymbol{r})$ can be described by the Lippmann-Schwinger equation [33]:
$$u(\boldsymbol{r}) = {u^{inc}}(\boldsymbol{r}) + \int_V {G(\boldsymbol{r} - {\boldsymbol{r}^{\prime}})f({\boldsymbol{r}^{\prime}})u({\boldsymbol{r}^{\prime}})d{\boldsymbol{r}^{\prime}}}$$
with V the region that encompasses the sample. $G(\boldsymbol{r}) = {e^{j{k_m}R}}/(4\pi R)$ is the 3D Green’s function, where $R$ denotes the length of the vector $\boldsymbol{r}$. Note that this non-linear equation has no analytical solution. Although various inversion methods have been proposed to solve it by accounting for multiple scattering [40–42], it would be impractical to use these methods over a cubic-millimeter-scale FOV due to their considerable demand for computational resources and run-time. As a result, we adopt the first Born approximation [33] and rewrite Eq. (2) as follows:
$$u(\boldsymbol{r}) \approx {u^{inc}}(\boldsymbol{r}) + \int_V {G(\boldsymbol{r} - {\boldsymbol{r}^{\prime}})f({\boldsymbol{r}^{\prime}}){u^{inc}}({\boldsymbol{r}^{\prime}})d{\boldsymbol{r}^{\prime}}}$$

The implications and limitations of this approximation are addressed in the discussion section. For numerical calculation, we discretize Eq. (3) by first dividing region V into $N = {N_x} \times {N_y} \times {N_z}$ voxels, where ${N_x}$, ${N_y}$, ${N_z}$ denote the number of voxels in the x, y and z directions. Accordingly, the incident field ${u^{inc}}({\boldsymbol{r}^{\prime}})$ and the scattering potential $f({\boldsymbol{r}^{\prime}})$ within V can be represented by vectors ${\textbf{u}^{inc}} \in {{\mathbb C}^N}$ and $\textbf{f} \in {{\mathbb C}^N}$ respectively. Next, the image sensor plane is discretized into $M = {M_x} \times {M_y}$ pixels so that the incident field and the total field at the sensor plane can be denoted by vectors $\textbf{u}_s^{inc} \in {{\mathbb C}^M}$ and $\textbf{u} \in {{\mathbb C}^M}$. Furthermore, we define a discrete version of the Green’s function $\textbf{G} \in {{\mathbb C}^{M \times N}}$ to perform the convolution operation. As a result, Eq. (3) can now be formulated as below:

$$\textbf{u} = \textbf{u}_s^{inc} + \textbf{G diag}(\textbf{f}){\textbf{u}^{inc}}$$
where $\textbf{diag}(\textbf{f}) \in {{\mathbb C}^{N \times N}}$ denotes a diagonal matrix whose main diagonal entries consist of the elements of the vector $\textbf{f}$. Finally, at the imager plane, the amplitude of the total field at the ${p^{th}}$ illumination angle can be expressed as:
$${\textbf{a}_p}(\textbf{f}) = |{\textbf{u}_{sp}^{inc} + \textbf{G diag}(\textbf{f})\textbf{u}_p^{inc}} |= |{{{\cal{ L}}_p}(\textbf{f})} |$$
where ${{\cal{L}}_p}({\cdot} )$ represents the light propagation model.
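For illustration, the discrete forward model of Eqs. (4)–(5) can be sketched in Python with a dense Green's matrix built from pairwise pixel-voxel distances. This is only practical for small grids, and all function and variable names below are our own illustrative choices rather than the authors' code:

```python
import numpy as np

def born_forward(f, u_inc_vol, u_inc_sensor, vox_pos, pix_pos, k_m, dV):
    """Sensor-plane amplitude under the first Born approximation (Eqs. (4)-(5)).

    f            : (N,) scattering potential at the voxel centres
    u_inc_vol    : (N,) incident field at the voxel centres
    u_inc_sensor : (M,) incident field at the sensor pixels
    vox_pos      : (N, 3) voxel centre coordinates [m]
    pix_pos      : (M, 3) sensor pixel coordinates [m]
    k_m          : wavenumber in the medium [rad/m]
    dV           : voxel volume (integration weight) [m^3]
    """
    # pairwise distances R = |r_m - r'_n| between sensor pixels and voxels
    R = np.linalg.norm(pix_pos[:, None, :] - vox_pos[None, :, :], axis=-1)
    # 3D free-space Green's function G(r) = exp(j*k_m*R) / (4*pi*R)
    G = np.exp(1j * k_m * R) / (4 * np.pi * R)
    # u = u_inc + G diag(f) u_inc  (Eq. (4)); measured amplitude a = |u| (Eq. (5))
    u = u_inc_sensor + dV * (G @ (f * u_inc_vol))
    return np.abs(u)
```

For realistic grid sizes, $\textbf{G}$ would of course never be formed explicitly; the convolution with the Green's function would typically be evaluated with FFTs.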

2.3 Inverse problem

We formulate the inverse problem in lens-free tomographic imaging as follows:

$${\textbf{f}^\ast } = \textrm{arg}\mathop {\textrm{min}}\limits_\textbf{f} \{{{\cal{D}}(\textbf{f}) + {\cal{R}}(\textbf{f})} \}$$
where ${\cal{D}}(\textbf{f})$ is the data fidelity term that evaluates the difference between the measured light amplitude and the amplitude predicted by the forward model, and ${\cal{R}}(\textbf{f})$ is the regularization term that imposes a priori constraints on the reconstruction. Let ${\textbf{y}_p}$ denote the measured light amplitude under the ${p^{th}}$ illumination; the data fidelity term can then be expressed as:
$${\cal{D}}(\textbf{f}) = \frac{1}{2}\sum\limits_{p = 1}^P {||{{\textbf{a}_p}(\textbf{f}) - {\textbf{y}_p}} ||^2}$$
where $||\textbf{x} ||$ denotes the ${L_2}$ norm of $\textbf{x}$.

The regularization term ${\cal{R}}(\textbf{f})$, on the other hand, can vary depending on the specific sample to be imaged [43]. For instance, the sparsity constraint using the ${L_1}$ norm is more effective for particle-like objects, while the bound constraint performs better if the lower/upper limit of the object’s scattering potential is known. Consequently, the inverse problem may need to be reformulated for samples with different a priori constraints, which requires the reconstruction algorithm to be flexible enough to handle different combinations of regularizers, including non-differentiable ones.

[Algorithm 1 (pseudocode figure)]

2.4 Reconstruction algorithm

In this work, the fast iterative shrinkage-thresholding algorithm (FISTA) is implemented as the framework to solve the tomographic inverse problem (Algorithm 1). The original inverse problem is thus reduced to two subproblems, as indicated by Eq. (8) and Eq. (9). For the first subproblem, we calculate the gradient of the data fidelity term using Wirtinger derivatives:

$$\nabla {\cal{D}}({\textbf{f}^k}) = \sum\limits_{p = 1}^P {\overline {\textbf{u}_p^{inc}} {\textbf{G}^H}\left\{ {{{\cal{L}}_p}({\textbf{f}^k}) - {\textbf{y}_p} \odot \frac{{{{\cal{L}}_p}({\textbf{f}^k})}}{{|{{{\cal{L}}_p}({\textbf{f}^k})} |}}} \right\}}$$
where $\overline {\textbf{u}_p^{inc}}$ is the complex conjugate of $\textbf{u}_p^{inc}$, ${\textbf{G}^H}$ is the Hermitian conjugate of $\textbf{G}$, $A \odot B$ denotes the Hadamard product of A and B. Similar to the Wirtinger flow algorithm [44,45], Eq. (10) serves as a phase retrieval step in the 3D reconstruction.
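As a minimal sketch, the gradient of Eq. (10) can be written with the same dense $\textbf{G}$ representation used above for small problems (names are illustrative, not taken from the paper). At the ground truth, where the predicted amplitude matches the measurement, the residual and hence the gradient vanish:

```python
import numpy as np

def grad_data_fidelity(f, G, u_inc_vols, u_inc_sensors, ys):
    """Wirtinger gradient of the data fidelity term, following Eq. (10).

    G             : (M, N) discretised Green's function (dense; small problems only)
    u_inc_vols    : per-angle list of (N,) incident fields in the volume
    u_inc_sensors : per-angle list of (M,) incident fields at the sensor
    ys            : per-angle list of (M,) measured amplitudes
    """
    grad = np.zeros(f.shape, dtype=complex)
    for ui_v, ui_s, y in zip(u_inc_vols, u_inc_sensors, ys):
        Lp = ui_s + G @ (f * ui_v)        # forward model L_p(f)
        # amplitude residual, re-attached to the current phase estimate
        residual = Lp - y * Lp / np.abs(Lp)
        grad += np.conj(ui_v) * (G.conj().T @ residual)
    return grad
```

Because only the measured amplitude enters the residual while the current phase estimate is kept, each gradient evaluation implicitly refines the phase, which is why this step acts as phase retrieval.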

[Algorithm 2 (pseudocode figure)]

For the second subproblem, we assume that ${\cal{R}}(\textbf{f})$ is a combination of commonly used regularization terms such that Eq. (9) can be expressed as the sum of a smooth function with Lipschitzian gradient $S(\textbf{v}) = \frac{1}{2}{||\textbf{v} - {{\hat{\textbf{f}}}^{k}} ||^2}$, a proximable function $T(\textbf{v})$ (e.g. the ${L_1}$ norm) and a composition of convex functions with linear operators $\sum\nolimits_{q = 1}^Q {{F_q}({K_q}\textbf{v})}$ (e.g. the total variation). Accordingly, we reformulate Eq. (9) as follows:

$$pro{x_{\cal{R}}}({\textbf{f}^k},\gamma ) = \textrm{arg}\mathop {\textrm{min}}\limits_\textbf{v} \left\{ {S(\textbf{v}) + T(\textbf{v}) + \sum\nolimits_{q = 1}^Q {{F_q}({K_q}\textbf{v})} } \right\}$$
where ${K_q}$ is the linear operator corresponding to the convex function ${F_q}$. Note that both $T(\textbf{v})$ and ${F_q}({K_q}\textbf{v})$ can be non-differentiable. Therefore, we apply a primal-dual splitting method [46,47] (Algorithm 2) to solve Eq. (11), where each non-differentiable function is handled through its Moreau proximity operator [48]. In this way, our algorithm allows for general combinations of regularizers, including but not limited to ${L_1}$, ${L_2}$, total variation, bound and support constraints.
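The building blocks of Eq. (11) can be sketched as follows: the proximity operator of the ${L_1}$ norm is element-wise soft-thresholding, the bound constraint is a box projection, and the TV term is handled through its dual variable. The loop below is a simplified 1D primal-dual iteration in the spirit of [46,47]; the step sizes, iteration count, and all names are our own illustrative choices:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1: element-wise shrinkage towards zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_box(v, lo, hi):
    """Proximity operator of the indicator of [lo, hi]: clipping."""
    return np.clip(v, lo, hi)

def prox_composite(f_hat, mu_l1, mu_tv, lo, hi, n_iter=200, tau=0.25, sigma=0.25):
    """1D sketch of Eq. (11):
    argmin_v 0.5||v - f_hat||^2 + mu_l1*||v||_1 + mu_tv*||Dv||_1 + i_[lo,hi](v),
    with D the forward-difference operator (1D total variation)."""
    v = f_hat.copy()
    d = np.zeros(len(f_hat) - 1)              # dual variable of the TV term
    for _ in range(n_iter):
        # primal step: explicit gradient of the smooth term S plus D^T d,
        # followed by the prox of the L1 norm and the box projection
        grad = (v - f_hat) + np.concatenate(([-d[0]], -np.diff(d), [d[-1]]))
        v_new = project_box(soft_threshold(v - tau * grad, tau * mu_l1), lo, hi)
        # dual step: prox of the conjugate of mu_tv*||.||_1 is a clip to [-mu_tv, mu_tv]
        d = np.clip(d + sigma * np.diff(2.0 * v_new - v), -mu_tv, mu_tv)
        v = v_new
    return v
```

Composing the soft-threshold with the box projection in a single primal step is exact when the lower bound is non-negative, as in the reconstructions reported here; for a general box the two proximity operators would have to be split.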

3. Results

3.1 Simulation

To demonstrate the effectiveness of the proposed method, we compared its performance with that of the 3D inverse problem approach reported by Berdeu et al. [36] using simulated data. Specifically, we applied both methods to reconstruct the 3D RI of a numerically generated phantom that mimics a biological cell.

The cell phantom is a spherical object with randomly distributed spheres inside, which represent subcellular structures (Fig. 2(a)). Each region within the phantom is assigned a different RI value ranging from 1.365 to 1.558, while the surrounding medium has an RI of 1.333 [49]. In the simulation setup, the region of interest is discretized on a 100 × 100 × 100 grid with a pixel size of 80 nm. Plane waves with a vacuum wavelength of 600 nm illuminate the phantom from the 4 directions described in the experimental setup section. A 200 × 200 grid detector is placed 500 nm beneath the object to capture the amplitude of the total electric field, which is calculated from the Lippmann-Schwinger equation using an iterative conjugate gradient algorithm [50]. Based on these simulated measurements, the 3D scattering potential of the phantom is reconstructed with both the proposed method and the method presented in [36].


Fig. 2. Simulation results of a cell phantom. (a) Cell phantom GT. (b) Cell phantom reconstructed by the quasi-Newton method. (c) Cell phantom reconstructed by the proposed method. (d), (e), (f) Cross-section images at z = 6.4 µm of the cell phantoms in (a), (b) and (c) respectively. Scale bar: 2 µm. (g) Profiles of the cell phantoms along the green dashed line in (d). (h) Profiles of the cell phantoms along the line in the z direction passing through the red dot in (d).


To obtain optimal results, we exploit the fact that the object has real-valued scattering potential by taking the real part of $\nabla {\cal{D}}({\textbf{f}^k})$ in Eq. (10) as the gradient. In addition, the regularization term consisting of the ${L_1}$ norm, the total variation (TV) and the bound constraint is used in the reconstruction as follows:

$${\cal{R}}(\textbf{f}) = {\mu _{{L_1}}}{||\textbf{f} ||_{{L_1}}} + {\mu _{TV}}{||\textbf{f} ||_{TV}} + {{\cal{I}}_{[0,0.5]}}(\textbf{f})$$
where ${\mu _{{L_1}}}$ and ${\mu _{TV}}$ denote the weights of the ${L_1}$ and TV regularization terms respectively. By implementing the last term ${{\cal{I}}_{[0,0.5]}}(\textbf{f})$, we assume that the phantom’s scattering potential falls within the range between 0 and 0.5 $\textrm{rad}^2 \cdot \textrm{m}^{-2}$. During the reconstruction, the regularization weights ${\mu _{{L_1}}}$ and ${\mu _{TV}}$ for the proposed method are set to 0.02, with a gradient descent step $\gamma = 0.005$.

For the approach reported by Berdeu et al., a smooth approximation is applied to the ${L_1}$ and TV regularization terms by introducing a small number to mitigate the non-differentiability in the vicinity of 0 [36]. A limited-memory quasi-Newton algorithm [36,51] is used to solve the inverse problem, whose hyper-parameters are pre-determined by a grid search that finds the ${\mu _{{L_1}}}$ and ${\mu _{TV}}$ yielding the minimal mean squared error between the reconstructed image and the ground truth (GT).

The GT of the 3D cell phantom and the volumes reconstructed by the quasi-Newton method and the proposed method, along with their cross-section images and line profiles, are shown in Figs. 2(a)–2(h). The results from the proposed method clearly exhibit fewer artifacts in the background, less shape elongation in the axial direction, as well as a more accurate representation of the phantom’s internal structures. To evaluate the performance of these algorithms quantitatively, we compared the reconstructed 3D RI $\textbf{n}$ with the ground truth ${\textbf{n}_{GT}}$ using the relative error defined below:

$$\varepsilon = \frac{{||{\textbf{n} - {\textbf{n}_{GT}}} ||}}{{||{{\textbf{n}_{GT}}} ||}} \times 100\%$$
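Eq. (13) translates directly into code; a minimal sketch (function name is ours):

```python
import numpy as np

def relative_error(n_rec, n_gt):
    """Relative error (in %) between reconstructed and ground-truth RI, Eq. (13)."""
    return 100.0 * np.linalg.norm(n_rec - n_gt) / np.linalg.norm(n_gt)
```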
Table 1 shows the relative errors of the different approaches, together with their run-time and memory consumption. To reach the same level of accuracy (SLA), our approach takes only ∼1/10 of the time required by the quasi-Newton approach, while given the same amount of time (SAT) it achieves a significantly lower relative error. Moreover, the proposed method also excels in memory consumption, which is crucial for 3D imaging over large volumes where storing the image alone can already consume gigabytes of memory. The lower memory consumption also makes the method a more suitable candidate for graphics processing unit (GPU) acceleration, since the memory of graphics cards is often very limited. Accordingly, we further developed a GPU version of our algorithm and included its result in Table 1. With a speed boost of over 30×, this method can become a practical tool for high-throughput 3D imaging.


Table 1. Comparison between the quasi-Newton method and the proposed method in relative error, reconstruction time and computer memory consumption

3.2 USAF

Next, we characterized the lens-free tomographic microscope by imaging a USAF resolution test chart consisting of patterned metallic elements on a transparent glass substrate. The sample-to-sensor distance is ∼298 µm. The ${L_1}$ and TV regularization terms were deployed in the 3D reconstruction, with weights set to 0.01 and 0.001 respectively. Note that such a strongly scattering object is far beyond the range of validity of the Born approximation used in the forward model [33]. Consequently, only the intensity of the recovered scattering potential is displayed in the following results, as suggested in [52].

The captured holograms under multi-angle illumination are displayed in Fig. 3(a). Figure 3(b) shows the reconstructed 3D scattering potential of the chart after 30 iterations with a gradient step $\gamma = 1$. Using an NVIDIA Tesla K80 GPU, the tomographic reconstruction takes ∼52 seconds to complete. As a comparison, we also implemented the single-wavelength 2D lens-free imaging technique [31] using z-stacking, i.e. by backpropagating the hologram captured under normal-incidence illumination to successive focal depths and stacking the resulting intensity images into a 3D volume. To evaluate the performance of both methods in suppressing out-of-focus objects, we selected cross-section images from their 3D reconstructions at different focal depths. Figure 3(c) demonstrates an improved optical sectioning capability of the proposed method compared to the lens-free z-stacking approach, in the sense that it successfully rejects the defocused signals from the other imaging planes. Additionally, to study the lateral resolution of our 3D tomographic microscope, a linecut was made along group 8 elements of the resolution chart in each image plane, yielding the profiles in Fig. 3(d). The smallest resolvable element indicates a half-pitch resolution of ∼1.1 µm, which matches the previously reported resolution limit of 2D lens-free imaging with similar system specifications [53].
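The 2D lens-free z-stacking baseline can be sketched with the standard angular spectrum method: the hologram is numerically back-propagated to each focal depth and the resulting intensities are stacked. This is a generic textbook implementation under our own naming, not the authors' code:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a 2D complex field over a distance dz with the angular
    spectrum method (a negative dz back-propagates a hologram)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # squared longitudinal spatial frequency; evanescent components are suppressed
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def lensfree_z_stack(hologram_amplitude, depths, wavelength, pixel_size):
    """Back-propagate a normal-incidence hologram to a set of focal depths
    and stack the intensity images into a 3D volume."""
    return np.stack([
        np.abs(angular_spectrum_propagate(hologram_amplitude.astype(complex),
                                          -z, wavelength, pixel_size)) ** 2
        for z in depths
    ])
```

Because each depth is refocused independently from a single hologram, out-of-focus objects are not rejected, which is the poor optical sectioning that the tomographic reconstruction is compared against.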


Fig. 3. Experimental results of a USAF resolution test chart. (a) Captured holograms under multi-angle illumination with the corresponding azimuthal and polar angles. Scale bar: 85 µm. (b) 3D reconstruction by the proposed method. (c) Left column: cross-section images of the 3D reconstruction at different focal distances. Right column: the corresponding intensity images obtained by lens-free z-stacking. Scale bar: 30 µm. (d) The normalized intensity profiles along group 8 elements (indicated by the dashed line in (c)) of the resolution chart for both approaches in each image plane (z=-5.6 µm, z=0 µm, z=+5.6 µm).


3.3 Polystyrene beads

To further characterize the performance of our system, we imaged 10 µm polystyrene microspheres randomly distributed in a 300 µm thick chamber filled with a cross-linked polymer formed by mixing 2% sodium alginate solution and 5% calcium chloride solution [54]. We compared the 3D reconstruction results of the sample over a volume of 224 µm × 224 µm × 224 µm between the 2D lens-free z-stacking and the proposed method. The hyper-parameters of the latter, ${\mu _{{L_1}}}$, ${\mu _{TV}}$ and $\gamma$, were chosen to be 0.02, 0.03 and 0.1 respectively. The distance between the imager and the center of the volume is ∼1025 µm. The captured holograms under multi-angle illumination are displayed in Fig. 4(a). As shown in Fig. 4(b), the conventional 2D lens-free method exhibits poor z-sectioning capability, resulting in a much more extended profile in the axial direction than in the lateral direction (Fig. 4(c)). Figure 4(d) shows the reconstruction result of our approach after 30 iterations, which corresponds to a runtime of ∼231 seconds. In contrast to the 2D lens-free z-stack, all individual beads are differentiated, with localized spherical shapes at various focal depths. The cross-section images of the bead in the x-y, x-z and y-z planes (Fig. 4(e)) demonstrate that the proposed method can suppress the shape elongation caused by the “missing cone” problem, thanks to the effective implementation of the TV regularization. Besides 10 µm beads, microspheres with a diameter of 3 µm were also imaged to validate the capability of our approach in reconstructing objects smaller than eukaryotic cells. Figure 4(f) displays the cross-section images of these beads which, similar to the 10 µm case, exhibit only minor axial elongation. The shape distortion can be quantified by studying the full width at half maximum (FWHM) of the reconstructed beads in each direction. For a more accurate FWHM estimation, we upscaled the holograms of the 3 µm beads by a factor of 2 so that the reconstructed objects are sufficiently sampled. Figure 4(g) shows the averaged FWHM results of eight 10 µm beads and eight 3 µm beads, where the ratio between the axial and the lateral FWHM is smaller than 1.3, indicating a significant improvement over the conventional method (Fig. 4(c)).


Fig. 4. Experimental results of polystyrene beads. (a) Captured holograms under multi-angle illumination with the corresponding azimuthal and polar angles. Scale bar: 100 µm. (b) 3D reconstruction by the 2D lens-free z-stacking approach. (c) The axial and lateral profiles of 10 µm beads obtained by the 2D lens-free z-stacking approach. (d) 3D reconstruction by the proposed method. (e) Cross-section images of 10 µm beads. Scale bar: 5 µm. (f) Cross-section images of 3 µm beads. Scale bar: 3 µm. (g) The averaged FWHM results of 10 µm beads and 3 µm beads.


Next, we studied the impact of the regularization strength and the number of multi-angle measurements on the reconstruction quality of our approach using experimental data. For the former, we applied different values of ${\mu _{TV}}$ to the proposed algorithm while keeping the other hyper-parameters the same to simplify the analysis. For the latter, we first define the set of azimuthal and polar angles $(\theta ,\phi )$ used for the multi-angle illumination as follows:

$$\{{(\theta = i \times 120^\circ ,\phi = 45^\circ{-} j \times 9^\circ ):i = 0,1,2\textrm{ and }j = 0,1, \cdots ,{J_{max}}} \}$$
By varying ${J_{max}}$ from 5 to 0, the number of measurements is gradually reduced from 16 to 4 in steps of 3. Based on these settings, the tomographic reconstruction of a 10 µm bead was performed, and the results were compared using the elongation ratio (ER) defined as $(2 \times FWH{M_z})/(FWH{M_x} + FWH{M_y})$, where $FWH{M_x}$ and $FWH{M_y}$ are the FWHM of the reconstructed bead in the lateral directions and $FWH{M_z}$ denotes the FWHM in the axial direction. A larger ER indicates a more severe shape distortion of the reconstructed bead. As illustrated in Fig. 5, under the same TV regularization strength, the ER increases as the number of measurements is reduced. However, note that the ER deterioration between the results obtained with 16 and 4 measurements is only ∼10% when large values of ${\mu _{TV}}$ (0.03, 0.003) are used, indicating the effectiveness of the TV regularization in mitigating the “missing cone” problem. On the other hand, at a small regularization strength (${\mu _{TV}} = 0.0003$), an ER surge is observed when the number of multi-angle measurements decreases from 7 to 4.
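The ER metric can be computed from line profiles through the reconstructed bead; a sketch with linear interpolation of the half-maximum crossings (our own illustrative implementation, not the authors' analysis code):

```python
import numpy as np

def fwhm(profile, spacing):
    """Full width at half maximum of a 1D profile, with linear interpolation
    of the half-maximum crossings."""
    p = profile - profile.min()
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):
        # linear interpolation between a sample below (i) and above (j) half max
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * spacing

def elongation_ratio(profile_x, profile_y, profile_z, spacing):
    """ER = 2*FWHM_z / (FWHM_x + FWHM_y); larger values mean stronger
    axial elongation of the reconstructed bead."""
    return 2.0 * fwhm(profile_z, spacing) / (
        fwhm(profile_x, spacing) + fwhm(profile_y, spacing))
```

For an undistorted spherical bead the three profiles are identical and the ER is 1; the "missing cone" stretches the z profile and pushes the ER above 1.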


Fig. 5. Impact of the regularization strength and the number of multi-angle measurements. (a) The elongation ratio of the reconstructed bead using different values of ${\mu _{TV}}$ and different numbers of measurements. (b) 3D reconstruction result under the condition indicated by the yellow dashed box in (a). Size of the bounding box: 20 µm in each direction. (c) 3D reconstruction result under the condition indicated by the blue dashed box in (a). Size of the bounding box: 20 µm in each direction.


3.4 Biological samples

To demonstrate the potential of the lens-free tomographic microscope in biomedical applications, we imaged neuronspheres composed of multiple neuronal cells (see Appendix for details of sample preparation) and compared the results with those of the standard 2D lens-free z-stacking. The sample was also stained with phalloidin-488 (to visualize actin) and DAPI (for nuclei) for parallel 3D confocal fluorescence imaging. For the confocal image acquisition, we used a 10×, 0.45 NA objective with a pinhole diameter of 0.78 Airy units, together with a scanning step size of 0.208 µm in the lateral directions and 2 µm in the axial direction. The sample-to-sensor distance is ∼661 µm. The hyper-parameters ${\mu _{{L_1}}}$, ${\mu _{TV}}$ and $\gamma$ for the tomographic reconstruction were set to 0.02, 0.002 and 1.0 respectively. Using 30 iterations, the reconstruction process takes ∼78 seconds.

As shown in Figs. 6(a) and 6(b), due to its poor optical sectioning performance, the 2D lens-free technique fails to retrieve useful information about the morphology or the axial position of the neuronsphere. By contrast, the lens-free tomographic approach (Figs. 6(c) and 6(d)) generates a 3D volume of the sample that agrees well with the confocal image (Figs. 6(e) and 6(f)) in terms of size, shape and spatial position, which can already provide valuable insight in many 3D cell culture studies [55]. Another comparison between the 3D image obtained by the proposed method and confocal microscopy is given in Figs. 6(g) and 6(h), in which two neuronspheres are close to each other. This result further demonstrates the potential of our approach in reconstructing 3D morphology of biological samples comparable to a confocal imaging system. However, it is also worth mentioning that 3D lens-free tomography and confocal fluorescence microscopy are different imaging modalities. Specifically, the green and blue channels in the confocal images represent the cell actin and nuclei respectively, whereas the color displayed in the 3D reconstructed image represents the strength of the scattering potential of the sample and thus provides information about the refractive index contrast.


Fig. 6. Experimental results of neuronspheres by: (a), (b) 2D lens-free z-stacking. Color bar: normalized light field intensity. (c), (d), (g) the proposed method. Color bar: normalized scattering potential. (e), (f), (h) confocal fluorescence microscopy. Blue: DAPI (nuclei). Green: phalloidin (actin).


Lastly, we imaged a 3D cell culture of intestinal organoids embedded in Matrigel, without any fluorescent labels, to verify the potential of our 3D LFHM technique for high-throughput biomedical applications (see Appendix for details of sample preparation). Figure 7 shows the reconstructed 3D image of the sample over a volume of more than 3.4 mm × 2.3 mm × 0.3 mm. Due to memory limitations in the computational reconstruction, a grid size of 2.24 µm was applied instead of 1.12 µm. Using 10 iterations, the reconstruction of the whole volume takes about 12 minutes.

Fig. 7. 3D reconstruction of intestinal organoids embedded in Matrigel over a volume of more than 3.4 mm × 2.3 mm × 0.3 mm. Color bar: normalized scattering potential.

4. Conclusion and discussion

In this paper, a compact and cost-effective tomographic microscopy technique has been demonstrated. By combining the principles of ODT and LFHM, the proposed method achieves label-free 3D imaging across a cubic-millimeter-scale volume with a spatial resolution sufficient to differentiate features smaller than a single cell. Moreover, the tomographic reconstruction is performed with reduced artifacts and shape distortion using single-wavelength illumination from only 4 directions, thanks to the regularized proximal gradient algorithm that tackles the phase retrieval and “missing cone” problems. The performance of our method has been evaluated in both simulation and experiments on various types of samples, indicating its capability for fast 3D imaging from highly sparse data and allowing an easy transition to a high-throughput setup. This non-invasive tomographic microscopy technique, with its potential to visualize dynamic biological samples over a large volume, could provide a promising tool for biomedical applications such as drug screening and personalized medicine for acquired or hereditary disorders.
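To illustrate the kind of regularized proximal gradient iteration referred to above, the following is a minimal FISTA-style sketch combining an L1 prox with the box constraint on [0, 0.5] from the regularizer. The TV term and the holographic forward model are omitted, and all names and the toy problem are ours, so this is a simplified stand-in rather than the paper's implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (element-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(grad, x0, step, mu_l1, box=(0.0, 0.5), n_iter=100):
    """Accelerated proximal gradient (FISTA) with an L1 prox followed by a
    box projection; for this separable combination the composition is the
    exact prox. The TV term of the full regularizer is omitted here."""
    x = x_prev = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        x_prev, t = x, t_next
        z = soft_threshold(y - step * grad(y), step * mu_l1)
        x = np.clip(z, *box)  # projection for the indicator of [0, 0.5]
    return x

# Toy demo: recover a sparse, box-constrained signal from noiseless
# linear measurements (a stand-in for the holographic forward model).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
f_true = np.zeros(100)
f_true[[5, 30, 70]] = 0.4
y = A @ f_true
grad = lambda f: A.T @ (A @ f - y)          # gradient of 0.5 * ||A f - y||^2
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
f_hat = fista(grad, np.zeros(100), step, mu_l1=0.01, n_iter=300)
print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

The projection step is exact here because the L1 penalty and the box indicator are separable per element; with the nonsmooth TV term added, a splitting scheme for the composite prox would be needed instead.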

It is worth mentioning that the reconstruction volume of our system can be easily expanded by using a larger image sensor. Additionally, placing the sample closer to the sensor or reducing the polar angle of the illumination can also increase the achievable FOV and depth of field of the 3D LFHM by limiting the lateral shift of the holograms so that they can fall within the active area of the imager. However, a smaller polar angle of the illumination might result in a lower axial resolution as discussed in [56].

We should emphasize that the results shown in this paper do not reflect the RI of the objects quantitatively, since the proposed algorithm is built on the Born approximation, which assumes the sample to be a weakly scattering thin object that introduces negligible phase delay. For most of the samples tested in this work, this condition is not met. In the future, it would be interesting to incorporate more accurate physics models such as the Rytov approximation [33] or a non-linear scattering model to achieve quantitative RI imaging, which could be beneficial for discovering correlations between the reconstructed RI and markers or structures in the cell.

It should also be noted that in our current algorithm, the hyper-parameters, i.e., the weights of the regularization terms, the step size and the number of iterations, are determined by manual fine-tuning. Although this approach is widely used in image restoration and machine learning, it can become time-consuming when large amounts of data need to be processed. Therefore, automated hyper-parameter selection procedures [57–59] might prove very useful for our future work in high-throughput 3D imaging.
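As a sketch of what such automation could look like, the snippet below applies the discrepancy principle to a toy ridge-denoising problem: it bisects for the regularization weight whose data residual matches the expected noise magnitude. The closed-form model, names, and log-space bisection are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def ridge_denoise(y, mu):
    """Closed-form minimizer of ||x - y||^2 + mu * ||x||^2."""
    return y / (1.0 + mu)

def discrepancy_select(y, sigma, lo=1e-6, hi=1e3, n_steps=60):
    """Bisect (in log space) for the mu whose residual ||x(mu) - y|| matches
    the expected noise magnitude sqrt(n) * sigma (discrepancy principle).
    Uses the fact that the residual grows monotonically with mu."""
    target = np.sqrt(y.size) * sigma
    for _ in range(n_steps):
        mid = np.sqrt(lo * hi)
        resid = np.linalg.norm(ridge_denoise(y, mid) - y)
        lo, hi = (mid, hi) if resid < target else (lo, mid)
    return np.sqrt(lo * hi)

# Noisy observation of a constant signal; select mu automatically.
rng = np.random.default_rng(1)
x_true = np.ones(1000)
sigma = 0.1
y = x_true + sigma * rng.standard_normal(1000)
mu = discrepancy_select(y, sigma)
print(mu, np.linalg.norm(ridge_denoise(y, mu) - y))
```

The same principle, applied with the actual forward model in place of the closed-form denoiser, is one route to removing the manual tuning of the regularization weights.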

In this work, we have shown a ten-fold reconstruction speed improvement over the state-of-the-art by numerical simulation and have demonstrated reconstruction of a sample across a volume of more than 3.4 mm × 2.3 mm × 0.3 mm within several minutes. However, such a speed could still be insufficient for high-throughput biomedical applications, which may require the 3D reconstruction to be completed within seconds or milliseconds. This challenge can be addressed by using multiple GPU cards to process data from independent regions of the volume in parallel. A more interesting solution, however, would be to implement algorithms with higher computational efficiency. To this end, deep learning could provide a promising way to significantly improve the reconstruction speed thanks to its capability to solve image restoration problems in an end-to-end fashion [60–62]. Moreover, as shown in [53,63,64], deep learning might also further reduce the number of measurements, leading to an even faster imaging speed as well as a more compact optical setup.
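The independent-region parallelization idea above can be sketched as follows. Here `reconstruct_slab` is a hypothetical placeholder for the per-sub-volume solver, cross-talk at slab boundaries is ignored, and threads stand in for the multiple GPU cards:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_slab(slab):
    """Placeholder per-slab reconstruction; a real implementation would run
    the regularized solver on this sub-volume (here: just the box constraint)."""
    return np.clip(slab, 0.0, 0.5)

def parallel_reconstruct(volume, n_slabs=4, workers=4):
    """Split the volume into independent z-slabs, process them concurrently,
    then reassemble. Boundary effects between slabs are ignored in this sketch."""
    slabs = np.array_split(volume, n_slabs, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(reconstruct_slab, slabs))
    return np.concatenate(results, axis=0)

vol = np.random.default_rng(2).standard_normal((64, 128, 128))
out = parallel_reconstruct(vol)
print(out.shape)
```

Because each slab is handled independently, the same pattern maps directly onto one solver instance per GPU, at the cost of ignoring scattering contributions that cross slab boundaries.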

Appendix

For neuronsphere formation, primary rat hippocampal neurons were isolated from E19 embryonic brains. Cells were cultured in Neurobasal medium (B-27 Electrophysiology Kit, Thermo Fisher Scientific) for 4-5 days in low-adhesion petri dishes, without any ECM coating. For the organoid sample, human intestinal organoids from a cystic fibrosis subject (F508del; R117H) were mechanically dissociated, resuspended in 40% Matrigel (#356231, Corning) and grown in complete organoid medium [65]. Ethical approval is in place at UZ Leuven and rectal biopsies for organoid generation were taken after informed consent.

Funding

European Research Council (617312).

Acknowledgments

The authors thank Olga Krylychkina for preparing the neuronsphere cultures and Ziduo Lin for his help in building the setup.

Disclosures

The authors declare no conflicts of interest.

References

1. R. Edmondson, J. J. Broglie, A. F. Adcock, and L. Yang, “Three-dimensional cell culture systems and their applications in drug discovery and cell-based biosensors,” Assay Drug Dev. Technol. 12(4), 207–218 (2014). [CrossRef]  

2. S. Nath and G. R. Devi, “Three-dimensional culture systems in cancer research: Focus on tumor spheroid model,” Pharmacol. Ther. 163, 94–108 (2016). [CrossRef]  

3. A. C. Rios and H. Clevers, “Imaging organoids: a bright future ahead,” Nat. Methods 15(1), 24–26 (2018). [CrossRef]  

4. J. F. Dekkers, G. Berkers, E. Kruisselbrink, A. Vonk, H. R. De Jonge, H. M. Janssens, I. Bronsveld, E. A. van de Graaf, E. E. S. Nieuwenhuis, and R. H. J. Houwen, “Characterizing responses to CFTR-modulating drugs using rectal organoids derived from subjects with cystic fibrosis,” Sci. Transl. Med. 8(344), 344ra84 (2016). [CrossRef]  

5. R. H. Webb, “Confocal optical microscopy,” Rep. Prog. Phys. 59(3), 427 (1996). [CrossRef]  

6. J. F. Dekkers, M. Alieva, L. M. Wellens, H. C. R. Ariese, P. R. Jamieson, A. M. Vonk, G. D. Amatngalim, H. Hu, K. C. Oost, and H. J. G. Snippert, “High-resolution 3D imaging of fixed and cleared organoids,” Nat. Protoc. 14(6), 1756–1771 (2019). [CrossRef]  

7. J. Icha, M. Weber, J. C. Waters, and C. Norden, “Phototoxicity in live fluorescence microscopy, and how to avoid it,” BioEssays 39(8), 1700003 (2017). [CrossRef]  

8. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]  

9. J. Huisken and D. Y. R. Stainier, “Selective plane illumination microscopy techniques in developmental biology,” Development 136(12), 1963–1975 (2009). [CrossRef]  

10. E. C. Jensen, “Use of fluorescent probes: their effect on cell biology and limitations,” Anat. Rec. 295(12), 2031–2036 (2012). [CrossRef]  

11. R. Alford, H. M. Simpson, J. Duberman, G. C. Hill, M. Ogawa, C. Regino, H. Kobayashi, and P. L. Choyke, “Toxicity of organic fluorophores used in molecular imaging: literature review,” Mol. Imaging 8(6), 729 (2009). [CrossRef]  

12. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

13. A. J. Devaney, “A filtered backpropagation algorithm for diffraction tomography,” Ultrason. Imaging 4(4), 336–350 (1982). [CrossRef]  

14. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4(9), 717–719 (2007). [CrossRef]  

15. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17(1), 266–277 (2009). [CrossRef]  

16. J. Li, Q. Chen, J. Sun, J. Zhang, J. Ding, and C. Zuo, “Three-dimensional tomographic microscopy technique with multi-frequency combination with partially coherent illuminations,” Biomed. Opt. Express 9(6), 2526–2542 (2018). [CrossRef]  

17. K. Lee, K. Kim, J. Jung, J. Heo, S. Cho, S. Lee, G. Chang, Y. Jo, H. Park, and Y. Park, “Quantitative phase imaging techniques for the study of cell pathophysiology: from principles to applications,” Sensors 13(4), 4170–4191 (2013). [CrossRef]  

18. K. Kim, H. Yoon, M. Diez-Silva, M. Dao, R. R. Dasari, and Y. Park, “High-resolution three-dimensional imaging of red blood cells parasitized by Plasmodium falciparum and in situ hemozoin crystals using optical diffraction tomography,” J. Biomed. Opt. 19(1), 011005 (2013). [CrossRef]  

19. J. Yoon, K. Kim, H. Park, C. Choi, S. Jang, and Y. Park, “Label-free characterization of white blood cells by measuring 3D refractive index maps,” Biomed. Opt. Express 6(10), 3865–3875 (2015). [CrossRef]  

20. K. Kim, S. Lee, J. Yoon, J. Heo, C. Choi, and Y. Park, “Three-dimensional label-free imaging and quantification of lipid droplets in live hepatocytes,” Sci. Rep. 6(1), 36815 (2016). [CrossRef]  

21. S.-A. Yang, J. Yoon, K. Kim, and Y. Park, “Measurements of morphological and biophysical alterations in individual neuron cells associated with early neurotoxic effects in Parkinson’s disease,” Cytometry, Part A 91(5), 510–518 (2017). [CrossRef]  

22. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

23. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2(6), 517–522 (2015). [CrossRef]  

24. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9(5), 2130–2141 (2018). [CrossRef]  

25. J. Li, A. Matlock, Y. Li, Q. Chen, C. Zuo, and L. Tian, “High-speed in vitro intensity diffraction tomography,” Adv. Photonics 1(6), 066004 (2019). [CrossRef]  

26. A. Ozcan and U. Demirci, “Ultra wide-field lens-free monitoring of cells on-chip,” Lab Chip 8(1), 98–106 (2008). [CrossRef]  

27. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef]  

28. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef]  

29. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). [CrossRef]  

30. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 1–15 (2017). [CrossRef]  

31. Z. Göröcs and A. Ozcan, “On-chip biomedical imaging,” IEEE Rev. Biomed. Eng. 6, 29–46 (2013). [CrossRef]  

32. S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. 108(18), 7296–7301 (2011). [CrossRef]  

33. P. Müller, M. Schürmann, and J. Guck, “The theory of diffraction tomography,” arXiv preprint arXiv:1507.00466 (2015).

34. C. Zuo, J. Sun, J. Zhang, Y. Hu, and Q. Chen, “Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix,” Opt. Express 23(11), 14314–14328 (2015). [CrossRef]  

35. J. Lim, K. Lee, K. H. Jin, S. Shin, S. Lee, Y. Park, and J. C. Ye, “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23(13), 16933–16948 (2015). [CrossRef]  

36. A. Berdeu, F. Momey, B. Laperrousaz, T. Bordy, X. Gidrol, J.-M. Dinten, N. Picollet-D’hahan, and C. Allier, “Comparative study of fully three-dimensional reconstruction algorithms for lens-free microscopy,” Appl. Opt. 56(13), 3939–3951 (2017). [CrossRef]  

37. M. Schmidt, G. Fung, and R. Rosales, “Optimization methods for l1-regularization,” University of British Columbia, Tech. Rep. TR-2009-19 (2009).

38. T. Pauwelyn, R. Stahl, L. Mayo, X. Zheng, A. Lambrechts, S. Janssens, L. Lagae, V. Reumers, and D. Braeken, “Reflective lens-free imaging on high-density silicon microelectrode arrays for monitoring and evaluation of in vitro cardiac contractility,” Biomed. Opt. Express 9(4), 1827–1841 (2018). [CrossRef]  

39. C. Steuwe, M.-M. Vaeyens, A. Jorge-Peñas, C. Cokelaere, J. Hofkens, M. B. J. Roeffaers, and H. Van Oosterwyck, “Fast quantitative time lapse displacement imaging of endothelial cell invasion,” PLoS One 15(1), e0227286 (2020). [CrossRef]  

40. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Optical tomographic image reconstruction based on beam propagation and sparse regularization,” IEEE Trans. Comput. Imaging 2(1), 59–70 (2016). [CrossRef]  

41. H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, “SEAGLE: Sparsity-driven image reconstruction under multiple scattering,” IEEE Trans. Comput. Imaging 4(1), 73–86 (2018). [CrossRef]  

42. E. Soubies, T.-A. Pham, and M. Unser, “Efficient inversion of multiple-scattering model for optical diffraction tomography,” Opt. Express 25(18), 21786–21800 (2017). [CrossRef]  

43. F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J.-L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923–8940 (2018). [CrossRef]  

44. E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: Theory and algorithms,” IEEE Trans. Inf. Theory 61(4), 1985–2007 (2015). [CrossRef]  

45. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

46. L. Condat, “A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms,” J. Optim. Theory Appl. 158(2), 460–479 (2013). [CrossRef]  

47. Y. C. Tang, C. X. Zhu, M. Wen, and J. G. Peng, “A splitting primal-dual proximity algorithm for solving composite optimization problems,” Acta. Math. Sin.-English Ser. 33(6), 868–886 (2017). [CrossRef]  

48. J. J. Moreau, “Fonctions convexes duales et points proximaux dans un espace hilbertien,” C. R. Acad. Sci. Paris 255, 2897–2899 (1962).

49. P. Y. Liu, L. K. Chin, W. Ser, H. F. Chen, C.-M. Hsieh, C.-H. Lee, K.-B. Sung, T. C. Ayi, P. H. Yap, and B. Liedberg, “Cell refractive index for cell biology and disease diagnosis: past, present and future,” Lab Chip 16(4), 634–644 (2016). [CrossRef]  

50. H. A. van der Vorst, “Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems,” SIAM J. Sci. Stat. Comput. 13(2), 631–644 (1992). [CrossRef]  

51. N. Keskar and A. Wächter, “A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization,” Optim. Methods Softw. 34(1), 150–171 (2019). [CrossRef]  

52. V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. 205(2), 165–176 (2002). [CrossRef]  

53. Z. Luo, A. Yurt, R. Stahl, A. Lambrechts, V. Reumers, D. Braeken, and L. Lagae, “Pixel super-resolution for lens-free holographic microscopy using deep learning neural networks,” Opt. Express 27(10), 13581–13595 (2019). [CrossRef]  

54. O. Smidsrød and G. Skjåk-Bræk, “Alginate as immobilization matrix for cells,” Trends Biotechnol. 8, 71–78 (1990). [CrossRef]  

55. A. Berdeu, B. Laperrousaz, T. Bordy, O. Mandula, S. Morales, X. Gidrol, N. Picollet-D’hahan, and C. Allier, “Lens-free microscopy for 3D+ time acquisitions of 3D cell culture,” Sci. Rep. 8(1), 16135 (2018). [CrossRef]  

56. S. O. Isikman, W. Bishara, U. Sikora, O. Yaglidere, J. Yeah, and A. Ozcan, “Field-portable lensfree tomographic microscope,” Lab Chip 11(13), 2222–2230 (2011). [CrossRef]  

57. Y.-W. Wen and R. H. Chan, “Parameter selection for total-variation-based image restoration using discrepancy principle,” IEEE Trans. on Image Process. 21(4), 1770–1781 (2012). [CrossRef]  

58. H. Liao, F. Li, and M. K. Ng, “Selection of regularization parameter in total variation image restoration,” J. Opt. Soc. Am. A 26(11), 2311–2320 (2009). [CrossRef]  

59. S. Ramani, Z. Liu, J. Rosen, J.-F. Nielsen, and J. A. Fessler, “Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods,” IEEE Trans. on Image Process. 21(8), 3659–3672 (2012). [CrossRef]  

60. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

61. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). [CrossRef]  

62. Z. Wei and X. Chen, “Deep-learning schemes for full-wave nonlinear inverse scattering problems,” IEEE Trans. Geosci. Remote Sensing 57(4), 1849–1860 (2019). [CrossRef]  

63. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

64. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

65. T. Sato, D. E. Stange, M. Ferrante, R. G. J. Vries, J. H. Van Es, S. Van Den Brink, W. J. Van Houdt, A. Pronk, J. Van Gorp, and P. D. Siersema, “Long-term expansion of epithelial organoids from human colon, adenoma, adenocarcinoma, and Barrett’s epithelium,” Gastroenterology 141(5), 1762–1772 (2011). [CrossRef]  



Equations (12)

$$f(\mathbf{r}) = k_m^2 \left[ \left( \frac{n(\mathbf{r})}{n_m} \right)^2 - 1 \right]$$
$$u(\mathbf{r}) = u^{inc}(\mathbf{r}) + \int_V G(\mathbf{r} - \mathbf{r}') f(\mathbf{r}') u(\mathbf{r}') \, d\mathbf{r}'$$
$$u(\mathbf{r}) \approx u^{inc}(\mathbf{r}) + \int_V G(\mathbf{r} - \mathbf{r}') f(\mathbf{r}') u^{inc}(\mathbf{r}') \, d\mathbf{r}'$$
$$u = u_s^{inc} + G \, \mathrm{diag}(f) \, u^{inc}$$
$$a_p(f) = \left| u_{s,p}^{inc} + G \, \mathrm{diag}(f) \, u_p^{inc} \right| = |L_p(f)|$$
$$\hat{f} = \mathop{\arg\min}_{f} \{ D(f) + R(f) \}$$
$$D(f) = \frac{1}{2} \sum_{p=1}^{P} \| a_p(f) - y_p \|^2$$
$$\nabla D(f_k) = \sum_{p=1}^{P} \overline{u_p^{inc}} \odot G^H \left\{ L_p(f_k) - y_p \odot \frac{L_p(f_k)}{|L_p(f_k)|} \right\}$$
$$\mathrm{prox}_R(f_k, \gamma) = \mathop{\arg\min}_{v} \left\{ S(v) + T(v) + \sum_{q=1}^{Q} F_q(K_q v) \right\}$$
$$R(f) = \mu_{L1} \|f\|_{L1} + \mu_{TV} \|f\|_{TV} + I_{[0,0.5]}(f)$$
$$\varepsilon = \frac{\| n - n_{GT} \|}{\| n_{GT} \|} \times 100\%$$
$$\{ (\theta = i \times 120^\circ, \ \phi = 45^\circ - j \times 9^\circ) : i = 0, 1, 2 \ \text{and} \ j = 0, 1, \ldots, J_{max} \}$$
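The illumination angle set above (azimuth θ = i × 120°, polar φ = 45° − j × 9°) can be enumerated directly; the small helper below is our own illustration of the formula, not code from the paper:

```python
# Enumerate the multi-angle illumination set: azimuthal angle
# theta = i * 120 deg (i = 0, 1, 2) and polar angle
# phi = 45 deg - j * 9 deg (j = 0, ..., j_max), in degrees.
def illumination_angles(j_max):
    """Return (theta, phi) pairs for all illumination directions."""
    return [(i * 120, 45 - j * 9) for i in range(3) for j in range(j_max + 1)]

print(illumination_angles(1))  # 3 azimuths x (j_max + 1) polar angles each
```

With three azimuths, the set contains 3 × (J_max + 1) directions in total.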