Optica Publishing Group

Motion-resolved, reference-free holographic imaging via spatiotemporally regularized inversion

Open Access

Abstract

Holography is a powerful technique that records the amplitude and phase of an optical field simultaneously, enabling a variety of applications such as label-free biomedical analysis and coherent diffraction imaging. Holographic recording without a reference wave has long been pursued because it obviates the high experimental requirements of conventional interferometric methods. However, due to the ill-posed nature of the underlying phase retrieval problem, reference-free holographic imaging faces an inherent tradeoff between imaging fidelity and temporal resolution. Here, we propose a general computational framework, termed spatiotemporally regularized inversion (STRIVER), to achieve motion-resolved, reference-free holographic imaging with high fidelity. Specifically, STRIVER leverages signal priors in the spatiotemporal domain to jointly eliminate phase ambiguities and motion artifacts, and, when combined with diversity measurement schemes, produces a physically reliable, time-resolved holographic video from a series of intensity-only measurements. We experimentally demonstrate STRIVER in near-field ptychography, where dynamic holographic imaging of freely swimming paramecia is performed at a framerate-limited speed of 112 fps. The proposed method can potentially be extended to other measurement schemes, spectral regimes, and computational imaging modalities, pushing the temporal resolution toward higher limits.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Quantitative analysis of optical waves can reveal the subtle interactions between light and physical objects, providing a powerful tool for biomedical and metrological applications [1–4]. Additionally, acquisition of the complex optical field enables numerical inversion of light propagation and thus facilitates diffraction imaging, surpassing the limitations of lens-based optics [5–8]. However, direct access to the phase remains challenging because electromagnetic waves oscillate far faster than detection devices can respond. Holography provides an elegant solution to this problem [9], where the unknown complex optical field is retrieved from its interference pattern with a known reference field [10–15]. Despite its profound impact across disciplines, holography typically relies on interferometric setups, posing considerable technical challenges such as long-term stability, complex optical configurations, and high source coherence.

To realize reference-free holographic imaging, phase retrieval has been investigated, which takes a computational approach to restore the complex field distribution from phaseless intensity measurements [16,17]. Given the ill-posed nature of estimating the complex optical field from a single intensity image, multiple diversity measurements are typically required to suppress ambiguous solutions. Such diversity can be achieved by varying measurement parameters such as the defocus distance [18,19], illumination wavelength [20,21], wavefront modulation [22,23], aperture modulation [24,25], illumination angle [26–28], or lateral translation position (also known as ptychography) [29–32]. The chances of finding the optimal solution become higher as the number of measurements increases, as justified by both empirical [33] and theoretical [34] studies. However, the requirement of acquiring multiple diversity images renders these approaches inapplicable to imaging dynamic samples.

In order to achieve a higher temporal resolution, one would naturally aim to reconstruct the complex optical field from fewer intensity images. With reduced measurements, phase ambiguity can be suppressed by exploiting prior knowledge of the field distribution. Such knowledge may include physical constraints [35–38], sparsity [39–43], features learned by neural networks [44–50], or other implicit representations [51–53]. These approaches can capture dynamic scenes, but the ill-posedness of the single-shot phase retrieval problem leads to limited generalizability and robustness to different scenes and experimental setups. The reliability of the reconstruction depends heavily on the accuracy of the signal priors and is also susceptible to noise and errors.

Given the inherent tradeoff between imaging fidelity and temporal resolution, it is highly desirable to develop a phase retrieval method capable of capturing dynamic scenes with high quality. A main approach toward this goal is multiplexing [54–66], where multiple diversity measurements are compressed into a single exposure or a few exposures, but achieving competitive imaging quality remains challenging due to the limited information throughput. Other potential solutions include parallel acquisition [67–71], which significantly increases system cost and complexity; scattering-based methods [72–74], which involve relatively complicated calibration procedures; and task-specific hardware modifications [75]. Current approaches are thus either dependent on complex hardware implementations or dedicated to particular optical configurations. A versatile reference-free holographic imaging technique remains to be explored.


Fig. 1. Illustration of STRIVER. (a) Schematic setup for the near-field, diffuser-based ptychographic imaging system that is used to introduce measurement diversity in this work. (b) Conceptual comparison with conventional single-shot and multi-frame reconstruction methods. STRIVER takes a set of $K$ diversity images as input, which is represented by a 3D datacube ${\textbf{Y}} \in {\mathbb{R}^{{M_x} \times {M_y} \times K}}$, and outputs a motion-resolved holographic video ${\textbf{X}} \in {\mathbb{C}^{{N_x} \times {N_y} \times K}}$.


In this work, we propose a general computational framework, termed spatiotemporally regularized inversion (STRIVER), as an alternative solution to this intrinsic dilemma between physical reliability and single-shot capability. In contrast to established solutions, where the sample’s temporal stationarity assumption and spatial-domain features are treated separately, our method exploits the spatiotemporal features of the entire holographic video to regularize the reconstruction problem. As a result, STRIVER can produce a motion-resolved, physically reliable holographic video with the same temporal resolution as single-shot methods, as depicted in Fig. 1. Based on a prototype near-field ptychographic imaging system, we experimentally validated STRIVER by conducting high-speed holographic imaging of live paramecia. We further elaborate on the parameter selection rules and algorithmic behaviors, providing guidance for practical implementations.

2. METHODS

A. Problem Formulation

Diversity measurements tackle the ill-posedness of phase retrieval by taking multiple shots, each of which corresponds to different measurement parameters that encode the complex sample transmission function into the intensity distribution at the sensor plane. Therefore, the general forward model can be mathematically expressed in a vectorized form as

$${{\textbf{y}}_k} = |{{\textbf{A}}_k}{{\textbf{x}}_k}|,\quad k = 1,2, \ldots ,K,$$
where ${{\textbf{y}}_k} \in {\mathbb{R}^{{M_x}{M_y}}}$, ${{\textbf{A}}_k} \in {\mathbb{C}^{{M_x}{M_y} \times {N_x}{N_y}}}$, and ${{\textbf{x}}_k} \in {\mathbb{C}^{{N_x}{N_y}}}$ denote the captured field amplitude, the measurement matrix, and the sample transmission function for the $k$th of $K$ measurements, respectively. ${M_x}$, ${M_y}$ and ${N_x}$, ${N_y}$ refer to the dimensions of the captured images and the sample field distribution, respectively. Specifically, we consider ptychography as our measurement scheme in this work, where the sample is laterally translated during measurement to introduce diversity, as schematically illustrated in Fig. 1(a). The captured diffraction patterns are then used to reconstruct the complex transmission function of the sample.
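As a concrete illustration, the forward model of Eq. (1) for this diffuser-based scheme can be sketched in NumPy (the paper's reference implementation is in MATLAB [86]); the integer-pixel shift, the angular-spectrum propagator, and all function names below are our simplifying assumptions, not the authors' code:

```python
import numpy as np

def forward_model(x_k, diffuser, shift, dz, wavelength, pixel_size):
    """Sketch of one diversity measurement y_k = |A_k x_k| for the near-field,
    diffuser-based ptychographic scheme: lateral sample shift, diffuser
    modulation, free-space propagation, and intensity-only detection."""
    # lateral translation of the sample field (integer-pixel shift for simplicity)
    x_shifted = np.roll(x_k, shift, axis=(0, 1))
    # amplitude modulation by the (calibrated) diffuser profile
    field = x_shifted * diffuser
    # free-space propagation over distance dz via the angular spectrum method
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength ** -2 - FX**2 - FY**2))
    H = np.exp(1j * kz * dz)
    sensor_field = np.fft.ifft2(np.fft.fft2(field) * H)
    # intensity-only detection: keep only the magnitude
    return np.abs(sensor_field)
```

Since the propagation kernel has unit modulus, the measurement conserves the energy of the modulated field, which provides a simple sanity check for such an implementation.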

In previous diversity phase retrieval implementations, it is assumed that no sample motion occurs during the multi-shot acquisition, i.e., ${{\textbf{x}}_1} = {{\textbf{x}}_2} = \ldots = {{\textbf{x}}_K}$, and thus the algorithm outputs a 2D holographic image representing the stationary complex field distribution. For dynamic scenes, however, the sample may have a temporally varying field distribution. The rasterized spatiotemporal sample transmission function is denoted as a complex-valued vector ${\textbf{x}} = ({\textbf{x}}_1^{\mathsf T},{\textbf{x}}_2^{\mathsf T}, \ldots ,{\textbf{x}}_K^{\mathsf T}{)^{\mathsf T}} \in {\mathbb{C}^N}$, where $N = {N_x}{N_y}K$ is the number of voxels in the spatiotemporal datacube ${\textbf{X}} \in {\mathbb{C}^{{N_x} \times {N_y} \times K}}$. While such a formulation offers many more degrees of freedom in modeling dynamic objects, it leads to a severely ill-posed reconstruction problem. To address this, we propose STRIVER, which utilizes information redundancy in the spatiotemporal domain to regularize the reconstruction. Specifically, holographic reconstruction is recast as an optimization problem that enforces model consistency and spatiotemporal sparsity simultaneously:

$$\mathop {\min}\limits_{\textbf{x}} \underbrace {\frac{1}{2}\sum\limits_{k = 1}^K \left\| {|{{\textbf{A}}_k}{{\textbf{x}}_k}| - {{\textbf{y}}_k}} \right\|_2^2}_{F({\textbf{x}})} + \underbrace {{\rho _s}\| {{\textbf{D}}_{\textit{xy}}}{\textbf{x}}{\|_1} + {\rho _t}\| {{\textbf{D}}_t}{\textbf{x}}{\|_1}}_{R({\textbf{x}})}.$$
The objective function we aim to minimize comprises two terms, as intuitively illustrated in Fig. 2. The data-fidelity term $F({\textbf{x}})$ describes how well the current estimate fits the forward model of Eq. (1). The regularization term $R({\textbf{x}})$ is introduced to incorporate prior knowledge of the spatiotemporal object distribution. Based on the observation that most sample distributions exhibit piecewise smooth profiles in the spatial domain and are temporally consistent, we introduce the spatiotemporal total variation (TV) function as a sparsity-promoting prior in this work. ${{\textbf{D}}_{\textit{xy}}} = {({{\textbf{D}}_x^{\mathsf T},{\textbf{D}}_y^{\mathsf T}})^{\mathsf T}} \in {\mathbb{R}^{2N \times N}}$ and ${{\textbf{D}}_t} \in {\mathbb{R}^{N \times N}}$ denote the finite difference operators along the spatial ($x,\;y$) and temporal ($t$) dimensions, respectively. The finite differences are calculated directly on the complex transmission function. Such a formulation leads to stable algorithmic behavior and, at the same time, implicitly regularizes both the amplitude and phase channels (see Supplement 1).
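For concreteness, the objective of Eq. (2) can be sketched in NumPy as follows, with anisotropic finite differences taken directly on the complex datacube; the callable operator interface and array layout are illustrative assumptions on our part:

```python
import numpy as np

def spatiotemporal_tv(X, rho_s, rho_t):
    """Anisotropic spatiotemporal TV on a complex datacube X of shape
    (Nx, Ny, K): rho_s * ||D_xy x||_1 + rho_t * ||D_t x||_1, with finite
    differences taken directly on the complex values, as in Eq. (2)."""
    dx = np.diff(X, axis=0)   # spatial gradient along x
    dy = np.diff(X, axis=1)   # spatial gradient along y
    dt = np.diff(X, axis=2)   # temporal gradient across frames
    return (rho_s * (np.abs(dx).sum() + np.abs(dy).sum())
            + rho_t * np.abs(dt).sum())

def objective(X, A_list, Y, rho_s, rho_t):
    """Full objective of Eq. (2): data fidelity F(x) plus regularizer R(x).
    A_list[k] is a callable standing in for the measurement operator A_k."""
    F = 0.0
    for k, A_k in enumerate(A_list):
        F += 0.5 * np.sum((np.abs(A_k(X[..., k])) - Y[..., k]) ** 2)
    return F + spatiotemporal_tv(X, rho_s, rho_t)
```

A temporally constant datacube incurs zero temporal penalty, which is exactly the mechanism by which the prior favors motion-consistent solutions.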

Fig. 2. Algorithmic implementation of STRIVER, where the forward model and the spatiotemporal prior are combined into the reconstruction framework. Spatiotemporal regularization is enforced by leveraging sparsity in the gradient domain. Meanwhile, data fidelity is ensured by penalizing deviations from the forward model ${{\textbf{y}}_k} = |{{\textbf{A}}_k}{{\textbf{x}}_k}|$ for each diversity measurement $k = 1, \ldots ,K$.


The regularization parameters ${\rho _s}$ and ${\rho _t}$ balance the physical model against the signal priors, offering much flexibility in dealing with different dynamic scenes. By tuning these parameters, the proposed framework can be seen as a generalization of conventional phase retrieval algorithms. For example, when ${\rho _t} = 0$ and thus no temporal consistency is enforced, STRIVER degenerates into the single-shot phase retrieval algorithm with spatial TV regularization [43]. At the other extreme, when ${\rho _t} \to \infty$, the temporal constraint is so strong that the sample field must satisfy ${{\textbf{x}}_1} = {{\textbf{x}}_2} = \cdots = {{\textbf{x}}_K}$ to minimize Eq. (2), which recovers the conventional diversity phase retrieval algorithm based on the stationary sample assumption. For most dynamic scenes, both ${\rho _s}$ and ${\rho _t}$ should take moderate values to obtain a high-quality reconstruction.

It should be mentioned that spatiotemporal signal priors have recently been explored in the context of phase retrieval, including hard constraints [76], low-rank priors [77–79], and deep generative priors [80]. Nevertheless, existing approaches suffer from either limited reconstruction quality or an overwhelming computational burden, and dynamic quantitative phase imaging has primarily been demonstrated on synthetic data. Supervised learning has also been studied for extracting spatiotemporal features [81], but it requires labeled training data, which can be experimentally infeasible in certain applications. In comparison, our formulation in Eq. (2) employs handcrafted signal priors that fully exploit sparsity in the complex domain, and the resulting inverse problem can be efficiently solved using an accelerated proximal gradient algorithm with theoretically tractable convergence behavior. The resulting flexibility and scalability make our method practical for large-scale experimental datasets.

B. Reconstruction Algorithm

To minimize the non-smooth and non-convex objective function in Eq. (2), we developed a proximal gradient algorithm due to its relatively low memory requirements and computational complexity for 3D large-scale optimization problems. Specifically, the algorithm proceeds by updating the data-fidelity term through a gradient descent step and updating the regularization term via its proximity operator [82]. Based on the Wirtinger calculus for complex variables [83], the gradient of $F$ with respect to ${\textbf{x}}$ is given by

$${\nabla _{\textbf{x}}}F({\textbf{x}}) = {\left({{{\left({{\nabla _{{{\textbf{x}}_1}}}F({\textbf{x}})} \right)}^{\mathsf T}},{{\left({{\nabla _{{{\textbf{x}}_2}}}F({\textbf{x}})} \right)}^{\mathsf T}}, \ldots ,{{\left({{\nabla _{{{\textbf{x}}_K}}}F({\textbf{x}})} \right)}^{\mathsf T}}} \right)^{\mathsf T}},$$
where
$${\nabla _{{{\textbf{x}}_k}}}F({\textbf{x}}) = \frac{1}{2}{\textbf{A}}_k^{\mathsf H}{\text{diag}}\left({\frac{{{{\textbf{A}}_k}{{\textbf{x}}_k}}}{{|{{\textbf{A}}_k}{{\textbf{x}}_k}|}}} \right)\left({|{{\textbf{A}}_k}{{\textbf{x}}_k}| - {{\textbf{y}}_k}} \right)$$
for $k = 1,2, \ldots ,K$ [43]. ${(\cdot)^{\mathsf H}}$ denotes the Hermitian operator. The proximity operator for $R$ is defined as
$${\text{prox}_{\gamma R}}({\textbf{x}}) = \mathop {\text{argmin}}\limits_{\textbf{u}} \gamma R({\textbf{u}}) + \frac{1}{2}\parallel {\textbf{u}} - {\textbf{x}}\parallel _2^2,$$
where $\gamma \gt 0$ is the step size. As indicated by Eq. (5), the proximal update involves solving a regularized Gaussian denoising subproblem. For the TV regularizer, in particular, no closed-form solution to the subproblem is available, and an iterative proximal solver should be invoked. The proximal solver is based on the projected gradient method modified from Ref. [43] (see Supplement 1 for a detailed derivation). Despite the iterative nature, a few iterations are usually sufficient to obtain an accurate result when the regularization parameters ${\rho _s}$ and ${\rho _t}$ take relatively small values, as is often the case in practice.
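A minimal NumPy sketch of the per-frame Wirtinger gradient in Eq. (4) is given below; the callable operator interface and the small epsilon guarding the division by $|{{\textbf{A}}_k}{{\textbf{x}}_k}|$ are implementation choices of ours, not taken from the paper:

```python
import numpy as np

def grad_F_k(x_k, y_k, A_k, A_k_H):
    """Wirtinger gradient of the data-fidelity term for frame k, Eq. (4):
    (1/2) A_k^H diag(A_k x_k / |A_k x_k|) (|A_k x_k| - y_k).
    A_k and A_k_H are callables for the operator and its adjoint; the epsilon
    avoids division by zero where the field magnitude vanishes."""
    z = A_k(x_k)
    mag = np.abs(z)
    phase = z / np.maximum(mag, 1e-12)   # unit-modulus phase factor
    return 0.5 * A_k_H(phase * (mag - y_k))
```

At any point where the forward model is satisfied exactly, i.e., $|{{\textbf{A}}_k}{{\textbf{x}}_k}| = {{\textbf{y}}_k}$, this gradient vanishes, as expected for a stationary point of the data-fidelity term.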

The basic proximal gradient method, like other first-order methods, converges slowly. In this work, Nesterov’s extrapolation method is introduced to speed up convergence [84], leading to the following iterations, known as the fast iterative shrinkage/thresholding algorithm (FISTA):


Fig. 3. Hardware design and characterization. (a) Photograph of the near-field ptychographic imaging system. (b) Enlarged view of (a). (c), (d) Diffuser used in the experiment. A photograph (c) and its calibrated amplitude transmission (d) are shown. (e)–(i) Experimental results of a quantitative phase target. (e) Representative image of the captured raw diffraction patterns. (f) Retrieved amplitude transmission of the target. (g) Retrieved phase transmission of the target. (h) Enlarged view of (g), where Group 7 Element 4 can be resolved. (i) Cross-sectional phase profile of (g).


$${{\textbf{v}}^{(i)}} = {{\textbf{u}}^{(i - 1)}} - \gamma {\nabla _{\textbf{u}}}F\left({{{\textbf{u}}^{(i - 1)}}} \right),$$
$${{\textbf{x}}^{(i)}} = {\text{prox}_{\gamma R}}\left({{{\textbf{v}}^{(i)}}} \right),$$
$${{\textbf{u}}^{(i)}} = {{\textbf{x}}^{(i)}} + {\beta _i}\left({{{\textbf{x}}^{(i)}} - {{\textbf{x}}^{(i - 1)}}} \right),$$
where $i = 1,2, \ldots$ is the iteration number, $\gamma \gt 0$ is the step size, and ${{\textbf{u}}^{(0)}} = {{\textbf{x}}^{(0)}}$ is the initial estimate. In all the experiments, the complex sample field is initialized with zero phase and uniform amplitude, although we observed that different initialization strategies have little influence on the results when the problem is well regularized. The extrapolation parameter in Eq. (8) is chosen as ${\beta _i} = i/(i + 3)$. According to our theoretical and numerical analyses [43,85], the algorithm exhibits stable convergence behavior when the step size is selected as $\gamma = 2/\mathop {\max}\nolimits_k \rho ({{\textbf{A}}_k^{\mathsf H}{{\textbf{A}}_k}})$ (see Supplement 1). To facilitate future research, a MATLAB implementation of the algorithm is available in Ref. [86].
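The iterations of Eqs. (6)–(8) can be condensed into a short generic routine; this NumPy sketch (the paper's reference implementation is in MATLAB [86]) assumes the gradient and proximity operators are supplied as callables, which is our interface choice:

```python
import numpy as np

def fista(grad_F, prox_R, x0, step, n_iter=200):
    """Accelerated proximal gradient iterations of Eqs. (6)-(8): a gradient
    step on F, a proximal step on R, and Nesterov extrapolation with
    beta_i = i / (i + 3)."""
    x_prev = x0.copy()
    u = x0.copy()                      # u^(0) = x^(0)
    for i in range(1, n_iter + 1):
        v = u - step * grad_F(u)       # Eq. (6): gradient descent on F
        x = prox_R(v)                  # Eq. (7): proximal update on R
        beta = i / (i + 3)             # extrapolation parameter
        u = x + beta * (x - x_prev)    # Eq. (8): Nesterov extrapolation
        x_prev = x
    return x
```

With a smooth quadratic data term and a trivial (identity) proximity operator, the routine reduces to accelerated gradient descent, which offers a quick correctness check before plugging in the phase retrieval gradient and TV prox.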

C. Hardware Design and Characterization

To obtain diversity measurements, we developed an imaging system based on near-field ptychography [87–89]. The experimental setup is shown in Fig. 3(a). A 532 nm fiber-coupled all-solid-state laser (MGL-III-532, Changchun New Industries Optoelectronics Technology Co., Ltd.) is used as the coherent light source for illumination. We use a variable neutral density filter to control the brightness of illumination, a plano–convex lens with a focal length of 100 mm to collimate the beam, and an aperture to adjust the size of the illumination probe. As highlighted by Fig. 3(b), the sample, diffuser, and sensor are placed very close to each other in sequence, enabling a high numerical aperture for high-resolution imaging. The diffuser is fabricated on a silica substrate with a thin chrome coating layer through laser direct writing. A ${2}\;\text{cm} \times {2}\;\text{cm}$ random binary pattern with a feature size of ${16}\;{\unicode{x00B5}\text{m}} \times {16}\;{\unicode{x00B5}\text{m}}$ is formed on the coating layer, offering amplitude modulation to the wavefront. The actual transmission function of the diffuser is experimentally calibrated using conventional near-field ptychography, as shown in Fig. 3(d). The sample is mounted on a 1D motorized translation stage, which scans continuously at a speed of approximately 2 mm/s during measurement. The exposure time is set to 0.1 ms so that motion blur caused by the translation is negligible during a single exposure. The diffuser is also mounted on a 2D motorized translation stage (KMTS25E/M, Thorlabs Inc.) for calibration purposes (see Supplement 1). The image sensor used in the experiment is an industrial bareboard CMOS sensor (Alvium 1800 U-811 m, Allied Vision) with a full resolution of ${2848} \times {2848}$ and a pixel size of ${2.74}\;{\unicode{x00B5}\text{m}}\; \times \;{2.74}\;{\unicode{x00B5}\text{m}}$.
For high-speed imaging, the central ${1424}\; \times \;{1424}$ pixels are selected as the region of interest, enabling a maximum acquisition framerate of 112 fps and a field of view (FOV) of ${3.9}\;\text{mm} \times {3.9}\;\text{mm}$. The spatial resolution and information throughput are primarily limited by the pixel size and cable transmission bandwidth of the sensor, respectively.

Diffusers have been widely used in diffraction imaging for various purposes [90–96]. In our system, specifically, the reasons for introducing the diffuser are threefold. First, it provides a cost-effective means of achieving high-speed modulation. Due to the randomness of the diffuser profile, a small lateral displacement comparable to the feature size is sufficient to introduce enough diversity. Second, scattering induced by the diffuser facilitates the encoding of low-frequency phase information into the captured diffraction patterns, thus improving phase reconstruction accuracy [97,98]. Third, using the diffuser instead of a confined probe to modulate the sample field allows for an extended FOV for each exposure. As a result, the entire sensor area is illuminated, in contrast to standard ptychography where a confined probe is used for illumination.
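A random binary amplitude pattern of this kind can be synthesized in a few lines for simulation purposes; the sketch below is purely illustrative, with a nearest-neighbor upsampling factor of 6 approximating the ratio of the 16 µm feature size to the 2.74 µm pixel size (the exact numbers and function name are our assumptions):

```python
import numpy as np

def binary_diffuser(n_features=128, feature_px=6, seed=0):
    """Sketch of a random binary amplitude diffuser: a coarse 0/1 pattern
    upsampled so that each feature spans several sensor pixels."""
    rng = np.random.default_rng(seed)
    coarse = rng.integers(0, 2, size=(n_features, n_features)).astype(float)
    # nearest-neighbor upsampling to the sensor sampling grid
    return np.kron(coarse, np.ones((feature_px, feature_px)))
```

Each feature then covers a constant block of pixels, mimicking the binary chrome pattern's amplitude modulation in a discretized forward model.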

Accurate holographic reconstruction requires a precise modeling of the imaging process. For this specific system configuration, the measurement operator ${{\textbf{A}}_k}$ involves lateral translation, free-space propagation, and diffuser modulation (see Fig. S1 in Supplement 1). The underlying unknown system parameters include lateral translation positions, propagation distances, and the diffuser profile, all of which can be accurately estimated through computational methods. A detailed description of the system calibration pipeline can be found in Supplement 1.

We evaluated the system performance using a static quantitative phase target (QPT, Benchmark Technologies). A measured raw intensity image is shown in Fig. 3(e). Due to the scattering effect of the diffuser, the raw image resembles a speckle pattern, in which the sample distribution is barely visible. Ten diversity images were captured at different sample translation positions, and the amplitude and phase of the sample were then retrieved, as shown in Figs. 3(f) and 3(g), respectively. From the retrieved holographic image, Group 7 Element 4 of the USAF resolution target can be resolved, indicating a half-pitch resolution of 2.76 µm, which agrees with the Nyquist sampling limit of the sensor pixels [Fig. 3(h)]. As shown in Fig. 3(i), the retrieved phase profile closely matches the nominal phase value provided by the manufacturer, demonstrating the quantitative phase imaging capability of the system.

3. RESULTS

A. Quantitative Evaluation of STRIVER

We first conducted a quantitative analysis of STRIVER through simulation. The forward model and most system parameters (pixel size, wavelength, etc.) were set to match the experimental setup. A diffuser with random binary amplitude transmission was used for wavefront modulation. We used the Shepp–Logan phantom to synthesize the virtual object and introduced rigid motion with a translation speed of one pixel per frame and a clockwise rotation of one degree per frame, as shown in Fig. 4. The sample was discretized into $376 \times 376$ pixels, and $K = 10$ diversity measurements with additive white Gaussian noise (30 dB signal-to-noise ratio) were used for reconstruction. We ran the reconstruction algorithm for 200 iterations with 10 subiterations for each proximal update. The reconstruction took approximately 30 s on a laptop computer with an Intel Core i7-12700H (${2.30}\;\text{GHz}$) CPU and an Nvidia GeForce RTX 3060 graphics card. To evaluate the impact of different regularization parameters, we selected 14 different values for both ${\rho _s}$ and ${\rho _t}$: zero and 13 positive values logarithmically spaced between ${10^{- 5}}$ and ${10^{- 1}}$. Additionally, we considered a special case where the sample is assumed to be motionless, i.e., ${\rho _t} \to \infty$. In total, $14 \times 15$ groups of parameters were tested. Relative error (RE) was used as the quantitative error metric for comparison, which is defined as

$${\text{RE}} = \parallel {\hat{\textbf{x}}} - {\textbf{x}}{\parallel _2}/\parallel {\textbf{x}}{\parallel _2},$$
where ${\hat{\textbf{x}}}$ and ${\textbf{x}}$ denote the estimated and ground-truth sample field, respectively.
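In code, Eq. (9) is a one-liner; the NumPy sketch below follows the definition exactly as written (note that phase retrieval evaluations sometimes align a global phase factor before computing the error, which this plain version does not do):

```python
import numpy as np

def relative_error(x_hat, x_true):
    """Relative error of Eq. (9): ||x_hat - x||_2 / ||x||_2 for complex
    (possibly multi-dimensional) arrays."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```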

Fig. 4. Quantitative evaluation of STRIVER based on simulation. (a) Reconstruction error, quantified by RE, for different choices of ${\rho _s}$ and ${\rho _t}$. ${14} \times {15}$ groups of regularization parameters are tested. (b) Visualized results. The upper two rows present the ground-truth sample amplitude and phase at different timestamps. The lower two rows present the retrieved phase and residuals of four selected cases, where the fifth out of $K = 10$ frames is shown: (i) without regularization, (ii) optimum under stationary sample constraint, (iii) optimum with spatial regularization only, and (iv) global optimum.


Figure 4(a) illustrates the influence of ${\rho _s}$ and ${\rho _t}$ on phase retrieval of dynamic samples. Four representative cases are highlighted, and the corresponding retrieved sample fields and residuals are shown in Fig. 4(b). When ${\rho _s} = 0$ and ${\rho _t} = 0$, no regularization is applied to the estimated sample field, leading to a severely ill-posed reconstruction problem with poor fidelity [Case (i)]. For conventional multi-frame phase retrieval methods, the stationary sample assumption is enforced, corresponding to the blue line in Fig. 4(a). As a result, although proper spatial regularization can improve the reconstruction quality, the result inherently suffers from motion blur [Case (ii)]. When ${\rho _t} = 0$, the spatiotemporal datacube ${\textbf{x}}$ is decoupled into temporally independent frames ${{\textbf{x}}_1},{{\textbf{x}}_2}, \ldots ,{{\textbf{x}}_K}$ because no temporal regularization is incorporated, as indicated by the red line in Fig. 4(a); STRIVER then degenerates into frame-by-frame single-shot phase retrieval [Case (iii)]. When both ${\rho _s}$ and ${\rho _t}$ take moderate positive values, we obtain the globally optimal result [Case (iv)]. For different dynamic samples, the optimal choice of the regularization parameters may vary from case to case depending on the degree of motion (see Supplement 1).

B. Dynamic Holographic Imaging of Live Paramecia

To experimentally demonstrate dynamic holographic imaging with STRIVER, we conducted imaging of live paramecia. These microorganisms exhibit high-speed, non-rigid movements, with swimming speeds reaching up to a few millimeters per second. Their rapid and flexible motion can result in significant changes between adjacent measurements, even when the sensor operates at its maximum framerate. From these measurements, we reconstructed a motion-resolved holographic video using STRIVER. To strike a balance between physical robustness and computational workload, we performed the reconstruction of the entire video on a sliding window of $K = 10$ measurements, with a step size of five measurements. For comparison, we also implemented conventional reconstruction algorithms that rely solely on spatial-domain sparsity for regularization. The retrieved 2D amplitude and unwrapped phase slices of the sample’s 3D spatiotemporal datacube are shown in Fig. 5. The conventional single-shot approach ($K = 1$) can capture the sample’s motion during the measurement, but spatial-domain sparsity alone is insufficient to suppress artifacts arising from the ill-posedness of the problem, inevitable measurement noise, and system errors, leading to a limited signal-to-noise ratio in the reconstruction. Increasing the number of diversity measurements $K$ in the conventional reconstruction method significantly reduces ambiguity but introduces severe motion artifacts, because the sample’s movement violates the underlying stationarity assumption. In contrast, results obtained by STRIVER using the same number of diversity images achieve both high fidelity and high temporal resolution simultaneously. This unique advantage is mainly attributed to the utilization of temporal correlations in the sample field, which is clearly visualized by the spatiotemporal slices. Additional comparisons with a conventional reconstruction algorithm using $K = 2$, 3, and 5 diversity images can be found in Figs. S1–S3 of Supplement 1 and Visualization 1. STRIVER significantly outperforms the conventional method in terms of both temporal resolution and imaging fidelity. The quantitative phase images reveal variations in the optical path lengths of the sample, providing an additional observation channel. For example, the vacuoles of the paramecium are clearly visualized with high contrast due to the refractive index difference between the body fluid and the surrounding medium.


Fig. 5. Comparison of different reconstruction algorithms on a live paramecium sample. 2D amplitude and unwrapped phase slices from a $110 \times 110 \times 70$ spatiotemporal datacube of the sample field are shown. From left to right are the results obtained by the conventional method with different numbers of diversity images ($K = 1$ and 10) and STRIVER ($K = 10$), respectively. Scale bars are 50 µm ($x,y$) and 0.1 s ($t$).


We further validated STRIVER using multiple ptychographic datasets of paramecium samples. Figure 6 showcases the retrieved quantitative phase videos, where different types of sample movements are observed. In Figs. 6(b) and 6(c), the paramecia exhibit a 180° rotation, which is clearly visualized by the mirrored quantitative phase images (see Visualization 2). In Figs. 6(e) and 6(f), fast translational movements of the paramecium at a speed of approximately 1 mm/s are resolved (see Visualization 3). These experimental observations of various sample movements reveal the capability of STRIVER in characterizing fast and complex motions.


Fig. 6. Dynamic quantitative phase imaging of live paramecia. (a)–(c) Observed rotational movements of paramecia. (a) Full FOV of the sample. Selected phase images from a holographic video of two paramecia are shown in (b) and (c) (Visualization 2). (d)–(f) Observed translational movements of paramecia. (d) Full FOV of the sample. Selected phase images of two paramecia are shown in (e) and (f) (Visualization 3). Scale bars are 500 µm in (a) and (d), and 100 µm in (b), (c), (e), and (f).


4. DISCUSSION AND CONCLUSION

From an algorithmic perspective, the STRIVER framework can be readily adapted to other diversity measurement schemes with minimal modifications, for example, by incorporating task-specific forward models [99]. We expect that the performance of STRIVER could be further improved by substituting the current total variation regularizer with advanced spatiotemporal priors capable of capturing long-term spatiotemporal features, enhancing the reconstruction quality and reducing the hardware requirements [100,101].

From an application perspective, the prototype lensless ptychographic system can be further optimized for higher imaging throughput. For example, the spatial resolution can be improved toward the diffraction limit by incorporating pixel super-resolution techniques [102–105], and the temporal resolution can also be improved beyond the sensor framerate with high-speed encoding devices [106–110]. Another future direction is to extend the spectral regime to shorter wavelengths. For example, X-ray or electron ptychography of dynamic samples can potentially be realized using STRIVER, enabling new applications in life and materials sciences [7,111].

To conclude, we have introduced STRIVER as a versatile computational framework for phase retrieval of dynamic scenes. It leverages diversity measurement and spatiotemporal signal priors to achieve physically reliable, motion-resolved complex field reconstruction. Using near-field diffuser-based ptychographic modulation as our diversity measurement scheme, we experimentally achieved holographic imaging of freely swimming paramecia at a framerate-limited speed of 112 fps. The algorithmic behaviors have been theoretically explored, ensuring computationally efficient implementation on large-scale datasets.

In a broader context, the concept of diversity measurements is ubiquitous in myriad computational imaging modalities, such as tomographic imaging [112–116], single-pixel imaging [117–119], polarization imaging [120–122], and super-resolution imaging [123–125], where temporal resolution is often sacrificed to obtain high-dimensional information from a sequence of low-dimensional measurements. In light of this, while our current work focuses on holography, we envision that the general principle underlying STRIVER may open the door to realize high-speed computational imaging that was previously unattainable.

Funding

National Natural Science Foundation of China (62235009).

Acknowledgment

The authors would like to thank Prof. Yuecheng Shen for helpful discussions on the paper.

Disclosures

The authors declare no conflicts of interest.

Data availability

The MATLAB implementation of the algorithm is available in Ref. [86]. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12, 578–589 (2018). [CrossRef]  

2. Z. Wang, L. Miccio, S. Coppola, et al., “Digital holography as metrology tool at micro-nanoscale for soft matter,” Light Adv. Manuf. 3, 151–176 (2022). [CrossRef]  

3. T. L. Nguyen, S. Pradeep, R. L. Judson-Torres, et al., “Quantitative phase imaging: recent advances and expanding potential in biomedicine,” ACS Nano 16, 11516–11544 (2022). [CrossRef]  

4. J. Park, B. Bai, D. Ryu, et al., “Artificial intelligence-enabled quantitative phase imaging methods for life sciences,” Nat. Methods 20, 1645–1660 (2023). [CrossRef]  

5. H. N. Chapman and K. A. Nugent, “Coherent lensless X-ray imaging,” Nat. Photonics 4, 833–839 (2010). [CrossRef]  

6. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18, 77–102 (2016). [CrossRef]  

7. F. Pfeiffer, “X-ray ptychography,” Nat. Photonics 12, 9–17 (2018). [CrossRef]  

8. L. Valzania, Y. Zhao, L. Rong, et al., “THz coherent lensless imaging,” Appl. Opt. 58, G256–G275 (2019). [CrossRef]  

9. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). [CrossRef]  

10. G. Popescu, T. Ikeda, R. R. Dasari, et al., “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31, 775–777 (2006). [CrossRef]  

11. B. Kemper and G. Von Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47, A52–A61 (2008). [CrossRef]  

12. C. Zheng, D. Jin, Y. He, et al., “High spatial and temporal resolution synthetic aperture phase microscopy,” Adv. Photon. 2, 065002 (2020). [CrossRef]  

13. D. Pirone, J. Lim, F. Merola, et al., “Stain-free identification of cell nuclei using tomographic phase microscopy in flow cytometry,” Nat. Photonics 16, 851–859 (2022). [CrossRef]  

14. J. Zhang, S. Dai, C. Ma, et al., “A review of common-path off-axis digital holography: towards high stable optical instrument manufacturing,” Light Adv. Manuf. 2, 333–349 (2021). [CrossRef]  

15. Z. Huang, P. Memmolo, P. Ferraro, et al., “Dual-plane coupled phase retrieval for non-prior holographic imaging,” PhotoniX 3, 1–16 (2022). [CrossRef]  

16. Y. Shechtman, Y. C. Eldar, O. Cohen, et al., “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process Mag. 32(3), 87–109 (2015). [CrossRef]  

17. J. Dong, L. Valzania, A. Maillard, et al., “Phase retrieval: from computational imaging to machine learning: a tutorial,” IEEE Signal Process Mag. 40(1), 45–57 (2023). [CrossRef]  

18. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

19. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73, 1434–1441 (1983). [CrossRef]  

20. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 829–832 (1982). [CrossRef]  

21. P. Bao, F. Zhang, G. Pedrini, et al., “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. 33, 309–311 (2008). [CrossRef]  

22. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007). [CrossRef]  

23. Y. Wu, M. K. Sharma, and A. Veeraraghavan, “WISH: wavefront imaging sensor with high resolution,” Light Sci. Appl. 8, 44 (2019). [CrossRef]  

24. C. Shen, M. Liang, A. Pan, et al., “Non-iterative complex wave-field reconstruction based on Kramers–Kronig relations,” Photon. Res. 9, 1003–1012 (2021). [CrossRef]  

25. K. Lee, J. Lim, and Y. Park, “Full-field quantitative X-ray phase nanotomography via space-domain Kramers–Kronig relations,” Optica 10, 407–414 (2023). [CrossRef]  

26. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009). [CrossRef]  

27. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

28. Y. Baek and Y. Park, “Intensity-based holographic imaging via space-domain Kramers–Kronig relations,” Nat. Photonics 15, 354–360 (2021). [CrossRef]  

29. H. M. L. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004). [CrossRef]  

30. P. Thibault, M. Dierolf, A. Menzel, et al., “High-resolution scanning X-ray diffraction microscopy,” Science 321, 379–382 (2008). [CrossRef]  

31. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009). [CrossRef]  

32. S. Jiang, J. Zhu, P. Song, et al., “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20, 1058–1065 (2020). [CrossRef]  

33. V. Y. Ivanov, V. Sivokon, and M. Vorontsov, “Phase retrieval from a set of intensity measurements: theory and experiment,” J. Opt. Soc. Am. A 9, 1515–1524 (1992). [CrossRef]  

34. P. Grohs, S. Koppensteiner, and M. Rathmair, “Phase retrieval: uniqueness and stability,” SIAM Rev. 62, 301–350 (2020). [CrossRef]  

35. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987). [CrossRef]  

36. T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. 98, 233901 (2007). [CrossRef]  

37. S. Marchesini, “Invited article: a unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. 78, 011301 (2007). [CrossRef]  

38. J. Oh, H. Hugonnet, and Y. Park, “Non-interferometric stand-alone single-shot holographic camera using reciprocal diffractive imaging,” Nat. Commun. 14, 4870 (2023). [CrossRef]  

39. L. Denis, D. Lorenz, E. Thiébaut, et al., “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. 34, 3475–3477 (2009). [CrossRef]  

40. A. Szameit, Y. Shechtman, E. Osherovich, et al., “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater. 11, 455–459 (2012). [CrossRef]  

41. V. Katkovnik and K. Egiazarian, “Sparse phase imaging based on complex domain nonlocal BM3D techniques,” Digit. Signal Process. 63, 72–85 (2017). [CrossRef]  

42. W. Zhang, L. Cao, D. J. Brady, et al., “Twin-image-free holography: a compressive sensing approach,” Phys. Rev. Lett. 121, 093902 (2018). [CrossRef]  

43. Y. Gao and L. Cao, “Iterative projection meets sparsity regularization: towards practical single-shot quantitative phase imaging with in-line holography,” Light Adv. Manuf. 4, 1–17 (2023). [CrossRef]  

44. A. Sinha, J. Lee, S. Li, et al., “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017). [CrossRef]  

45. Y. Rivenson, Y. Zhang, H. Günaydn, et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018). [CrossRef]  

46. A. Goy, K. Arthur, S. Li, et al., “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121, 243902 (2018). [CrossRef]  

47. X. Chang, L. Bian, and J. Zhang, “Large-scale phase retrieval,” eLight 1, 1–12 (2021). [CrossRef]  

48. H. Wang and L. Tian, “Local conditional neural fields for versatile and generalizable large-scale reconstructions in computational imaging,” arXiv, arXiv:2307.06207 (2023). [CrossRef]  

49. L. Huang, H. Chen, T. Liu, et al., “Self-supervised learning of hologram reconstruction using physics consistency,” Nat. Mach. Intell. 5, 895–907 (2023). [CrossRef]  

50. K. Wang, L. Song, C. Wang, et al., “On the use of deep learning for phase recovery,” arXiv, arXiv:2308.00942 (2023). [CrossRef]  

51. F. Wang, Y. Bian, H. Wang, et al., “Phase imaging with an untrained neural network,” Light Sci. Appl. 9, 77 (2020). [CrossRef]  

52. E. Bostan, R. Heckel, M. Chen, et al., “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7, 559–562 (2020). [CrossRef]  

53. H. Zhu, Z. Liu, Y. Zhou, et al., “DNF: diffractive neural field for lensless microscopic imaging,” Opt. Express 30, 18168–18178 (2022). [CrossRef]  

54. K. Creath and G. Goldstein, “Dynamic quantitative phase imaging for biological objects using a pixelated phase mask,” Biomed. Opt. Express 3, 2866–2880 (2012). [CrossRef]  

55. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494, 68–71 (2013). [CrossRef]  

56. X. Pan, C. Liu, and J. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103, 171105 (2013). [CrossRef]  

57. L. Tian, X. Li, K. Ramchandran, et al., “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]  

58. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3, 9–14 (2016). [CrossRef]  

59. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PloS ONE 12, e0171228 (2017). [CrossRef]  

60. W. Lee, D. Jung, S. Ryu, et al., “Single-exposure quantitative phase imaging in color-coded LED microscopy,” Opt. Express 25, 8398–8411 (2017). [CrossRef]  

61. B. Lee, J.-Y. Hong, D. Yoo, et al., “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5, 976–983 (2018). [CrossRef]  

62. X. Dong, X. Pan, C. Liu, et al., “Single shot multi-wavelength phase retrieval with coherent modulation imaging,” Opt. Lett. 43, 1762–1765 (2018). [CrossRef]  

63. J. Sun, Q. Chen, J. Zhang, et al., “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43, 3365–3368 (2018). [CrossRef]  

64. Y. Fan, J. Sun, Q. Chen, et al., “Single-shot isotropic quantitative phase microscopy based on color-multiplexed differential phase contrast,” APL Photon. 4, 121301 (2019). [CrossRef]  

65. J. Luo, Y. Liu, D. Wu, et al., “High-speed single-exposure time-reversed ultrasonically encoded optical focusing against dynamic scattering,” Sci. Adv. 8, eadd9158 (2022). [CrossRef]  

66. M. Du, X. Liu, A. Pelekanidis, et al., “High-resolution wavefront sensing and aberration analysis of multi-spectral extreme ultraviolet beams,” Optica 10, 255–263 (2023). [CrossRef]  

67. A. C. Chan, J. Kim, A. Pan, et al., “Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes),” Sci. Rep. 9, 11114 (2019). [CrossRef]  

68. G. I. Haham, O. Peleg, P. Sidorenko, et al., “High-resolution (diffraction limited) single-shot multiplexed coded-aperture ptychography,” J. Opt. 22, 075608 (2020). [CrossRef]  

69. C. Wang, M. Hu, Y. Takashima, et al., “Snapshot ptychography on array cameras,” Opt. Express 30, 2585–2598 (2022). [CrossRef]  

70. T. Aidukas, P. C. Konda, and A. R. Harvey, “High-speed multi-objective Fourier ptychographic microscopy,” Opt. Express 30, 29189–29205 (2022). [CrossRef]  

71. B. Wang, S. Li, Q. Chen, et al., “Learning-based single-shot long-range synthetic aperture Fourier ptychographic imaging with a camera array,” Opt. Lett. 48, 263–266 (2023). [CrossRef]  

72. K. Lee and Y. Park, “Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor,” Nat. Commun. 7, 13359 (2016). [CrossRef]  

73. L. Gong, Q. Zhao, H. Zhang, et al., “Optical orbital-angular-momentum-multiplexed data transmission under high scattering,” Light Sci. Appl. 8, 27 (2019). [CrossRef]  

74. K. Lee, J. Lim, S. Y. Lee, et al., “Direct high-resolution X-ray imaging exploiting pseudorandomness,” Light Sci. Appl. 12, 88 (2023). [CrossRef]  

75. M. Kellman, M. Chen, Z. F. Phillips, et al., “Motion-resolved quantitative phase imaging,” Biomed. Opt. Express 9, 5456–5466 (2018). [CrossRef]  

76. J. Zhang, D. Yang, Y. Tao, et al., “Spatiotemporal coherent modulation imaging for dynamic quantitative phase and amplitude microscopy,” Opt. Express 29, 38451–38464 (2021). [CrossRef]  

77. N. Vaswani, S. Nayer, and Y. C. Eldar, “Low-rank phase retrieval,” IEEE Trans. Signal Process. 65, 4059–4074 (2017). [CrossRef]  

78. Z. Chen, G. Jagatap, S. Nayer, et al., “Low rank Fourier ptychography,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2018), pp. 6538–6542.

79. G. Jagatap, Z. Chen, S. Nayer, et al., “Sample efficient Fourier ptychography for structured data,” IEEE Trans. Comput. Imaging 6, 344–357 (2019). [CrossRef]  

80. P. Bohra, T.-A. Pham, Y. Long, et al., “Dynamic Fourier ptychography with deep spatiotemporal priors,” Inverse Prob. 39, 064005 (2023). [CrossRef]  

81. S. Lu, Y. Tian, Q. Zhang, et al., “Dynamic quantitative phase imaging based on Ynet-ConvLSTM neural network,” Opt. Lasers Eng. 150, 106833 (2022). [CrossRef]  

82. N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends Optim. 1, 127–239 (2014). [CrossRef]  

83. K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” arXiv, arXiv:0906.4835 (2009). [CrossRef]  

84. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009). [CrossRef]  

85. Y. Gao, F. Yang, and L. Cao, “Pixel super-resolution phase retrieval for lensless on-chip microscopy via accelerated Wirtinger flow,” Cells 11, 1999 (2022). [CrossRef]  

86. Y. Gao and L. Cao, “Spatiotemporally regularized inversion (STRIVER) for motion-resolved computational imaging,” GitHub (2023) [accessed December 28 2023], https://github.com/THUHoloLab/STRIVER.

87. M. Stockmar, P. Cloetens, I. Zanette, et al., “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3, 1927 (2013). [CrossRef]  

88. T. Wang, S. Jiang, P. Song, et al., “Optical ptychography for biomedical imaging: recent progress and future directions,” Biomed. Opt. Express 14, 489–532 (2023). [CrossRef]  

89. S. Jiang, P. Song, T. Wang, et al., “Spatial-and Fourier-domain ptychography for high-throughput bio-imaging,” Nat. Protoc. 18, 2051–2083 (2023). [CrossRef]  

90. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35, 2585–2587 (2010). [CrossRef]  

91. F. Zhang, B. Chen, G. R. Morrison, et al., “Phase retrieval by coherent modulation imaging,” Nat. Commun. 7, 13367 (2016). [CrossRef]  

92. P. Berto, H. Rigneault, and M. Guillon, “Wavefront sensing with a thin diffuser,” Opt. Lett. 42, 5117–5120 (2017). [CrossRef]  

93. C. Wang, X. Dun, Q. Fu, et al., “Ultra-high resolution coded wavefront sensor,” Opt. Express 25, 13736–13746 (2017). [CrossRef]  

94. X. Pan, C. Liu, and J. Zhu, “Coherent amplitude modulation imaging based on partially saturated diffraction pattern,” Opt. Express 26, 21929–21938 (2018). [CrossRef]  

95. N. Antipa, G. Kuo, R. Heckel, et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018). [CrossRef]  

96. Y. Zhang, Z. Zhang, and A. Maiden, “Ptycho-cam: a ptychographic phase imaging add-on for optical microscopy,” Appl. Opt. 61, 2874–2880 (2022). [CrossRef]  

97. S. Jiang, C. Guo, P. Song, et al., “Resolution-enhanced parallel coded ptychography for high-throughput optical imaging,” ACS Photon. 8, 3261–3271 (2021). [CrossRef]  

98. J. Yi, J. Zhao, B. Wang, et al., “Surface metrology by multiple-wavelength coherent modulation imaging,” Appl. Opt. 61, 7218–7224 (2022). [CrossRef]  

99. Y. Gao and L. Cao, “Projected refractive index framework for multi-wavelength phase retrieval,” Opt. Lett. 47, 5965–5968 (2022). [CrossRef]  

100. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019). [CrossRef]  

101. L. Wang, M. Cao, Y. Zhong, et al., “Spatial-temporal transformer for video snapshot compressive imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 45, 9072–9089 (2022). [CrossRef]  

102. W. Bishara, T.-W. Su, A. F. Coskun, et al., “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010). [CrossRef]  

103. V. Katkovnik, I. Shevkunov, N. V. Petrov, et al., “Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments,” Optica 4, 786–794 (2017). [CrossRef]  

104. Y. Gao and L. Cao, “Generalized optimization framework for pixel super-resolution imaging in digital holography,” Opt. Express 29, 28805–28823 (2021). [CrossRef]  

105. S. Jiang, C. Guo, P. Song, et al., “High-throughput digital pathology via a handheld, multiplexed, and AI-powered ptychographic whole slide scanner,” Lab Chip 22, 2657–2670 (2022). [CrossRef]  

106. Z. Wang, L. Spinoulas, K. He, et al., “Compressive holographic video,” Opt. Express 25, 250–262 (2017). [CrossRef]  

107. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5, 1113–1127 (2018). [CrossRef]  

108. X. Yuan, D. J. Brady, and A. K. Katsaggelos, “Snapshot compressive imaging: theory, algorithms, and applications,” IEEE Signal Process Mag. 38(2), 65–88 (2021). [CrossRef]  

109. X. Liu, A. Skripka, Y. Lai, et al., “Fast wide-field upconversion luminescence lifetime thermometry enabled by single-shot compressed ultrahigh-speed imaging,” Nat. Commun. 12, 6401 (2021). [CrossRef]  

110. B. Zhang, X. Yuan, C. Deng, et al., “End-to-end snapshot compressed super-resolution imaging with deep optics,” Optica 9, 451–454 (2022). [CrossRef]  

111. Y. Jiang, Z. Chen, and Y. Han, “Electron ptychography of 2D materials to deep sub-ångström resolution,” Nature 559, 343–349 (2018). [CrossRef]  

112. R. Horstmeyer, J. Chung, X. Ou, et al., “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]  

113. D. Jin, R. Zhou, Z. Yaqoob, et al., “Tomographic phase microscopy: principles and applications in bioimaging,” J. Opt. Soc. Am. B 34, B64–B77 (2017). [CrossRef]  

114. S. Chowdhury, M. Chen, R. Eckert, et al., “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6, 1211–1219 (2019). [CrossRef]  

115. J. Li, A. Matlock, Y. Li, et al., “High-speed in vitro intensity diffraction tomography,” Adv. Photon. 1, 066004 (2019). [CrossRef]  

116. U. Kim, H. Quan, S. H. Seok, et al., “Quantitative refractive index tomography of millimeter-scale objects using single-pixel wavefront sampling,” Optica 9, 1073–1083 (2022). [CrossRef]  

117. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2019). [CrossRef]  

118. W. Jiang, Y. Yin, J. Jiao, et al., “2,000,000 fps 2D and 3D imaging of periodic or reproducible scenes with single-pixel detectors,” Photon. Res. 10, 2157–2164 (2022). [CrossRef]  

119. Z.-H. Xu, W. Chen, J. Penuelas, et al., “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26, 2427–2434 (2018). [CrossRef]  

120. C. He, H. He, J. Chang, et al., “Polarisation optics for biomedical and clinical applications: a review,” Light Sci. Appl. 10, 194 (2021). [CrossRef]  

121. S. Song, J. Kim, T. Moon, et al., “Polarization-sensitive intensity diffraction tomography,” Light Sci. Appl. 12, 124 (2023). [CrossRef]  

122. S. Mu, Y. Shi, Y. Song, et al., “Multislice computational model for birefringent scattering,” Optica 10, 81–89 (2023). [CrossRef]  

123. R. Heintzmann and T. Huser, “Super-resolution structured illumination microscopy,” Chem. Rev. 117, 13890–13908 (2017). [CrossRef]  

124. X. Chen, S. Zhong, Y. Hou, et al., “Superresolution structured illumination microscopy reconstruction algorithms: a review,” Light Sci. Appl. 12, 172 (2023). [CrossRef]  

125. R. Cao, F. L. Liu, L.-H. Yeh, et al., “Dynamic structured illumination microscopy with a neural space-time model,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2022), pp. 1–12.

Supplementary Material (4)

Supplement 1: Supplemental document
Visualization 1: Holographic video of live paramecia.
Visualization 2: Holographic video of live paramecia.
Visualization 3: Holographic video of live paramecia.




Figures (6)

Fig. 1. Illustration of STRIVER. (a) Schematic setup for the near-field, diffuser-based ptychographic imaging system that is used to introduce measurement diversity in this work. (b) Conceptual comparison with conventional single-shot and multi-frame reconstruction methods. STRIVER takes a set of $K$ diversity images as input, which is represented by a 3D datacube ${\textbf{Y}} \in {\mathbb{R}^{{M_x} \times {M_y} \times K}}$, and outputs a motion-resolved holographic video ${\textbf{X}} \in {\mathbb{C}^{{N_x} \times {N_y} \times K}}$.

Fig. 2. Algorithmic implementation of STRIVER, where the forward model and the spatiotemporal prior are combined into the reconstruction framework. Spatiotemporal regularization is enforced by leveraging sparsity in the gradient domain. Meanwhile, data fidelity is ensured by penalizing deviations from the forward model ${{\textbf{y}}_k} = |{{\textbf{A}}_k}{{\textbf{x}}_k}|$ for each diversity measurement $k = 1, \ldots ,K$.

Fig. 3. Hardware design and characterization. (a) Photograph of the near-field ptychographic imaging system. (b) Enlarged view of (a). (c), (d) Diffuser used in the experiment. A photograph (c) and its calibrated amplitude transmission (d) are shown. (e)–(i) Experimental results of a quantitative phase target. (e) Representative image of the captured raw diffraction patterns. (f) Retrieved amplitude transmission of the target. (g) Retrieved phase transmission of the target. (h) Enlarged view of (g), where Group 7 Element 4 can be resolved. (i) Cross-sectional phase profile of (g).

Fig. 4. Quantitative evaluation of STRIVER based on simulation. (a) Reconstruction error, quantified by RE, for different choices of ${\rho _s}$ and ${\rho _t}$. ${14} \times {15}$ groups of regularization parameters are tested. (b) Visualized results. The upper two rows present the ground-truth sample amplitude and phase at different timestamps. The lower two rows present the retrieved phase and residuals of four selected cases, where the fifth out of $K = 10$ frames is shown: (i) without regularization, (ii) optimum under stationary sample constraint, (iii) optimum with spatial regularization only, and (iv) global optimum.

Fig. 5. Comparison of different reconstruction algorithms on a live paramecium sample. 2D amplitude and unwrapped phase slices from a $110 \times 110 \times 70$ spatiotemporal datacube of the sample field are shown. From left to right are the results obtained by the conventional method with different numbers of diversity images ($K = 1$ and 10) and STRIVER ($K = 10$), respectively. Scale bars are 50 µm ($x,y$) and 0.1 s ($t$).

Fig. 6. Dynamic quantitative phase imaging of live paramecia. (a)–(c) Observed rotation movements of paramecia. (a) Full FOV of the sample. Selected phase images from a holographic video for two paramecia are shown in (b) and (c) (Visualization 2). (d)–(f) Observed translational movements of paramecia. (d) Full FOV of the sample. Selected phase images of two paramecia are shown in (e) and (f) (Visualization 3). Scale bars in (a) and (d) are 500 µm, in (b), (c), (e), and (f) are 100 µm.

Equations (9)


$$\mathbf{y}_k = |\mathbf{A}_k \mathbf{x}_k|, \quad k = 1, 2, \ldots, K,$$

$$\min_{\mathbf{x}} \; \underbrace{\frac{1}{2} \sum_{k=1}^{K} \left\| |\mathbf{A}_k \mathbf{x}_k| - \mathbf{y}_k \right\|_2^2}_{F(\mathbf{x})} + \underbrace{\sum_{k=1}^{K} \rho_s \left\| \mathbf{D}_{xy} \mathbf{x}_k \right\|_1 + \rho_t \left\| \mathbf{D}_t \mathbf{x} \right\|_1}_{R(\mathbf{x})}.$$

$$\nabla_{\mathbf{x}} F(\mathbf{x}) = \left( \left( \nabla_{\mathbf{x}_1} F(\mathbf{x}) \right)^T, \left( \nabla_{\mathbf{x}_2} F(\mathbf{x}) \right)^T, \ldots, \left( \nabla_{\mathbf{x}_K} F(\mathbf{x}) \right)^T \right)^T,$$

$$\nabla_{\mathbf{x}_k} F(\mathbf{x}) = \frac{1}{2} \mathbf{A}_k^H \,\mathrm{diag}\!\left( \frac{\mathbf{A}_k \mathbf{x}_k}{|\mathbf{A}_k \mathbf{x}_k|} \right) \left( |\mathbf{A}_k \mathbf{x}_k| - \mathbf{y}_k \right),$$

$$\mathrm{prox}_{\gamma R}(\mathbf{x}) = \operatorname*{argmin}_{\mathbf{u}} \; \gamma R(\mathbf{u}) + \frac{1}{2} \left\| \mathbf{u} - \mathbf{x} \right\|_2^2,$$

$$\mathbf{v}^{(i)} = \mathbf{u}^{(i-1)} - \gamma \nabla_{\mathbf{u}} F\!\left( \mathbf{u}^{(i-1)} \right),$$

$$\mathbf{x}^{(i)} = \mathrm{prox}_{\gamma R}\!\left( \mathbf{v}^{(i)} \right),$$

$$\mathbf{u}^{(i)} = \mathbf{x}^{(i)} + \beta_i \left( \mathbf{x}^{(i)} - \mathbf{x}^{(i-1)} \right),$$

$$\mathrm{RE} = \left\| \hat{\mathbf{x}} - \mathbf{x} \right\|_2 / \left\| \mathbf{x} \right\|_2.$$
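The accelerated proximal-gradient iteration above (gradient step, proximal step, momentum extrapolation) can be sketched for a single frame as follows. This is an illustrative NumPy version, not the authors' released MATLAB code: complex soft-thresholding stands in for the actual spatiotemporal TV proximal operator, and the step size gamma, threshold tau, and random initialization are assumptions:

```python
import numpy as np

def grad_F(x, A, y):
    """Wirtinger gradient of F(x) = 0.5 * || |A x| - y ||_2^2 for one frame:
    0.5 * A^H [ (A x / |A x|) * (|A x| - y) ]."""
    Ax = A @ x
    mag = np.abs(Ax) + 1e-12                    # guard against division by zero
    return 0.5 * A.conj().T @ ((Ax / mag) * (mag - y))

def soft_threshold(z, tau):
    """Complex soft-thresholding; an illustrative stand-in for prox_{gamma R}."""
    mag = np.abs(z)
    return np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12) * z

def fista(A, y, gamma, tau, n_iter=300, seed=0):
    """Accelerated proximal gradient (FISTA-style) iteration."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    x_prev = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.1
    u, t = x_prev.copy(), 1.0
    for _ in range(n_iter):
        v = u - gamma * grad_F(u, A, y)         # gradient step on F
        x = soft_threshold(v, gamma * tau)      # proximal step on R
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        u = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev
```

With a random complex Gaussian measurement matrix and intensity-only data y = |A x|, a step size of roughly the inverse squared spectral norm of A keeps the gradient step within the usual stability bound, and the data-fidelity loss decreases substantially from its initial value.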