Optica Publishing Group

Diffraction-gated real-time ultrahigh-speed mapping photography

Open Access

Abstract

Single-shot high-speed mapping photography is a powerful tool used for studying fast dynamics in diverse applications. Despite much recent progress, existing methods are still constrained by the trade-off between sequence depth and light throughput, errors induced by parallax, limited imaging dimensionality, and the potential damage caused by pulsed illumination. To overcome these limitations, we explore time-varying optical diffraction as a new gating mechanism to obtain ultrahigh imaging speed. Inspired by pulse-front-tilt-gated imaging and the space-time duality in optics, we embody the proposed paradigm in the developed diffraction-gated real-time ultrahigh-speed mapping (DRUM) photography. The sweeping optical diffraction envelope generated by the inter-pattern transition of a digital micromirror device enables sequential time-gating at the sub-microsecond level. DRUM photography can capture a transient event in a single exposure at 4.8 million frames per second. We apply it to the investigation of femtosecond laser-induced breakdown in liquid and laser ablation in biological samples.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Single-shot high-speed mapping (also referred to as framing) photography is an important imaging technique for the observation of transient scenes in real time (i.e., at their actual time of occurrence) [1]. This approach maps time-gated frames of a dynamic scene onto different spatial positions, which are recorded by one or more two-dimensional (2D) detectors. It circumvents the need for ultrafast CCD or CMOS cameras, whose special sensors have lower fill factors and sensitivity [2]. Compared to conventional streak imaging [3], high-speed mapping photography features 2D ultrafast imaging ability. Different from many computational ultrafast imaging techniques [4–11], it does not require complex optical modulation hardware (e.g., a spatial encoder [7] and an interferometry setup [11]) and sophisticated image reconstruction software (e.g., convex optimization algorithms [12] and deep neural networks [13,14]). As a result, it has a wide application scope that is not constrained by requirements in spatiotemporal sparsity and bandwidths. Because of these important technical merits, high-speed mapping photography has been implemented in diverse scientific studies, including streamer discharge [15], material phase transition [16], and shock-wave propagation [17].

The most commonly implemented method in single-shot high-speed mapping photography is based on beam splitting and gated intensified CCD cameras [18]. Despite its popularity, this method suffers from the trade-off between the sequence depth (i.e., the number of frames in each captured movie) and light throughput. Meanwhile, because of the requirement of apparatus duplication in each arm after beam splitting, a system scale-up would considerably increase the construction cost and operational complexity. To overcome these limitations, alternative methods seek to first transfer the temporal information to certain photon tags (e.g., wavelength [19–23], angle [24], and space [25]) and then exploit the properties in these tags (e.g., color dispersion and propagation direction) to separate temporal slices to different spatial positions [26]. Without the need to replicate the image of the transient scene, these methods break the trade-off between light throughput and sequence depth. In the meantime, they enjoy high flexibility in tuning the sequence depth, frame size, and other technical specifications based on the same imaging setup. Nevertheless, these methods still confront several limitations. For example, most of the systems require sophisticated apparatuses, such as optical parametric amplifiers [20], a high-speed rotating mirror [25], and a femtosecond pulse shaper [23]. Moreover, these imaging modalities are inherently constrained by the limits of these photon tags. In particular, for time-wavelength mapping, temporal resolution degrades with a larger sequence depth because each image is probed only by a portion of the original spectrum. Time-angle mapping could induce parallax errors from different probing directions.

Among existing paradigms, of particular interest is linear time-space coupling. A well-known representation of this category is the pulse front tilt (PFT) [27]. When a femtosecond pulse is reflected by (or transmitted through) a diffraction grating, the linear phase added to the temporal frequency spectrum of the incident pulse linearly links the time to one spatial axis. An imaging system is used to produce an output pulse at the original pulse width but with a tunable tilt angle, which provides femtosecond time-gating [28]. Compared to time-wavelength mapping, PFT-based approaches are not subject to the trade-off between temporal resolution and sequence depth. Because the illumination and/or the detection are perpendicular to the system’s optical axis, PFT-based approaches are parallax-free. Leveraging these advantages, linear time-space coupling based on PFT has been used in ultrafast electron microscopy [29], single-shot autocorrelation measurement [30], and femtosecond fluorescence spectroscopy [31]. Nevertheless, existing systems are capable of only point-probing or line-probing and thus are not readily extendable to high-speed mapping photography. Moreover, the object either needs to be assumed to be spatially uniform or must move laterally. Finally, these systems employ a femtosecond laser to probe the events, which may pose potential risks of sample damage.

To overcome these limitations, here, we report diffraction-gated real-time ultrahigh-speed mapping (DRUM) photography. Based on optical space-time duality, we derive the spatial equivalence of the linear phase ramp from that in the temporal frequency spectrum. This dynamic phase profile generates the linear time-space coupling in the diffraction envelope, which gates out successive temporal slices in adjacent diffraction orders. Optically embodying this concept, single-shot DRUM photography can capture transient events in real time with an imaging speed of 4.8 million frames per second (Mfps). We demonstrate the feasibility of DRUM photography by imaging the dynamics of intensity decay and beam sweeping. To show DRUM photography’s broad utility, we apply it to the study of femtosecond-pulse-induced bubble dynamics in liquid and the ablation of a biological sample at single-cell resolution.

2. RESULTS

A. Operating Principle of DRUM Photography

The operating principle of DRUM photography is inspired by the optical space-time duality that originates from the mathematical equivalence between spatial diffraction and temporal dispersion [32,33]. In particular, we seek to create time-space coupling in light diffraction to overcome the limitations in the PFT-based time-gating. Using spatial Fourier transformation, this coupling can be generated by a time-varying linear phase ramp at the spatial frequency domain. In practice, this dynamic phase profile is optically embodied by the inter-pattern transition of a digital micromirror device (DMD) as a programmable diffraction grating. First, the DMD’s pixelated configuration produces multiple diffraction orders that duplicate the information of the transient scene. Second, the fast flipping motion of micromirrors in the inter-pattern transition sweeps the diffraction envelope through these diffraction orders. This operation serves as a time gate that extracts sequential frames from the duplicated transient scenes. Details of the theory, modeling, and simulation of DRUM photography can be found in Supplement 1, Sections S1–S2, Figs. S1–S4, and Visualization 1. A detailed explanation of the link between DRUM photography and PFT-gated ultrafast imaging is provided in Supplement 1, Section S3, and Table S1.

Developed based upon this principle, the DRUM photography system is shown schematically in Fig. 1(a). A 473-nm continuous-wave laser (CNI Laser, blue-473-200mw) is used as the light source to illuminate a transient scene. The transmitted light is collected by a finite objective lens (Nikon, CF Achro ${4} \times$, 0.1 NA), is reflected by a beam splitter (Thorlabs, BP250), and then forms an image of the transient scene on the intermediate image plane. Then, it is processed by a folded ${4}f$ imaging system consisting of a stereoscopic objective lens (Olympus, MVPLAPO2XC, 0.5 NA) and a DMD (Ajile Light Industries, AJD-4500) with an array of ${912} \times {1140}$ micromirrors (Supplement 1, Section S1A, Fig. S1). With a rhombus orientation, each micromirror can be quickly flipped between two static states with a tilt angle of ${\theta _{{\rm b} - {\rm off}}} = - 12^\circ$ (the “off” state) and ${\theta _{{\rm b} - {\rm on}}} = + 12^\circ$ (the “on” state) from the surface normal on the $x^{\prime}$ axis. Thus, serving as a reflective programmable blazed grating, the DMD generates many diffraction orders, seven of which lie along the $x^{\prime}$ axis, pass back through the same stereoscopic objective lens, and land on spatially separated positions on the intermediate image plane. Finally, they are relayed by another ${4}f$ imaging system, consisting of Lens 1 (Thorlabs, AC508-100-A) and Lens 2 (Thorlabs, AC508-75-A), to a CMOS camera (Optronis, CP70-1HS-M-1900) rotated by ${\sim}{34}^\circ$ to accommodate these images.


Fig. 1. DRUM photography. (a) Schematic. (b) Operating principle. ${t_{\rm s}}$, time of the dynamic scene. A synthetic scene composed of seven letters, “DrumCam,” at different times is used to illustrate the system’s operating principle.


During the operation of DRUM photography, the camera’s exposure is synchronized with the DMD’s flipping motion (detailed in Supplement 1, Section S4). During the transition from an “all-off” pattern to an “all-on” pattern, the continuous and synchronous change of the tilt angle of each micromirror results in a time-varying phase profile (Supplement 1, Fig. S4c), which induces sweeping of the diffraction envelope through the diffraction orders located in its moving trajectory. Derived using scalar diffraction theory [34,35] (detailed in Supplement 1, Section S1B) for a point target, the intensity observed at the intermediate image plane is expressed as

$$\begin{split}I_{\rm r}(x,t)&=2c_{\rm r}^{2}(1+\cos\,m\pi)\cdot \Bigg\{\Bigg[{\rm sinc}\left(\frac{x-f\sin(2\theta_{\rm b})}{\sqrt{2}\lambda f/w}\right)\\&\quad\cdot\!\sum_{m=-\infty}^{\infty}\delta\left(x-\frac{m\lambda f}{p}\right)\Bigg]\otimes{\rm sinc}\left(\frac{xL_{x^{\prime}}}{\lambda f}\right)\Bigg\}^{2}.\end{split}$$

Here, ${c_{\rm r}}$ is a constant, $\lambda$ denotes the wavelength of the continuous-wave laser, $f$ denotes the focal length of the stereoscopic objective lens, $p$ denotes the pitch of micromirrors, and $m$ is the index of the diffraction orders. $\otimes$ denotes the convolution operation; $w$ denotes the width of the micromirror; ${\theta _{\rm b}}$, as a function of time $t$, is the instantaneous tilt angle of the micromirror during flipping; ${L_{{x^\prime}}}$ denotes the DMD’s window size.

Equation (1) provides the foundation of image acquisition in DRUM photography. First, the dynamic scene is convolved with ${I_{\rm r}}({x,t})$ to produce the image at the intermediate image plane. Second, the first “sinc” term reveals the time-space coupling via a diffraction envelope sweeping across the diffraction orders. The sequential dwelling of this envelope on each order produces successive temporal slices of the transient scene [Fig. 1(b)]. These frames are relayed by the second ${4}f$ imaging system to the camera, which records them in a single snapshot [inset of Fig. 1(a)].
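The time gating described by Eq. (1) can be illustrated numerically. The minimal sketch below sweeps a ${\rm sinc}^2$ envelope centered at $f\sin(2\theta_{\rm b})$ across the fixed order positions $x_m = m\lambda f/p$ and reports which order the envelope gates at each tilt angle. The focal length $f$, the micromirror fill factor, and the linear tilt ramp are illustrative assumptions, not the system's actual values.

```python
import numpy as np

# Numerical sketch of the time gating in Eq. (1): a sinc^2 diffraction
# envelope centered at f*sin(2*theta_b) sweeps across fixed order positions
# x_m = m*lambda*f/p, so each order peaks at a distinct tilt angle (hence a
# distinct time). f and the fill factor below are illustrative assumptions.
wavelength, pitch = 473e-9, 10.8e-6       # m, probe wavelength and DMD pitch
width = 0.96 * pitch                      # micromirror width (assumed fill factor)
f = 0.05                                  # m, assumed focal length
orders = np.arange(-6, 7, 2)              # the seven detected orders
x_m = orders * wavelength * f / pitch     # order positions on the image plane

def dominant_order(theta_b):
    """Order index with the largest sinc^2 envelope weight at tilt theta_b."""
    center = f * np.sin(2 * theta_b)
    weight = np.sinc((x_m - center) / (np.sqrt(2) * wavelength * f / width))
    return int(orders[np.argmax(weight ** 2)])

# Sweep theta_b linearly from -12 deg to +12 deg (the all-off -> all-on flip).
gated = [dominant_order(th) for th in np.deg2rad(np.linspace(-12, 12, 13))]
print(gated)   # the gated order steps monotonically from -6 to +6
```

With these assumed values, each of the seven orders is gated in turn as the tilt angle ramps, reproducing the sequential frame extraction sketched in Fig. 1(b).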

B. Key Parameters of DRUM Photography

As the core device of DRUM photography, the DMD determines the key parameters of the system through its intrinsic characteristics. First, the sequence depth is determined by the sweeping range of the diffraction center and the spacing of the diffraction orders. Because the micromirror’s tilt angle changes from ${\theta _{{\rm b} - {\rm off}}}$ to ${\theta _{{\rm b} - {\rm on}}}$ during the inter-pattern transition, the diffraction center sweeps across a total angular span of 48°. Meanwhile, from Eq. (1), the angle of each diffraction order is determined by $m\lambda /p$ under the paraxial approximation. Therefore, the eligible diffraction orders must satisfy the relation $2{\theta _{{\rm b} - {\rm off}}} \le m\lambda /p \le 2{\theta _{{\rm b} - {\rm on}}}$. Given the experimental conditions (i.e., $\lambda = 473{\rm \;nm}$ and $p = 10.8{\rm \;\unicode{x00B5}{\rm m}}$), the eligible diffraction orders are $m = 0, \pm 2, \pm 4, \pm 6$, $\pm 8$. Among them, the diffraction orders $m = \pm 8$ are too close to the beam’s reflection directions in the static “off” and “on” states of the micromirrors, which produces a high background; they are therefore blocked at the intermediate image plane. Thus, DRUM photography has a sequence depth of seven frames.
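The order-selection rule above can be checked with a few lines of code: an order $m$ is eligible if its paraxial diffraction angle $m\lambda/p$ lies within the doubled tilt range swept by the envelope. Only even $m$ are scanned here, since the $(1 + \cos m\pi)$ factor in Eq. (1) suppresses odd orders.

```python
import numpy as np

# Order-selection rule: order m is eligible if its paraxial angle
# m*lambda/p lies within [2*theta_off, 2*theta_on]; odd m are suppressed
# by the (1 + cos m*pi) factor in Eq. (1), so only even m are scanned.
wavelength = 473e-9                       # m
pitch = 10.8e-6                           # m, micromirror pitch
theta_off, theta_on = np.deg2rad(-12.0), np.deg2rad(12.0)

orders = [m for m in range(-20, 21, 2)
          if 2 * theta_off <= m * wavelength / pitch <= 2 * theta_on]
print(orders)   # m = 0, ±2, ±4, ±6, ±8; blocking ±8 leaves seven frames
```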

Second, the field of view (FOV) of DRUM photography is jointly determined by the sequence depth, overall magnification ratio, and the size of the camera’s sensor. The first two parameters are either intrinsic to the DMD or are easy to adjust, while the last parameter presents a major limitation. To maximize the FOV, the camera used in the DRUM photography system was rotated by ${\sim}{34}^\circ$ so that the seven frames were aligned with the camera’s diagonal.

Finally, the frame rate of DRUM photography is governed by the flipping motion of the micromirrors, for which $t \in [{0,{t_{\rm f}}}]$ (where ${t_{\rm f}}$ is the total time of the flipping operation) and ${\theta _{\rm b}} \in [{{\theta _{{\rm b} - {\rm off}}}, {\theta _{{\rm b} - {\rm on}}}}]$. Considering the sweeping speed of the diffraction envelope and the distance between adjacent diffraction orders, the frame rate of DRUM photography is determined by

$$\gamma _{\rm DRUM}=\frac{2p}{\Delta m \lambda}\frac{d\theta_{\rm b}}{dt}.$$

Here, $\frac{{d{\theta _{\rm b}}}}{{dt}}$ denotes the angular velocity of the micromirrors’ flipping motion, and $\Delta m$ is the index difference of the adjacent diffraction orders ($\Delta m=2$ in this study). Details of the derivation of Eq. (2) and the simulation of $\frac{{d{\theta _{\rm b}}}}{{dt}}$ are presented in Section S1C and Figs. S2–S3 in Supplement 1.
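Equation (2) can be sanity-checked numerically. The DMD parameters below are those given in the text; the mean angular velocity of the micromirror flip is an assumed value (the paper obtains it by simulation, Supplement 1, Section S1C), chosen here to be consistent with the reported imaging speed.

```python
# Quick numerical check of Eq. (2): frame rate = (2p / (dm * lambda)) * dtheta/dt.
# The angular velocity below is an ASSUMED mean value, not the paper's
# simulated result; it is chosen to be consistent with ~4.8 Mfps.
wavelength = 473e-9       # m, probe wavelength
pitch = 10.8e-6           # m, micromirror pitch
delta_m = 2               # index spacing between adjacent detected orders
dtheta_dt = 2.1e5         # rad/s, assumed mean flip angular velocity

frame_rate = 2 * pitch / (delta_m * wavelength) * dtheta_dt
print(f"{frame_rate / 1e6:.1f} Mfps")   # prints "4.8 Mfps"
```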

C. Evaluation of the System Performance

The demonstration of DRUM photography starts with the quantification of the key parameters of the system. To characterize spatial resolution, we placed a negative USAF resolution target (Edmund Optics, 55-622) at the object plane. The images captured by DRUM photography are shown in Fig. 2(a). Resolving Group 7 Element 2 in all seven frames, the DRUM photography system has a spatial resolution of 3.5 µm. To characterize the system’s imaging speed and temporal resolution, we used a photodiode, loaded on a laterally moving translation stage, to capture the time history of intensity for each diffraction order during the DMD micromirror’s flipping motion. Details of the measurement and characterization are provided in Supplement 1, Section S5, Fig. S5, and Table S2. The full evolution is shown in Visualization 2, and the temporal response for each diffraction order is shown in Fig. 2(b). The frame interval between adjacent orders is ${0.21}\;{\pm}\;{0.03}\;\unicode{x00B5} {\rm s}$ (${\rm mean}\;{\pm}\;{\rm standard}\;{\rm deviation}$), which corresponds to a mean frame rate of 4.8 Mfps. Moreover, the temporal resolution, defined as the full-width at half-maximum of the convolution between the temporal response (see Supplement 1, Fig. S5c) and the time window of each frame, is ${0.37}\;{\pm}\;{0.01}\;\unicode{x00B5} {\rm s}$. Additional details of the quantification of these parameters are provided in Supplement 1, Section S5.
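The temporal-resolution definition above can be illustrated numerically. The sketch below convolves an assumed Gaussian temporal response (a stand-in for the measured response in Supplement 1, Fig. S5c; its 0.30-µs FWHM is an assumption) with a rectangular window matching the measured 0.21-µs frame interval, and reports the FWHM of the result.

```python
import numpy as np

# Illustration of the temporal-resolution definition: FWHM of the
# convolution between the temporal response and each frame's time window.
# The Gaussian response shape and its 0.30-us FWHM are ASSUMED; the
# 0.21-us window matches the measured frame interval.
def fwhm(t, y):
    """Full-width at half-maximum of a sampled curve."""
    above = t[y >= 0.5 * y.max()]
    return above[-1] - above[0]

dt = 1e-3                                             # time step, us
t = np.arange(-3.0, 3.0, dt)
response = np.exp(-4 * np.log(2) * (t / 0.30) ** 2)   # Gaussian, FWHM 0.30 us
window = (np.abs(t) <= 0.5 * 0.21).astype(float)      # 0.21-us frame window
conv = np.convolve(response, window, mode="same") * dt
print(f"temporal resolution ~ {fwhm(t, conv):.2f} us")
```

Note that the result is broader than either component alone, as the definition implies; with the measured response shape it evaluates to the reported 0.37 µs.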


Fig. 2. Characterization of the performance of DRUM photography. (a) Spatial resolution tested with a negative USAF resolution target. (b) Temporal response at the center of each diffraction order. (c) Time-lapse images of the intensity decay of a bar target. (d) Normalized average intensity from (c) as a function of time with fitting. (e) As (c), but showing the laser beam’s sweeping motion. (f) Laser beam’s horizontal position as a function of time with fitting.


We then conducted proof-of-concept experiments by imaging vertical bars (from Group 3 Element 3 on the resolution target) illuminated by light modulated with an acousto-optic modulator (G&H, 170172). In the first experiment, a sinusoidal waveform with a period of 30.0 µs was used to induce an intensity decay in the illumination beam. A detailed description of the experimental setup is presented in Supplement 1, Section S6. The recorded seven frames are shown in Fig. 2(c). Plotted in Fig. 2(d), the time history of the average intensity of all three bars agrees well with a sinusoidal fit with a period of 29.2 µs. In the second experiment, a ramp voltage waveform with a period of 3 µs was used to sweep the illumination beam across a field of ${0.64}\;{\rm mm} \times {0.35}\;{\rm mm}$, which contained horizontal bars and the number “3”. The recorded frames are shown in Fig. 2(e). By tracking the location of the intensity peak in the $x$ direction, the sweeping speed was characterized to be 348 m/s, which is in good agreement with the preset condition, as shown in Fig. 2(f). The full evolutions of both dynamics are included in Visualization 3.

D. Side-View Observation of Laser-Induced Breakdown in Liquid

To show the broad utility of DRUM photography, we used it to observe the interaction of femtosecond laser pulses with liquid. In the experimental setup [Fig. 3(a)], a single pump pulse (1035-nm wavelength and 350-fs pulse width) generated by a femtosecond laser (Huaray, HR-Femto-10) was focused by a 15-mm-focal-length lens (Thorlabs, LB1092) into distilled water in a quartz cuvette. The tight focus ionized the distilled water. This laser-induced breakdown (LIB) generated a plasma channel, followed by cavitation at the focus, which perturbed the refractive index of the distilled water [36]. This process was recorded by DRUM photography. Using this setup with a 9-µJ pump pulse, we captured three sequences, starting at 0, 1.50 µs, and 3.00 µs after the laser-pulse excitation, respectively, to record the channel length changes in distilled water over a total of 4.23 µs [see Fig. 3(b) and Visualization 4]. The time history of the channel length is plotted in Fig. 3(c). It shows that the channel appears at 0.23 µs and roughly maintains its length for 1 µs. Then, its length decreases from 368 µm until the channel completely vanishes at 3.46 µs. Meanwhile, a bubble appears at 3.00 µs. For each frame, after the centroid of this bubble was located, the distance between the centroid and the radial intensity maximum was computed. The averaged result was used as the value of the bubble radius. The result shows an increasing bubble radius from 21 µm at 3.00 µs to 36 µm at 3.65 µs, followed by its decrease to 25 µm at 4.23 µs.
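The radius-extraction step described above (locating the centroid, then averaging the centroid-to-radial-intensity-maximum distance over directions) can be sketched on a synthetic frame. The bright-ring bubble image, its size, and the angular/radial sampling choices below are illustrative assumptions, not the paper's processing parameters.

```python
import numpy as np

# Sketch of the bubble-radius estimate on a SYNTHETIC frame: find the
# intensity centroid, then average the distance from the centroid to the
# radial intensity maximum over many directions. All sizes are assumed.
ny, nx = 201, 201
r_true = 30.0                                   # px, assumed bubble radius
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - 100, y - 100)
frame = np.exp(-((r - r_true) / 3.0) ** 2)      # synthetic bright-ring bubble

# Step 1: intensity centroid of the frame.
cy = (y * frame).sum() / frame.sum()
cx = (x * frame).sum() / frame.sum()

# Step 2: along each direction, distance from the centroid to the radial
# intensity maximum; the bubble radius is the average over angles.
d = np.arange(0.0, 90.0, 0.5)                   # radial sampling, px
radii = []
for ang in np.linspace(0.0, 2 * np.pi, 72, endpoint=False):
    iy = np.clip(np.round(cy + d * np.sin(ang)).astype(int), 0, ny - 1)
    ix = np.clip(np.round(cx + d * np.cos(ang)).astype(int), 0, nx - 1)
    radii.append(d[np.argmax(frame[iy, ix])])
radius = float(np.mean(radii))
print(radius)   # ~30 px, the radius of the synthetic ring
```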


Fig. 3. Side-view observation of laser-induced breakdown in distilled water using DRUM photography. (a) Experimental setup. (b) Time-lapse images showing the evolution of the plasma channel in distilled water using a 9-µJ pump pulse. (c) Time history of the channel length and bubble radius quantified from (b). (d) Development of cavitation from the plasma channel using a 10-µJ pump pulse. (e) Bubble radius as a function of time from (d), fitted by cavitation theory.


As a comparison, we increased the pump pulse energy to 10 µJ. The result of DRUM photography is shown in Fig. 3(d) and Visualization 5. At this energy level, a bubble with a channel appears in the FOV, and the bubble’s diameter increases from 240 µm to 325 µm, as shown in Fig. 3(e). The change of the generated bubble radius (i.e., the bubble cavitation dynamics) was fit with the well-known Rayleigh–Plesset model. Starting from the bubble inception, the model assumes spherical symmetry and an incompressible surrounding fluid and incorporates the physics associated with the fluid viscosity, density, and surface tension at the gas-liquid interface. The model shows excellent agreement with the experimental data, predicting an increase in maximum bubble radius with increasing pulse energy [37]. Supplement 1, Sections S7–S8, and Fig. S6 contain information on the numerical modeling of bubble dynamics and additional experimental data.

E. Front-View Observation of Cavitation Dynamics in Liquid

We also imaged bubble dynamics from the front view using a carbonated drink. In particular, the pump pulse, reflected by a short-pass dichroic mirror (Edmund Optics, 69-219), was focused onto the object plane by the objective lens [Fig. 4(a)]. The time-lapse images of the generated bubble are shown in Fig. 4(b) and Visualization 6. For each frame, we found the bubble’s centroid and averaged the intensity radially. Then, from the location of the intensity minimum, we calculated the derivative of the intensity profile. The bubble’s radius was determined by the distance from the bubble’s center to the first turning point in the calculated derivatives [Fig. 4(b)]. As depicted in Fig. 4(c), the time history of the bubble’s size was fitted with the Rayleigh–Plesset model [38], showing excellent agreement with cavitation theory (detailed in Supplement 1, Section S7).
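The Rayleigh–Plesset dynamics invoked here and in Section 2.D can be sketched with a short fixed-step integration. All parameter values below (inception radius, initial gas overpressure, polytropic exponent) are illustrative assumptions rather than the paper's fitted values, and vapor pressure and liquid compressibility are neglected; the bubble grows, overshoots its equilibrium size, and turns over into collapse, the qualitative behavior fitted in Figs. 3(e) and 4(c).

```python
# Minimal sketch of the Rayleigh-Plesset equation
#   R*R'' + (3/2)*R'^2 = (p_gas - p_inf - 2*sigma/R - 4*mu*R'/R) / rho
# with a polytropic gas law. ALL parameter values are assumed for
# illustration; vapor pressure and compressibility are neglected.
rho, sigma, mu = 998.0, 0.072, 1.0e-3    # water: density, surface tension, viscosity (SI)
p_inf, gamma = 101325.0, 1.4             # ambient pressure; polytropic exponent
R0, p_g0 = 20e-6, 5 * p_inf              # assumed inception radius and gas pressure

def accel(R, V):
    """Bubble-wall acceleration R'' from the Rayleigh-Plesset equation."""
    p_gas = p_g0 * (R0 / R) ** (3 * gamma)        # polytropic gas pressure
    return ((p_gas - p_inf - 2 * sigma / R - 4 * mu * V / R) / rho
            - 1.5 * V * V) / R

# Fixed-step RK4 integration of the state (R, V = dR/dt).
R, V, t, dt = R0, 0.0, 0.0, 1e-9
R_max, t_max = R0, 0.0
while t < 10e-6:
    k1R, k1V = V, accel(R, V)
    k2R, k2V = V + 0.5 * dt * k1V, accel(R + 0.5 * dt * k1R, V + 0.5 * dt * k1V)
    k3R, k3V = V + 0.5 * dt * k2V, accel(R + 0.5 * dt * k2R, V + 0.5 * dt * k2V)
    k4R, k4V = V + dt * k3V, accel(R + dt * k3R, V + dt * k3V)
    R += dt * (k1R + 2 * k2R + 2 * k3R + k4R) / 6
    V += dt * (k1V + 2 * k2V + 2 * k3V + k4V) / 6
    t += dt
    if R > R_max:
        R_max, t_max = R, t
    if V < 0 and R < 0.9 * R_max:    # stop once the rebound collapse begins
        break

print(f"growth to R_max = {R_max * 1e6:.0f} um at t = {t_max * 1e6:.1f} us")
```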


Fig. 4. Front-view imaging of cavitation dynamics using DRUM photography. (a) Experimental setup. (b) Time-lapse images of a pump-pulse-generated bubble (left column) and the normalized intensity profiles averaged in the radial direction (right column). Scale bar: 200 µm. The green dot marks the first turning point of the line profile from the intensity minimum, which is used to determine the bubble radius. (c) Bubble radius as a function of time, fitted by cavitation theory.


F. Imaging of Laser Ablation of a Biological Sample

To investigate the transient interaction between an ultrashort laser pulse and a biological sample using the experimental setup shown in Fig. 4(a), we focused a 9-µJ femtosecond pump pulse onto a single-layer onion cell sample placed at the object plane. The captured images are shown in Fig. 5(a), and the corresponding movie is shown in Visualization 7. The time history of the damaged area is shown in Fig. 5(b). The result reveals three phases in this laser ablation process. First, within 0.23 µs, the pump pulse ablated the cytoplasm and a small part of the cell skeleton by optical breakdown, generating an initial damaged area of ${1.4} \times {{10}^4}\;\unicode{x00B5}{\rm m}^2$ in size. Then, from 0.23 µs to 0.88 µs, the damaged area slowly increases to ${2.1} \times {{10}^4}\;\unicode{x00B5}{\rm m}^2$. Finally, after 0.88 µs, because of the gradual relaxation of thermal confinement, the area quickly increases to ${4.5} \times {{10}^4}\;\unicode{x00B5}{\rm m}^2$. Correspondingly, the ablation speed starts at ${60.8}\;{{\rm mm}^2}/{\rm ms}$, then drops to ${11.5}\;{{\rm mm}^2}/{\rm ms}$ from 0.46 µs to 0.88 µs, and finally accelerates to ${99.1}\;{{\rm mm}^2}/{\rm ms}$.
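The ablation rates quoted above are finite differences of the area curve. The back-of-envelope check below uses only the (time, area) values stated in the text; since the intermediate rates in Fig. 5(b) are computed from finer-sampled frames, only the first value is expected to match the quoted numbers.

```python
# Finite-difference check of the ablation rate, using only the (time, area)
# values quoted in the text. Units: us and um^2 in, mm^2/ms out
# (1 mm^2/ms = 1000 um^2/us). Intermediate rates in Fig. 5(b) come from
# finer-sampled data, so only the first entry should match the text.
t_us = [0.0, 0.23, 0.88, 1.23]               # frame times, us
area_um2 = [0.0, 1.4e4, 2.1e4, 4.5e4]        # ablated area, um^2

rates = [(area_um2[i + 1] - area_um2[i]) / (t_us[i + 1] - t_us[i]) / 1e3
         for i in range(3)]                  # um^2/us -> mm^2/ms
print(rates)   # first entry ~ 60.9 mm^2/ms, cf. 60.8 mm^2/ms in the text
```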


Fig. 5. Imaging the laser ablation of single-layer onion cells using DRUM photography. (a) Time-lapse images. The damaged area is delineated by magenta-dotted boxes. (b) Ablated area and ablation rate as a function of time. (c) Lengths of the ablated area in two directions versus time. (d) Damaging speeds in two directions versus time.


The result also reveals the anisotropic damage lengths and speeds in the directions that are orthogonal and parallel to the cell’s skeleton [marked by ${x_{\rm s}}$ and ${y_{\rm s}}$ in Fig. 5(a)]. The anisotropy in the damaged area arises from possible plasma fragmentation and plasma intensity variation [36]. For this experiment, the time histories of the length and the corresponding speed are plotted in Figs. 5(c) and 5(d). The length in the ${y_{\rm s}}$ direction grows slightly (from 210 µm to 242 µm) from 0.23 µs to the end of the observation window, which corresponds to a relatively low expansion speed of 32 m/s on average. In contrast, following the initial damage, the length in the ${x_{\rm s}}$ direction grows considerably (from 174 µm to 384 µm). In the second phase, the length increases from 174 µm at 0.23 µs to 352 µm at 0.88 µs with an average speed of 274 m/s. Afterward, in the third phase, the length increases to 384 µm at 1.23 µs with a lower average speed of 91 m/s.

3. DISCUSSION

We have developed DRUM photography for 2D ultrahigh-speed imaging in real time, with a frame rate of 4.8 Mfps, a temporal resolution of 0.37 µs, and a sequence depth of seven frames. This imaging modality leverages the DMD’s inter-pattern transition to produce a swept diffraction gate, which slices out sequential frames of the dynamic scene from the diffraction orders generated by the DMD. The concept of DRUM photography was demonstrated by imaging the dynamics of intensity decay and beam sweeping, and it was applied to the study of laser-liquid interaction and the laser ablation of single-layer onion cells.

DRUM photography offers versatile ultrahigh-speed imaging with off-the-shelf components. The use of a mass-produced DMD as the core component, which requires no macroscopic mechanical movement to produce the diffraction gate, makes the system cost-efficient and stable. DRUM photography has performance comparable to existing commercial ultrahigh-speed cameras in maximum imaging speed and in frame size at that speed, but at a manufacturing cost more than an order of magnitude lower (Supplement 1, Section S9, and Table S3). The key feature of DRUM photography is its ability to spatially separate and temporally gate successive 2D images in the optical domain while satisfying the imaging condition between the object and the sensor. This capability enables sensitive mapping measurements with a single ordinary CMOS sensor by directing the sequential frames onto different sensor areas. The high sensitivity of such sensors, along with probing by a continuous-wave laser beam, permits ultrahigh-speed imaging with low peak intensity, which reduces the chance of photodamage to the object.

DRUM photography leverages programmable diffraction gating to accomplish ultrahigh imaging speed. Inspired by the space-time duality and the PFT-gated ultrafast imaging, a linear phase ramp introduced by the flipping of the DMD’s micromirrors in the spatial frequency domain is used to enable time-space coupling at a sub-microsecond time scale without spatial overlapping. Different from existing relevant approaches (detailed in Supplement 1, Section S10), DRUM photography presents the only modality that uses the DMD’s inter-pattern transition for ultrahigh-speed imaging at high spatial resolution.

The system performance of DRUM photography has ample room for further enhancement. From Eq. (2), a shorter wavelength, a larger micromirror pitch, and a faster flipping motion can all increase DRUM photography’s imaging speed. Meanwhile, the additional diffraction orders that exist between adjacent detected orders could be leveraged to increase the sequence depth and frame rate. Furthermore, the diffraction-gating method introduced in this work is adaptable to other devices, e.g., a one-dimensional micro-electromechanical-system micromirror array with nanosecond-level flipping time [39], to push the imaging speed toward the billion-frame-per-second level and enhance the light throughput. To accommodate these technical requirements without compromising the FOV and signal-to-noise ratio of the acquired images, it is highly desirable to use image sensors with a large aspect ratio, a high pixel count, and high sensitivity [40,41]. Moreover, color sensors, used with multiple single-wavelength lasers, can further increase the imaging speed, sequence depth, and FOV. Polarization information could also be incorporated for diverse applications such as 3D ranging [42,43] and plasma science [44]. From this perspective, the integration of advanced spectral-polarization imaging sensors [45,46] into DRUM photography marks a promising research direction. Finally, DRUM photography does not rely on sparsity in the transient scene. Nonetheless, when such sparsity is available, it can be exploited to allow partial overlapping between adjacent diffraction orders, with compressed-sensing-based algorithms [47,48] computationally separating the frames to enhance the FOV.

DRUM photography can be readily applied to biophysics. Figures 3(d) and 4(b) reveal salient features of bubble cavitation captured by DRUM photography. The experimental data were fit by the Rayleigh–Plesset model, which serves as the nonlinear equation of motion for the bubble wall. The quantitative model readouts, including a maximum bubble wall velocity of 168–210 m/s and a collapse time (i.e., the time it takes for the bubble to collapse from its largest size) of 52.2–60.8 µs, are consistent with previous laser-induced optical breakdown studies [49].

In addition to the study of cavitation dynamics conducted in this work, this system will likely contribute to ultrasound-stimulated early-disease molecular detection [50] and therapeutic-targeted drug delivery [51]. Meanwhile, DRUM photography of laser ablation (Fig. 5) will offer insight into the maximum tissue displacement caused by the expansion of the laser plasma and the resulting macroscopic structural deformation [52], opening up potential applications in laser ablation lithotripsy, nano-surgeries, and laser-based cleaning methods [53].

Funding

Natural Sciences and Engineering Research Council of Canada (RGPAS-2018-522650, RGPAS-507845-2017, RGPIN-2017-05959, RGPIN-2018-06217); Canada Research Chairs (CRC-2022-00119); Canada Foundation for Innovation (37146); New Frontier in Research Fund (NFRFE-2020-00267); Canadian Cancer Society (707056); Fonds de recherche du Québec - Nature et technologies (203345–Centre d’Optique, Photonique, et Lasers); Fonds de Recherche du Québec - Santé (267406, 280229).

Acknowledgment

The authors thank Cheng Jiang, Yingming Lai, and Jingdan Liu from INRS as well as Alan Boate and Jeremy Gribben from Ajile Light Industries for experimental assistance and fruitful discussion.

Disclosures

The authors disclose the following patent application: US Provisional 63/505,472 (J.L. and X.L.).

Data availability

All data are available in the main text and the supplementary materials. The raw data for Fig. 2(e) can be downloaded via [54]. All other raw data are available from the corresponding author upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5, 1113–1127 (2018).

2. L. Lazovsky, D. Cismas, G. Allan, and D. Given, “CCD sensor and camera for 100 Mfps burst frame rate image capture,” Proc. SPIE 5787, 184–190 (2005).

3. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).

4. H. Tang, T. Men, X. Liu, Y. Hu, J. Su, Y. Zuo, P. Li, J. Liang, M. C. Downer, and Z. Li, “Single-shot compressed optical field topography,” Light Sci. Appl. 11, 244 (2022).

5. X. Feng and L. Gao, “Ultrafast light field tomography for snapshot transient and non-line-of-sight imaging,” Nat. Commun. 12, 2179 (2021).

6. J. Liang, “Punching holes in light: recent progress in single-shot coded-aperture optical imaging,” Rep. Prog. Phys. 83, 116101 (2020).

7. Y. Lai, Y. Xue, C.-Y. Côté, X. Liu, A. Laramée, N. Jaouen, F. Légaré, L. Tian, and J. Liang, “Single-shot ultraviolet compressed ultrafast photography,” Laser Photon. Rev. 14, 2000122 (2020).

8. X. Liu, J. Liu, C. Jiang, F. Vetrone, and J. Liang, “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett. 44, 1387–1390 (2019).

9. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl. 6, e17045 (2017).

10. J. Liang, L. Gao, P. Hai, C. Li, and L. V. Wang, “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep. 5, 15504 (2015).

11. T. Kakue, K. Tosa, J. Yuasa, T. Tahara, Y. Awatsuji, K. Nishio, S. Ura, and T. Kubota, “Digital light-in-flight recording by holography by use of a femtosecond pulsed laser,” IEEE J. Sel. Top. Quantum Electron. 18, 479–485 (2012).

12. S. Chan, X. Wang, and O. Elgendy, “Plug-and-play ADMM for image restoration: fixed-point convergence and applications,” IEEE Trans. Comput. Imaging 3, 84–98 (2017).

13. X. Liu, J. Monteiro, I. Albuquerque, Y. Lai, C. Jiang, S. Zhang, T. H. Falk, and J. Liang, “Single-shot real-time compressed ultrahigh-speed imaging enabled by a snapshot-to-video autoencoder,” Photon. Res. 9, 2464–2474 (2021).

14. X. Yuan, D. J. Brady, and A. K. Katsaggelos, “Snapshot compressive imaging: theory, algorithms, and applications,” IEEE Signal Process. Mag. 38, 65–88 (2021).

15. H. Fujita, S. Kanazawa, K. Ohtani, A. Komiya, T. Kaneko, and T. Sato, “Initiation process and propagation mechanism of positive streamer discharge in water,” J. Appl. Phys. 116, 213301 (2014).

16. T. Suzuki, R. Hida, Y. Yamaguchi, K. Nakagawa, T. Saiki, and F. Kannari, “Single-shot 25-frame burst imaging of ultrafast phase transition of Ge2Sb2Te5 with a sub-picosecond resolution,” Appl. Phys. Express 10, 092502 (2017).

17. L. Dresselhaus-Cooper, J. E. Gorfain, C. T. Key, B. K. Ofori-Okai, S. J. Ali, D. J. Martynowych, A. Gleason, S. Kooi, and K. A. Nelson, “Single-shot multi-frame imaging of cylindrical shock waves in a multi-layered assembly,” Sci. Rep. 9, 3689 (2019). [CrossRef]  

18. V. Tiwari, M. Sutton, and S. McNeill, “Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation,” Exp. Mech. 47, 561–579 (2007). [CrossRef]  

19. J. Yao, D. Qi, H. Liang, Y. He, Y. Yao, T. Jia, Y. Yang, Z. Sun, and S. Zhang, “Exploring femtosecond laser ablation by snapshot ultrafast imaging and molecular dynamics simulation,” Ultrafast Sci. 2 (2022). [CrossRef]  

20. X. Zeng, S. Zheng, Y. Cai, Q. Lin, J. Liang, X. Lu, J. Li, W. Xie, and S. Xu, “High-spatial-resolution ultrafast framing imaging at 15 trillion frames per second by optical parametric amplification,” Adv. Photon. 2, 056002 (2020). [CrossRef]  

21. G. Gao, K. He, J. Tian, C. Zhang, J. Zhang, T. Wang, S. Chen, H. Jia, F. Yuan, L. Liang, X. Yan, S. Li, C. Wang, and F. Yin, “Ultrafast all-optical solid-state framing camera with picosecond temporal resolution,” Opt. Express 25, 8721–8729 (2017). [CrossRef]  

22. T. Suzuki, F. Isa, L. Fujii, K. Hirosawa, K. Nakagawa, K. Goda, I. Sakuma, and F. Kannari, “Sequentially timed all-optical mapping photography (STAMP) utilizing spectral filtering,” Opt. Express 23, 30512–30522 (2015). [CrossRef]  

23. K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics 8, 695–700 (2014). [CrossRef]  

24. M. Sheinman, S. Erramilli, L. Ziegler, M. K. Hong, and J. Mertz, “Flatfield ultrafast imaging with single-shot non-synchronous array photography,” Opt. Lett. 47, 577–580 (2022). [CrossRef]  

25. C. T. Chin, C. Lancée, J. Borsboom, F. Mastik, M. E. Frijlink, N. de Jong, M. Versluis, and D. Lohse, “Brandaris 128: a digital 25 million frames per second camera with 128 highly sensitive frames,” Rev. Sci. Instrum. 74, 5026–5034 (2003). [CrossRef]  

26. X. Zeng, X. Lu, C. Wang, K. Wu, Y. Cai, H. Zhong, Q. Lin, J. Lin, R. Ye, and S. Xu, “Review and prospect of single-shot ultrafast optical imaging by active detection,” Ultrafast Sci. 3, 0020 (2023). [CrossRef]  

27. J. Hebling, G. Almasi, I. Z. Kozma, and J. Kuhl, “Velocity matching by pulse front tilting for large-area THz-pulse generation,” Opt. Express 10, 1161–1166 (2002). [CrossRef]  

28. J. Y. Liang, L. R. Zhu, and L. H. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light Sci. Appl. 7, 42 (2018). [CrossRef]  

29. P. Baum and A. H. Zewail, “Breaking resolution limits in ultrafast electron diffraction and microscopy,” Proc. Natl. Acad. Sci. USA 103, 16105–16110 (2006). [CrossRef]  

30. J. Cohen, P. Bowlan, V. Chauhan, and R. Trebino, “Measuring temporally complex ultrashort pulses using multiple-delay crossed-beam spectral interferometry,” Opt. Express 18, 6583–6597 (2010). [CrossRef]  

31. L. Zhao, J. L. P. Lustres, V. Farztdinov, and N. P. Ernsting, “Femtosecond fluorescence spectroscopy by upconversion with tilted gate pulses,” Phys. Chem. Chem. Phys. 7, 1716–1725 (2005). [CrossRef]  

32. B. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30, 1951–1963 (1994). [CrossRef]  

33. S. A. Akhmanov, V. A. Vysloukh, and A. S. Chirkin, “Self-action of wave packets in a nonlinear medium and femtosecond laser pulse generation,” Sov. Phys. Usp. 29, 642 (1986). [CrossRef]  

34. J. Liang, S.-Y. Wu, R. N. Kohn, M. F. Becker, and D. J. Heinzen, “Grayscale laser image formation using a programmable binary mask,” Opt. Eng. 51, 108201 (2012). [CrossRef]  

35. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).

36. G. Sinibaldi, A. Occhicone, F. A. Pereira, D. Caprini, L. Marino, F. Michelotti, and C. M. Casciola, “Laser induced cavitation: plasma generation and breakdown shockwave,” Phys. Fluids 31, 103302 (2019). [CrossRef]  

37. A. Vogel, N. Linz, S. Freidank, and G. Paltauf, “Femtosecond-laser-induced nanocavitation in water: implications for optical breakdown threshold and cell surgery,” Phys. Rev. Lett. 100, 038102 (2008). [CrossRef]  

38. M. S. Plesset, “The dynamics of cavitation bubbles,” J. Appl. Mech. 16, 277–282 (1949). [CrossRef]  

39. O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 kHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019). [CrossRef]  

40. E. Dumont, C. De Bleye, G. Rademaker, L. Coïc, J. Horne, P.-Y. Sacré, O. Peulen, P. Hubert, and E. Ziemons, “Development of a prototype device for near real-time surface-enhanced Raman scattering monitoring of biological samples,” Talanta 224, 121866 (2021). [CrossRef]  

41. K. Urdl, S. Weiss, A. Karpa, M. Perić, E. Zikulnig-Rusch, M. Brecht, A. Kandelbauer, U. Müller, and W. Kern, “Furan-functionalised melamine-formaldehyde particles performing Diels-Alder reactions,” Eur. Polym. J. 108, 225–234 (2018). [CrossRef]  

42. X. Li, Z. Liu, Y. Cai, C. Pan, J. Song, J. Wang, and X. Shao, “Polarization 3D imaging technology: a review,” Front. Phys. 11, 341 (2023). [CrossRef]  

43. J. Liang, P. Wang, L. Zhu, and L. V. Wang, “Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution,” Nat. Commun. 11, 5252 (2020). [CrossRef]  

44. J. Buday, P. Pořízka, and J. Kaiser, “Imaging laser-induced plasma under different laser irradiances,” Spectrochim. Acta B Atom. Spectros. 168, 105874 (2020). [CrossRef]  

45. M. Garcia, C. Edmiston, R. Marinov, A. Vail, and V. Gruev, “Bio-inspired color-polarization imager for real-time in situ imaging,” Optica 4, 1263–1271 (2017). [CrossRef]  

46. M. Kulkarni and V. Gruev, “Integrated spectral-polarization imaging sensor with aluminum nanowire polarization filters,” Opt. Express 20, 22997–23012 (2012). [CrossRef]  

47. B. W. Reed, A. A. Moghadam, R. S. Bloom, S. T. Park, A. M. Monterrosa, P. M. Price, C. Barr, S. A. Briggs, K. Hattar, J. McKeown, and D. J. Masiel, “Electrostatic subframing and compressive-sensing video in transmission electron microscopy,” Struct. Dyn. 6, 054303 (2019). [CrossRef]  

48. X. Liu, A. Skripka, Y. Lai, C. Jiang, J. Liu, F. Vetrone, and J. Liang, “Fast wide-field upconversion luminescence lifetime thermometry enabled by single-shot compressed ultrahigh-speed imaging,” Nat. Commun. 12, 6401 (2021). [CrossRef]  

49. A. Vogel, S. Busch, and U. Parlitz, “Shock wave emission and cavitation bubble generation by picosecond and nanosecond optical breakdown in water,” J. Acoust. Soc. Am. 100, 148–165 (1996). [CrossRef]  

50. L. Abou-Elkacem, S. V. Bachawal, and J. K. Willmann, “Ultrasound molecular imaging: moving toward clinical translation,” Eur. J. Radiol. 84, 1685–1693 (2015). [CrossRef]  

51. B. Helfield, Y. Zou, and N. Matsuura, “Acoustically-stimulated nanobubbles: opportunities in medical ultrasound imaging and therapy,” Front. Phys. 9, 654374 (2021). [CrossRef]  

52. A. Vogel and V. Venugopalan, “Mechanisms of pulsed laser ablation of biological tissues,” Chem. Rev. 103, 577–644 (2003). [CrossRef]  

53. B. Helfield, J. J. Black, B. Qin, J. Pacella, X. Chen, and F. S. Villanueva, “Fluid viscosity affects the fragmentation and inertial cavitation threshold of lipid-encapsulated microbubbles,” Ultrasound Med. Biol. 42, 782–794 (2016). [CrossRef]  

54. X. Liu, “Raw data of the laser beam sweeping experiment,” figshare (2023), https://doi.org/10.6084/m9.figshare.23669136.v4

Supplementary Material (8)

Name       Description
Supplement 1       Supplemental document
Visualization 1       Simulation of DRUM photography using a seven-frame jellyfish video.
Visualization 2       Time histories of the intensity measured by the photodiode scanning across the x axis on the image plane.
Visualization 3       DRUM photography of intensity decay (left) and sweeping motion (right) of a laser beam on a resolution target.
Visualization 4       Side-view observation of the laser-induced plasma channel and cavitation using a pulse energy of 9 µJ. The movie combines three sequences, starting at 0, 1.50, and 3.00 µs, respectively.
Visualization 5       Side-view observation of the laser-induced plasma channel and cavitation using a pulse energy of 10 µJ.
Visualization 6       Front-view observation of cavitation dynamics in liquid.
Visualization 7       DRUM photography of laser ablation of an onion epidermis sample.



Figures (5)

Fig. 1. DRUM photography. (a) Schematic. (b) Operating principle. $t_{\rm s}$, time of the dynamic scene. A synthetic scene composed of seven letters, “DrumCam,” at different times is used to illustrate the system’s operating principle.
Fig. 2. Characterization of the performance of DRUM photography. (a) Spatial resolution tested with a negative USAF resolution target. (b) Temporal response at the center of each diffraction order. (c) Time-lapse images of the intensity decay of a bar target. (d) Normalized average intensity from (c) as a function of time, with fitting. (e) As (c), but showing the laser beam’s sweeping motion. (f) Laser beam’s horizontal position as a function of time, with fitting.
Fig. 3. Side-view observation of laser-induced breakdown in distilled water using DRUM photography. (a) Experimental setup. (b) Time-lapse images showing the evolution of the plasma channel in distilled water using a 9-µJ pump pulse. (c) Time history of the channel length and bubble radius quantified from (b). (d) Development of cavitation from the plasma channel using a 10-µJ pump pulse. (e) Bubble radius as a function of time from (d), fitted with cavitation theory.
Fig. 4. Front-view imaging of cavitation dynamics using DRUM photography. (a) Experimental setup. (b) Time-lapse images of a pump-pulse-generated bubble (left column) and the normalized intensity profiles averaged in the radial direction (right column). Scale bar: 200 µm. The green dot marks the first turning point of the line profile from the intensity minimum, which is used to determine the bubble radius. (c) Bubble radius as a function of time, fitted with cavitation theory.
Fig. 5. Imaging the laser ablation of single-layer onion cells using DRUM photography. (a) Time-lapse images. The damaged area is delineated by magenta dotted boxes. (b) Ablated area and ablation rate as functions of time. (c) Lengths of the ablated area in two directions versus time. (d) Damage speeds in two directions versus time.

Equations (2)


$$I_r(x,t) = 2c_r^2\,(1 + \cos m\pi)\left\{\left[\operatorname{sinc}\!\left(\frac{x - f\sin(2\theta_b)}{2\lambda f / w}\right)\sum_{m=-\infty}^{+\infty}\delta\!\left(x - \frac{m\lambda f}{p}\right)\right] \ast \operatorname{sinc}\!\left(\frac{x L_x}{\lambda f}\right)\right\}^2.$$
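The behavior of Eq. (1) can be checked numerically: the sinc envelope, centered at $f\sin(2\theta_b)$, sweeps across the fixed comb of diffraction orders at $x_m = m\lambda f/p$ as the micromirror tilt $\theta_b$ changes, gating one order after another. The sketch below evaluates the envelope at the first few orders; all optical parameters (wavelength, focal length $f$, pitch $p$, mirror width $w$) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed illustrative DMD/optics parameters (not taken from the paper):
lam = 500e-9    # illumination wavelength [m]
f = 0.10        # Fourier-lens focal length [m]
p = 10.8e-6     # micromirror pitch [m]
w = 10.8e-6     # effective micromirror width [m]

def order_intensity(m, theta_b):
    """Relative intensity of diffraction order m at mirror tilt theta_b,
    following the sinc^2 envelope of Eq. (1) (constant prefactor dropped).
    np.sinc(u) = sin(pi*u)/(pi*u), matching the sinc convention used here."""
    x_m = m * lam * f / p                                   # position of order m
    u = (x_m - f * np.sin(2.0 * theta_b)) / (2.0 * lam * f / w)
    return np.sinc(u) ** 2

# As the mirrors tilt, the envelope center sweeps toward higher orders,
# so the brightest diffraction order steps upward with theta_b:
for deg in (0.0, 6.0, 12.0):
    theta = np.radians(deg)
    m_star = max(range(12), key=lambda m: order_intensity(m, theta))
    print(f"theta_b = {deg:4.1f} deg -> brightest order m = {m_star}")
```

With these assumed numbers the envelope width ($2\lambda f/w$) spans roughly two order spacings ($\lambda f/p$), so only one or two orders are bright at any instant, which is the time-gating mechanism the paper describes.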
$$\gamma_{\mathrm{DRUM}} = \frac{2p}{\Delta m\,\lambda}\frac{\mathrm{d}\theta_b}{\mathrm{d}t}.$$
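Equation (2) gives the imaging speed directly from the DMD geometry and the mirror tilt rate. A minimal sketch, with all numbers (pitch, wavelength, tilt sweep, transition time) assumed for illustration rather than taken from the paper:

```python
import math

def drum_frame_rate(p, wavelength, dtheta_dt, delta_m=1):
    """Eq. (2): gamma_DRUM = (2*p / (delta_m * lambda)) * d(theta_b)/dt,
    in frames per second for SI inputs."""
    return 2.0 * p / (delta_m * wavelength) * dtheta_dt

# Assumed illustrative parameters (not values from the paper):
p = 10.8e-6                      # micromirror pitch [m]
wavelength = 500e-9              # illumination wavelength [m]
tilt_sweep = math.radians(24.0)  # tilt sweep, -12 deg to +12 deg [rad]
transition = 3e-6                # assumed inter-pattern transition time [s]

gamma = drum_frame_rate(p, wavelength, tilt_sweep / transition)
print(f"{gamma / 1e6:.1f} Mfps")  # a few Mfps, the same order as the reported 4.8 Mfps
```

Note the scaling: a finer pitch or a slower tilt lowers the frame rate, while skipping orders ($\Delta m > 1$) trades sequence depth for longer inter-frame intervals.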