
Single-shot ultrafast optical imaging


Abstract

Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at 100 million frames per second or above. This rapidly evolving field surpasses conventional pump-probe methods by possessing real-time imaging capability, which is indispensable for recording nonrepeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we survey state-of-the-art single-shot ultrafast optical imaging comprehensively. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image acquisition and reconstruction strategies, these two categories are further divided into a total of six subcategories. Under each subcategory, we describe operating principles, present representative cutting-edge techniques, with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision prospects for technical advancement in this field.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Optical imaging of transient events in their actual time of occurrence exerts compelling scientific significance and practical merits. Occurring in two-dimensional (2D) space and at femtosecond to nanosecond time scales, these transient events reflect many important fundamental mechanisms in physics, chemistry, and biology [1–3]. Conventionally, the pump-probe methods have allowed capture of these dynamics through repeated measurements. However, many ultrafast phenomena are either nonrepeatable or difficult to reproduce. Examples include optical rogue waves [4,5], irreversible crystalline chemical reactions [6], light scattering in living tissue [7], shock waves in laser-induced damage [8], and chaotic laser dynamics [9]. Under these circumstances, the pump-probe methods are inapplicable. In other cases, although reproducible, the ultrafast phenomena have significant shot-to-shot variations and low occurrence rates. Examples include dense plasma generation by high-power, low-repetition laser systems [10,11] and laser-driven implosion in inertial confinement fusion [12]. For these cases, the pump-probe methods would lead to significant inaccuracy and low productivity.

To overcome the limitations in the pump-probe methods, many single-shot ultrafast optical imaging techniques have been developed in recent years. Here, “single-shot” describes capturing the entire dynamic process in real time (i.e., the actual duration in which an event occurs) without repeating the event. Extended from the established categorization [13,14], we define “ultrafast” as imaging speeds at 100 million frames per second (Mfps) or above, which correspond to interframe time intervals of 10 ns or less. “Optical” refers to detecting photons in the extreme ultraviolet to the far-infrared spectral range. Finally, we restrict “imaging” to 2D, i.e., x, y. With the unique capability of recording nonrepeatable and difficult-to-reproduce transient events, single-shot ultrafast optical imaging techniques have become indispensable for understanding fundamental scientific questions and for achieving high measurement accuracy and efficiency.

The prosperity of single-shot ultrafast optical imaging is built upon advances in three major scientific areas. The first contributor is the vast popularization and continuous progress in ultrafast laser systems [15], which enable producing pulses with femtosecond-level pulse widths and up to joule-level pulse energy. The ultrashort pulse width naturally offers outstanding temporal slicing capability. The wide bandwidth (e.g., ~30 nm for a 30-fs, 800-nm Gaussian pulse) allows implementation of different pulse-shaping technologies [16] to attach various optical markers to the ultrashort pulses. The high pulse energy provides a sufficient photon count in each ultrashort time interval for obtaining images with a reasonable signal-to-noise ratio. All three features pave the way for the single-shot ultrafast recording of transient events. The second contributor is the incessantly improving performance of ultrafast detectors [17–22]. New device structures and state-of-the-art fabrication have enabled novel storage schemes, faster electronic responses, and higher sensitivity. These efforts have circumvented the limitations of conventional detectors in on-board storage and readout speeds. The final contributor is the development of new computational frameworks in imaging science. Of particular interest is the effort to apply compressed sensing (CS) [23,24] in spatial and temporal domains to overcome the speed limit of conventional optical imaging systems. These three major contributors have largely propelled the field of single-shot ultrafast optical imaging by improving existing techniques and by enabling new imaging concepts.
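
A quick numerical check of the pulse parameters quoted above, using the standard time-bandwidth product of 0.441 for a transform-limited Gaussian pulse (the pulse width and wavelength are the values given in the text; the script is only a sanity check, not part of any cited system):

```python
# Verify the quoted ~30-nm bandwidth of a 30-fs, 800-nm Gaussian pulse.
c = 3e8            # speed of light, m/s
dt = 30e-15        # pulse width (FWHM), s
lam = 800e-9       # center wavelength, m

dnu = 0.441 / dt                      # transform-limited spectral width, Hz
dlam = lam**2 / c * dnu               # converted to wavelength, m
print(f"bandwidth: {dlam * 1e9:.0f} nm")          # ~31 nm, consistent with ~30 nm

# The review's speed threshold: 100 Mfps corresponds to a 10-ns interframe interval.
print(f"interframe interval: {1 / 100e6 * 1e9:.0f} ns")
```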

In this mini-review, we provide a comprehensive survey of the cutting-edge techniques in single-shot ultrafast optical imaging and their associated applications. The limited article length precludes covering the ever-expanding technical scope in this area, resulting in possible omissions. We restrict the scope of this review by our above-described definition of single-shot ultrafast optical imaging. As a result, multiple-shot ultrafast optical imaging methods—including various pump-probe arrangements implemented via either temporal scanning by ultrashort probe pulses and ultrafast gated devices [25–29] or spatial scanning using zero-dimensional and one-dimensional (1D) ultrafast detectors [18,30,31]—are not discussed. In addition, existing single-shot optical imaging modalities at less than 100-Mfps imaging speeds, such as ultrahigh-speed sensors [32], rotating mirror cameras [33], and time-stretching methods [34–36], are excluded. Finally, ultrafast imaging using nonoptical sources, such as electron beams, x rays, and terahertz radiation [37–39], falls out of scope here. Interested readers can refer to the selected references listed herein and the extensive literature elsewhere.

The subsequent sections are organized by the following conceptual structure [Fig. 1(a)]. According to the illumination requirement, single-shot ultrafast optical imaging can be categorized into active detection and passive detection. The active-detection domain exploits specially designed pulse trains for probing 2D transient events [Fig. 1(b)]. Depending on the optical marker carried by the pulse train, the active-detection domain can be further separated into four methods, namely, space division, angle division, temporal wavelength division, and spatial frequency division. The passive-detection domain leverages receive-only ultrafast detectors to record 2D dynamic phenomena [Fig. 1(c)]. It is further divided into two image formation methods—direct imaging and reconstruction imaging. The in-depth description of each method starts with its basic principle, followed by descriptions of representative techniques, their applications, and their strengths and limitations. In the last section, a summary and an outlook are provided to conclude this mini-review.


Fig. 1. (a) Categorization of single-shot ultrafast optical imaging in two detection domains and six methods with 11 representative techniques; (b) conceptual illustration of active-detection-based single-shot ultrafast optical imaging. Colors represent different optical markers. (c) Conceptual illustration of passive-detection-based single-shot ultrafast optical imaging.


2. ACTIVE-DETECTION DOMAIN

In general, the active-detection domain works by using an ultrafast pulse train to probe a transient event in a single image acquisition. Each pulse in the pulse train carries a unique optical marker [e.g., different spatial positions, angles, wavelengths, states of polarization (SOPs), or spatial frequencies]. Leveraging the properties of these markers, novel detection mechanisms are deployed to separate the transmitted probe pulses after the transient scene in the spatial or spatial frequency domain to recover the (x,y,t) data cube. In the following, we will present four methods with six representative techniques in this domain.

A. Space Division

The first method in this domain is space division. Extended from Muybridge’s famous imaging setup to capture a horse in motion [40], this method constructs a probe pulse train with each pulse occupying a different spatial and temporal position. Synchronized with a high-speed object traversing through the field of view (FOV), each probe pulse records the object’s instantaneous position at a different time. These probe pulses are projected onto different spatial areas of a detector, which thus records the (x,y,t) information. Two representative techniques are presented in this subsection.

1. Single-Shot Femtosecond Time-Resolved Optical Polarimetry

Figure 2(a) shows the schematic of the single-shot femtosecond time-resolved optical polarimetry (SS-FTOP) system for imaging ultrashort pulses’ propagation in an optical nonlinear medium [41]. An ultrafast Ti:sapphire laser generated 800-nm, 65-fs pulses, which were split by a beam splitter to generate pump and probe pulses. The pump pulse, reflected from the beam splitter, passed through a half-wave plate for polarization control and then was focused into a 10-mm-long Kerr medium. The transmitted pulse from the beam splitter was frequency doubled by a β-barium borate (BBO) crystal. Then, it was incident on a four-step echelon [42], which had a step width of 0.54 mm and a step height of 0.2 mm. This echelon, therefore, produced four probe pulses with an interpulse time delay of 0.96 ps, corresponding to an imaging speed of 1.04 trillion frames per second (Tfps). These probe pulses were incident on the Kerr medium orthogonally to the pump pulse. The selected step width and height of the echelon allowed each probe pulse to arrive coincidentally with the propagating pump pulse in the Kerr medium. A pair of polarizers, whose polarization axes were rotated 90° relative to each other, sandwiched the Kerr medium [43]. As a result, the birefringence induced by the pump pulse allowed part of the probe pulses to be transmitted to a charge-coupled device (CCD) camera. The transient polarization change thus provided contrast to image the pump pulse’s propagation. Each of the four recorded frames had 41 pixels × 60 pixels [44]. SS-FTOP’s temporal resolution, determined by the probe pulse width and the pump pulse’s lateral size, was 276 fs.
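
To make the space-division bookkeeping concrete, the sketch below slices a single detector exposure into its per-pulse tiles and stacks them into an (x, y, t) cube. The 2 × 2 tile layout and the random array standing in for the CCD image are illustrative assumptions, not SS-FTOP’s actual detector format; only the 41 pixel × 60 pixel frame size and the 0.96-ps interpulse delay come from the text.

```python
import numpy as np

ny, nx = 60, 41                                  # frame size quoted for SS-FTOP
detector = np.random.rand(2 * ny, 2 * nx)        # stand-in for the recorded CCD image

# Slice the four tiles apart; each tile is one time point (0, 0.96, 1.92, 2.88 ps).
frames = [detector[i * ny:(i + 1) * ny, j * nx:(j + 1) * nx]
          for i in range(2) for j in range(2)]
cube = np.stack(frames)                          # (t, y, x) cube, shape (4, 60, 41)
print(cube.shape, f"{1 / 0.96e-12 / 1e12:.2f} Tfps")   # 4 frames at ~1.04 Tfps
```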


Fig. 2. SS-FTOP based on space division followed by varied time delays for imaging a single ultrashort pulse’s propagation in a Kerr medium (adapted from [41]). (a) Schematic of experimental setup; (b) two sequences of single ultrashort pulses’ propagation. The time interval between adjacent frames is 0.96 ps. (c) Transverse intensity profiles of the first frames of the two single-shot observations in (b).


Figure 2(b) shows two sequences of propagation dynamics of single, 45-μJ laser pulses in the Kerr medium. In spite of nearly constant pulse energy, the captured pulse profiles varied from shot to shot, attributed to self-modulation-induced complex structures within the pump pulses [45]. As a detailed comparison, the transverse (i.e., x axis) profiles of the first frames of both measurements [Fig. 2(c)] show different intensity distributions. In particular, two peaks are observed in the first measurement, indicating the formation of double filaments during the pump pulse’s propagation [45].

SS-FTOP can fully leverage the ultrashort laser pulse width to achieve exceptional temporal resolution and imaging speeds. With the recent progress in attosecond laser science [46,47], the imaging speed could exceed one quadrillion (i.e., 10^15) fps. However, this technique has three limitations. First, using the echelon to induce the time delay discretely, SS-FTOP has a limited sequence depth (i.e., the number of frames captured in one acquisition). In addition, the sequence depth conflicts with the imaging FOV. Finally, SS-FTOP can only image ultrafast moving objects with nonoverlapping trajectories.

2. Light-in-Flight Recording by Holography

Invented in the 1970s [48], this technique records time-resolving holograms by using an ultrashort reference pulse sweeping through a holographic recording medium [e.g., a film [49] for conventional holography or a CCD camera [50] for digital holography (DH)]. The pulse front of the obliquely incident reference pulse intersects with different spatial areas of the recorded holograms, generating time-resolved views of the transient event.

As an example, Fig. 3(a) illustrates a DH system for light-in-flight (LIF) recording of a femtosecond pulse’s sweeping motion on a patterned plate [51]. An ultrashort laser pulse (800-nm wavelength and 96-fs pulse width), from a mode-locked Ti:sapphire laser, was divided into an illumination pulse and a reference pulse. Both pulses were expanded and collimated by beam expanders. The illumination pulse was incident on a diffuser plate at 0.5° against the surface normal. A USAF resolution test chart was imprinted on the plate. The light scattered from the diffuser plate formed the signal pulse. The reference pulse was directed to a CCD camera with an incident angle of 0.5°. When the time of arrival of scattered photons coincided with that of the reference pulse, interference fringes were formed. Images were digitally reconstructed from the acquired hologram. The temporal resolution of LIF-DH was mainly determined by the FOV, incident angle, and coherence length [51].
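
The core numerical step of such holographic recording is demodulating an off-axis hologram: the tilted reference shifts the object information to a spatial-frequency sideband, which is windowed out and inverse transformed to recover the complex field. The sketch below shows this step on a synthetic hologram; the toy phase object, carrier frequency, and window size are arbitrary choices, and the Fresnel back-propagation that would refocus the field is omitted.

```python
import numpy as np

N = 512
y, x = np.mgrid[:N, :N]
obj = np.exp(1j * 2 * np.pi * np.exp(-((x - N/2)**2 + (y - N/2)**2) / 5e3))  # toy phase object
ref = np.exp(1j * 2 * np.pi * (0.15 * x + 0.1 * y))      # tilted reference -> carrier fringes
hologram = np.abs(obj + ref)**2                          # intensity recorded by the camera

F = np.fft.fftshift(np.fft.fft2(hologram))
cy, cx = N // 2 + int(0.1 * N), N // 2 + int(0.15 * N)   # +1-order sideband center
w = 40                                                   # sideband window half-width
side = np.zeros_like(F)
side[N//2 - w:N//2 + w, N//2 - w:N//2 + w] = F[cy - w:cy + w, cx - w:cx + w]
field = np.fft.ifft2(np.fft.ifftshift(side))             # complex object field (up to a conjugate)
print(np.angle(field).std())                             # statistics of the recovered phase map
```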


Fig. 3. LIF recording by DH based on obliquely sweeping the reference pulse on the imaging plane, a form of space division [51]. (a) Experimental setup for recording the hologram; (b) representative frames of single ultrashort laser pulses’ movement on a USAF resolution target. The time interval between frames is 192 fs. The white arrow points to the features of the USAF resolution target.


Figure 3(b) shows the sequence of the femtosecond laser pulse sweeping over the patterned plate. In this example, seven subholograms (512 pixels × 512 pixels in size), extracted from the digitally recorded hologram, were used to reconstruct frames. The temporal resolution was 88 fs, corresponding to an effective imaging speed of 11.4 Tfps. In each frame, the bright spot at the center is the zeroth-order diffraction, and the reconstructed femtosecond laser pulse is shown as the circle. The USAF resolution target can be seen in the reconstructed frames. The sweeping motion of the femtosecond laser pulse is observed by sequentially displaying the reconstructed frames.

Akin to SS-FTOP, LIF holography fully leverages the ultrashort pulse width to achieve femtosecond-level temporal resolution. The sequence depth is also inversely proportional to the imaging FOV. Different from SS-FTOP, the FOV in LIF holography is tunable in reconstruction, which could provide a much-improved sequence depth. In addition, in theory, provided there were a sufficiently wide scattering angle and a long-coherence reference pulse, LIF holography could be used to image ultrafast moving objects with complex trajectories and other spatially overlapped transient phenomena. However, the shape of the reconstructed object is determined by the geometry for which the condition of the same time of arrival between the reference and the signal pulses is satisfied [51]. The shape is also subject to the observation position [52]. Finally, because it is based on interferometry, this technique cannot be used to image transient scenes with incoherent light emission.

B. Angle Division

In the angle-division method, the transient event is probed from different angles in two typical arrangements. The first scheme uses an angularly separated and temporally distinguished ultrashort pulse train, which can be generated by dual echelons with focusing optics [53]. The second scheme uses longer pulses to cover the entire duration of the transient event. These pulses probe the transient event simultaneously. Each probe pulse records a projected view of the transient scene from a different angle. This scheme, implementing the Radon transformation in the spatiotemporal domain, allows leveraging well-established methods in medical imaging modalities, such as x-ray computed tomography (CT), and has therefore received extensive attention in recent years. Here, we discuss a representative technique of the second scheme.

Figure 4(a) shows the system setup of single-shot frequency-domain tomography (SS-FDT) [54] for imaging transient refractive index perturbation. An 800-nm, 30-fs, 0.7-μJ pump pulse induced a transient refractive index structure (Δn) in a fused silica glass due to the nonlinear refractive index dependence and pump-generated plasma. This structure evolved at a luminal speed. In addition, a pair of pulses, directly split from the pump pulse (each with a 30-μJ pulse energy), were crossed spatiotemporally in a BBO crystal that was sandwiched between two HZF4 glasses. The first HZF4 glass generated a fan of up to eight 800-nm daughter pulses by cascaded four-wave mixing. The BBO crystal doubled the frequency as well as increased the number of pulses. The second HZF4 glass chirped these frequency-doubled daughter pulses to 600 fs. In the experiment, five angularly separated probe pulses were selected.


Fig. 4. SS-FDT for imaging transient refractive index perturbation based on angle division followed by spectral imaging holography [54]. (a) Schematic of the experimental setup. Upper-left inset, principle of imprinting a phase streak in a probe pulse (adapted from [55]); θ, incident angle of the probe pulse; Δn, refractive index change; (b) phase streaks induced by the evolving refractive index profile. x_pr and z_pr(loc), the transverse and the longitudinal coordinates of the probe pulses; (c) representative snapshots of the refractive index change using a pump energy of E = 0.7 μJ; x_ob and z_ob(loc), the transverse and the longitudinal coordinates of the object.


The HZF4/BBO/HZF4 structure was directly imaged onto the target plane. The five probe pulses illuminated the target at probing angles θ. Because all the probe pulses were generated at the same time from the same origin, they overlapped both spatially and temporally at the target. The transient scene was measured by spectral imaging interferometry [55–57]. Specifically, the transient refractive index structure imprinted a phase streak in each probe pulse [see the upper-left inset in Fig. 4(a)]. Before these probe pulses arrived, a chirped 400-nm reference pulse, directly split from the pump pulse, recorded the phase reference. The reference pulse and the transmitted probe pulses were imaged onto the entrance slit of an imaging spectrometer. Inside the spectrometer, a diffraction grating and focusing optics broadened both the reference pulse and the probe pulses, which overlapped temporally to form a frequency-domain hologram [58] on a CCD camera. In the experiment, this hologram recorded all five projected views of the transient event.

Three steps were taken in image reconstruction. First, a 1D inverse Fourier transform was taken to convert the spatial axis into the incident angle. This step, therefore, separated the five probe pulses in the spatial frequency domain. Second, by windowing, shifting, and inverse transforming the hologram area associated with each probe pulse, five phase streaks were retrieved [Fig. 4(b)]. Finally, these streaks were fed into a tomographic image reconstruction algorithm (e.g., algebraic reconstruction technique [54,59]) to recover the evolution of the transient scene. The reconstructed movie had a 4.35-Tfps imaging speed, an approximately 3-ps temporal resolution, and a sequence depth of 60 frames. Each frame was 128 pixels × 128 pixels in size [60].
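
The third step relies on tomographic inversion from only five viewing angles. Below is a minimal sketch of the algebraic reconstruction technique (Kaczmarz’s row-by-row projections) on a toy system; the random matrix A and vector b are placeholders for the real projection operator and measured phase streaks, and the deliberately underdetermined sizes mirror the sparse angular sampling discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 40                          # unknowns (scene voxels) > measurements (ray sums)
x_true = rng.random(n)
A = rng.random((m, n))                 # each row ~ one line integral through the scene
b = A @ x_true

x = np.zeros(n)
for sweep in range(50):                # cycle repeatedly over the rays
    for i in range(m):
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a    # project x onto the hyperplane a.x = b[i]

# A residual error remains because 40 rays underdetermine 64 unknowns -- the
# same few-angle ambiguity that produces artifacts in SS-FDT reconstructions.
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```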

Figure 4(c) shows representative frames of the nonlinear optical dynamics induced by an intense laser pulse’s propagation in a fused silica glass. The reconstructed movie revealed that the pulse experienced self-focusing within 7.4 ps. Then, an upper spatial lobe, split off from the main pulse, appeared at 9.8 ps, attributed to laser filamentation. After that, a steep-walled index “hole” was generated near the center of the profile at 12.2 ps, indicating that the generated plasma had induced a negative index change that locally offset the laser-induced positive nonlinear refractive index change.

The synergy of CT and spectral imaging interferometry has endowed SS-FDT with the advantages of a large sequence depth, the ability to record complex amplitude, and the capability of observing dynamic events along the probe pulses’ propagation directions. However, the sparse angular sampling results in artifacts that strongly affect the reconstructed images [61], which thus limits the spatial and temporal resolutions.

C. Temporal Wavelength Division

The third method in the active-detection domain is temporal wavelength division. Rooted in time-stretching imaging [34–36], the wavelength-division method attaches different wavelengths to individual pulses. After probing the transient scene, a 2D (x, y) image at a specific time point is stamped with unique spectral information. At the detection side, dispersive optical elements spectrally separate the transmitted probe pulses, which allows directly obtaining the (x,y,t) data cube.

The representative technique discussed in this subsection is sequentially timed all-optical mapping photography (STAMP) [62]. As shown in Fig. 5(a), a femtosecond pulse first passed through a temporal mapping device that comprised a pulse stretcher and a pulse shaper. Depending on specific experimental requirements, the pulse stretcher used different dispersing materials to increase the pulse width and to induce spectral chirping. Then, a classic 4f pulse shaper [63] filtered selected wavelengths, generating a pulse train containing six wavelength-encoded pulses [see an example in the upper-left inset in Fig. 5(a)] to probe the transient scene. After the scene, these transmitted pulses went through a spatial mapping unit, which used a diffraction grating and imaging optics to separate them in space. Finally, these pulses were recorded at different areas on an imaging sensor. Each frame is 450 pixels × 450 pixels in size.


Fig. 5. STAMP based on temporal wavelength division. (a) System schematic of STAMP [62]; upper inset, normalized intensity profiles of the six probe pulses with an interframe time interval of 229 fs (corresponding to a frame rate of 4.4 Tfps) and an exposure time of 733 fs; lower insets, schematics of the temporal mapping device and the spatial mapping device; (b) single-shot imaging of electronic response and phonon formation at 4.4 Tfps [62]; (c) schematic setup of spectrally filtered (SF)-STAMP [65]; f1 and f2, focal lengths of lenses. BPF, bandpass filter; DOE, diffractive optical element; (d) full sequence of crystalline-to-amorphous phase transition in Ge2Sb2Te5 captured by the SF-STAMP system with an interframe time interval of 133 fs (corresponding to an imaging speed of 7.52 Tfps) and an exposure time of 465 fs [65].


The STAMP system has been deployed in visualizing light-induced plasmas and phonon propagation in materials. A 70-fs, 40-μJ laser pulse was cylindrically focused into a ferroelectric crystal wafer at room temperature to produce coherent phonon-polariton waves. The laser-induced electronic response and phonon formation were captured at 4.4 Tfps with an exposure time of 733 fs per frame [Fig. 5(b)]. The first two frames show the irregular and complex electronic response of the excited region in the crystal. The following frames show an upward-propagating coherent vibration wave.

Since STAMP’s debut in 2014, various developments [64–67] have reduced the system’s complexity and improved its specifications. For example, the schematic setup of the spectrally filtered (SF)-STAMP system [65] is shown in Fig. 5(c). SF-STAMP abandoned the pulse shaper in the temporal mapping device. Consequently, instead of using several temporally discrete probe pulses, SF-STAMP used a single frequency-chirped pulse to probe the transient event. At the detection side, SF-STAMP adopted a single-shot ultrafast pulse characterization setup [68,69]. In particular, a diffractive optical element (DOE) generated spatially resolved replicas of the transmitted probe pulse. These replicas were incident on a tilted bandpass filter, which selected different transmissive wavelengths according to the incident angles [68]. Consequently, the sequence depth was equal to the number of replicas produced by the DOE. The imaging speed and the temporal resolution were limited by the transmissive wavelength range.
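
The angle-dependent passband of the tilted filter is what converts the DOE’s angular replicas into distinct wavelength (and hence time) samples. A sketch using the standard thin-film tilt-tuning relation follows; the normal-incidence center wavelength and effective cavity index are assumed illustrative values, not the published SF-STAMP parameters.

```python
import numpy as np

lam0 = 800.0       # normal-incidence center wavelength, nm (assumed)
n_eff = 2.0        # effective refractive index of the filter cavity (assumed)

# Standard tilt-tuning relation: the passband blueshifts as the incidence angle grows.
for theta_deg in (0, 5, 10, 15, 20):
    theta = np.radians(theta_deg)
    lam = lam0 * np.sqrt(1 - (np.sin(theta) / n_eff)**2)
    print(f"{theta_deg:2d} deg -> {lam:6.1f} nm")
```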

Figure 5(d) shows the full sequence (a total of 25 frames) of the crystalline-to-amorphous phase transition of Ge2Sb2Te5 alloy captured by SF-STAMP at 7.52 Tfps (with an exposure time of 465 fs). Each frame is 400 pixels × 300 pixels in size. The gradual change in the probe laser transmission up to 660 fs is clearly shown, demonstrating the phase transition process. In comparison with this amorphized area, the surrounding crystalline areas retained high reflectance. The sequence also shows that the phase-change domain did not spatially spread to the surrounding area. This observation has verified the theoretical model that attributed the initiation of nonthermal amorphization to Ge-atom displacements from octahedral to tetrahedral sites [65,70].

The SF-STAMP technique has achieved one of the highest imaging speeds (7.52 Tfps) in active-detection-based single-shot ultrafast optical imaging. The original STAMP system, although having a high light throughput, is restricted by a low sequence depth. SF-STAMP, although having increased the sequence depth, significantly sacrifices the light throughput. In addition, the trade-off between the pulse width and spectral bandwidth limits STAMP’s temporal resolution. Finally, the STAMP techniques are applicable only to color-neutral objects.

D. Spatial Frequency Division

The last method discussed in the active-detection domain is spatial frequency division. It works by attaching different spatial carrier frequencies to the probe pulses. The transmitted probe pulses are spatially superimposed at the detector. In the ensuing image reconstruction, the temporal information associated with each probe pulse is separated in the spatial frequency domain, which allows recovering the (x,y,t) data cube. Two representative techniques are presented for this method.

1. Time-Resolved Holographic Polarization Microscopy

In the time-resolved holographic polarization microscopy (THPM) system [Fig. 6(a)] [71], a pulsed laser generated a pump pulse and a probe pulse by using a beam splitter. The pump pulse illuminated the sample. The probe pulse was first frequency doubled by a potassium dihydrogen phosphate crystal, tuned to circular polarization by a quarter-wave plate, and then split by another beam splitter into two pulses with a time delay. Each probe pulse passed through a 2D grating and was further split into signal pulses and reference pulses. The orientation of the grating set the polarization of the diffracted light at 45° to the x axis. Two signal pulses were generated by selecting only the zeroth diffraction order with a pinhole filter. The reference pulses passed through a four-pinhole filter, on which two linear polarizers were placed side by side [see the lower-right inset in Fig. 6(a)]. The orientations of the linear polarizers P2 and P3 were along the x axis and the y axis, respectively. As a result, a total of four reference pulses (i.e., R_t1,x, R_t1,y, R_t2,x, R_t2,y), each carrying a different combination of arrival time and SOP, were generated. After the sample, the reference pulses interfered on a CCD camera (2048 pixels × 2048 pixels) with the transmitted signal pulses that had the same arrival times. There, the interference fringes between the signal pulses and the four reference pulses had different spatial frequencies. Therefore, a frequency-multiplexed hologram was recorded [Fig. 6(b)].


Fig. 6. THPM based on time delays and spatial frequency division of the reference pulses for imaging laser-induced damage of a mica lamina sample (adapted from [71]). (a) Schematic of experimental setup. Black arrows indicate the pulses’ SOPs. KDP, potassium dihydrogen phosphate; lower-right inset, generation of four reference pulses; (b) recorded hologram of a USAF resolution target. The zoom-in picture shows the detailed interferometric pattern of this hologram. (c) Spatial frequency spectrum of (b); (d) time-resolved multicontrast imaging of ultrafast laser-induced damage in a mica lamina sample.


In image reconstruction, the acquired hologram was first Fourier transformed. Information carried by each carrier frequency was separated in the spatial frequency domain [Fig. 6(c)]. Then, by windowing, shifting, and inverse Fourier transforming the hologram region associated with each carrier frequency, four images of complex amplitudes were retrieved. Finally, the phase information in the complex amplitude was used to calculate the SOP, in terms of azimuth and phase difference [72].
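
As a sketch of that last step, the snippet below derives the azimuth and phase difference from a pair of complex field components using standard Stokes-parameter conventions. The scalar Ex and Ey are synthetic stand-ins for the complex amplitudes that THPM reconstructs at each pixel.

```python
import numpy as np

Ex = 1.0 * np.exp(1j * 0.0)                 # x-polarized component (synthetic)
Ey = 0.7 * np.exp(1j * 0.4 * np.pi)         # y component with a 0.4*pi phase lead

S1 = np.abs(Ex)**2 - np.abs(Ey)**2          # Stokes parameters from the fields
S2 = 2 * np.real(Ex * np.conj(Ey))
azimuth = 0.5 * np.arctan2(S2, S1)          # orientation of the polarization ellipse
delta = np.angle(Ey) - np.angle(Ex)         # phase difference between components

print(f"azimuth: {np.degrees(azimuth):.1f} deg, phase difference: {delta/np.pi:.2f} pi")
```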

THPM has been used for real-time imaging of laser-induced damage [Fig. 6(d)]. The sample was a mica lamina plate obtained by mechanical exfoliation. A pump laser damaged the plate with a fluence of >40 J/cm². THPM captured the initial amplitude and phase changes at 0.1 ns and 1.7 ns after the pump pulse, corresponding to a sequence depth of two frames and an imaging speed of 625 Mfps. The images revealed the generation and propagation of shock waves. In addition, the movie reflects nonuniform changes in amplitude and phase, owing to the sample’s anisotropy. Furthermore, the SOP analysis revealed a phase change of 0.4π and an azimuth angle of 36° at the initial stage. Complex structures in azimuth and phase difference were observed at the 1.7-ns delay, indicating anisotropic changes in transmission and refractive index during laser irradiation.

2. Frequency Recognition Algorithm for Multiple Exposures Imaging

The second technique is frequency recognition algorithm for multiple exposures (FRAME) imaging [73]. Instead of forming fringes via interference, FRAME attaches various carrier frequencies to probe pulses using intensity modulation. In the setup [Fig. 7(a)], a 125-fs pulse output from an ultrafast laser was split into four subpulses, each having a specified time delay. The intensity profile of each subpulse was modulated by a Ronchi grating with an identical period but a unique orientation. These Ronchi gratings, providing sinusoidal intensity modulation to the probe pulses, were directly imaged to the measurement volume. As a result, the spatially modulated illumination patterns were superimposed onto the dynamic scene in the captured single image. In the spatial frequency domain, the carrier frequency of each sinusoidal pattern shifted the band-limited frequency content of the scene to unexploited areas. Thus, the temporal information of the scene, conveyed by sinusoidal patterns with different angles, was separated in the spatial frequency domain without any cross talk. Following a procedure similar to that of THPM, the sequence could be recovered. The FRAME imaging system has captured the propagation of a femtosecond laser pulse through CS2 liquid at imaging speeds up to 5 Tfps with a frame size of 1002 pixels × 1004 pixels. The temporal resolution, limited by the imaging speed, was 200 fs. Using a Kerr-gate setup similar to the one in Fig. 2(a), the transient refractive index change was used as the contrast to indicate the pulse’s propagation [Figs. 7(b) and 7(c)].
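
The essence of FRAME is that orientation-coded sinusoidal carriers let several frames share one exposure without cross talk. The toy end-to-end sketch below encodes four synthetic frames on carriers at four orientations, sums them into a single snapshot, and demultiplexes each frame by windowing its carrier peak in the Fourier domain; all sizes, carrier frequencies, and the moving-line scene are arbitrary choices, not FRAME’s published parameters.

```python
import numpy as np

N, f0 = 256, 32                               # image size; carrier frequency, cycles/FOV
y, x = np.mgrid[:N, :N] / N
frames = [np.roll(np.eye(N), s, axis=1) for s in (0, 40, 80, 120)]  # toy moving scene

angles = np.radians([0, 45, 90, 135])
carriers = [0.5 * (1 + np.cos(2 * np.pi * f0 * (x * np.cos(a) + y * np.sin(a))))
            for a in angles]
snapshot = sum(f * c for f, c in zip(frames, carriers))   # single coded exposure

F = np.fft.fftshift(np.fft.fft2(snapshot))
w = 12                                                    # demultiplexing window half-width
for a in angles:
    cy = N // 2 + int(round(f0 * np.sin(a)))              # this carrier's sideband center
    cx = N // 2 + int(round(f0 * np.cos(a)))
    win = np.zeros_like(F)
    win[N//2 - w:N//2 + w, N//2 - w:N//2 + w] = F[cy - w:cy + w, cx - w:cx + w]
    frame = np.abs(np.fft.ifft2(np.fft.ifftshift(win)))   # one demodulated frame
    print(frame.max())
```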


Fig. 7. FRAME imaging based on spatial frequency division of the probe pulses [73]. (a) System schematic; (b) sequence of reconstructed frames of a propagating femtosecond light pulse in a Kerr medium at 5 Tfps. The white dashed arc in the 600-fs frame indicates the pulse’s position at 0 fs. (c) Vertically summed intensity profiles of (b).


Akin to the space-division method (Section 2.A), the generation of the probe pulse train using the spatial frequency division method does not rely on dispersion. Thus, the preservation of the laser pulse’s entire spectrum allows simultaneously maximizing the frame rate and the temporal resolution. The carrier frequency can be attached via either interference or intensity modulation. The former offers the ability of directly measuring the complex amplitude of the transient event. The latter allows easy adaptation to other ultrashort light sources, such as subnanosecond flash lamps [74] and LEDs [75,76]. By integrating imaging modalities that can sense other photon tags (e.g., polarization), it is possible to achieve high-dimensional imaging of ultrafast events. The major limitation of the frequency-division method, similar to its space-division counterpart, is the limited sequence depth. To increase the sequence depth, either the FOV or spatial resolution must be sacrificed.

3. PASSIVE-DETECTION DOMAIN

A transient event can also be passively captured in a single measurement. In this domain, the ultrahigh temporal resolution is provided by receive-only ultrafast detectors without the need for active illumination. The transient event is either imaged directly or reconstructed computationally. Compared with active detection, passive detection has unique advantages in imaging self-luminescent and color-selective transient scenes or distant scenes that are light-years away. In the following, we will present two passive-detection methods implemented in five representative techniques.

A. Direct Imaging

The first method in the passive-detection domain is direct imaging using novel ultrafast 2D imaging sensors. In spite of limitations in pixel counts, sequence depth, or light throughput, this method has the highest technological maturity, as manifested by the successful commercialization and wide application of various products. Here, three representative techniques are discussed.

1. In Situ Storage Image Sensor CCD

The in situ storage image sensor (ISIS) CCD camera uses a novel charge transfer and storage structure to achieve ultrahigh imaging speeds. As an example, the ISIS CCD camera manufactured by DALSA [77] (Fig. 8) had 64 pixels × 64 pixels, each with a 100-μm pitch. Each pixel contained a photosensitive area (varying from 10 × 10 μm² to 18 × 18 μm² in size, corresponding to a fill factor of 1–3%), two readout gates (PV and PV¯ in Fig. 8), and 16 charge storage/transfer elements (arranged into two groups of eight elements with opposite transfer directions). Transfer elements from adjacent pixels constituted continuous two-phase CCD registers in the vertical direction. Horizontal CCD registers with multiport readout nodes were distributed at the top and the bottom of the sensor. The two readout gates, operating out of phase in burst mode, transferred photo-generated charges alternately into the up and down groups of storage elements within each frame’s exposure time of 10 ns (corresponding to an imaging speed of 100 Mfps). During image capture, charges generated by the odd- or even-numbered frames filled up the storage elements on both sides without being read out, thereby bypassing the readout, which is the bottleneck in speed. After a full integration cycle (16 exposures in total), the sensor was reset, and all time-stamped charges were read out in a conventional manner. As a result, the interframe time interval could be decreased to the transfer time of an image signal to the in situ storage [78]. The ultrafast imaging ability of this ISIS-based CCD camera has been demonstrated by imaging a 4-ns pulsed LED light source [77].
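
The headline numbers above follow from simple geometry and timing, as the short arithmetic check below verifies (all values are taken directly from the text):

```python
# Fill factor = photosensitive area / pixel area; frame rate = 1 / exposure time.
pitch = 100.0                                   # pixel pitch, um
for side in (10.0, 18.0):                       # photosensitive-area side length, um
    print(f"{side:.0f}-um aperture -> fill factor {side**2 / pitch**2:.1%}")  # 1.0%, ~3%
print(f"10-ns exposure -> {1 / 10e-9 / 1e6:.0f} Mfps")
```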


Fig. 8. Structure of DALSA’s ISIS CCD camera based on on-chip charge transfer and storage (adapted from [77]). The sensor has 64 pixels × 64 pixels, of which six are shown here. Arrows indicate the charge transfer directions.


The DALSA ISIS-based CCD camera is, to our best knowledge, currently the fastest CCD camera. Based on mature fabrication technologies, this camera holds great potential for further development into a consumer product. However, its limited pixel count, low sequence depth, and extremely low fill factor currently leave it far below most users’ requirements [78].

2. Ultrafast Framing Camera

In general, the ultrafast framing camera (UFC) is built upon the operation of beam splitting with ultrafast time gating [17]. In an example [79] schematically shown in Fig. 9(a), a pyramidal beam splitter with an octagonal base generated eight replicated images of the transient scene. Each image was projected onto a time-gated intensified CCD camera. The onset and width of the time gate for each intensified CCD camera were precisely controlled to capture successive temporal slices [i.e., 2D spatial (x, y) information at a given time point] of the transient event. Recent advances in ultrafast electronics have enabled interframe time intervals as short as 10 ps and a gate time of 200 ps [80]. The implementation of new optical designs has increased the sequence depth to 16 frames [81].


Fig. 9. UFC based on beam splitting along with ultrafast time gating. (a) Schematic of a UFC [79]; (b) schematic of shadowgraph imaging of cylindrical shock waves using a UFC (adapted from [85]); inset, configuration of the multilayered target; (c) sequence of captured shadowgraph frames showing the convergence and subsequent divergence of the shock waves generated by a laser excitation ring (red dashed circle in the first frame) in the target [85]. The shock front is indicated by the white arrows. Additional rings and structural instabilities are indicated by the blue arrows and orange arrows, respectively.


UFCs have been used in widespread applications, including ballistic testing, materials characterization, and fluid dynamics [82–84]. Figure 9(b) shows the setup for ultrafast shadowgraph imaging of cylindrically propagating shock waves using a UFC [85]. In the experiment, a multistage amplified Ti:sapphire laser generated pump and probe pulses. The pump pulse, with a 150-ps pulse width and a 1-mJ pulse energy, was converted into a ring shape (with a 150-μm inner diameter and an 8-μm width) by a 0.5° axicon and a 30-mm-focal-length lens. This ring pattern was used to excite a shock wave on the target. The probe pulse, compressed to a 130-fs pulse width, passed through a frequency-doubling Fabry–Perot cavity. The output 400-nm probe pulse train was directed through the target, a 10-μm-thick ink-water layer sandwiched between two sapphire wafers. The transient density variations of the shock wave altered the refractive index in the target, causing light refraction in the probe pulses. The transmitted probe pulses were imaged onto a UFC (Specialised Imaging, Inc.) able to capture 16 frames (1360 pixels × 1024 pixels) at an imaging speed of 333 Mfps with a temporal resolution of 3 ns [81].

Single-shot ultrafast shock imaging has allowed tracking of nonreproducible geometric instabilities [Fig. 9(c)]. The sequence revealed asymmetric structures of converging and diverging shock waves. While imperfect circles from the converging shock front were observed, the diverging wave maintained a nearly circular structure. The precise evolution was different in each shock event, which is characteristic of converging shock waves [86]. In addition, faint concentric rings [indicated by blue arrows in Fig. 9(c)] were seen in these events. Tracking these faint features has allowed detailed studies of shock behavior (e.g., substrate shocks and coupled wave interactions) [85].

3. High-Speed Sampling Camera

Streak cameras are ultrafast detectors with temporal resolutions down to femtoseconds [22]. In the conventional operation [Fig. 10(a)], incident photons first pass through a narrow (typically 50–200 μm wide) entrance slit [along the x axis in Fig. 10(a)]. The image of this entrance slit is formed on a photocathode, where photons are converted into photoelectrons via the photoelectric effect. These photoelectrons are accelerated by a pulling voltage and then pass between a pair of sweep electrodes. A linear voltage ramp is applied to the electrodes so that photoelectrons are deflected to different vertical positions [along the y axis in Fig. 10(a)] according to their times of arrival. The deflected photoelectrons are amplified by a microchannel plate and then converted back to photons by bombarding a phosphor screen. The phosphor screen is imaged onto an internal imaging sensor [e.g., a CCD or a complementary metal–oxide–semiconductor (CMOS) camera], which records the time course with a 1D FOV [i.e., an (x,t) image at a specific y position]. Because the streak camera’s operation requires one spatial axis of the internal imaging sensor to record the temporal information, the narrow entrance slit confines its FOV. Therefore, the conventional streak camera is a 1D ultrafast imaging device.
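
Because the sweep voltage ramps linearly, converting a streak image into time-resolved traces is a simple linear mapping from the sweep axis to arrival time. The short sketch below illustrates this calibration; the sweep rate and image dimensions are assumed values for illustration, not specifications from the text.

```python
import numpy as np

sweep_rate = 0.5                                # assumed calibration, ps per pixel row
streak = np.random.rand(512, 256)               # toy streak image: rows = time, cols = x

t_axis = np.arange(streak.shape[0]) * sweep_rate     # time stamp of every row, ps
trace = streak[:, 100]                          # time course at one slit position x
print(f"record length: {t_axis[-1]:.1f} ps", trace.shape)
```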


Fig. 10. HISAC based on remapping the scene from 2D to 1D in space and streak imaging. (a) Schematic of a streak camera; (b) schematic of a HISAC system; (c) formation process of individual frames from the streak data; (d) sequence showing shock wave breakthrough. The time interval between frames is 336 ps. The laser focus is outlined by the white dashed circle. (b)–(d) are adapted from [88].


To overcome this limitation, various dimension-reduction imaging techniques [87–90] have been developed to allow direct imaging of a 2D transient scene by a streak camera. In general, this imaging modality maps a 2D image into a line to interface with the streak camera’s entrance slit, so that the streak camera can capture an (x,y,t) data cube in a single shot. As an example, Fig. 10(b) shows the system schematic of the high-speed sampling camera (HISAC) [88]. An optical telescope imaged the transient scene onto a 2D-to-1D optical fiber bundle. The input end of this fiber bundle was arranged as a 2D fiber array (15 × 15) [91], which sampled the 2D spatial information of the relayed image. The 2D fiber array was remapped to a 1D configuration (1 × 225) at the output end, which interfaced with a streak camera’s entrance slit. The ultrashort temporal resolution, provided by the streak camera, was 47 ps, corresponding to an imaging speed of 21.3 Gfps. A 2D frame at a specific time point was recovered by reshaping one row of the streak image into 2D according to the optical fiber mapping [Fig. 10(c)]. In total, the sequence depth was up to 240 frames.
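
The reshaping step is purely an index remapping, as the sketch below illustrates: each streak-image row is reordered by the fiber map and folded back into a 15 × 15 frame. The identity fiber_map here is an assumption for illustration; a real bundle has its own calibrated ordering.

```python
import numpy as np

n = 15
fiber_map = np.arange(n * n)                 # slit position -> input-array index (assumed)
streak = np.random.rand(240, n * n)          # toy streak record: (time, slit position)

frames = streak[:, np.argsort(fiber_map)]    # undo the 2D-to-1D fiber reordering
cube = frames.reshape(-1, n, n)              # (t, y, x) data cube
print(cube.shape)                            # (240, 15, 15): 240-frame sequence depth
```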

HISAC has been implemented in many imaging applications, including heat propagation in solids [92], electron energy transport [93], and plasma dynamics [94]. As an example, Fig. 10(d) shows the dynamic spatial dependence of ablation pressure imaged by HISAC [88]. A 100-J, 1.053-μm laser pulse was loosely focused on a 10-μm-thick hemispherical plastic target. The sequence shows the shock breakthrough process. It revealed that the shock heating decreased with the incident angle, and the breakthrough speed was faster for smaller incident angles. This nonuniformity was ascribed to the angular dependence of pressure generated by the laser absorption.

Dimension-reduction-based streak imaging has an outstanding sequence depth. In addition, its temporal resolution is not bounded by the response time of electronics, which permits imaging speeds of hundreds of Gfps to even Tfps. However, its biggest limitation is the low number of fibers in the bundle, which produces either a low spatial resolution or a small FOV.

B. Reconstruction Imaging

Despite recent progress, existing ultrafast detectors, restricted by their operating principles, still have limitations in, for example, imaging FOV, pixel count, and sequence depth. To overcome these limitations, novel computational techniques have been introduced. Of particular interest among existing computational techniques is CS. In conventional imaging, the number of measurements is required to be equal to the number of pixels (or voxels) to precisely reproduce a scene. In contrast, CS allows underdetermined reconstruction of sparse scenes. The underlying rationale is that natural scenes possess sparsity when expressed in an appropriate space. In this case, many independent bases in the chosen space convey little to no useful information. Therefore, the number of measurements can be substantially compressed without excessive loss of image fidelity [95]. Here, two representative techniques are discussed.
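
A toy illustration of this rationale is sketched below: a sparse vector is recovered from far fewer random measurements than unknowns by iterative soft thresholding (ISTA), one of the simplest CS solvers. The problem sizes, sensing matrix, and solver parameters are arbitrary choices, unrelated to any specific imaging system in this review.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                       # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # compressed measurement, m << n

x, step, lam = np.zeros(n), 0.1, 0.05      # step below 1/L for this A; l1 weight
for _ in range(3000):
    z = x - step * A.T @ (A @ x - y)       # gradient step on the data-fidelity term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))     # small relative error
```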

1. Compressed Ultrafast Photography

Compressed ultrafast photography (CUP) synergistically combines CS with streak imaging [96]. Figure 11(a) shows the schematic of a lossless-encoding (LLE) CUP system [97,98]. In data acquisition, the dynamic scene, denoted as I, was first imaged by a camera lens. Following the intermediate image plane, a beam splitter reflected half of the light to an external CCD camera. The other half of the light passed through the beam splitter and was imaged onto a digital micromirror device (DMD) through a 4f system consisting of a tube lens and a stereoscope objective. The DMD, as a binary amplitude spatial light modulator [99], spatially encoded the transient scenes with a pseudo-random binary pattern. Because each DMD pixel can be tilted to either +12° (ON state) or −12° (OFF state) from its surface normal, two spatially encoded scenes, modulated by complementary patterns [see the upper inset in Fig. 11(a)], were generated after the DMD. The light beams from both channels were collected by the same stereoscope objective and passed through tube lenses, planar mirrors, and a right-angle prism mirror to form two images at separate horizontal positions on a fully opened entrance port (5 mm × 17 mm). Inside the streak camera, the spatially encoded scenes experienced temporal shearing and spatiotemporal integration and were finally recorded by an internal CCD camera.


Fig. 11. CUP for single-shot real-time ultrafast optical imaging based on spatial encoding and 2D streaking followed by compressed-sensing reconstruction. (a) Schematic of the lossless-encoding CUP system [98]; DMD, digital micromirror device; upper inset, illustration of complementary spatial encoding; lower inset, close-up of the configuration before the streak camera’s entrance port (black box); (b) CUP of a propagating photonic Mach cone [97]; (c) CUP of dynamic volumetric imaging [105]; (d) CUP of spectrally resolved pulse-laser-pumped fluorescence emission [96]. Scale bar: 10 mm.


For image reconstruction, the acquired snapshots from the external CCD camera and the streak camera, denoted as E, were used as inputs for the two-step iterative shrinkage/thresholding algorithm [100], which solved the minimization problem $\min_I \{ \tfrac{1}{2} \lVert E - OI \rVert_2^2 + \beta \phi(I) \}$. Here, $O$ is a joint measurement operator that accounts for all operations in data acquisition, $\lVert \cdot \rVert_2$ denotes the $\ell_2$ norm, $\phi(I)$ is a regularization function that promotes sparsity in the dynamic scene, and $\beta$ is the regularization parameter. The solution to this minimization problem can be stably and accurately recovered, even with a highly compressed measurement [95,101]. For the original CUP system [96], the reconstruction produced up to 350 frames in a movie with an imaging speed of up to 100 Gfps and an effective exposure time of 50 ps [102]. Each frame contained 150 pixels × 150 pixels. The LLE-CUP system, while maintaining the imaging speed at 100 Gfps, improved the numbers of (x,y,t) pixels in the data cube to 330 × 200 × 300 [97]. Recently, a trillion-frame-per-second CUP (T-CUP) system, employing a femtosecond streak camera [103], has achieved an imaging speed of 10 Tfps, an effective exposure time of 0.58 ps, numbers of (x, y) pixels of 450 × 150 per frame, and a sequence depth of 350 frames [104].
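
The measurement operator O has a simple structure, sketched below: every temporal frame is multiplied by the static pseudo-random mask, sheared down the detector by its time index, and all sheared frames are summed into one snapshot. The sizes and random scene are illustrative; the real system also includes the imaging optics and the complementary second channel of LLE-CUP.

```python
import numpy as np

rng = np.random.default_rng(2)
nt, ny, nx = 50, 64, 64
scene = rng.random((nt, ny, nx))                     # (x, y, t) data cube I
mask = (rng.random((ny, nx)) > 0.5).astype(float)    # DMD pseudo-random encoding pattern

snapshot = np.zeros((ny + nt - 1, nx))               # streak-camera exposure E
for t in range(nt):
    snapshot[t:t + ny, :] += mask * scene[t]         # encode, shear by t rows, integrate
print(snapshot.shape)                                # (113, 64): spatiotemporally mixed
```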

CUP has been used for a number of applications. First, it enabled, for the first time, the capture of a scattering-induced photonic Mach cone [97]. A thin scattering plate assembly contained an air (mixed with dry ice) source tunnel sandwiched between two silicone-rubber (mixed with aluminum oxide powder) display panels. When an ultrashort laser pulse was launched into the source tunnel, the scattering events generated secondary sources of light that advanced superluminally relative to the light propagating in the display panels, forming a scattering-induced photonic Mach cone. CUP imaged the formation and propagation of this photonic Mach cone at 100 Gfps [Fig. 11(b)]. Second, by leveraging the ultrahigh temporal resolution and the spatial encoding, CUP has enabled single-shot encrypted volumetric imaging [105]. Through the sequential imaging of the CCD camera inside the streak camera, high-speed volumetric imaging at 75 volumes per second was demonstrated by using a two-ball object rotating at 150 revolutions per minute [Fig. 11(c)]. Finally, CUP has achieved single-shot spectrally resolved fluorescence lifetime mapping [96]. As shown in Fig. 11(d), a Rhodamine 6G dye solution placed in a cuvette was excited by a single 7-ps laser pulse. The CUP system clearly captured both the excitation and the fluorescence emission processes. The movie has also allowed quantification of the fluorescence lifetime.

CUP exploits the spatial encoding and temporal shearing to tag each frame with a spatiotemporal “barcode,” which allows an (x,y,t) data cube to be compressively recorded as a snapshot with spatiotemporal mixing. This salient advantage overcomes the limitation in the FOV in conventional streak imaging, converting the streak camera into a 2D ultrafast optical detector. In addition, CUP’s recording paradigm allows fitting more frames onto the CCD surface area, which significantly improves the sequence depth while maintaining the inherent imaging speed of the streak camera. Compared with LIF holography, CUP does not require the presence of a reference pulse, making it especially suitable for imaging incoherent light events (e.g., fluorescence emission). However, the spatiotemporal mixture in CUP trades the spatial and temporal resolutions of a streak camera for the added spatial dimension.

2. Multiple-Aperture Compressed Sensing CMOS

CS has also been implemented in the time domain with a CMOS sensor. Figure 12(a) shows the structure of the multiple-aperture CS CMOS (MA-CS CMOS) sensor [106]. In image acquisition, a transient scene, I, first passed through the front optics. A 5 × 3 lens array (0.72 mm × 1.19 mm pitch size and 3-mm focal length), sampling the aperture plane, optically generated a total of 15 replicated images; each was formed onto an imaging sensor (64 pixels × 108 pixels in size). A dynamic shutter, encoded by a unique binary random code sequence for each sensor, modulated the temporal integration process [107,108]. The temporally encoded transient scene was spatiotemporally integrated on the sensor. The acquired data from all sensors, denoted by E, were fed into a compressed-sensing-based reconstruction algorithm [109,110] that solved the inverse problem $\min_I \sum_i \lVert D_i I \rVert_p$ subject to $E = AI$. Here, $D_i I$ is the discrete gradient of $I$ at pixel $i$, $\lVert \cdot \rVert_p$ denotes the $\ell_p$ norm ($p = 1$ or $2$), and $A$ is the observation matrix. With the prior information about the 15 temporal codes and the assumption that all 15 images are spatially identical, the spatiotemporal data cube was recovered. The reconstructed frame rate and the temporal resolution, determined by the maximum operating frequency of the shutter controller, were 200 Mfps and 5 ns, respectively. Based on the captured 15 images, 32 frames could be recovered in the reconstructed movie.
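
A sketch of the MA-CS forward model follows: each of the 15 apertures integrates the same scene under its own binary temporal shutter code, yielding one coded image per aperture. The codes and scene below are synthetic placeholders; reconstruction would invert this linear model under the total-variation prior described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ap, nt, ny, nx = 15, 32, 64, 108                 # apertures, frames, frame size
scene = rng.random((nt, ny, nx))                   # 32 frames to be recovered
codes = rng.integers(0, 2, size=(n_ap, nt)).astype(float)   # binary shutter codes

E = np.einsum('kt,tyx->kyx', codes, scene)         # 15 temporally coded images
print(E.shape)                                     # (15, 64, 108)
```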


Fig. 12. MA-CS CMOS sensor based on temporally encoding each of the image replicas (adapted from [106]). (a) System schematic. PD, photodiode; FD, floating diffusion; SD, storage diode; (b) temporally resolved frames of laser-pulse-induced plasma emission. The interframe time interval is 5 ns.


This sensor has been implemented in time-of-flight LIDAR [111] and plasma imaging [106]. As an example, Fig. 12(b) shows the plasma emission induced by an 8-ns, 532-nm laser pulse. The MA-CS CMOS sensor captured these dynamics in a period of 30 ns.

The implementation of the MA recording and CS overcomes the imaging speed limit of the conventional CMOS sensor. In addition, built upon mature CMOS fabrication technology, the MA-CS CMOS sensor has great potential for increased pixel counts. Moreover, different from the UFCs, a series of temporal gates, instead of a single gate, was applied to each sensor, which significantly increased the light throughput. Finally, compared with the ISIS CCD camera, the fill factor of the MA-CS CMOS sensor has been improved to 16.7%. However, the MA recording scheme may face technical difficulties in scalability and parallax, which poses challenges in improving the sequence depth.

4. SUMMARY AND OUTLOOK

In this mini-review, based on the illumination requirement, we categorize single-shot ultrafast optical imaging into two general domains. According to specific image acquisition and reconstruction strategies, these domains are further divided into six methods. A total of 11 representative techniques have been surveyed from the aspects of their underlying principles, system schematics, specifications, and applications, as well as their advantages and limitations. This information is summarized in Table 1. In addition, Fig. 13 compares the sequence depths versus the imaging speeds of these techniques. In practice, researchers could use this table and figure as general guidelines to assess the fortes and bottlenecks of each technique and to select the most suitable one for their specific studies.


Table 1. Comparative Summary of Representative Single-Shot Ultrafast Optical Imaging Techniques


Fig. 13. Comparison of representative single-shot ultrafast optical imaging techniques in imaging speeds and sequence depths. Triangles and circles represent the active and passive detection domains, respectively. Blue and black colors represent the direct and reconstruction imaging methods, respectively. Solid and hollow marks represent high and low (including medium) light throughputs. The numbers in the parentheses are the years in which the techniques were published. CUP, compressed ultrafast photography; T-CUP, trillion-frame-per-second CUP; FRAME, frequency recognition algorithm for multiple exposures; HISAC, high-speed sampling camera; ISIS CCD, in situ storage image sensor CCD; LIF-DH, light-in-flight recording by digital holography; MA-CS CMOS, multiple-aperture compressed sensing CMOS; SS-FDT, single-shot frequency-domain tomography; SS-FTOP, single-shot femtosecond time-resolved optical polarimetry; STAMP, sequentially timed all-optical mapping photography; SF-STAMP, spectrally filtered STAMP; THPM, time-resolved holographic polarization microscopy; UFC, ultrafast framing camera.


Single-shot ultrafast optical imaging will undoubtedly continue its fast evolution in the future. This interdisciplinary field is built upon the areas of laser science, optoelectronics, nonlinear optics, imaging theory, computational techniques, and information theory. Fast progress in these disciplines will create new imaging techniques and will improve the performance of existing techniques, both of which, in turn, will open new avenues for a wide range of laboratory and field applications [112115]. In the following, we outline four prospects in system development.

First, the recent development of a number of emerging techniques suggests intriguing opportunities for significantly improving the specifications of existing single-shot ultrafast optical imaging modalities in the near future. For example, implementing dual-echelon-based probing schemes [116,117] could easily improve the sequence depth of SS-FTOP (Section 2.A) by approximately 1 order of magnitude. In addition, ptychographic ultrafast imaging [118] has recently been demonstrated in simulation, and spectral multiplexing tomography [119] has experimentally demonstrated ultrafast single-frame recording. Both techniques, having solved the issue of limited probing angles, could be implemented in SS-FDT (Section 2.B) to reduce reconstruction artifacts and thus improve spatial and temporal resolutions. As another example, time-stretching microscopy with gigahertz-level line rates has been demonstrated [120,121]. Integrating a 2D disperser into these systems could result in new wavelength-division-based 2D ultrafast imaging (Section 2.C). Finally, a high-speed image rotator [122] could increase the sequence depth of FRAME imaging (Section 2.D). For the passive-detection domain, the highest speed limit of silicon sensors has, in theory, been predicted to be 90.1 Gfps [123]. Currently, a number of Gfps-level sensors are under development [78,124,125]. This recent progress could significantly improve the imaging FOV, pixel count, and sequence depth of ultrafast CCD and CMOS sensors (Section 3.A). In addition, the advent of many femtosecond streak imaging techniques [126] could push the imaging speed of CUP (Section 3.B) further toward the hundreds-of-Tfps level.

Second, computational techniques will play a more significant role in single-shot ultrafast optical imaging. Optical realization of mathematical models can transfer the unique advantages of these models to ultrafast optical imaging systems and has thus alleviated hardware limitations in, for example, the imaging speed of MA-CS CMOS (Section 3.B). In addition, a number of established image reconstruction algorithms used in tomography have been grafted onto the spatiotemporal domain [e.g., the algebraic reconstruction technique for SS-FDT (Section 2.B)]. It is therefore believed that this trend will continue and that more algorithms used in x-ray CT, magnetic resonance imaging, and ultrasound imaging will be applied to newly developed ultrafast optical imaging instruments. Finally, it is predicted that machine-learning techniques [127,128] will be implemented in single-shot ultrafast optical imaging to improve image reconstruction speed and accuracy.
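Since several of the surveyed techniques (e.g., CUP and MA-CS CMOS) recover a frame stack from coded measurements, a toy example may help fix ideas. The sketch below (assuming NumPy) encodes an 8-frame scene into 6 coded measurements per pixel and recovers it with a plain ISTA solver; this stands in for, but is not, the TwIST- and TVAL3-style solvers cited in this review, and all sizes, codes, and parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                        # frames, height, width
M = 6                                      # coded measurements per pixel (M < T)

# Ground truth: one bright pixel moving along the diagonal, frame by frame.
x_true = np.zeros((T, H * W))
for t in range(T):
    x_true[t, t * W + t] = 1.0

A = rng.normal(size=(M, T))                # known temporal encoding operator
y = A @ x_true                             # compressed measurements, shape (M, H*W)

# ISTA: a gradient step on the data term, then an L1 (sparsity) proximal step.
step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size (1 / Lipschitz constant)
lam = 0.05                                 # sparsity weight
x = np.zeros_like(x_true)
for _ in range(2000):
    x -= step * A.T @ (A @ x - y)                            # data-fidelity step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0) # soft thresholding

print(float(np.abs(x - x_true).max()))     # reconstruction error; expected to be small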

Third, high-dimensional single-shot ultrafast optical imaging [129] will gain more attention. Many transient events may not be fully reflected in light intensity. Therefore, the ability to measure other optical contrasts, such as phase and polarization, will significantly enhance the capability and application scope of ultrafast optical imaging. A few techniques [e.g., SS-FTOP (Section 2.A) and THPM (Section 2.D)] have already been explored along this path. It is envisaged that other imaging modalities (e.g., volumography, spectroscopy, and light-field photography) will be increasingly integrated into ultrafast optical imaging.
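As one concrete example of a non-intensity contrast, the sketch below (assuming NumPy) computes a per-pixel degree of linear polarization from analyzer images, in the spirit of multicontrast techniques such as THPM; the three analyzer angles and the synthetic data are assumptions for illustration, and real systems such as THPM derive these maps interferometrically.

import numpy as np

def linear_stokes(i0, i45, i90):
    """Per-pixel linear Stokes parameters from analyzer images at 0/45/90 degrees."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    return s0, s1, s2

def degree_of_linear_polarization(i0, i45, i90, eps=1e-12):
    """DoLP = sqrt(S1^2 + S2^2) / S0, guarded against division by zero."""
    s0, s1, s2 = linear_stokes(i0, i45, i90)
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

# Synthetic 4x4 test: fully horizontally polarized light yields DoLP of ~1.
i0, i45, i90 = np.ones((4, 4)), 0.5 * np.ones((4, 4)), np.zeros((4, 4))
print(degree_of_linear_polarization(i0, i45, i90).mean())   # ~1.0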

Finally, continuous streaming will be one of the ultimate milestones of single-shot ultrafast optical imaging. Working in a stroboscopic mode, current single-shot ultrafast imaging techniques still require precise synchronization with the transient events being imaged and thus fall short in visualizing asynchronous processes. Toward this goal, innovations in large-format imaging sensors [130], high-speed interfaces [131], and ultrafast optical waveform recorders [132] can be leveraged. In addition, intelligent selection, reconstruction, and management of big data are also indispensable [133].

Funding

National Institutes of Health (NIH) (DP1 EB016986, R01 CA186567); Natural Sciences and Engineering Research Council of Canada (NSERC) (RGPAS-507845-2017, RGPIN-2017-05959); Fonds de Recherche du Québec - Nature et Technologies (FRQNT) (2019-NC-252960).

REFERENCES

1. A. J. Bard and L. R. Faulkner, Electrochemical Methods: Fundamentals and Applications (Wiley, 2001), pp. 156–176.

2. T. Gorkhover, S. Schorb, R. Coffee, M. Adolph, L. Foucar, D. Rupp, A. Aquila, J. D. Bozek, S. W. Epp, B. Erk, L. Gumprecht, L. Holmegaard, A. Hartmann, R. Hartmann, G. Hauser, P. Holl, A. Hömke, P. Johnsson, N. Kimmel, K.-U. Kühnel, M. Messerschmidt, C. Reich, A. Rouzée, B. Rudek, C. Schmidt, J. Schulz, H. Soltau, S. Stern, G. Weidenspointner, B. White, J. Küpper, L. Strüder, I. Schlichting, J. Ullrich, D. Rolles, A. Rudenko, T. Möller, and C. Bostedt, “Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles,” Nat. Photonics 10, 93–97 (2016). [CrossRef]  

3. M. Imada, A. Fujimori, and Y. Tokura, “Metal-insulator transitions,” Rev. Mod. Phys. 70, 1039–1263 (1998). [CrossRef]  

4. D. R. Solli, C. Ropers, P. Koonath, and B. Jalali, “Optical rogue waves,” Nature 450, 1054–1057 (2007). [CrossRef]  

5. B. Jalali, D. R. Solli, K. Goda, K. Tsia, and C. Ropers, “Real-time measurements, rare events and photon economics,” Eur. Phys. J. Spec. Top. 185, 145–157 (2010). [CrossRef]  

6. P. R. Poulin and K. A. Nelson, “Irreversible organic crystalline chemistry monitored in real time,” Science 313, 1756–1760 (2006). [CrossRef]  

7. V. V. Tuchin, “Methods and algorithms for the measurement of the optical parameters of tissues,” in Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis (SPIE, 2015), pp. 303–304.

8. N. Šiaulys, L. Gallais, and A. Melninkaitis, “Direct holographic imaging of ultrafast laser damage process in thin films,” Opt. Lett. 39, 2164–2167 (2014). [CrossRef]  

9. M. Sciamanna and K. A. Shore, “Physics and applications of laser diode chaos,” Nat. Photonics 9, 151–162 (2015). [CrossRef]  

10. R. I. Kodama, P. Norreys, K. Mima, A. Dangor, R. Evans, H. Fujita, Y. Kitagawa, K. Krushelnick, T. Miyakoshi, and N. Miyanaga, “Fast heating of ultrahigh-density plasma as a step towards laser fusion ignition,” Nature 412, 798–802 (2001). [CrossRef]  

11. Z. Li, H.-E. Tsai, X. Zhang, C.-H. Pai, Y.-Y. Chang, R. Zgadzaj, X. Wang, V. Khudik, G. Shvets, and M. C. Downer, “Single-shot visualization of evolving plasma wakefields,” AIP Conf. Proc. 1777, 040010 (2016). [CrossRef]  

12. D. Bradley, P. Bell, J. Kilkenny, R. Hanks, O. Landen, P. Jaanimagi, P. McKenty, and C. Verdon, “High‐speed gated x‐ray imaging for ICF target experiments,” Rev. Sci. Instrum. 63, 4813–4817 (1992). [CrossRef]  

13. P. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57, 293–302 (2009). [CrossRef]  

14. D. M. Camm, “World’s most powerful lamp for high-speed photography,” Proc. SPIE 1801, 184–189 (1993). [CrossRef]  

15. M. E. Fermann, A. Galvanauskas, and G. Sucha, Ultrafast Lasers: Technology and Applications, Vol. 80 of Optical Engineering (CRC Press, 2002).

16. A. M. Weiner, “Ultrafast optical pulse shaping: a tutorial review,” Opt. Commun. 284, 3669–3692 (2011). [CrossRef]  

17. S. V. Patwardhan and J. P. Culver, “Quantitative diffuse optical tomography for small animals using an ultrafast gated image intensifier,” J. Biomed. Opt. 13, 011009 (2008). [CrossRef]  

18. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014). [CrossRef]  

19. H. Ishikawa, Ultrafast All-Optical Signal Processing Devices (Wiley, 2008).

20. K. Sato, E. Saitoh, A. Willoughby, P. Capper, and S. Kasap, Spintronics for Next Generation Innovative Devices (Wiley, 2015).

21. M. El-Desouki, M. J. Deen, Q. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS image sensors for high speed applications,” Sensors 9, 430–444 (2009). [CrossRef]  

22. Hamamatsu K. K., “Guide to streak cameras,” 2008, https://www.hamamatsu.com/resources/pdf/sys/SHSS0006E_STREAK.pdf.

23. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006). [CrossRef]  

24. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

25. T. Feurer, J. C. Vaughan, and K. A. Nelson, “Spatiotemporal coherent control of lattice vibrational waves,” Science 299, 374–377 (2003). [CrossRef]  

26. L. Fieramonti, A. Bassi, E. A. Foglia, A. Pistocchi, C. D’Andrea, G. Valentini, R. Cubeddu, S. De Silvestri, G. Cerullo, and F. Cotelli, “Time-gated optical projection tomography allows visualization of adult zebrafish internal structures,” PLoS One 7, e50744 (2012). [CrossRef]  

27. M. Balistreri, H. Gersen, J. P. Korterik, L. Kuipers, and N. Van Hulst, “Tracking femtosecond laser pulses in space and time,” Science 294, 1080–1082 (2001). [CrossRef]  

28. G. Gariepy, N. Krstajic, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6, 6021 (2015). [CrossRef]  

29. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2016). [CrossRef]  

30. W. Becker, The bh TCSPC Handbook (Becker and Hickl, 2014).

31. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012). [CrossRef]  

32. T. G. Etoh, C. Vo Le, Y. Hashishin, N. Otsuka, K. Takehara, H. Ohtake, T. Hayashida, and H. Maruyama, “Evolution of ultra-high-speed CCD imagers,” Plasma Fusion Res. 2, S1021 (2007). [CrossRef]  

33. C. T. Chin, C. Lancée, J. Borsboom, F. Mastik, M. E. Frijlink, N. de Jong, M. Versluis, and D. Lohse, “Brandaris 128: a digital 25 million frames per second camera with 128 highly sensitive frames,” Rev. Sci. Instrum. 74, 5026–5034 (2003). [CrossRef]  

34. K. Goda, K. Tsia, and B. Jalali, “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature 458, 1145–1149 (2009). [CrossRef]  

35. C. Lei, B. Guo, Z. Cheng, and K. Goda, “Optical time-stretch imaging: principles and applications,” Appl. Phys. Rev. 3, 011102 (2016). [CrossRef]  

36. J.-L. Wu, Y.-Q. Xu, J.-J. Xu, X.-M. Wei, A. C. Chan, A. H. Tang, A. K. Lau, B. M. Chung, H. C. Shum, and E. Y. Lam, “Ultrafast laser-scanning time-stretch imaging at visible wavelengths,” Light Sci. Appl. 6, e16196 (2017). [CrossRef]  

37. A. H. Zewail, “4D ultrafast electron diffraction, crystallography, and microscopy,” Annu. Rev. Phys. Chem. 57, 65–103 (2006). [CrossRef]  

38. K. J. Gaffney and H. N. Chapman, “Imaging atomic structure and dynamics with ultrafast X-ray scattering,” Science 316, 1444–1448 (2007). [CrossRef]  

39. X. C. Zhang, “Terahertz wave imaging: horizons and hurdles,” Phys. Med. Biol. 47, 3667–3677 (2002). [CrossRef]  

40. J. Muybridge, “The horse in motion,” Nature 25, 605 (1882). [CrossRef]  

41. X. Wang, L. Yan, J. Si, S. Matsuo, H. Xu, and X. Hou, “High-frame-rate observation of single femtosecond laser pulse propagation in fused silica using an echelon and optical polarigraphy technique,” Appl. Opt. 53, 8395–8399 (2014). [CrossRef]  

42. T. Shin, J. W. Wolfson, S. W. Teitelbaum, M. Kandyla, and K. A. Nelson, “Dual echelon femtosecond single-shot spectroscopy,” Rev. Sci. Instrum. 85, 083115 (2014). [CrossRef]  

43. M. Fujimoto, S. Aoshima, M. Hosoda, and Y. Tsuchiya, “Femtosecond time-resolved optical polarigraphy: imaging of the propagation dynamics of intense light in a medium,” Opt. Lett. 24, 850–852 (1999). [CrossRef]  

44. L. Yan (personal communication, May 2, 2018).

45. A. Couairon and A. Mysyrowicz, “Femtosecond filamentation in transparent media,” Phys. Rep. 441, 47–189 (2007). [CrossRef]  

46. P. B. Corkum and F. Krausz, “Attosecond science,” Nat. Phys. 3, 381–387 (2007). [CrossRef]  

47. M. T. Hassan, T. T. Luu, A. Moulet, O. Raskazovskaya, P. Zhokhov, M. Garg, N. Karpowicz, A. Zheltikov, V. Pervak, and F. Krausz, “Optical attosecond pulses and tracking the nonlinear response of bound electrons,” Nature 530, 66–70 (2016). [CrossRef]  

48. N. Abramson, “Light-in-flight recording by holography,” Opt. Lett. 3, 121–123 (1978). [CrossRef]  

49. T. Kubota, K. Komai, M. Yamagiwa, and Y. Awatsuji, “Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation,” Opt. Express 15, 14348–14354 (2007). [CrossRef]  

50. H. Rabal, J. Pomarico, and R. Arizaga, “Light-in-flight digital holography display,” Appl. Opt. 33, 4358–4360 (1994). [CrossRef]  

51. T. Kakue, K. Tosa, J. Yuasa, T. Tahara, Y. Awatsuji, K. Nishio, S. Ura, and T. Kubota, “Digital light-in-flight recording by holography by use of a femtosecond pulsed laser,” IEEE J. Sel. Top. Quantum Electron. 18, 479–485 (2012). [CrossRef]  

52. A. Komatsu, Y. Awatsuji, and T. Kubota, “Dependence of reconstructed image characteristics on the observation condition in light-in-flight recording by holography,” J. Opt. Soc. Am. A 22, 1678–1682 (2005). [CrossRef]  

53. G. P. Wakeham and K. A. Nelson, “Dual-echelon single-shot femtosecond spectroscopy,” Opt. Lett. 25, 505–507 (2000). [CrossRef]  

54. Z. Li, R. Zgadzaj, X. Wang, Y.-Y. Chang, and M. C. Downer, “Single-shot tomographic movies of evolving light-velocity objects,” Nat. Commun. 5, 3085 (2014). [CrossRef]  

55. Z. Li, R. Zgadzaj, X. Wang, S. Reed, P. Dong, and M. C. Downer, “Frequency-domain streak camera for ultrafast imaging of evolving light-velocity objects,” Opt. Lett. 35, 4087–4089 (2010). [CrossRef]  

56. N. H. Matlis, S. Reed, S. S. Bulanov, V. Chvykov, G. Kalintchenko, T. Matsuoka, P. Rousseau, V. Yanovsky, A. Maksimchuk, and S. Kalmykov, “Snapshots of laser wakefields,” Nat. Phys. 2, 749–753 (2006). [CrossRef]  

57. S. Le Blanc, E. Gaul, N. Matlis, A. Rundquist, and M. Downer, “Single-shot measurement of temporal phase shifts by frequency-domain holography,” Opt. Lett. 25, 764–766 (2000). [CrossRef]  

58. M. C. Nuss, M. Li, T. H. Chiu, A. M. Weiner, and A. Partovi, “Time-to-space mapping of femtosecond pulses,” Opt. Lett. 19, 664–666 (1994). [CrossRef]  

59. R. Gordon, R. Bender, and G. T. Herman, “Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography,” J. Theor. Biol. 29, 471–481 (1970).

60. Z. Li, “Single-shot visualization of evolving, light-speed refractive index structures,” Ph.D. dissertation (University of Texas at Austin, 2014).

61. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1988).

62. K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics 8, 695–700 (2014). [CrossRef]  

63. A. Weiner, Ultrafast Optics, Vol. 72 of Wiley Series in Pure and Applied Optics (Wiley, 2011).

64. G. Gao, K. He, J. Tian, C. Zhang, J. Zhang, T. Wang, S. Chen, H. Jia, F. Yuan, L. Liang, X. Yan, S. Li, C. Wang, and F. Yin, “Ultrafast all-optical solid-state framing camera with picosecond temporal resolution,” Opt. Express 25, 8721–8729 (2017). [CrossRef]  

65. T. Suzuki, R. Hida, Y. Yamaguchi, K. Nakagawa, T. Saiki, and F. Kannari, “Single-shot 25-frame burst imaging of ultrafast phase transition of Ge2Sb2Te5 with a sub-picosecond resolution,” Appl. Phys. Express 10, 092502 (2017). [CrossRef]  

66. T. Suzuki, F. Isa, L. Fujii, K. Hirosawa, K. Nakagawa, K. Goda, I. Sakuma, and F. Kannari, “Sequentially timed all-optical mapping photography (STAMP) utilizing spectral filtering,” Opt. Express 23, 30512–30522 (2015). [CrossRef]  

67. G. Gao, J. Tian, T. Wang, K. He, C. Zhang, J. Zhang, S. Chen, H. Jia, F. Yuan, L. Liang, X. Yan, S. Li, C. Wang, and F. Yin, “Ultrafast all-optical imaging technique using low-temperature grown GaAs/AlxGa1-xAs multiple-quantum-well semiconductor,” Phys. Lett. A 381, 3594–3598 (2017). [CrossRef]  

68. P. Gabolde and R. Trebino, “Single-frame measurement of the complete spatiotemporal intensity and phase of ultrashort laser pulses using wavelength-multiplexed digital holography,” J. Opt. Soc. Am. B 25, A25–A33 (2008). [CrossRef]  

69. P. Gabolde and R. Trebino, “Single-shot measurement of the full spatio-temporal field of ultrashort pulses with multi-spectral digital holography,” Opt. Express 14, 11460–11467 (2006). [CrossRef]  

70. J. Takeda, W. Oba, Y. Minami, T. Saiki, and I. Katayama, “Ultrafast crystalline-to-amorphous phase transition in Ge2Sb2Te5 chalcogenide alloy thin film using single-shot imaging spectroscopy,” Appl. Phys. Lett. 104, 261903 (2014). [CrossRef]  

71. Q.-Y. Yue, Z.-J. Cheng, L. Han, Y. Yang, and C.-S. Guo, “One-shot time-resolved holographic polarization microscopy for imaging laser-induced ultrafast phenomena,” Opt. Express 25, 14182–14191 (2017). [CrossRef]  

72. T. Colomb, F. Dürr, E. Cuche, P. Marquet, H. G. Limberger, R.-P. Salathé, and C. Depeursinge, “Polarization microscopy by use of digital holography: application to optical-fiber birefringence measurements,” Appl. Opt. 44, 4461–4469 (2005). [CrossRef]  

73. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl. 6, e17045 (2017). [CrossRef]  

74. D. Birch and R. Imhof, “Coaxial nanosecond flashlamp,” Rev. Sci. Instrum. 52, 1206–1212 (1981). [CrossRef]  

75. W. O’Hagan, M. McKenna, D. Sherrington, O. Rolinski, and D. Birch, “MHz LED source for nanosecond fluorescence sensing,” Meas. Sci. Technol. 13, 84–91 (2001). [CrossRef]  

76. T. Araki and H. Misawa, “Light emitting diode‐based nanosecond ultraviolet light source for fluorescence lifetime measurements,” Rev. Sci. Instrum. 66, 5469–5472 (1995). [CrossRef]  

77. L. Lazovsky, D. Cismas, G. Allan, and D. Given, “CCD sensor and camera for 100 Mfps burst frame rate image capture,” Proc. SPIE 5787, 184–190 (2005). [CrossRef]  

78. T. G. Etoh, D. V. Son, T. Yamada, and E. Charbon, “Toward one giga frames per second—evolution of in situ storage image sensors,” Sensors 13, 4640–4658 (2013). [CrossRef]  

79. V. Tiwari, M. Sutton, and S. McNeill, “Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation,” Exp. Mech. 47, 561–579 (2007). [CrossRef]  

80. Stanford Computer Optics, “XXRapidFrame: multiframing ICCD camera,” 2017, http://www.stanfordcomputeroptics.com/download/Brochure-XXRapidFrame.pdf.

81. Specialised Imaging, “SIMD—ultra high speed framing camera,” 2017, http://specialised-imaging.com/products/simd-ultra-high-speed-framing-camera.

82. M. Versluis, “High-speed imaging in fluids,” Exp. Fluids 54, 1458 (2013). [CrossRef]  

83. H. Xing, Q. Zhang, C. H. Braithwaite, B. Pan, and J. Zhao, “High-speed photography and digital optical measurement techniques for geomaterials: fundamentals and applications,” Rock Mech. Rock Eng. 50, 1611–1659 (2017). [CrossRef]  

84. H. Fujita, S. Kanazawa, K. Ohtani, A. Komiya, and T. Sato, “Spatiotemporal analysis of propagation mechanism of positive primary streamer in water,” J. Appl. Phys. 113, 113304 (2013). [CrossRef]  

85. L. Dresselhaus-Cooper, J. E. Gorfain, C. T. Key, B. K. Ofori-Okai, S. J. Ali, D. J. Martynowych, A. Gleason, S. Kooi, and K. A. Nelson, “Development of single-shot multi-frame imaging of cylindrical shock waves for deeper understanding of a multi-layered target geometry,” arXiv:1707.08940 (2017).

86. J. D. Kilkenny, S. G. Glendinning, S. W. Haan, B. A. Hammel, J. D. Lindl, D. Munro, B. A. Remington, S. V. Weber, J. P. Knauer, and C. P. Verdon, “A review of the ablative stabilization of the Rayleigh-Taylor instability in regimes relevant to inertial confinement fusion,” Phys. Plasmas 1, 1379–1389 (1994). [CrossRef]  

87. B. Heshmat, G. Satat, C. Barsi, and R. Raskar, “Single-shot ultrafast imaging using parallax-free alignment with a tilted lenslet array,” in Conference on Lasers and Electro-Optics (CLEO), OSA Technical Digest (Optical Society of America, 2014), paper STu3E.7.

88. R. Kodama, K. Okada, and Y. Kato, “Development of a two-dimensional space-resolved high speed sampling camera,” Rev. Sci. Instrum. 70, 625–628 (1999). [CrossRef]  

89. H. Shiraga, M. Nakasuji, M. Heya, and N. Miyanaga, “Two-dimensional sampling-image x-ray streak camera for ultrafast imaging of inertial confinement fusion plasmas,” Rev. Sci. Instrum. 70, 620–623 (1999). [CrossRef]  

90. A. Tsikouras, R. Berman, D. W. Andrews, and Q. Fang, “High-speed multifocal array scanning using refractive window tilting,” Biomed. Opt. Express 6, 3737–3747 (2015). [CrossRef]  

91. K. Shigemori, T. Yamamoto, Y. Hironaka, T. Kawashima, S. Hattori, H. Nagatomo, H. Kato, N. Sato, T. Watari, and M. Takagi, “Converging shock generation with cone target filled with low density foam,” J. Phys. Conf. Ser. 717, 012050 (2016). [CrossRef]  

92. M. Nakatsutsumi, J. Davies, R. Kodama, J. Green, K. Lancaster, K. Akli, F. Beg, S. Chen, D. Clark, and R. Freeman, “Space and time resolved measurements of the heating of solids to ten million kelvin by a petawatt laser,” New J. Phys. 10, 043046 (2008). [CrossRef]  

93. R. Scott, F. Perez, J. Santos, C. Ridgers, J. Davies, K. Lancaster, S. Baton, P. Nicolai, R. Trines, and A. Bell, “A study of fast electron energy transport in relativistically intense laser-plasma interactions with large density scalelengths,” Phys. Plasmas 19, 053104 (2012). [CrossRef]  

94. J. Fuchs, M. Nakatsutsumi, J. Marques, P. Antici, N. Bourgeois, M. Grech, T. Lin, L. Romagnani, V. Tikhonchuk, and S. Weber, “Space-and time-resolved observation of single filaments propagation in an underdense plasma and of beam coupling between neighbouring filaments,” Plasma Phys. Controlled Fusion 49, B497–B504 (2007). [CrossRef]  

95. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013). [CrossRef]  

96. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014). [CrossRef]  

97. J. Liang, C. Ma, L. Zhu, Y. Chen, L. Gao, and L. V. Wang, “Single-shot real-time video recording of photonic Mach cone induced by a scattered light pulse,” Sci. Adv. 3, e1601814 (2017). [CrossRef]  

98. J. Liang, C. Ma, L. Zhu, Y. Chen, L. Gao, and L. V. Wang, “Ultrafast imaging of light scattering dynamics using second-generation compressed ultrafast photography,” Proc. SPIE 10076, 1007612 (2017). [CrossRef]  

99. J. Liang, S.-Y. Wu, R. N. Kohn, M. F. Becker, and D. J. Heinzen, “Grayscale laser image formation using a programmable binary mask,” Opt. Eng. 51, 108201 (2012). [CrossRef]  

100. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007). [CrossRef]  

101. E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” C. R. Math. Acad. Sci. Paris 346, 589–592 (2008). [CrossRef]  

102. L. Zhu, Y. Chen, J. Liang, L. Gao, C. Ma, and L. V. Wang, “Space- and intensity-constrained reconstruction for compressed ultrafast photography,” Optica 3, 694–697 (2016). [CrossRef]  

103. Hamamatsu K.K., “FESCA-200 femtosecond streak camera,” 2015, http://www.scbeibin.com/dda/htmm/uploadfile/20170505125154255.pdf.

104. J. Liang, L. Zhu, and L. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light Sci. Appl. 7, 42 (2018). [CrossRef]  

105. J. Liang, L. Gao, P. Hai, C. Li, and L. V. Wang, “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep. 5, 15504 (2015). [CrossRef]  

106. F. Mochizuki, K. Kagawa, S.-I. Okihara, M.-W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, “Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor,” Opt. Express 24, 4155–4176 (2016). [CrossRef]  

107. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006). [CrossRef]  

108. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32, 1–10 (2013). [CrossRef]  

109. C. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” Master’s thesis (Rice University, 2010).

110. “TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms, 2009,” http://www.caam.rice.edu/~optimization/L1/TVAL3/.

111. F. Mochizuki, K. Kagawa, M.-W. Seo, T. Takasawa, K. Yasutomi, and S. Kawahito, “A multi-aperture compressive time-of-flight CMOS imager for pixel-wise coarse histogram acquisition,” in International Image Sensor Workshop (IISW) (2015), pp. 178–181.

112. Q. Song, A. Nakamura, K. Hirosawa, K. Isobe, K. Midorikawa, and F. Kannari, “Two-dimensional spatiotemporal focusing of femtosecond pulses and its applications in microscopy,” Rev. Sci. Instrum. 86, 083701 (2015). [CrossRef]  

113. N. Vogt, “Voltage sensors: challenging, but with potential,” Nat. Methods 12, 921–924 (2015). [CrossRef]  

114. P. Trocha, M. Karpov, D. Ganin, M. H. P. Pfeiffer, A. Kordts, S. Wolf, J. Krockenberger, P. Marin-Palomo, C. Weimann, S. Randel, W. Freude, T. J. Kippenberg, and C. Koos, “Ultrafast optical ranging using microresonator soliton frequency combs,” Science 359, 887–891 (2018). [CrossRef]  

115. M.-G. Suh and K. J. Vahala, “Soliton microcomb range measurement,” Science 359, 884–887 (2018). [CrossRef]  

116. J. Zhang, X. Tan, M. Liu, S. W. Teitelbaum, K. W. Post, F. Jin, K. A. Nelson, D. Basov, W. Wu, and R. D. Averitt, “Cooperative photoinduced metastable phase control in strained manganite films,” Nat. Mater. 15, 956–960 (2016). [CrossRef]  

117. K. Kim, B. Yellampalle, A. Taylor, G. Rodriguez, and J. Glownia, “Single-shot terahertz pulse characterization via two-dimensional electro-optic imaging with dual echelons,” Opt. Lett. 32, 1968–1970 (2007). [CrossRef]  

118. P. Sidorenko, O. Lahav, and O. Cohen, “Ptychographic ultrahigh-speed imaging,” Opt. Express 25, 10997–11008 (2017). [CrossRef]  

119. N. Matlis, A. Axley, and W. Leemans, “Single-shot ultrafast tomographic imaging by spectral multiplexing,” Nat. Commun. 3, 1111 (2012). [CrossRef]  

120. F. Xing, H. Chen, C. Lei, M. Chen, S. Yang, and S. Xie, “A 2-GHz discrete-spectrum waveband-division microscopic imaging system,” Opt. Commun. 338, 22–26 (2015). [CrossRef]  

121. C. Lei, Y. Wu, A. C. Sankaranarayanan, S. M. Chang, B. Guo, N. Sasaki, H. Kobayashi, C. W. Sun, Y. Ozeki, and K. Goda, “GHz optical time-stretch microscopy by compressive sensing,” IEEE Photon. J. 9, 3900308 (2017). [CrossRef]  

122. E. G. Paek, Y.-S. Im, J. Y. Choe, and T. K. Oh, “Acoustically steered and rotated true-time-delay generator based on wavelength-division multiplexing,” Appl. Opt. 39, 1298–1308 (2000). [CrossRef]  

123. T. G. Etoh, A. Q. Nguyen, Y. Kamakura, K. Shimonomura, T. Y. Le, and N. Mori, “The theoretical highest frame rate of silicon image sensors,” Sensors 17, 483 (2017). [CrossRef]  

124. T. Etoh, V. Dao, K. Shimonomura, E. Charbon, C. Zhang, Y. Kamakura, and T. Matsuoka, “Toward 1 Gfps: evolution of ultra-high-speed image sensors-ISIS, BSI, multi-collection gates, and 3D-stacking,” in IEEE International Electron Devices Meeting (IEDM) (IEEE, 2014).

125. M. Zlatanski, W. Uhring, and J.-P. Le Normand, “Sub-500-ps temporal resolution streak-mode optical sensor,” IEEE Sens. J. 15, 6570–6583 (2015). [CrossRef]  

126. U. Fruhling, M. Wieland, M. Gensch, T. Gebert, B. Schutte, M. Krikunova, R. Kalms, F. Budzyn, O. Grimm, J. Rossbach, E. Plonjes, and M. Drescher, “Single-shot terahertz-field-driven X-ray streak camera,” Nat. Photonics 3, 523–528 (2009). [CrossRef]  

127. B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature 555, 487–492 (2018). [CrossRef]  

128. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018). [CrossRef]  

129. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016). [CrossRef]  

130. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012). [CrossRef]  

131. J. Linkemann, “Review of up-to-date digital camera interfaces,” in Advanced Optical Technologies (2013), p. 141.

132. M. A. Foster, R. Salem, D. F. Geraghty, A. C. Turner-Foster, M. Lipson, and A. L. Gaeta, “Silicon-chip-based ultrafast optical oscilloscope,” Nature 456, 81–84 (2008). [CrossRef]  

133. M. Xiaofeng and C. Xiang, “Big data management: concepts, techniques and challenges,” J. Comput. Res. Dev. 1, 146–169 (2013).



Figures (13)

Fig. 1. (a) Categorization of single-shot ultrafast optical imaging into two detection domains and six methods with 11 representative techniques; (b) conceptual illustration of active-detection-based single-shot ultrafast optical imaging. Colors represent different optical markers. (c) Conceptual illustration of passive-detection-based single-shot ultrafast optical imaging.
Fig. 2. SS-FTOP based on space division followed by varied time delays for imaging a single ultrashort pulse’s propagation in a Kerr medium (adapted from [41]). (a) Schematic of the experimental setup; (b) two sequences of single ultrashort pulses’ propagation. The time interval between adjacent frames is 0.96 ps. (c) Transverse intensity profiles of the first frames of the two single-shot observations in (b).
Fig. 3. LIF recording by DH based on obliquely sweeping the reference pulse across the imaging plane, a form of space division [51]. (a) Experimental setup for recording the hologram; (b) representative frames of single ultrashort laser pulses’ movement on a USAF resolution target. The time interval between frames is 192 fs. The white arrow points to the features of the USAF resolution target.
Fig. 4. SS-FDT for imaging transient refractive index perturbations based on angle division followed by spectral imaging holography [54]. (a) Schematic of the experimental setup. Upper-left inset, principle of imprinting a phase streak in a probe pulse (adapted from [55]); θ, incident angle of the probe pulse; Δn, refractive index change. (b) Phase streaks induced by the evolving refractive index profile. x_pr and z_pr^(loc), the transverse and longitudinal coordinates of the probe pulses. (c) Representative snapshots of the refractive index change using a pump energy of E = 0.7 μJ. x_ob and z_ob^(loc), the transverse and longitudinal coordinates of the object.
Fig. 5. STAMP based on temporal wavelength division. (a) System schematic of STAMP [62]; upper inset, normalized intensity profiles of the six probe pulses with an interframe time interval of 229 fs (corresponding to a frame rate of 4.4 Tfps) and an exposure time of 733 fs; lower insets, schematics of the temporal mapping device and the spatial mapping device. (b) Single-shot imaging of electronic response and phonon formation at 4.4 Tfps [62]. (c) Schematic setup of spectrally filtered (SF)-STAMP [65]; f1 and f2, focal lengths of lenses; BPF, bandpass filter; DOE, diffractive optical element. (d) Full sequence of the crystalline-to-amorphous phase transition in Ge2Sb2Te5 captured by the SF-STAMP system with an interframe time interval of 133 fs (corresponding to an imaging speed of 7.52 Tfps) and an exposure time of 465 fs [65].
Fig. 6. THPM based on time delays and spatial frequency division of the reference pulses for imaging laser-induced damage of a mica lamina sample (adapted from [71]). (a) Schematic of the experimental setup. Black arrows indicate the pulses’ SOPs. KDP, potassium dihydrogen phosphate; lower-right inset, generation of four reference pulses. (b) Recorded hologram of a USAF resolution target. The zoom-in picture shows the detailed interferometric pattern of this hologram. (c) Spatial frequency spectrum of (b). (d) Time-resolved multicontrast imaging of ultrafast laser-induced damage in a mica lamina sample.
Fig. 7. FRAME imaging based on spatial frequency division of the probe pulses [73]. (a) System schematic; (b) sequence of reconstructed frames of a femtosecond light pulse propagating in a Kerr medium at 5 Tfps. The white dashed arc in the 600-fs frame indicates the pulse’s position at 0 fs. (c) Vertically summed intensity profiles of (b).
Fig. 8. Structure of DALSA’s ISIS CCD camera based on on-chip charge transfer and storage (adapted from [77]). The sensor has 64 × 64 pixels, of which six are shown here. Arrows indicate the charge transfer directions.
Fig. 9. UFC based on beam splitting along with ultrafast time gating. (a) Schematic of a UFC [79]; (b) schematic of shadowgraph imaging of cylindrical shock waves using a UFC (adapted from [85]); inset, configuration of the multilayered target. (c) Sequence of captured shadowgraph frames showing the convergence and subsequent divergence of the shock waves generated by a laser excitation ring (red dashed circle in the first frame) in the target [85]. The shock front is indicated by the white arrows. Additional rings and structural instabilities are shown by the blue and orange arrows, respectively.
Fig. 10. HISAC based on remapping the scene from 2D to 1D in space and streak imaging. (a) Schematic of a streak camera; (b) schematic of a HISAC system; (c) formation process of individual frames from the streak data; (d) sequence showing shock wave breakthrough. The time interval between frames is 336 ps. The laser focus is outlined by the white dashed circle. (b)–(d) are adapted from [88].
Fig. 11. CUP for single-shot real-time ultrafast optical imaging based on spatial encoding and 2D streaking followed by compressed-sensing reconstruction. (a) Schematic of the lossless-encoding CUP system [98]; DMD, digital micromirror device; upper inset, illustration of complementary spatial encoding; lower inset, close-up of the configuration before the streak camera’s entrance port (black box). (b) CUP of a propagating photonic Mach cone [97]; (c) CUP of dynamic volumetric imaging [105]; (d) CUP of spectrally resolved pulse-laser-pumped fluorescence emission [96]. Scale bar: 10 mm.
Fig. 12. MA-CS CMOS sensor based on temporally encoding each of the image replicas (adapted from [106]). (a) System schematic. PD, photodiode; FD, floating diffusion; SD, storage diode. (b) Temporally resolved frame of laser-pulse-induced plasma emission. The interframe time interval is 5 ns.
