
Fourier holographic endoscopy for imaging continuously moving objects


Abstract

Coherent fiber bundles are widely used for endoscopy, but conventional approaches require distal optics to form an object image and acquire pixelated information owing to the geometry of the fiber cores. Recently, holographic recording of a reflection matrix has enabled a bare fiber bundle to perform pixelation-free microscopic imaging and to operate in a flexible mode, because the random core-to-core phase retardations caused by fiber bending and twisting can be removed in situ from the recorded matrix. Despite this flexibility, the method is not suitable for a moving object because the fiber probe must remain stationary during the matrix recording to avoid altering the phase retardations. Here, we acquire a reflection matrix of a Fourier holographic endoscope equipped with a fiber bundle and explore the effect of fiber bending on the recorded matrix. We develop a method that resolves the perturbation of the reflection matrix caused by a continuously moving fiber bundle and removes the resulting motion artifact. Thus, we demonstrate high-resolution endoscopic imaging through a fiber bundle, even when the fiber probe changes its shape along with the moving object. The proposed method can be used for minimally invasive monitoring of behaving animals.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

A fiber bundle is a light-guiding medium that can deliver light energy from a source position to a desired destination. Because the light is confined and guided through flexible pathways during delivery, fiber bundles are often used for sample illumination through a curved passage, where free-space illumination is not available. A coherent fiber bundle, in which the individual core fibers have an ordered arrangement, can deliver light energy as well as carry image information. Because each core corresponds to a single image pixel, an image is formed at the output facet. Thus, a fiber bundle is used as a flexible image guide in industrial, military, and medical applications. In particular, it is employed in medical endoscopy, where fiber scopes are commonly used for various diagnoses [1–4]. A fiber bundle is also used in a wide range of imaging applications such as fluorescence imaging [5–7], confocal imaging [8–11], two-photon imaging [12–15], and optical coherence tomography [16–18].

Despite this handiness, the fiber bundle has not been utilized for high-resolution imaging due to the pixelation artifact. Because each core serves as a single pixel, the final image is inevitably pixelated by the discrete nature of the core distribution. Object features finer than the core spacing are truncated, and their information cannot reach the output facet. To date, several approaches have been introduced to remove or mitigate the pixelation artifact based on image filtering and smoothing [19–21], and image compounding and interpolation [22–24]. These methods could improve the quality of the final image by suppressing the pixelation artifact of the fiber bundles. However, the basic imaging principle, the core-to-pixel correspondence, remains untouched, and as a result, the resolution is limited by the core spacing.

In the past decades, adaptive optics has enabled fiber bundles to transmit object images without the pixelation artifact, thereby achieving a higher resolution than the conventional limit determined by the core distribution. By controlling the wavefront of the input light, a focused spot is generated and shifted at the output facet of the fiber bundle, and fluorescence images are acquired while scanning the spot [25–29]. As measuring a transmission matrix (TM) enables optical fibers to deliver high-resolution images [30–33], the TM strategy has been employed to allow fiber bundles to carry clean object images and to remove the image pixelation [34–37]. These approaches have been successful in demonstrating high-resolution and pixelation-free imaging of biological and non-biological specimens. However, the need for prior calibration of the fiber bundle imposes a stringent requirement on these methods: the fiber bundle must remain immobile between the calibration and image acquisition procedures. Bending of the fiber bundle after the calibration disrupts the image retrieval and causes a loss of imaging capability. To restore imaging, re-calibration is required for the new conformation. Thus, most techniques operate only as rigid scopes, which significantly restricts the use of a fiber bundle as a flexible imaging probe in real-life applications.

A recent approach—recording a reflection matrix (RM) using a fiber bundle—does not require prior calibration. The method corrects in situ both the forward and backward modal dispersions of the fiber bundle directly from the recorded matrix. As a result, it is possible to obtain the object information in any bundle conformation, such that fully flexible endoscopy for imaging unstained biological tissues is realized [38]. Although this allows flexible operation, the object needs to remain stationary during the matrix recording. If the shape of the fiber bundle changes during the matrix acquisition, the imaging quality is compromised because the matrix experiences a time-dependent modal dispersion. In another approach, the modal dispersion of a fiber bundle was measured by employing a partial reflector at the distal end of the fiber bundle. By using the partially reflected light as a reference beam, the effect of the modal dispersion on object images was quickly evaluated, and thus real-time imaging was demonstrated [39]. Despite its lensless configuration, the requirement of a partial reflector at the distal tip of the fiber bundle increases the system complexity and decreases the collection efficiency of the light from target objects. Thus, imaging a continuously traveling object, such as a moving animal, through a fiber bundle in an optics-free configuration remains unexplored.

In this study, we recorded an RM using a Fourier holographic endoscope equipped with a fiber bundle and systematically explored the effect of probe bending on the RM. We introduced stepwise translations to the distal tip of the fiber bundle and acquired multiple RMs. By comparing the recorded RMs, we observed that a small change in the fiber bundle conformation caused a linear phase ramp on the object's RM. Based on this finding, we developed an image reconstruction algorithm that can retrieve clean object images through a continuously changing fiber bundle. In the reconstruction procedure, the RM taken from a moving object was divided into multiple submatrices, and the relative phase difference between neighboring submatrices was identified and compensated. By eliminating the motion artifact in the RM, we demonstrated high-resolution Fourier holographic endoscopy of a continuously moving object through a bare fiber bundle.

2. Experimental setup

The experimental setup is illustrated in Fig. 1, and a detailed schematic of the illumination and collection geometry is presented in Fig. 2. Light from a diode laser (Finesse Pure, Laser Quantum) was used as the source. The wavelength (λ) and coherence length of the source were 532 nm and 6 mm, respectively. The laser beam was split into sample and reference beams by a beam splitter (BS1). After being reflected off a two-axis galvanometer mirror (GM), the sample beam was delivered by 4-f relay lenses (L1 and L2) via a beam splitter (BS2) and then focused on the input plane (IP, see Fig. 2) by an objective lens (OL, 10×, 0.25 NA, Olympus). One end (the proximal end) of a 1-m-long fiber bundle (FIGH-10-350S, Fujikura) was located on the IP. The plane of the GM was conjugate to the back focal plane of the OL, enabling the GM to control the position of the focused beam at the IP. The size of the focal spot on the IP was smaller than the core diameter; thus, the sample beam was focused on one core at a time. The imaging sample was placed on a computer-controlled translation stage (DDSM100/M, Thorlabs), which can travel 100 mm in one direction, and the opposite end (the distal end) of the fiber bundle was mounted on the same stage facing the sample at a standoff distance d. Typically, d = 600 µm was used for imaging. Because the sample and the fiber bundle were fixed to the same stage, stage movement changed the shape of the fiber while the imaging geometry remained the same. This configuration represents the case of imaging a moving animal with the probing fiber attached to its head. The sample beam transmitted through the illumination core traveled to the distal end, which was placed at the output plane (OP, see Fig. 2). The beam then spread while propagating in free space over the distance d and was reflected off the object at the sample plane (SP, see Fig. 2). The scattered beam propagated the distance d in the reverse direction and was collected by the fiber bundle via the multiple cores around the illumination core. The returning wave reaching the proximal end of the fiber at the IP was captured by the OL and delivered to a camera (pco.edge 4.2, PCO) via lens L3 and a beam splitter (BS3). Lens L3 formed a 4-f configuration together with the OL; therefore, the waveform at the IP was recorded by the camera. The reference beam sampled by BS1 was reshaped by a beam expander composed of lenses L4 and L5 and combined with the sample beam at BS3, forming an off-axis interferogram at the camera.


Fig. 1. Experimental setup. The illumination and reflected beams are shown in green and yellow, respectively, for clear distinction. L1-5: lenses, BS1-3: beam splitters, GM: galvanometer mirror, OL: objective lens, TS: translation stage.



Fig. 2. Image formation principle. (a) Illumination pathway. (b) Reflection pathway. IP: input plane, OP: output plane, SP: sample plane, $({{u_i},{v_i}} )$: coordinate of illumination core, $({{u_r},{v_r}} )$: coordinate of detection core, $({x,y} )$: coordinate of an object, $\phi _i^b$: phase retardation by the illumination core, $\phi _r^b$: phase retardation by the detection core, ${E_i}$: illumination wave on the object, ${E_r}$: reflected wave from the object, d: standoff distance.


Because the sample beam was delivered through a single core, the strong back reflection was concentrated only at the illumination core. In contrast, owing to the standoff distance between the distal end and the sample, the reflected wave containing the object information was spread over multiple cores. Thus, the direct reflection can be eliminated simply by removing the signal at the illumination core during image processing. To reduce the artifact caused by this signal removal and to improve the detection signal-to-noise ratio (SNR), multiple interferograms were acquired for different illumination cores. For a stationary sample, typically 200–3000 images were captured for different illumination cores. These images were recorded at 50–100 fps, such that the imaging speed was approximately 0.1 Hz for a static object. With a high-speed camera and sparse sampling, the acquisition speed could easily be increased up to 20 fps [38].

3. Image formation principle

The Fourier holographic endoscope has a unique image formation principle, which is distinct from that of normal imaging systems. Since the theoretical framework is well described in our previous work [38], here we present it briefly. The schematic for object illumination and image detection is depicted in Fig. 2. For illumination, the sample wave is focused through a core located at point $({{u_i},{v_i}} )$ of the proximal end in the IP, where $({{u_i},{v_i}} )$ is the coordinate for the illumination wave in that plane. After transmission, the wave leaves the fiber bundle as a point source from the same core at $({{u_i},{v_i}} )$ of the distal end in the OP. During the propagation through the core from the IP to the OP, the wave undergoes a phase retardation $\phi _i^b({{u_i},{v_i}} )$. After exiting, the wave propagates a distance d in free space, as a parabolic wave, and meets the sample located at the SP with a waveform of ${E_i}({x,y} )$, where $({x,y} )$ is the coordinate in the SP. After being reflected off the sample with an object function $O({x,y} )$, a waveform of ${E_r}({x,y} )$ is generated. The wave propagates a distance d in free space in the opposite direction and reaches a detection core located at $({{u_r},{v_r}} )$ in the distal end of the fiber bundle in the OP. The wave travels back through the same detection core and arrives at $({{u_r},{v_r}} )$ of the proximal end in the IP, where $({{u_r},{v_r}} )$ is the coordinate of the reflected wave. While returning, the wave acquires an additional phase retardation of $\phi _r^b({{u_r},{v_r}} )$. After considering the contributions of all the object points, the waveform at $({{u_r},{v_r}} )$ in the IP, which is the same as the wave recorded at the camera, is described by

$$E_{camera}(u_r,v_r;u_i,v_i) = -\frac{e^{2ikd}}{\lambda^2 d^2}\, e^{i\phi_r(u_r,v_r)}\, \tilde{O}_M\!\left(\frac{k}{d}(u_r+u_i),\frac{k}{d}(v_r+v_i)\right) e^{i\phi_i(u_i,v_i)},$$
where $\lambda$ is the wavelength of the light and $k = 2\pi/\lambda$ is the wavenumber; $\tilde{O}_M$ is the Fourier transform of the modified object function $O_M(x,y) = O(x,y)\exp\left\{\frac{ik}{d}(x^2+y^2)\right\}$; and $\phi_i$ and $\phi_r$ are the phase retardations experienced by the light on its way in and out, expressed as $\phi_i(u_i,v_i) = \phi_i^b(u_i,v_i) + \frac{k}{2d}(u_i^2+v_i^2)$ and $\phi_r(u_r,v_r) = \phi_r^b(u_r,v_r) + \frac{k}{2d}(u_r^2+v_r^2)$, respectively. Further details about the image formation process described by Eq. (1) can be found in Ref. [38].
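For concreteness, the following sketch evaluates the forward model of Eq. (1) on a discretized grid; the grid size, sampling, and stand-in object are illustrative assumptions, and only λ and d follow the text.

```python
import numpy as np

# Numerical sketch of the forward model in Eq. (1); parameters other than
# lambda and d are illustrative assumptions.
lam, d = 532e-9, 600e-6          # wavelength and standoff distance [m]
k = 2 * np.pi / lam              # wavenumber [rad/m]
N, pitch = 128, 0.5e-6           # object grid size and sampling [m]

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
O = np.random.rand(N, N)                         # stand-in object
O_M = O * np.exp(1j * k / d * (X**2 + Y**2))     # modified object function
O_M_spec = np.fft.fftshift(np.fft.fft2(O_M))     # \tilde{O}_M

def E_camera(ur, vr, ui, vi, phi_r_b=0.0, phi_i_b=0.0):
    """Field at detection core (ur, vr) for illumination core (ui, vi), Eq. (1).
    phi_r_b, phi_i_b: core phase retardations of the bundle [rad]."""
    dq = 2 * np.pi / (N * pitch)                      # spectral sampling [rad/m]
    qu = int(round(k / d * (ur + ui) / dq)) + N // 2  # nearest spectrum sample
    qv = int(round(k / d * (vr + vi) / dq)) + N // 2
    phi_i = phi_i_b + k / (2 * d) * (ui**2 + vi**2)
    phi_r = phi_r_b + k / (2 * d) * (ur**2 + vr**2)
    return -np.exp(2j * k * d) / (lam**2 * d**2) * \
        np.exp(1j * phi_r) * O_M_spec[qv, qu] * np.exp(1j * phi_i)
```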

4. Image reconstruction

The reflected wave interferes with the reference beam, and the resulting interferogram is recorded by the camera. Multiple interferograms are recorded with different illumination cores (Fig. 3(a)). The strong point-like signals in the interferograms are the direct reflections from the illumination cores. The measured interferograms are processed into complex field images with phase and amplitude information (Fig. 3(b)). During this process, the points of strong reflection are easily isolated, as shown in Fig. 3(c), and removed from the images. Owing to the standoff distance d between the sample and the distal end of the fiber bundle, the object information acquired by the OL at the IP is not an image in the spatial domain; instead, as shown in Eq. (1), it is close to the object's Fourier spectrum multiplied by several phase terms. As a result, the object features do not appear directly in the complex field images. Moreover, even after an inverse Fourier transformation, the object features remain invisible owing to the core-dependent phase retardations of the fiber bundle.
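The demodulation and direct-reflection removal steps can be sketched as follows. This is standard off-axis Fourier filtering; the function names, peak-search heuristic, and parameters are our illustrative assumptions, not the authors' code.

```python
import numpy as np

def demodulate_off_axis(interferogram, crop_radius):
    """Off-axis holography demodulation: isolate one interference sideband
    in the Fourier domain and re-center it (sketch; assumes the sideband
    sits well away from DC and from the array edges)."""
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    ny, nx = F.shape
    cy, cx = ny // 2, nx // 2
    weight = np.abs(F).copy()
    weight[cy - crop_radius:cy + crop_radius,
           cx - crop_radius:cx + crop_radius] = 0   # suppress the DC term
    py, px = np.unravel_index(np.argmax(weight), F.shape)
    side = F[py - crop_radius:py + crop_radius,
             px - crop_radius:px + crop_radius]     # crop the sideband
    return np.fft.ifft2(np.fft.ifftshift(side))     # complex field image

def remove_direct_reflection(field, core_yx, radius_px):
    """Zero out the strong back reflection at the illumination core."""
    ny, nx = field.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    out = field.copy()
    out[(yy - core_yx[0])**2 + (xx - core_yx[1])**2 <= radius_px**2] = 0
    return out
```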


Fig. 3. Procedures for data acquisition and image reconstruction. (a) Interference images recorded by the camera. (b) Complex field images of the target signal. The color bar signifies the normalized intensity. (c) Points of strong back reflection from the illumination cores, identified from (a). (d) RM constructed using (b) with the knowledge of (c). The column and row indices correspond to $({{u_i},{v_i}} )$ and $({{u_r},{v_r}} )$, respectively. (e) A diagonal matrix whose diagonal elements are the output phase retardations ${\phi _r}({{u_r},{v_r}} )$ separated from the RM by the CLASS algorithm. (f) Corrected reflection matrix. (g) Same as (e), but containing the input phase retardation ${\phi _i}({{u_i},{v_i}} )$. (h) Two-dimensional output phase retardation map formed from the diagonal elements in (e). The color bar signifies the phase in radians. (i) Reconstructed object image from (f). The color bar signifies the normalized amplitude. (j) Two-dimensional input phase retardation map formed from the diagonal elements in (g). Scale bars in (a) to (c): 50 µm. Scale bars in (h) and (j): 0.1${k_0}$ with ${k_0} = 2\pi /\lambda $. Scale bar in (i): 30 µm.


To treat this complexity systematically, we construct an object RM in which each two-dimensional complex field image is converted into a column vector and assigned to a column of the RM (Fig. 3(d)). Then, the core-dependent phase retardations and the modified object spectrum are identified using the modified closed-loop accumulation of single scattering (CLASS) algorithm [40–42]. In brief, the CLASS algorithm calculates and compares the correlations among the columns of the RM to determine the input phase retardation ${\phi _i}$ that occurs when the light travels into the fiber bundle. Next, it performs similar calculations for the rows of the RM and obtains the output phase retardation ${\phi _r}$ that occurs when the light returns from the fiber bundle. These procedures are repeated alternately and iteratively until the phase retardations for both the input and output paths converge within a certain tolerance. All these procedures essentially decompose the original RM into three matrices: a diagonal matrix whose elements are given by ${e^{i{\phi _r}({{u_r},{v_r}} )}}$ (Fig. 3(e)), a matrix containing the modified object spectrum function $- \frac{{{e^{2ikd}}}}{{{\lambda ^2}{d^2}}}{\tilde{O}_M}\left( {\frac{k}{d}({{u_r} + {u_i}} ),\frac{k}{d}({{v_r} + {v_i}} )} \right)$ (Fig. 3(f)), and a diagonal matrix whose elements are expressed as ${e^{i{\phi _i}({{u_i},{v_i}} )}}$ (Fig. 3(g)).
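To make the alternating procedure concrete, the following is a minimal one-dimensional sketch of a CLASS-type decomposition under the separable model of Eq. (1); it illustrates the principle only and is not the authors' implementation.

```python
import numpy as np

def class_decompose(R, n_iter=20):
    """Minimal 1-D sketch of a CLASS-type decomposition. Assumed model:
        R[r, i] = exp(1j*phi_r[r]) * O[r + i] * exp(1j*phi_i[i]),
    i.e., elements sharing the index s = r + i sample the same object
    spectrum value, mirroring Eq. (1)."""
    n_r, n_i = R.shape
    phi_r = np.zeros(n_r)
    phi_i = np.zeros(n_i)
    s_idx = np.arange(n_r)[:, None] + np.arange(n_i)[None, :]
    for _ in range(n_iter):
        Rc = R * np.exp(-1j * (phi_r[:, None] + phi_i[None, :]))
        # Consensus spectrum: accumulate all elements with equal s = r + i
        # (the "accumulation of single scattering" step).
        ref = np.zeros(n_r + n_i - 1, dtype=complex)
        np.add.at(ref, s_idx, Rc)
        # Align every element with the consensus; the angle of the summed
        # misfit gives the residual input/output phases.
        misfit = Rc * np.conj(ref[s_idx])
        phi_r += np.angle(misfit.sum(axis=1))
        phi_i += np.angle(misfit.sum(axis=0))
    Rc = R * np.exp(-1j * (phi_r[:, None] + phi_i[None, :]))
    return phi_r, phi_i, Rc  # phases recovered up to a global offset and tilt
```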

Figures 3(h) and 3(j) show the output phase retardation ${\phi _r}({{u_r},{v_r}} )$ and the input phase retardation ${\phi _i}({{u_i},{v_i}} )$ obtained in Figs. 3(e) and 3(g), respectively. We obtained an object image (Fig. 3(i)) from the corrected matrix in Fig. 3(f). Since each column in Fig. 3(f) contains the modified object spectrum ${\tilde{O}_M}$ with a spectral shift of $({ - {u_i}, - {v_i}} )$, we added all the columns after compensating for their corresponding spectral shifts. Following this procedure, we obtained the synthesized object spectrum ${\tilde{O}_M}$. After taking its inverse Fourier transform, we took the absolute square to obtain the object image: ${|{{O_M}({x,y} )} |^2} = {|{O({x,y} )} |^2}$. The standoff distance d was determined using the quadratic phase $\exp \left( {i\frac{k}{d}({{x^2} + {y^2}} )} \right)$ in ${O_M}({x,y} )$.
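Continuing the one-dimensional sketch above, the synthesis step can be written as follows (illustrative only; `Rc` is the corrected matrix returned by the hypothetical `class_decompose`).

```python
import numpy as np

def synthesize_image(Rc):
    """Sum the columns of the corrected matrix after undoing their spectral
    shifts, then inverse-Fourier-transform (1-D sketch)."""
    n_r, n_i = Rc.shape
    s_idx = np.arange(n_r)[:, None] + np.arange(n_i)[None, :]
    spectrum = np.zeros(n_r + n_i - 1, dtype=complex)
    counts = np.zeros(n_r + n_i - 1)
    np.add.at(spectrum, s_idx, Rc)        # accumulate the shifted columns
    np.add.at(counts, s_idx, 1.0)
    spectrum /= np.maximum(counts, 1)     # average overlapping samples
    O_M = np.fft.ifft(np.fft.ifftshift(spectrum))
    return np.abs(O_M) ** 2               # |O_M|^2 = |O|^2
```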

5. Effect of the fiber bundle motion on the reflection matrix

Before exploring the continuous motion of the fiber bundle, we investigated the effect of a fiber shape change on the RM when the fiber bundle moves in a stepwise manner. A United States Air Force (USAF) resolution target was used as the test sample (#58-198, Edmund). The distal end of the fiber bundle was fixed at the sample holder, and the entire cage was placed on a translation stage (Fig. 4(a)). We moved the translation stage from 0 mm to 30 mm, and the RMs were measured at 5 mm intervals. At each position, 600 images were recorded to construct an RM. The amplitude images of the USAF target and the output phase retardations obtained after the CLASS reconstruction are shown in Figs. 4(b) and 4(c), respectively. At first glance, the output phase retardations show almost random distributions without any correlation among them, because they were recorded at different fiber conformations. To identify the relative changes, we subtracted consecutive phase maps from each other. Interestingly, well-defined linear phase ramps were observed between neighboring phase retardations, as shown in Fig. 4(d). This suggests that the output phase retardation can be approximately represented as a function of the fiber bundle position D: $\phi _r^b({{u_r},{v_r};D + \mathrm{\Delta }D} )\approx \phi _r^b({{u_r},{v_r};D} )+ {{\boldsymbol a}_m}(D )\cdot ({{u_r},{v_r}} )$. Here, ${{\boldsymbol a}_m}(D )$ is the slope of the phase ramp between the output phase retardations at $D + \mathrm{\Delta }D$ and D. As elucidated in Section 6, this motion-dependent phase ramp is responsible for the lateral shift of the reconstructed images in Fig. 4(b), even though the sample and the distal end were fixed with respect to each other. The direction and amount of the shifts differ even though the fiber bundle moves in one direction with the same distance interval. Figure 4(e) shows the slope of the phase ramp, ${{\boldsymbol a}_m}(D )$, in units of ${k_0} = 2\pi /\lambda $.
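The slope of such a ramp can be estimated directly from two phase maps. The sketch below exploits the fact that a pure ramp $e^{i{\boldsymbol a}\cdot{\boldsymbol w}}$ Fourier-transforms to a single displaced peak; the function name and peak-picking approach are illustrative assumptions (a least-squares plane fit to the unwrapped difference would serve equally well).

```python
import numpy as np

def ramp_slope(phi_a, phi_b):
    """Estimate the linear phase-ramp slope between two phase maps.
    Works on the complex phasor of the wrapped difference; resolution is
    limited to one FFT bin (refine around the peak for sub-bin accuracy).
    Returned slopes are in rad/pixel; convert with the physical pitch."""
    diff = np.exp(1j * (phi_b - phi_a))        # wrapped difference as phasor
    ny, nx = diff.shape
    F = np.fft.fftshift(np.fft.fft2(diff))
    py, px = np.unravel_index(np.argmax(np.abs(F)), F.shape)
    a_v = 2 * np.pi * (py - ny // 2) / ny      # slope along v [rad/pixel]
    a_u = 2 * np.pi * (px - nx // 2) / nx      # slope along u [rad/pixel]
    return a_u, a_v
```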


Fig. 4. (a) Experimental schematic for the discrete fiber bundle shift. The RM was measured at every 5 mm shift of the fiber bundle. (b) Reconstructed target images at different fiber bundle positions. The color bar signifies the normalized amplitude. (c) Output phase retardations at the respective fiber bundle positions in (b). The color bar signifies the phase in radians. (d) Phase differences between neighboring output phase retardations. (e) Slope of the phase ramp in (d) as a function of the position D of the fiber bundle.


With small bending, the deformation of the fiber bundle causes optical path length differences among the cores that tilt the output wavefront. The formation of such a phase ramp on the output field of a fiber bundle has been predicted and observed previously, with the tilt of the output wavefront linearly proportional to the bending of the fiber bundle [43]. In that work, the phase ramp appeared along a single direction because the bending angle of the fiber bundle was systematically controlled. In contrast, in our case, only the end facets were fixed and the body of the fiber bundle remained free while the object moved in a single direction. Thus, the deformation of the fiber bundle may also include unintended twisting in addition to bending; this configuration mimics a more realistic situation. The variation in the direction of the phase ramp in Fig. 4(e) can be attributed to this unintended twisting of the fiber bundle.

6. Image reconstruction for the continuously moving fiber bundle probe

While observing the behavior of an animal, the endoscope probe is often fixed to the living specimen so that the same site can be interrogated for a long duration. The probe and target object are then fixed relative to each other, while the probe itself is bent and twisted by the motion. To mimic this situation, we used the same configuration as in Fig. 4(a), wherein the fiber bundle probe is fixed to the target sample. As a reference, we first used a stationary sample, addressed N = 2500 illumination cores to scan $({{u_i},{v_i}} )$, and recorded an RM ${E_{camera}}({{u_r},{v_r};{u_i},{v_i}} )$. After the image reconstruction, a clean object image was obtained, as shown in Fig. 5(d). Next, we allowed the sample to move continuously over a distance D = 5 cm (Fig. 5(b)) and measured a similar RM. Since the bending configuration changed continuously during the RM acquisition, the backscattered waves captured by the fiber bundle experienced a different phase retardation $\phi _r^b({{u_r},{v_r};{u_i},{v_i}} )$ for each $({{u_i},{v_i}} )$.


Fig. 5. Endoscopic imaging of a continuously moving object. (a) Experimental configuration for the static object. (b) The fiber bundle was continuously bent while the object kept moving over a distance D = 5 cm. (c) The RM was divided into 50 submatrices (dashed rectangular boxes). The column and row indices of the RM are denoted by ${{\boldsymbol w}_i}$ and ${{\boldsymbol w}_o}$, respectively. Motion correction was applied to each submatrix. (d) A reconstructed endoscopic image obtained when the target object is stationary at the initial position. (e) An image reconstructed from the dataset acquired while the target object is moving. Significant blurring occurs along the direction of the motion. (f) An image reconstructed from the same dataset as in (e), but after application of the motion correction algorithm. The motion blur is almost completely resolved, and sharp object features are observed. The image quality is almost comparable to that of the stationary case in (d). A slight degradation of the image resolution remains due to the image blurring that occurs within each submatrix.


To simplify the notation, let us define ${{\boldsymbol w}_i} = ({{u_i},{v_i}} )$ and ${{\boldsymbol w}_r} = ({{u_r},{v_r}} )$. Note that the effect of motion on the input phase retardation $\phi _i^b({{{\boldsymbol w}_i}} )$ is a simple addition of an overall phase shift to ${E_{camera}}({{{\boldsymbol w}_r};{{\boldsymbol w}_i}} )$, because one core is illuminated at a time. On the contrary, $\phi _r^b({{{\boldsymbol w}_r};{{\boldsymbol w}_i}} )$ can be a complex function of ${{\boldsymbol w}_i}$ when abrupt bending and twisting occur. However, we observed that a simple phase ramp is added across the OP of the bundle surface when the sample undergoes a translational motion. In other words, the motion-dependent phase retardation can be expressed as $\phi _r^b({{{\boldsymbol w}_r};{{\boldsymbol w}_i}} )\approx \phi _r^b({{{\boldsymbol w}_r};{{\boldsymbol w}_i} = 0} )+ {{\boldsymbol a}_m}({{{\boldsymbol w}_i}} )\cdot {{\boldsymbol w}_r}$, where ${{\boldsymbol a}_m}({{{\boldsymbol w}_i}} )$ is the slope of the phase ramp determined by the direction and magnitude of the bending accumulated up to illumination ${{\boldsymbol w}_i}$. By inserting this $\phi _r^b({{{\boldsymbol w}_r};{{\boldsymbol w}_i}} )$ into Eq. (1), the electric field measured at the camera while the bending configuration changes during the scanning of ${{\boldsymbol w}_i}$ can be written as

$$E_{camera}({\boldsymbol w}_r;{\boldsymbol w}_i) = -\frac{e^{2ikd}}{\lambda^2 d^2}\exp[{i\phi_r({\boldsymbol w}_r;{\boldsymbol w}_i=0)}]\,\exp[{i{\boldsymbol a}_m({\boldsymbol w}_i)\cdot {\boldsymbol w}_r}]\,\tilde{O}_M\!\left(\frac{k}{d}({\boldsymbol w}_r + {\boldsymbol w}_i)\right)\exp[{i\phi_i({\boldsymbol w}_i)}].$$

Comparing Eq. (2) with Eq. (1), ${\tilde{O}_M}\left( {\frac{k}{d}({{{\boldsymbol w}_r} + {{\boldsymbol w}_i}} )} \right)$ is effectively modified to $\exp [{i{{\boldsymbol a}_m}({{{\boldsymbol w}_i}} )\cdot {{\boldsymbol w}_r}} ]{\tilde{O}_M}\left( {\frac{k}{d}({{{\boldsymbol w}_r} + {{\boldsymbol w}_i}} )} \right)$. Because a phase ramp is added to the object spectrum, the motion-induced ramp translates the reconstructed object image by $- \frac{d}{k}{{\boldsymbol a}_m}({{{\boldsymbol w}_i}} )$. Therefore, applying the CLASS algorithm to the entire RM acquired in the configuration of Fig. 5(b) blurs the image, as shown in Fig. 5(e). We compared the target image captured when the object was stationary at the initial position (Fig. 5(d)) with that obtained when the sample was at the final position after traveling 5 cm, and found that the object image was shifted by approximately 60 µm.
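The translation predicted above follows from the Fourier shift theorem and can be checked numerically; the sketch below uses illustrative values (only λ and d follow the text).

```python
import numpy as np

# Numerical check: a ramp exp(i a_m w_r) on the object spectrum translates
# the image by -(d/k) a_m (1-D, illustrative parameters).
lam, d = 532e-9, 600e-6
k = 2 * np.pi / lam
N, pitch = 1024, 0.1e-6                      # grid and image sampling [m]
x = (np.arange(N) - N // 2) * pitch
img = np.exp(-(x / 2e-6) ** 2)               # Gaussian test object
q = 2 * np.pi * np.fft.fftfreq(N, d=pitch)   # spatial frequency [rad/m]
w_r = q * d / k                              # spectrum sample -> OP coordinate
a_m = 0.02 * k                               # example ramp slope [rad/m]
shifted = np.fft.ifft(np.fft.fft(img) * np.exp(1j * a_m * w_r)).real
peak_shift = x[np.argmax(shifted)] - x[np.argmax(img)]
print(peak_shift, -d / k * a_m)              # both ~ -12 µm for these values
```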

To account for the image blurring during the core scanning, we divided the Ntot = 2500 images constituting the RM into Ndiv = 50 subsets, as indicated by the yellow dashed boxes in Fig. 5(c). Within the j-th subset ($1 \le j \le 50$), composed of Ntot/Ndiv = 50 images, the image moves by approximately 60 µm/50 = 1.2 µm, which is comparable to the diffraction limit of our system. Therefore, we can assume that the sample image is almost stationary within each subset; the choice of Ndiv is discussed further in Section 7. We then applied the CLASS algorithm to the j-th subset and obtained an object image. By comparing the reconstructed images of successive subsets, we obtained ${{\boldsymbol a}_m}({{{\boldsymbol w}_i}} )$ and compensated for it in each subset. After the subset corrections, we applied the CLASS algorithm to the entire matrix and, if necessary, repeated this process until the algorithm converged. Figure 5(f) shows the final reconstruction with most of the image blurring resolved. The image is almost comparable to that recorded for the stationary sample (Fig. 5(d)).
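The overall procedure can be summarized by the following sketch. The callables stand in for the CLASS reconstruction, ramp estimation, and ramp compensation steps described above; they are placeholders of this sketch, not an actual implementation.

```python
import numpy as np

def motion_corrected_reconstruction(fields, n_div, run_class, estimate_ramp,
                                    apply_ramp, n_rounds=1):
    """Sketch of the submatrix motion-correction loop.
    fields: list of N_tot complex field images in acquisition order.
    run_class, estimate_ramp, apply_ramp: hypothetical callables for the
    steps described in the text."""
    fields = list(fields)
    for _ in range(n_rounds):                      # repeat until converged
        subsets = np.array_split(np.arange(len(fields)), n_div)
        # 1. Reconstruct each subset independently (object ~ static within it).
        sub_imgs = [run_class([fields[i] for i in idx]) for idx in subsets]
        # 2. Accumulate the relative ramp slope a_m between successive subsets.
        ramps = [np.zeros(2)]
        for a, b in zip(sub_imgs[:-1], sub_imgs[1:]):
            ramps.append(ramps[-1] + estimate_ramp(a, b))
        # 3. Compensate the accumulated ramp in every subset.
        fields = [apply_ramp(fields[i], a_m)
                  for idx, a_m in zip(subsets, ramps) for i in idx]
    # Final CLASS pass on the whole corrected dataset.
    return run_class(fields)
```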

To assess the reliability of our method, we compared the reconstruction quality of the images. In each image, we calculated the edge response functions along the x and y directions (as denoted in Fig. 5) and measured the resolutions. In the stationary case (Fig. 5(d)), the resolution is 1.2 µm in both directions, which is the system value determined by the experimental parameters. For the moving fiber probe (Fig. 5(e)), the measured resolutions are 12.4 and 4.1 µm along the x and y directions, respectively. Because the fiber traverses along the x-axis, the major degradation is found in that direction. This degradation can be attributed to the image shift generated by the phase ramps that appear while recording the RM. After the motion correction (Fig. 5(f)), the resolution becomes 1.8 µm in both directions, indicating a decent recovery of the initial resolution. A slight degradation of the image resolution is observed because of the image blurring within each subset.
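For reference, a common way to extract a resolution figure from an edge response is the 10%–90% rise distance; the sketch below assumes this criterion (the text does not specify the exact convention used).

```python
import numpy as np

def edge_resolution(profile, pitch):
    """Resolution from an edge response: 10%-90% rise distance of the
    normalized edge profile. Assumes a clean, monotonic edge; pitch is
    the physical sampling of the profile [m]."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    if p[0] > p[-1]:
        p = p[::-1]                                # force a rising edge
    x = np.arange(len(p)) * pitch
    return np.interp(0.9, p, x) - np.interp(0.1, p, x)
```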

Thus, we successfully recovered the object image when the sample moved continuously by a few centimeters during the recording of a single RM; that is, we performed in situ correction of the probe bending for a continuously moving object. This algorithm makes our method more robust against sample perturbations and adaptable to more realistic applications.

7. Discussion and conclusion

In the experiments, we fixed the fiber bundle to the target object at a constant standoff distance. However, the amount of the image shift is expected to be proportional to the standoff distance; hence, the image reconstruction parameters, in particular the number of sub-matrices, may be affected by the standoff distance. To address this point, we consider the diffraction limit $\Delta x$ and the image shift $\Delta l$. The numerical aperture (NA) of the fiber bundle can be expressed as $NA = \frac{n}{d}\left( {\frac{{{D_{eff}}}}{2}} \right)$, where d is the standoff distance, n is the refractive index of the medium between the fiber probe and the object, and ${D_{eff}}$ is the effective diameter of the fiber bundle. The diffraction limit of the endoscope is then $\Delta x = 1.22\frac{\lambda }{{2NA}} = 1.22\frac{{d\lambda }}{{n{D_{eff}}}}$. The image shift is proportional to the standoff distance as $\mathrm{\Delta }l = {\alpha _m}d$, where ${\alpha _m}$ is the maximum slope of the phase ramp. The condition that the image shift within a single sub-matrix be smaller than the diffraction limit leads to

$$\frac{\Delta l}{N_{div}} \le \Delta x,$$
where $N_{div}$ is the number of sub-matrices needed for robust image reconstruction. Rearranging Eq. (3) gives
$$N_{div} \ge \frac{n\,\alpha_m D_{eff}}{1.22\,\lambda}.$$

Because Eq. (4) is independent of the standoff distance, the condition for constructing the sub-matrices remains unaffected even if the standoff distance changes within the imaging range [38].
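As a worked example of Eq. (4): the effective bundle diameter and ramp slope below are our assumptions (neither value is quoted in the text), chosen to be consistent with the slopes in Fig. 4(e) and the FIGH-10-350S bundle.

```python
import math

def n_div_min(alpha_m, D_eff, n=1.0, lam=532e-9):
    """Minimum number of sub-matrices from Eq. (4). alpha_m is the maximum
    ramp slope expressed as a dimensionless tilt (slope in units of k0),
    so that the image shift is alpha_m * d."""
    return math.ceil(n * alpha_m * D_eff / (1.22 * lam))

# Assumed alpha_m ~ 0.05 and D_eff ~ 325 µm give N_div on the order of the
# 50 sub-matrices used in the experiments.
print(n_div_min(alpha_m=0.05, D_eff=325e-6))   # -> 26
```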

Once ${N_{div}}$ is estimated from Eq. (4), the number of images contained in a single sub-matrix, ${N_{sub}}$, is determined by ${N_{sub}} = {N_{tot}}/{N_{div}}$. In the experiments, we used ${N_{tot}}$ = 2500, and ${N_{div}}$ was estimated to be 50; thus, ${N_{sub}} \le $ 50. A smaller ${N_{sub}}$ reduces the blurring effect in the image, but too small an ${N_{sub}}$ causes inaccuracy in the determination of the phase ramp owing to the limited measurement sensitivity. For this reason, we used the largest ${N_{sub}}$ for maximum sensitivity without affecting the image resolution.

As a simple approximation, we consider the maximum slope of the phase ramp to be proportional to the travel distance of the object, ${\alpha _m} \approx \kappa vt$, where $\kappa$ is a proportionality constant (distinct from the wavenumber k), v is the speed of the object, and t is the traveling time. With this approximation, Eq. (4) can be rewritten as

$$N_{div} \gtrsim \frac{n\,\kappa v t\,D_{eff}}{1.22\,\lambda}.$$

From Eq. (5), more sub-matrices are required when the object moves a larger distance or moves faster. A larger ${N_{div}}$ means that each sub-matrix is composed of a smaller number of images. Since constructing a sub-matrix from too few images degrades the accuracy of the phase-ramp determination, we can acquire more images in the RM to maintain the reconstruction quality in both cases.

The total number of images in the RM is given by ${N_{tot}} = ft$, where f is the camera frame rate. Combining the relation ${N_{sub}} = {N_{tot}}/{N_{div}}$ with Eq. (5) leads to ${N_{sub}} \propto \frac{f}{v}$. Thus, a higher acquisition speed is required for a faster object. The speed of moving objects sets a limit in real animal applications [44,45]. For instance, a living mouse typically moves at 5 cm/s, and frequently faster. In the current experiments, we moved the object at a speed of 1 mm/s, limited by the 50 Hz acquisition rate of the camera. From this relation, we expect that a 5 kHz camera would allow imaging of an object moving at a speed of 10 cm/s with a similar reconstruction quality under our experimental conditions. Thus, a high-speed camera is essential for real applications. In addition, the implementation of an active rotation-tracking system would help prevent the endoscope probe from being damaged by severe twisting [46].
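The quoted frame-rate requirement follows directly from this scaling:

```python
# Scaling check using N_sub ∝ f / v: keeping N_sub fixed while the object
# speed increases requires the frame rate to increase by the same factor.
f1, v1 = 50.0, 1e-3   # demonstrated: 50 Hz camera, 1 mm/s object speed
v2 = 0.10             # target: 10 cm/s (a freely moving mouse)
print(f1 * v2 / v1)   # -> 5000.0 Hz, i.e., the 5 kHz camera quoted above
```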

We developed a method to reconstruct endoscopic images through a continuously moving fiber bundle. A fiber bundle was attached to a test object so that the field of view remained unchanged even when the object moved. The effect of a small change in the bundle conformation on the RM was then investigated. By comparing the matrices taken before and after stepwise movements, we observed that a small bending of the bundle added a linear phase ramp to the bundle-dependent phase retardation. Based on this finding, we developed an algorithm that can resolve the image degradation caused by a continuously moving fiber probe. We divided the RM of a moving object into multiple submatrices and compensated for the motion-induced linear phase ramps in the individual submatrices. This procedure suppressed the motion blurring in the reconstructed image. Consequently, the fine features of the object were clearly observed even when the fiber bundle was moving along with the object.

Recently, several methods have been developed for endoscopic imaging through a single multimode fiber or a fiber bundle without prior calibration [47–51]. Although these methods can reconstruct object images with arbitrary fiber conformations, their imaging principles are valid only when the imaging probes are stationary within the acquisition time. A traveling object introduces a continuous deformation of the fiber probe, which easily changes the imaging conditions and causes a loss of endoscopic vision. Unlike these previously reported methods, our method can account for the effect of the fiber shape change in the recorded RM and can thus recover the object information with high precision. Our method can potentially be applied to monitor the brain activity of freely moving animals by implanting a bare fiber bundle in their brains.

Funding

Institute for Basic Science (IBS-R023-D1); National Research Foundation of Korea (NRF-2021R1A2C2012069).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not available at this time but may be obtained from the authors upon reasonable request.

References

1. B. I. Hirschowitz, “Photography through the fiber gastroscope,” Am. J. Dig. Dis. 8(5), 389–395 (1963).

2. M. Sawashima and H. Hirose, “New Laryngoscopic Technique by Use of Fiber Optics,” J. Acoust. Soc. Am. 43(1), 168–169 (1968).

3. S. I. Abramson, A. A. Holmes, and C. A. Hagberg, “Awake Insertion of the Bonfils Retromolar Intubation Fiberscope™ in Five Patients with Anticipated Difficult Airways,” Anesth. Analg. 106(4), 1215–1217 (2008).

4. S.-Y. Thong and T. G.-L. Wong, “Clinical Uses of the Bonfils Retromolar Intubation Fiberscope: A Review,” Anesth. Analg. 115(4), 855–866 (2012).

5. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, “Fiber-optic fluorescence imaging,” Nat. Methods 2(12), 941–950 (2005).

6. B. A. Flusberg, A. Nimmerjahn, E. D. Cocker, E. A. Mukamel, R. P. J. Barretto, T. H. Ko, L. D. Burns, J. C. Jung, and M. J. Schnitzer, “High-speed, miniaturized fluorescence microscopy in freely moving mice,” Nat. Methods 5(11), 935–938 (2008).

7. M. Hughes, T. P. Chang, and G.-Z. Yang, “Fiber bundle endocytoscopy,” Biomed. Opt. Express 4(12), 2781–2794 (2013).

8. A. F. Gmitro and D. Aziz, “Confocal microscopy through a fiber-optic imaging bundle,” Opt. Lett. 18(8), 565–567 (1993).

9. C. Liang, M. R. Descour, K.-B. Sung, and R. Richards-Kortum, “Fiber confocal reflectance microscope (FCRM) for in-vivo imaging,” Opt. Express 9(13), 821–830 (2001).

10. S. Kung-Bin, L. Chen, M. Descour, T. Collier, M. Follen, and R. Richards-Kortum, “Fiber-optic confocal reflectance microscope with miniature objective for in vivo imaging of human tissues,” IEEE Trans. Biomed. Eng. 49(10), 1168–1172 (2002).

11. G. Oh, E. Chung, and S. H. Yun, “Optical fibers for high-resolution in vivo microendoscopic fluorescence imaging,” Opt. Fiber Technol. 19(6), 760–771 (2013).

12. W. Göbel, J. N. D. Kerr, A. Nimmerjahn, and F. Helmchen, “Miniaturized two-photon microscope based on a flexible coherent fiber bundle and a gradient-index lens objective,” Opt. Lett. 29(21), 2521–2523 (2004).

13. W. Zong, R. Wu, M. Li, Y. Hu, Y. Li, J. Li, H. Rong, H. Wu, Y. Xu, Y. Lu, H. Jia, M. Fan, Z. Zhou, Y. Zhang, A. Wang, L. Chen, and H. Cheng, “Fast high-resolution miniature two-photon microscopy for brain imaging in freely behaving mice,” Nat. Methods 14(7), 713–719 (2017).

14. B. N. Ozbay, G. L. Futia, M. Ma, V. M. Bright, J. T. Gopinath, E. G. Hughes, D. Restrepo, and E. A. Gibson, “Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial-scanning,” Sci. Rep. 8(1), 8108 (2018).

15. F. Lin, C. Zhang, Y. Zhao, B. Shen, R. Hu, L. Liu, and J. Qu, “In vivo two-photon fluorescence lifetime imaging microendoscopy based on fiber-bundle,” Opt. Lett. 47(9), 2137–2140 (2022).

16. T. Xie, D. Mukai, S. Guo, M. Brenner, and Z. Chen, “Fiber-optic-bundle-based optical coherence tomography,” Opt. Lett. 30(14), 1803–1805 (2005).

17. J. H. Han and J. U. Kang, “Effect of multimodal coupling in imaging micro-endoscopic fiber bundle on optical coherence tomography,” Appl. Phys. B 106(3), 635–643 (2012).

18. L. M. Wurster, L. Ginner, A. Kumar, M. Salas, A. Wartak, and R. Leitgeb, “Endoscopic optical coherence tomography with a flexible fiber bundle,” J. Biomed. Opt. 23(06), 1 (2018).

19. M. Dickens, M. Houlne, S. Mitra, and D. Bornhop, “Soft computing method for the removal of pixelation in microendoscopic images,” Proc. SPIE 3165 (1997).

20. J.-H. Han, J. Lee, and J. U. Kang, “Pixelation effect removal from fiber bundle probe based optical coherence tomography imaging,” Opt. Express 18(7), 7427–7439 (2010).

21. M. Dumripatanachod and W. Piyawattanametha, “A fast depixelation method of fiber bundle image for an embedded system,” in 2015 8th Biomedical Engineering International Conference (BMEiCON, 2015), 1–4.

22. M. Elter, S. Rupp, and C. Winter, “Physically Motivated Reconstruction of Fiberscopic Images,” in 18th International Conference on Pattern Recognition (ICPR'06, 2006), 599–602.

23. S. Rupp, C. Winter, and M. Elter, “Evaluation of spatial interpolation strategies for the removal of comb-structure in fiber-optic images,” in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2009), 3677–3680.

24. G. W. Cheon, J. Cha, and J. U. Kang, “Random transverse motion-induced spatial compounding for fiber bundle imaging,” Opt. Lett. 39(15), 4368–4371 (2014).

25. A. J. Thompson, C. Paterson, M. A. A. Neil, C. Dunsby, and P. M. W. French, “Adaptive phase compensation for ultracompact laser scanning endomicroscopy,” Opt. Lett. 36(9), 1707–1709 (2011).

26. E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Toward endoscopes with no distal optics: video-rate scanning microscopy through a fiber bundle,” Opt. Lett. 38(5), 609–611 (2013).

27. D. Kim, J. Moon, M. Kim, T. D. Yang, J. Kim, E. Chung, and W. Choi, “Toward a miniature endomicroscope: pixelation-free and diffraction-limited imaging through a fiber bundle,” Opt. Lett. 39(7), 1921–1924 (2014).

28. U. Weiss and O. Katz, “Two-photon lensless micro-endoscopy with in-situ wavefront correction,” Opt. Express 26(22), 28808–28817 (2018).

29. E. Scharf, J. Dremel, R. Kuschmierz, and J. Czarske, “Video-rate lensless endoscope with self-calibration using wavefront shaping,” Opt. Lett. 45(13), 3629–3632 (2020).

30. Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. J. Lee, and W. Choi, “Scanner-Free and Wide-Field Endoscopic Imaging by Using a Single Multimode Optical Fiber,” Phys. Rev. Lett. 109(20), 203901 (2012).

31. T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3(1), 1027 (2012).

32. S. Bianchi and R. Di Leonardo, “A multi-mode fiber probe for holographic micromanipulation and microscopy,” Lab Chip 12(3), 635–639 (2012).

33. S. Turtaev, I. T. Leite, T. Altwegg-Boussac, J. M. P. Pakan, N. L. Rochefort, and T. Čižmár, “High-fidelity multimode fibre-based endoscopy for deep brain in vivo imaging,” Light: Sci. Appl. 7(1), 92 (2018).

34. Y. Choi, C. Yoon, M. Kim, W. Choi, and W. Choi, “Optical Imaging With the Use of a Scattering Lens,” IEEE J. Select. Topics Quantum Electron. 20(2), 61–73 (2014).

35. M. Kim, W. Choi, Y. Choi, C. Yoon, and W. Choi, “Transmission matrix of a scattering medium and its applications in biophotonics,” Opt. Express 23(10), 12648–12668 (2015).

36. C. Yoon, M. Kang, J. H. Hong, T. D. Yang, J. Xing, H. Yoo, Y. Choi, and W. Choi, “Removal of back-reflection noise at ultrathin imaging probes by the single-core illumination and wide-field detection,” Sci. Rep. 7(1), 6524 (2017).

37. J. Shin, D. N. Tran, J. R. Stroud, S. Chin, T. D. Tran, and M. A. Foster, “A minimally invasive lens-free computational microendoscope,” Sci. Adv. 5(12), eaaw5595 (2019).

38. W. Choi, M. Kang, J. H. Hong, O. Katz, B. Lee, G. H. Kim, Y. Choi, and W. Choi, “Flexible-type ultrathin holographic endoscope for microscopic imaging of unstained biological tissues,” Nat. Commun. 13(1), 4469 (2022).

39. N. Badt and O. Katz, “Real-time holographic lensless micro-endoscopy through flexible fibers via fiber bundle distal holography,” Nat. Commun. 13(1), 6055 (2022).

40. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J.-S. Lee, Y.-S. Lim, Q. H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9(4), 253–258 (2015).

41. S. Kang, P. Kang, S. Jeong, Y. Kwon, T. D. Yang, J. H. Hong, M. Kim, K. D. Song, J. H. Park, J. H. Lee, M. J. Kim, K. H. Kim, and W. Choi, “High-resolution adaptive optical imaging within thick scattering media using closed-loop accumulation of single scattering,” Nat. Commun. 8(1), 2157 (2017).

42. M. Kim, Y. Jo, J. H. Hong, S. Kim, S. Yoon, K.-D. Song, S. Kang, B. Lee, G. H. Kim, H.-C. Park, and W. Choi, “Label-free neuroimaging in vivo using synchronous angular scanning microscopy with single-scattering accumulation algorithm,” Nat. Commun. 10(1), 3152 (2019).

43. R. Kuschmierz, E. Scharf, D. F. Ortegón-González, T. Glosemeyer, and J. W. Czarske, “Ultra-thin 3D lensless fiber endoscopy using diffractive optical elements and deep neural networks,” Light: Advanced Manufacturing 2(4), 1–424 (2021).

44. W. Zong, H. A. Obenhaus, E. R. Skytøen, H. Eneqvist, N. L. de Jong, R. Vale, M. R. Jorge, M.-B. Moser, and E. I. Moser, “Large-scale two-photon calcium imaging in freely moving mice,” Cell 185(7), 1240–1256.e30 (2022).

45. N. Accanto, F. G. C. Blot, A. Lorca-Cámara, V. Zampini, F. Bui, C. Tourain, N. Badt, O. Katz, and V. Emiliani, “A flexible two-photon fiberscope for fast activity imaging and precise optogenetic photostimulation of neurons in freely moving mice,” Neuron 111(2), 176–189.e6 (2023).

46. A. Li, H. Guan, H.-C. Park, Y. Yue, D. Chen, W. Liang, M.-J. Li, H. Lu, and X. Li, “Twist-free ultralight two-photon fiberscope enabling neuroimaging on freely rotating/walking mice,” Optica 8(6), 870–879 (2021).

47. S. Farahi, D. Ziegler, I. N. Papadopoulos, D. Psaltis, and C. Moser, “Dynamic bending compensation while focusing through a multimode fiber,” Opt. Express 21(19), 22504–22514 (2013).

48. M. Plöschner, T. Tyc, and T. Čižmár, “Seeing through chaos in multimode fibres,” Nat. Photonics 9(8), 529–535 (2015).

49. N. Stasio, C. Moser, and D. Psaltis, “Calibration-free imaging through a multicore fiber using speckle scanning microscopy,” Opt. Lett. 41(13), 3078–3081 (2016).

50. A. Orth, M. Ploschner, E. R. Wilson, I. S. Maksymov, and B. C. Gibson, “Optical fiber bundles: Ultra-slim light field imaging probes,” Sci. Adv. 5(4), eaav1555 (2019).

51. V. Tsvirkun, S. Sivankutty, K. Baudelle, R. Habert, G. Bouwmans, O. Vanvincq, E. R. Andresen, and H. Rigneault, “Flexible lensless endoscope with a conformationally invariant multi-core fiber,” Optica 6(9), 1185–1189 (2019).


