Optica Publishing Group

Improved depth resolution and depth-of-field in temporal integral imaging systems through non-uniform and curved time-lens array

Open Access

Abstract

Observing and studying the evolution of rare non-repetitive natural phenomena, such as optical rogue waves or dynamic chemical processes in living cells, is crucial for developing the science and technologies related to them. Temporal imaging systems are an indispensable technique for investigating these fast evolutions. However, just as conventional spatial imaging systems are incapable of capturing the depth information of a three-dimensional scene, typical temporal imaging systems also lack the ability to retrieve depth information, i.e., the different dispersions within a complex pulse. Therefore, enabling temporal imaging systems to provide this information in great detail would add a new facet to the analysis of ultra-fast pulses. In this paper, after discussing how spatial three-dimensional integral imaging can be generalized to the time domain, two distinct methods are proposed to compensate for its shortcomings, namely relatively low depth resolution and limited depth-of-field. The first method utilizes a curved time-lens array instead of a flat one, which simultaneously improves the viewing zone and the depth resolution. The second, which widens the depth-of-field, is based on non-uniformity of the focal lengths of the time-lenses in the time-lens array. It is shown that, compared with the conventional setup for temporal integral imaging, the depth resolution, i.e., dispersion resolvability, and the depth-of-field, i.e., the range of resolvable dispersions, are improved by factors of 2.5 and 1.87, respectively.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Temporal imaging systems (TIS) have, over the last two decades, found various applications ranging from optical communications [1] and optical data acquisition and compression [2,3] to ultra-fast microscopy [4], quantum information processing [5,6], spectroscopy [7,8], etc. [9–12]. Apart from numerous studies expanding and developing the applications of TISs, researchers have also focused on improving their quality and performance in terms of resolution enhancement [13,14] and aberration reduction [15], among others.

Yet, one important quality feature of a TIS that remains to be improved is the ability to capture information residing at various depths. By depth, here, we mean the different dispersion lengths of each pulse, or the group delay dispersion (GDD) that these pulses experience until they reach the time-lens (TL). These pulses can be viewed as different fragments of a bigger pulse. Therefore, in a TIS, just as a 3-dimensional (3D) spatial object is composed of points at varying diffraction lengths from the lens, a multi-dispersion pulse (or, in some sense, a 3D pulse) consists of different dispersion lengths or GDDs (this concept is visualized in Fig. 1). Thus, temporal depth imaging systems (TDIS) are needed to acquire this information.

Fig. 1. Visualization of the depth concept in spatial (left) and temporal (right) optics. On the left, two different depth distances $z_1$ and $z_2$ of the same object are depicted; on the right, two different GDD values ($D_1$ and $D_2$) of a signal (blue curve) are illustrated.

TDISs have applications in studying and investigating the evolution of fast and rare phenomena. The multi-pulse formation phenomenon, which happens in passively mode-locked lasers and results in noise-like pulses (NLP), is attributed to dispersion and the saturable gain of spectral filters [16,17]. Therefore, extracting the dispersion of each pulse in the NLP helps in investigating the evolution of pulses in the cavity and can improve future designs of ultra-fast fiber lasers. Furthermore, rogue waves [18,19] are other extreme events that may occur in optical fibers as a combined effect of nonlinearity and dispersion. The same phenomenon may occur unpredictably in oceanic waves, where it can damage vessels. TDISs can provide a high temporal depth resolution that enables a more accurate investigation of rogue waves. This analysis includes extracting the dispersion of the different random pulses generated during the evolution of the rogue waves.

In spatial optics, the well-known 3D imaging is a technique to retrieve the depth information of a full 3D scene. Among the many 3D imaging methods, integral imaging (InI) is one of the most promising approaches due to its ability to record the full hologram of a 3D object with incoherent light through spatial lenses, without the need to record the phase of light (for a more in-depth analysis of spatial InI see [20,21]). InI works based on the optical back-propagation principle. For an object at a specific depth distance, the back-projected elemental images (EI) through the same lens array (LA) as in the pick-up stage overlap at the same depth distance, resulting in a bright and clear image of that object. Reconstructing at other depth values, however, results in non-overlapping images from each lens, so the image of that object becomes blurred at those depths. This specifies the depth of each object in the reconstructed 3D image.

Very recently, research led by Fridman et al. introduced and analyzed, both theoretically and experimentally, a technique to extract temporal depth information [22]. Their method is based on the relation between the lateral separation of the images of two input pulses of different depth (dispersion), their depth difference, and the lens location. The slope of the curve relating the separation between pulse images in each EI to the temporal lens location represents their depth (dispersion) difference. However, their method was applied to a simple input signal containing two pulses of different dispersion values, where the slope of the curve yields this dispersion difference. For a more realistic signal that includes numerous dispersion values, extracting the depth difference of every pair of pulses would be computationally and experimentally cumbersome.

In this paper, we approach this problem with the temporal counterpart of spatial InI. More importantly, we introduce two methods to improve the depth resolution and depth-of-field (DoF) of temporal InI. In spatial optics, many researchers have introduced various methods to overcome shortcomings of spatial InI such as low resolution and a limited depth-of-field region. Two of these methods, which are highly effective, are based on a curved lens array instead of a flat one and on a non-uniform lens array in which different lenses have different focal lengths [23–25]. We have adopted these solutions into temporal InI systems.

The rest of this paper is organized as follows: In Section 2, we briefly discuss the pickup and reconstruction steps for both spatial and temporal InI systems. The proposed methods for enhancement of DoF and depth resolution are presented in Section 3. In Section 4, results from numerical simulations are provided. Finally, Section 5 concludes the proposed methods and their results.

2. Spatial and temporal integral imaging: forward and reconstruction steps

The InI technique is composed of two steps: first, the forward imaging step, in which EIs are obtained, and second, the reconstruction step, which uses the EIs captured in the first step to reproduce or reconstruct the full 3D scene. In this section, we will briefly discuss these steps for spatial InI systems and, based on that, discuss the forward and reconstruction steps in temporal InI using the so-called space-time duality in electromagnetics. We will also show mathematically how depth information in spatial InI (diffraction information) and depth information in temporal InI (dispersion information) is embedded in the EIs. After showing how spatial InI can be generalized to the time domain in this section, two separate methods will be introduced in the next section to improve the performance of temporal InI.

2.1 Forward and reconstruction model of a spatial integral imaging system

The main goal of an LA-based InI system is to make it possible to record and reconstruct depth information by capturing two-dimensional (2D) EIs with incoherent light. As shown in Fig. 2, the forward step of spatial depth imaging is based on utilizing an LA or a pinhole array. Each 2D slice of a 3D scene is mapped by this LA onto its image plane; therefore, we will have $m$ images, namely EIs, where $m$ is the number of lenses in the LA. The difference between EIs of the same 2D depth slice is the lateral position difference induced by the lenses in the array. Besides, EIs of two distinct 2D depth slices of a 3D scene differ by their magnification factor, since it depends on the distance from the object to the lens and the distance from the lens to the image plane. Thus, we will have a set of EIs in which, as will be discussed later in this section, the depth information of the 3D scene is stored. As will be discussed in the next subsection, in a similar manner, and by utilizing the temporal ray-optics notion of temporal rays, we will describe the forward and reconstruction model of a temporal InI system. Temporal ray representation is a useful tool to better understand temporal optics concepts by visualizing them just like spatial geometrical optics. Given that each spectral component of a pulse travels at a particular velocity in a dispersive medium, the group delay in travelling-wave coordinates for each spectral component has a direct relationship with the group delay dispersion (this concept is discussed pedagogically in [15]). Therefore, to each spectral component we can attribute a line with a particular slope in the space-time coordinates.

Fig. 2. One-dimensional scheme of spatial InI. In the yellow rectangle (pick-up step), EIs are obtained through each lens, and these EIs are then transferred to the reconstruction step (gray rectangle) in order to virtually reconstruct the image at a desired depth. The virtual LA's specifications are the same as those of the LA in the pick-up step. As can be seen in the gray rectangle, reconstruction at distances $z_1$ and $z_2$ leads to an accumulated intensity of the corresponding images at these distances in the pick-up step (images obtained in our InI laboratory).

Now, based on [20], we first briefly explain the procedure to reconstruct the 3D spatial image from all the information stored in the EIs and then generalize the same procedure to a temporal InI system. By this computer-synthesized reconstruction, we can obtain voxel values at arbitrary depth distances. This computational reconstruction uses geometrical optics to explain the procedure of retrieving depth information from the EIs captured optically by the pickup LA. The reconstruction starts with a computational (virtual) LA, called the display LA, which has the same specifications as the pickup LA. This LA is fed with the EI at the $p$'th row and $q$'th column ($I_{pq}$) and produces the inversely mapped image at distance $z$, namely $O_{pq}(x, y, z)$. According to simple geometrical optics, $O_{pq}(x, y, z)$ can be expressed as [20]:

$$O_{pq}(x, y, z)=\frac{I_{pq}({-\frac{x}{M}}+({1+\frac{1}{M}})s_x{p}, {-\frac{y}{M}}+({1+\frac{1}{M}})s_y{q})}{(z+g)^2+[(x-s_xp)^2+(y-s_yq)^2]({1+\frac{1}{M}})^2},$$
where $M$ is the magnification factor equal to $-g/z$, $I_{pq}$ is the ($p$,$q$)th elemental image, $s_x$ and $s_y$ are the EI pitches along the $x$ and $y$ directions, and $g$ is the distance between the LA plane and the sensor.

Based on this relation, the location of the image of each $(x, y, z)$ point in the $(p, q)$ EI depends on both the depth of that point and the EI index. Therefore, for the points at a specific depth distance, the lateral location of their image on the EI array varies with a specific periodicity which is directly related to the depth value. The superposition of all the inversely mapped EIs yields the whole 3D image at all $(x, y, z)$ locations in 3D space as:

$$O(x, y, z)=\sum_{p=0}^{m-1}\sum_{q=0}^{n-1}{O_{pq}(x, y, z)}.$$
Figure 2 shows the computational reconstruction method on a display plane at two different depths. As depicted in this figure, each EI is inversely projected through each of the lenses in the virtual LA. The inverse projection of all the EIs through all lenses in the array at distances $z=z_1$ and $z=z_2$ leads to the formation of focused images of the objects located at those depths, while the superposition of the inversely mapped images of objects at other distances washes out. Therefore, we can obtain the full 3D scene by repeating the same process with the computational virtual LA at all depth distances. Figure 2 shows the result of this procedure graphically. In this image, which is from our previous work [26], we can observe that reconstruction at a particular distance causes only the object located at that depth distance to become apparent.
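The computational reconstruction of Eqs. (1) and (2) can be sketched in a few lines of Python. The following is a minimal one-dimensional illustration of our own (the $y$ coordinate is dropped, pixel units are assumed, and the function name and parameter values are not from the original implementation):

```python
import numpy as np

def reconstruct_depth(elemental_images, z, g, s_x):
    """Back-project 1D elemental images to a trial depth z, following Eqs. (1)-(2).

    elemental_images: shape (m, n_pix), one row per lens, pixel units.
    z: trial reconstruction depth, g: LA-to-sensor gap, s_x: EI pitch.
    """
    m, n_pix = elemental_images.shape
    M = -g / z                                   # magnification factor
    x = np.arange(n_pix, dtype=float)
    O = np.zeros(n_pix)
    for p in range(m):
        # argument of I_p in Eq. (1)
        xs = -x / M + (1.0 + 1.0 / M) * s_x * p
        idx = np.round(xs).astype(int)
        ok = (idx >= 0) & (idx < n_pix)
        # denominator of Eq. (1): intensity fall-off with depth and lateral offset
        denom = (z + g) ** 2 + (x - s_x * p) ** 2 * (1.0 + 1.0 / M) ** 2
        O[ok] += elemental_images[p, idx[ok]] / denom[ok]
    return O                                     # Eq. (2): sum over all lenses
```

Back-projecting at the true depth makes the replicas of a point source overlap into one sharp peak; at any other trial depth they spread out, which is exactly the blur that encodes depth.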

One important point to note here is that every 2D image plane of a 3D scene experiences a different diffraction value until it reaches the pickup LA. Given that diffraction and dispersion are mathematically analogous to each other, in the next subsection we introduce the temporal integral image reconstruction method and use it to obtain the depth information of a multi-dispersion pulse.

2.2 Forward and reconstruction model of a temporal integral imaging system

The information provided by spatial InI tells us what the image of a 3D scene is at a particular distance from the LA. In other words, given that this distance has a direct relationship with the amount of diffraction that the image experiences on its way to the LA, one can say that the role of the InI system is to distinguish images that have experienced different amounts of diffraction.

By considering space-time duality, it is now reasonable to think of temporal InI as a technique that distinguishes pulses composed of sections with different dispersions. The main difference between spatial and temporal InI, however, is that the lateral dimension in spatial InI is 2D while for its temporal counterpart it is 1D, i.e., time. Figure 3 shows an illustration of this method. As depicted in this figure, a single Gaussian pulse first propagates toward the TL array, and at two points along its way, two other pulses are added to it. Therefore, the complete pulse that travels to the TL array is composed of two Gaussian pulses and one rectangular pulse that have experienced different dispersions (this is quite similar to the spatial case, where an object consisting of three points at three different depth distances experiences different diffraction values until reaching the lens). Each TL imparts a quadratic phase term to the input pulse, just as a conventional spatial lens multiplies an input beam of light by such a phase. These devices are realized via different methods, among which the most conventional and commercially available ones are TLs produced by nonlinear optics methods such as four-wave mixing (FWM). These temporal-optics apparatuses are explained in [12].

In spatial InI, the pickup LA is composed of lenses wherein each lens has a shifted quadratic phase profile due to its lateral position. For example, the central lens has a phase profile ${\phi }_{0,0}(x, y)\propto {x^2+y^2}$, while the phase profiles of its left and upper neighbors are ${\phi }_{-1,0}(x, y)\propto {(x+1)^2+y^2}$ and ${\phi }_{0,1}(x, y)\propto {x^2+(y-1)^2}$, respectively. By comparison, the phase profiles of the TLs in the TL array are, for example, ${\phi }_{0}(t)\propto {t^2}$ and ${\phi }_{1}(t)\propto {(t-1)^2}$ for the central TL and its right neighbor. Note again that the lateral dimension in temporal InI has been reduced from 2 dimensions in spatial imaging to 1 dimension. Similar to its spatial counterpart, the image captured through the $p$th TL in the temporal LA is called a temporal EI, denoted hereafter by $\tilde {I}_{{p}}$.
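These shifted quadratic phases can be written down directly. The sketch below, a toy model of ours rather than a description of a physical FWM implementation, applies one lens phase $\phi_p(t)\propto(t-t_p)^2$ per TL to a sampled complex envelope; the sign convention and the use of a focal GDD $D_f$ in the denominator are illustrative assumptions:

```python
import numpy as np

def time_lens_array(field, t, centers, focal_gdds):
    """Apply one shifted quadratic temporal phase per TL in the array.

    field: complex envelope sampled on the time grid t (ps).
    centers: temporal center t_p of each TL (ps).
    focal_gdds: assumed focal GDD D_f of each TL (ps^2).
    Returns one phase-modulated copy of the field per lens (one branch each).
    """
    branches = []
    for t_p, D_f in zip(centers, focal_gdds):
        phase = np.exp(1j * (t - t_p) ** 2 / (2.0 * D_f))  # phi_p(t) ∝ (t - t_p)^2
        branches.append(field * phase)
    return np.array(branches)
```

A pure phase multiplication leaves the instantaneous intensity untouched; the lensing action only appears after subsequent dispersion, just as a spatial lens needs free-space propagation to focus.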

Fig. 3. Schematic of temporal InI. In the pick-up step (left), the input signal consists of two Gaussian pulses (red and blue) and one rectangular pulse (purple) at dispersion depths $D_1$, $D_2$, and $D_3$, respectively, which together form a multi-dispersion pulse. Temporal EIs are then obtained through a TL array, and after passing the EIs through the virtual TL array (right), at dispersion lengths equal to $D_1$, $D_2$, and $D_3$, one can observe the purple, red, and blue curves, which are well correlated with their corresponding input pulses.

Now, the same procedure as in spatial reconstruction can be performed for temporal depth reconstruction. As mentioned earlier in this section, the information that we deal with in temporal InI is the depth information that stems from the particular dispersion values that different parts of a complex pulse have experienced until they arrive at the pickup TL array. Just as in the spatial case, the inversely mapped image of the ${p}$th EI can be written as:

$$O^t_{{p}}(t, D)=\frac{\tilde{I}_{{p}}({-\frac{t}{\tilde{M}}}+({1+\frac{1}{\tilde{M}}})\tilde{s}_t{{p}})}{(D+{\tilde{g}})^2+(t-\tilde{s}_t{{p}})^2({1+\frac{1}{\tilde{M}}})^2},$$
where in this case $\tilde {s}_t$ can be regarded as the temporal aperture of each TL in the array, $\tilde {g}$ is the dispersive distance between the TLs and the detector, and $\tilde {M}$ is the magnification factor, equal to $-\tilde {g}/D$. By summing all these inversely mapped images at a particular dispersion length, $D$, through the virtual LA, we have:
$$O^t(t, D)=\sum_{p=0}^{m-1}{O^t_p(t, D)},$$
which is the reconstructed pulse at an arbitrary GDD value equal to $D$. The temporal reconstruction step is also illustrated in Fig. 3. The virtual LA here is made up of a virtual array of TLs with the same specifications as the pickup TL array. As depicted in Fig. 3, EIs are mapped onto a specific image plane of the virtual TL array corresponding to a specific dispersion value, such that the amplitude of the desired pulse (the pulse with that dispersion value) is built up while pulses with other dispersion lengths become less intense compared to the pulse under reconstruction.
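Equations (3) and (4) mirror the spatial reconstruction with $(x, z)$ replaced by $(t, D)$. A rough Python sketch of ours, with illustrative units and a simple peak-sharpness metric of our own choosing, scans trial GDD values and picks out the one at which the back-projections overlap:

```python
import numpy as np

def back_project(eis, t_grid, D, g_t, s_t):
    """Temporal back-projection to a trial GDD value D, per Eqs. (3)-(4)."""
    m, n = eis.shape
    dt = t_grid[1] - t_grid[0]
    M = -g_t / D                                         # temporal magnification
    out = np.zeros(n)
    for p in range(m):
        src = -t_grid / M + (1.0 + 1.0 / M) * s_t * p    # argument of EI_p, Eq. (3)
        idx = np.round((src - t_grid[0]) / dt).astype(int)
        ok = (idx >= 0) & (idx < n)
        denom = (D + g_t) ** 2 + (t_grid - s_t * p) ** 2 * (1.0 + 1.0 / M) ** 2
        out[ok] += eis[p, idx[ok]] / denom[ok]
    return out                                           # Eq. (4): sum over TLs

def depth_profile(eis, t_grid, D_values, g_t, s_t):
    """Peak value of the reconstruction versus trial GDD (a crude focus metric)."""
    return np.array([back_project(eis, t_grid, D, g_t, s_t).max()
                     for D in D_values])
```

Sweeping $D$ and locating the sharpest reconstruction is one simple way to read a dispersion "depth map" out of the temporal EIs.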

3. Using non-uniform and curved time lens array to improve DoF and depth resolution

Despite being one of the most promising techniques in 3D imaging, InI systems have shortcomings in terms of depth resolution and DoF [27–30]. In spatial optics, many researchers have offered solutions to these drawbacks. Two of the most prominent approaches are using a curved LA and a non-uniform LA in which lenslets have different focal lengths [31–34]. In this section, we attempt to generalize these solutions to the case of temporal InI. In the next section, the enhancement of DoF and depth resolution via these two methods is demonstrated through numerical simulations.

3.1 Depth-of-field enhancement via non-uniform time-lens array

A single lens has a limited DoF, a region wherein the image is highly focused. In the pick-up step of InI (in both temporal and spatial InI), when the objects are relatively far from each other, it is probable that the whole 3D scene cannot be placed within the DoF region of each lens. Therefore, the reconstructed object would have a low resolution at depth distances outside the DoF region. As mentioned earlier, a solution for this is to use an LA with different focal lengths. Since the lenses have different focal lengths, their DoFs also vary. Thus, the overall DoF region of the LA, which is the superposition of the DoFs of the individual lenses, becomes wider compared to the DoF of an LA consisting of uniform lenses. As can be seen in Fig. 4, incorporating lenses with different focal lengths leads to a wider DoF: two signals of considerably different depth distances are reconstructed with high lateral resolution in the non-uniform setup. In contrast, the conventional uniform lens array yields a low-resolution image of the star, as its depth falls outside the DoF of the lenses. The effect of using a non-uniform TL array in imaging a wider range of dispersion values with good resolution is demonstrated through numerical simulations in the next section.

An experimental configuration for this non-uniform TL array imaging system is depicted in Fig. 6(a). The TLs in this configuration are based on the FWM process. In a general sense, this setup includes the generation of linearly chirped pump pulses with different chirp rates for the FWM process, the generation of the input multi-dispersion pulse, and their mixing in a highly nonlinear fiber. Since the generated pump pulses have different chirp factors, the resulting TL array is non-uniform. In Fig. 6(a), pulses from a mode-locked laser (MLL) go through a dispersion-flattened highly nonlinear fiber (DF-HNLF) to make a supercontinuum, so that both the signal and pump pulses can be carved out of it by proper filtering. As can be seen in Fig. 6(a), a super-Gaussian filter makes a flat-top Gaussian pulse from the supercontinuum to be used for generating the pump signals, and this pulse is then split into 3 (or more, depending on the chosen number of TLs in the array) branches. Note that the flat-top Gaussian pulse is used instead of a Gaussian one to avoid aberrations. As shown, pulses in these branches first go through delay lines ($\Delta _{t1}$ and $\Delta _{t2}$) and then pass through dispersion compensating fibers to become linearly chirped. Note that the lengths of these fibers, and thus the focal GDD values of the corresponding TLs, are not the same; the TL array becomes non-uniform in this manner. The output signals are then amplified and filtered in order to achieve the desired noiseless pumps for time lensing. After producing the pump signals, they are mixed with the input signal by passing through highly nonlinear fibers (HNLF1, HNLF2, and HNLF3), and after passing through the second dispersive medium (indicated as output dispersion in the figure), three elemental images are produced which can be recorded by an ultra-fast oscilloscope.
Note that the input pulse is also produced by filtering the same supercontinuum and passing it through two distinct lengths of single-mode fiber (SMF1 and SMF2) to make a multi-dispersion pulse, and then using a proper length of dispersion compensating fiber (DCF) as the first dispersive medium.

Fig. 4. Schematic representation of DoF enhancement using a non-uniform lens array. The top figure shows a conventional InI system in which reconstruction outside the DoF region has led to a blurred image (upper gray star); the bottom figure shows how the widened DoF region of the system has led to a high-resolution reconstruction of the gray star.

3.2 Depth resolution enhancement via a curved time lens array

When the input object is complicated and made up of features at various depths, obtaining that information with high depth resolution becomes challenging. One of the main difficulties in spatial InI systems (this is also true for temporal InI systems, as will be discussed in this section) is the limited space on the sensor corresponding to each lens image (EI), which limits the depth resolution as discussed in the following.

In order to describe the concept of depth resolution in a 3D spatial/temporal InI system and the effect of the system parameters on its value, we proceed with the analysis performed in [35], which approaches the relationship between the 3D object and the EIs through the following steps. We start with the integral image of a point source object located at $(x_s, z_s)$, which is composed of a series of replicas at

$$x_m(z_s)=M_sx_s+mT_s$$
spread in different EIs (here, for simplicity, the $y$ coordinate is neglected). In this relation, $M_s=-g/z_s$ is the magnification factor and $T_s$ is the pick-up period, defined as the distance between the replicas of the images of the point source; it depends on the LA's pitch ($S_x$) as
$$T_s=(1+g/z_s){S_x}.$$
Therefore, the integral image of the 1D object centered at $z_s$ can be calculated as:
$$I_s(x)=rect(\frac{x-x_s}{\Delta{x}})\sum_m(\delta{(x-x_m)}\otimes{\frac{1}{M_s}}O(\frac{x}{M_s})),$$
in which $\otimes$ indicates the convolution operator, $\Delta {x}=(n_x-1)T_s$ ($n_x$ is the number of lenses that contribute to the integral image), and $O(x)$ is the object's intensity distribution. Therefore, considering the above relation, given an adequate number of EIs, the periodicity shows itself clearly and thus the depth value can be retrieved.
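As a quick numeric check of Eqs. (5) and (6), with hypothetical parameter values in arbitrary units (not taken from the paper), a point source at depth $z_s$ produces replicas that are equally spaced by the pick-up period $T_s$:

```python
import numpy as np

# Hypothetical pick-up geometry: gap g, source depth z_s, lens pitch S_x
g, z_s, S_x, x_s = 3.0, 30.0, 2.0, 4.0
M_s = -g / z_s                          # magnification factor
T_s = (1 + g / z_s) * S_x               # pick-up period, Eq. (6)
x_m = M_s * x_s + np.arange(5) * T_s    # replica positions, Eq. (5)
```

The spacing $T_s$ depends on the depth $z_s$, which is exactly how depth is encoded in the periodicity of the integral image.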

As for spatial InI, to show how the temporal depth information lies in the periodicity of the integral image, we introduce the temporal versions of Eqs. (5)–(7). Considering the integral image of a pulse located at $(t_s, D_s)$ (the lateral coordinate is now $t_s$ (time) and the depth here is the GDD, denoted $D_s$), it is composed of replicas at $\tilde {t}_m$:

$$\tilde{t}_m(D_s)=\tilde{M}_st_s+m\tilde{T}_s,$$
where $\tilde {M}_s=-{\tilde {g}}/{D_s}$ is the temporal magnification factor, ($\tilde {g}$ is the GDD from temporal LA to image plane), and $\tilde {T}_s$ is the lateral distance (time) between these replicas which itself could be written as:
$$\tilde{T}_s=(1+{\tilde{g}}/D_s)S_t.$$
Therefore, as in spatial case, the temporal integral image of a single dispersion pulse centered at temporal depth $D_s$ can be written as:
$$\tilde{I}_s(t)=rect(\frac{t-t_s}{\Delta{t}})\sum_m(\delta{(t-\tilde{t}_m)}\otimes{\frac{1}{\tilde{M}_s}}O(\frac{t}{\tilde{M}_s})),$$
in which ${\Delta {t}}=(n_t-1)\tilde {T}_s$ and $n_t$ is the number of TLs that contribute to the temporal integral image. Again, we can infer from the last equation that the depth information of a complex pulse composed of different dispersions appears in the periodicity of the integral image of each depth.

Based on the performed analysis, retrieving the information of each depth is possible by calculating the Fourier transform of the temporal integral image:

$${\tilde{I}_s}(\omega)=\Delta{t}\,sinc(\Delta{t}\,\omega){\otimes}\textrm{exp}(i\frac{2{\pi}{\tilde{g}}}{D_s}\omega{t_s})\tilde{O}(\tilde{M}_s{\omega})\frac{1}{\tilde{T}_s}\sum_m\delta(\omega-\frac{m}{\tilde{T}_s}),$$
where $\otimes$ denotes the convolution operation. As can be noticed in this relation, each depth image shows itself as a specific frequency component of this spectrum, which can be retrieved through Fourier filtering. Still, this is possible only if a sufficient number of periods of each depth image occurs in the integral image, so that it shows itself as a narrow-band and separable frequency component in $\tilde {I}_s(\omega )$. On the other hand, if the TL array is capable of capturing more rays from a depth slice of a pulse of a specific dispersion value, the $rect$ function in Eq. (10) becomes larger in time and therefore the $sinc$ function in Eq. (11) becomes narrower in the Fourier domain. In that case, the depth information of a specific depth distance can easily be filtered out from the spectrum of the temporal integral image. From this Fourier analysis of the temporal integral image, it is apparent that capturing a higher number of periods of the image of each pulse of specific dispersion in the integral image leads to better depth resolvability.
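This Fourier-domain picture can be illustrated numerically: two replica trains with depth-dependent periods $\tilde{T}_s$ produce distinct comb lines in the spectrum of the integral image. The sketch below is a toy model of ours; all parameter values and the Gaussian replica shape are illustrative assumptions, not the paper's simulation:

```python
import numpy as np

S_t, g_t = 4.0, 10.0                   # assumed TL pitch (ps) and pick-up GDD (ps^2)
D1, D2 = 5.0, 30.0                     # two dispersion depths (ps^2)
T1 = (1 + g_t / D1) * S_t              # temporal pick-up period: 12 ps
T2 = (1 + g_t / D2) * S_t              # 16/3 ps, a different comb period
dt = 0.25
t = np.arange(0.0, 480.0, dt)          # time axis (ps)

def comb(period, width=0.5):
    """Train of narrow Gaussian replicas with the given period (a toy depth slice)."""
    phase = (t % period) - period / 2.0
    return np.exp(-phase ** 2 / (2.0 * width ** 2))

integral_image = comb(T1) + comb(T2)   # superposition of two depth slices
spec = np.abs(np.fft.rfft(integral_image))
freqs = np.fft.rfftfreq(t.size, d=dt)  # cycles/ps; lines appear at m / T_s
```

Each depth shows up at its own fundamental comb frequency $1/\tilde{T}_s$, so a band-pass filter around that line isolates the corresponding depth slice.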

Nevertheless, in conventional InI systems, the size of each EI is limited to prevent overlap between neighboring EIs. This limited space for capturing the rays perceived by each lens leads to the loss of some rays of high inclination angles, especially those coming from nearer objects. This effect can be seen for some rays coming from the circular object in Fig. 5(a). This, in turn, limits the number of periods captured from its image in the integral image which, as discussed above, limits its depth resolvability. The same limitation exists in the temporal InI system, but for a different reason. In the pick-up stage of a temporal InI, a pulse propagates in a number of branches until it passes through the TL in each branch. Then, all of the pulses in each branch reach a single ultra-fast detector. Therefore, the output of each TL has a limited time, equal to the TL aperture, to be recorded on the sensor. Other parts of the pulse outside this time span will be lost. These parts of the pulse again correspond to the temporal rays with high inclination angles, which probably come from pulses of lower dispersion values, i.e., nearer depth distances. This effect again limits the number of captured periods from some depth distances (pulses of specific dispersion values), which in turn limits their depth resolvability.

Fig. 5. Comparison of conventional and curved InI systems. The curving effect is produced by a large-aperture TL in front of the TL array, as depicted above. Using this structure, temporal rays falling out of the recording region in the conventional InI system can now be captured in the corresponding temporal region of the ultra-fast recorder.

In this paper, a curved temporal LA is proposed in order to alleviate this problem and thus improve the depth resolution. By curving the LA, the more inclined rays corresponding to some periods of specific depth values get the chance to be recorded in the corresponding EIs, as shown in Fig. 5. In this way, more frequency cycles of the image of each depth are recorded in the integral image array. This makes the corresponding frequency component of each depth in $\tilde {I}_s (\omega )$ narrower, improving its separability, i.e., the depth resolution. There are various implementations of a curved LA in spatial integral imaging systems. Among them, one promising scheme, instead of using a curved array, which has some practical difficulties, places a large-aperture lens right in front of the lens array [36]. This large-aperture lens provides a curving effect (Fig. 5). Using the space-time analogy and the notion of temporal rays, we can adapt this technique to temporal InI. As can be seen from Fig. 5, by using this method the depth resolution is improved due to capturing more frequency cycles from the image of each depth. In the next section, we quantitatively demonstrate through numerical simulations how this method increases the depth resolution.

A schematic of the experimental configuration of the curved TL array is illustrated in Fig. 6(b). The pump and input signals are generated by the same procedure as in the non-uniform structure (through filtering the generated supercontinuum). Here, the pump pulse for the curving TL should be filtered so as to have a wider temporal width compared to the pump pulses of the TLs in the array. The generated flat-top Gaussian pulse is then linearly chirped via passage through a dispersion compensating fiber ($\textrm {DCF}_{\textrm {P}}$) and subsequently amplified and filtered in order to achieve the desired noiseless pump for wide-aperture time lensing. The multi-dispersion input signal is generated similarly as in the non-uniform setup. This multi-dispersion pulse, together with the generated pump pulse of the wide-aperture curving time-lens, is then coupled into a highly nonlinear fiber ($\textrm {HNLF}_{\textrm {o}}$) to initiate the FWM process (time lensing). The output of $\textrm {HNLF}_{\textrm {o}}$ is then split into 3 (or more) branches to be mixed with the pump signals of the TL array. The pump signals of the TL array are generated similarly via another mode-locked laser (MLL2), using tunable delay lines ($\Delta _{t1}$ and $\Delta _{t2}$) in 3 (or more) branches. Each of these pumps is mixed with one of the three outputs of $\textrm {HNLF}_{\textrm {o}}$ in a highly nonlinear fiber (HNLF1, HNLF2, and HNLF3 in Fig. 6(b)) to realize the TL array. After filtering out undesired harmonics with a band-pass filter, the signals are passed through a second dispersive medium to generate the elemental images. Note that, as depicted in Fig. 6(b), a delay line must be inserted between the two mode-locked lasers to ensure proper timing.


Fig. 6. Schematic of experimental configuration of (a): non-uniform temporal InI system and (b): curved InI system. MLL: mode-locked laser. DF-HNLF: dispersion-flattened highly nonlinear fiber. HNLF1, HNLF2, HNLF3, $\textrm {HNLF}_{\textrm {o}}$: highly nonlinear fiber. SGF: super Gaussian filter. OBPF: optical band-pass filter. DCF, $\textrm {DCF}_{\textrm {s}}$ and $\textrm {DCF}_{\textrm {p}}$: dispersion compensating fiber. SMF1 and SMF2: single mode fiber. AMP: optical amplifier.


4. Results and analysis

In this section, we present numerical simulations of our proposed methods to confirm their superiority in terms of DoF and depth resolution. First, we show the results of the temporal curved InI system. The structure of the simulated temporal curved InI, as discussed earlier, is the same as a flat temporal InI system except for a large-aperture TL before the pick-up TL array and after the virtual display TL array. The TLs in the LA and the large-aperture TLs have the same specifications in the pick-up and display steps. Each TL in the array has a temporal aperture of 10 $\textrm {ps}$, and their focal GDD is chosen to set the temporal depth resolution of the system at 25 $\textrm {ps}^2$; that is, the minimum resolvable GDD difference between pulses is 25 $\textrm {ps}^2$. For the simulation of the temporal curved InI, two Gaussian pulses with equal temporal durations of 5 $\textrm {ps}$ (full width at half maximum) are set as the input pulses, with GDD values (from the pulse's initial position to the position of the TL array) of 28.5 $\textrm {ps}^2$ and 3.5 $\textrm {ps}^2$, respectively. We denote the Gaussian pulse with the larger GDD value as $I_1$ and the one with the smaller GDD value as $I_2$. The large-aperture TL has a temporal aperture duration of 50 ps. The curvature of this TL, which relates inversely to its focal length, has been varied, and results have been generated accordingly for comparison. For brevity, we refer to the large-aperture TL as the curving TL (CTL).
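The two input pulses of this simulation can be synthesized as follows. This is a sketch: the temporal centre positions ($\pm 20$ ps) are hypothetical choices for illustration, not values taken from the setup.

```python
import numpy as np

def chirped_gaussian(t, t0, fwhm, gdd):
    """Gaussian pulse centred at t0 with intensity FWHM `fwhm` (ps),
    propagated through `gdd` ps^2 of dispersion via a frequency-domain
    phase; sign convention matches an impulse response exp(i*t^2/(2*D))."""
    w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])  # rad/ps
    field = np.exp(-2 * np.log(2) * ((t - t0) / fwhm) ** 2)
    return np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * gdd * w**2))

t = np.linspace(-100.0, 100.0, 4096)
I1 = chirped_gaussian(t, -20.0, 5.0, 28.5)  # larger-GDD pulse
I2 = chirped_gaussian(t, 20.0, 5.0, 3.5)    # smaller-GDD pulse
multi_dispersion_input = I1 + I2            # composite test signal
```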

In this simulation, the number of TLs is 5. Figure 7 compares the reconstructed image of $I_1$ (solid pulse) in the temporal curved InI and regular InI systems. For the curved lens-array reconstruction, ray-optics relations can no longer be used; therefore, the whole reconstruction setup has been modelled with wave-optics relations. For the flat lens array, the reconstruction convolves the summed elemental images with the dispersion impulse response from the EI plane to the lens plane, multiplies by each TL transfer function, convolves again with the dispersion impulse response from the lens plane to the image plane, and sums over all EI indices:

$$\begin{aligned} {O}^t(t, D) &= \biggl(\sum_{a=0}^{m-1}\biggl\{\biggl[ \left(\sum_{{p}=0}^{m-1}{\tilde{I}_{{{p}}}(t)} \right )\otimes{\frac{e^\frac{it^2}{2\tilde{g}}}{\sqrt{2{\pi}i\tilde{g}}}}\biggr ]{\times}e^{\frac{-i\left(t-\tilde{s}_t{\times}\left(a-\frac{m-1}{2} \right ) \right )^2}{2D_f}} \\ &\quad{\times} {\Pi}{\left(\frac{t-\tilde{s}_t\times{\left(a-\frac{m-1}{2} \right )}}{\tilde{s}_t} \right )}\biggr\}\biggr){\otimes}{\frac{e^\frac{it^2}{2D}}{\sqrt{2{\pi}iD}}}. \end{aligned}$$
This reconstruction relation could be modified for the curved structure by adding the transfer function of the CTL after the transfer function of each TL as follows:
$$\begin{aligned} {O}^t_c(t, D) &= \biggl(\sum_{a=0}^{m-1}\biggl\{\biggl[ \left(\sum_{{p}=0}^{m-1}{{{}\tilde{I}_C}_{{p}}(t)} \right )\otimes{\frac{e^\frac{it^2}{2\tilde{g}}}{\sqrt{2{\pi}i\tilde{g}}}}\biggr ]{\times}e^{\frac{-i\left(t-\tilde{s}_t{\times}\left(a-\frac{m-1}{2} \right ) \right )^2}{2D_f}} \\ &{\times}{\Pi}{\left(\frac{t-\tilde{s}_t\times{\left(a-\frac{m-1}{2} \right )}}{\tilde{s}_t} \right )}{\times}{e^{\frac{-i(t)^2}{2{D^{c}_{f}}}}\times{\Pi}(\frac{t}{T_c})}\biggr\}\biggr){\otimes}{\frac{e^\frac{it^2}{2D}}{\sqrt{2{\pi}iD}}}, \end{aligned}$$
where ${{}\tilde {I}_C}_{{p}}(t)$ is the $p$th elemental image captured by the curved TL array, $\Pi$ denotes the rectangular function representing a temporal aperture, and ${D^{c}_{f}}$ and $T_c$ are the dispersive focal length and aperture size of the CTL, respectively.
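These reconstruction sums can be evaluated numerically in a straightforward way. The sketch below discretizes them, performing the dispersion convolutions in the frequency domain; the pitch, focal GDD, and depth values passed in are placeholders rather than the paper's parameters.

```python
import numpy as np

def _disperse(field, t, gdd):
    # Frequency-domain equivalent of convolving with the dispersion
    # impulse response exp(i*t^2/(2*gdd)) / sqrt(2*pi*i*gdd).
    w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
    return np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * gdd * w**2))

def reconstruct(ei_sum, t, g, s_t, D_f, D, m=5, ctl=None):
    """Evaluate the reconstruction sum for the flat TL array, or for the
    curved one when `ctl = (focal_gdd, aperture)` of the CTL is given.

    ei_sum : summed elemental-image field (the inner sum over p)
    g      : EI-plane-to-lens-plane dispersion, g-tilde (ps^2)
    s_t    : temporal pitch of the TL array (ps)
    D_f    : focal GDD of each TL (ps^2)
    D      : reconstruction dispersion depth (ps^2)
    """
    at_lens = _disperse(ei_sum, t, g)
    out = np.zeros(t.size, dtype=complex)
    for a in range(m):
        c = s_t * (a - (m - 1) / 2)                    # centre of lens a
        branch = at_lens * (np.abs(t - c) <= s_t / 2)  # aperture Pi(...)
        branch = branch * np.exp(-1j * (t - c) ** 2 / (2 * D_f))
        if ctl is not None:            # curved case: extra CTL phase/aperture
            Dc, Tc = ctl
            branch = branch * np.exp(-1j * t**2 / (2 * Dc)) * (np.abs(t) <= Tc / 2)
        out = out + _disperse(branch, t, D)
    return out
```

The curved structure differs from the flat one only by the extra quadratic phase and wide aperture of the CTL, mirroring the single added factor in the second equation.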


Fig. 7. Reconstruction in the dispersive depth of the $I_1$ pulse (solid curve) with (a) conventional temporal InI and (b) temporal curved InI imaging systems.


As can be seen in Fig. 7, in the conventional temporal InI structure, when the GDD difference between the input pulses is less than the depth resolution of the system (here, the GDD difference is set to 10 $\textrm {ps}^2$) and reconstruction is performed at the dispersion value of the $I_1$ pulse, its reconstructed image (blue curve in Fig. 7(a)) becomes considerably distorted, as expected. Furthermore, the $I_2$ pulse also appears to some extent at the reconstruction depth corresponding to the $I_1$ pulse, which is undesirable. In contrast, in the temporal curved InI system, reconstruction at the depth distance corresponding to the $I_1$ pulse (solid curve in Fig. 7(b)) leads to a less distorted image, and the intensity of the $I_2$ pulse at this reconstruction depth is smaller than in the conventional InI. In other words, this GDD difference is resolvable with the curved InI structure although it is smaller than the minimum resolvable depth distance of the conventional InI. Thus, the depth resolvability has been improved by using the curved LA structure. Assuming a focal GDD of 10 $\textrm {ps}^2$ for the CTL, the depth resolvability improves from 25 $\textrm {ps}^2$ for the conventional to 10 $\textrm {ps}^2$ for the curved temporal InI system. To further evaluate and compare the performance of the curved and conventional InI structures, the depth resolvability of the curved InI system has been calculated for different curvature values of the CTL. In this simulation, for each curvature value, the GDD of the $I_1$ pulse was fixed at 28.5 $\textrm {ps}^2$ and the GDD of the $I_2$ pulse was increased from 3.2 $\textrm {ps}^2$ up to the largest value at which it is still resolvable from $I_1$. Figure 8 has been generated using a criterion we defined for the resolvability of reconstructed pulses: a reconstructed pulse is considered resolved if its mean square error (MSE) with respect to the corresponding input pulse is less than 0.3.
Figure 8 clearly shows that reducing the focal length of the CTL (increasing its curvature) improves the depth resolvability, as expected.
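The resolvability criterion can be written compactly. In this sketch, the peak-intensity normalization is our assumption about how the MSE is computed; the paper does not spell it out.

```python
import numpy as np

def resolvable(reconstructed, reference, threshold=0.3):
    """Resolvability criterion used above: the mean square error between
    the (peak-normalized) reconstructed and input intensity profiles
    must stay below `threshold`. Returns (is_resolved, mse)."""
    r = np.abs(reconstructed) ** 2
    s = np.abs(reference) ** 2
    r = r / r.max()
    s = s / s.max()
    mse = np.mean((r - s) ** 2)
    return mse < threshold, mse
```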


Fig. 8. Depth resolution enhancement via curved InI system for (a) different focal GDDs of CTL, (b) aperture of CTL, and (c) input pulse widths.


In another analysis of the proposed temporal curved InI system, the effects of the input pulse width and of the pump pulse width of the CTL on the resulting depth-resolution improvement have been considered. Indeed, when the input pulses are at the depth resolution limit of the system (minimum GDD difference between two input pulses), increasing the width of the input pulses leads to an overlap in the time domain, degrading the quality of the reconstructed pulse at each dispersion length and, in turn, the depth resolution. This effect is shown in Fig. 8(c). In the simulations, we first decreased the dispersion difference between the input pulses down to the depth resolution limit of the system, and then increased the pulse widths at this minimum dispersion difference. In this figure, the solid curve represents the normalized MSE between the reconstructed image and its corresponding input pulse for one of the pulses, for various pulse widths, for both the conventional and curved InI systems. As the FWHM width of the input pulse increases, the MSE also increases, indicating reduced similarity between the input pulse and its reconstructed image. Furthermore, as the input pulse widths increase, the depth-resolution improvement of the proposed structure over the conventional TDIS declines. This decline for larger pulses arises because the input pulse size becomes comparable with the curving-lens aperture, which in turn weakens the curving lens's ability to capture the more inclined rays of different depths and hence its depth-resolution benefit.

Note that, apart from the effect of the focal GDD on the depth resolution, depicted in Fig. 8(a), the temporal aperture of the CTL also plays a role in the depth resolution of the imaging system. Therefore, in another scenario, the temporal duration of the pump pulse of the CTL, i.e., the CTL's aperture, has been varied to analyze its effect on the depth resolution. Figure 8(b) demonstrates this effect. In this figure, the pump width has been swept from 50 ps to 200 ps while the focal GDD of the CTL and the input pulse widths are fixed at 0.1 $\textrm {ps}^2$ and 5 $\textrm {ps}$, respectively, and the minimum resolvable GDD difference between the two input pulses has been plotted. The other specifications of the structure, such as the TL array configuration, are the same as discussed in the first paragraph of Section 4. Again, the criterion for determining the depth resolution is the same as before, i.e., the MSE between the input pulse and the corresponding reconstructed pulse must be less than 0.3. As can be seen from Fig. 8(b), increasing the temporal aperture of the CTL improves the depth resolution. Comparing Figs. 8(a) and 8(b), the effect of decreasing the focal GDD of the CTL on the depth resolution is stronger than that of increasing its temporal aperture. As discussed in Section 3.1, the CTL captures more frequency cycles from a specific depth by changing the inclination of the time rays, leading to a better reconstruction. The amount of this time-ray inclination is directly related to the CTL's focal GDD, whereas the CTL's aperture only determines the temporal window over which this inclination takes effect.
By fixing a low focal GDD for the CTL (here, 0.1 $\textrm {ps}^2$) and increasing its temporal aperture beyond the extent of the TL array itself, more inclined rays of different depths are captured, and thus more frequency cycles of the image of each depth, i.e., better depth resolution; however, for time rays with inclination angles above a certain value, their contribution to the periodicity of the integral image is not guaranteed by enlarging the CTL aperture. Therefore, as seen in Fig. 8(b), although increasing the aperture size improves the depth resolution, the improvement is not as large as that obtained by reducing the focal GDD of the CTL.

In the second part of the simulations, a numerical analysis was performed to show the DoF enhancement of the temporal InI system via the non-uniform TL array. The chosen TL array consists of 5 TLs, two of which have focal GDDs of 1.25 $\textrm {ps}^2$ while the other 3 have focal GDDs of 1 $\textrm {ps}^2$. The input signal consists of two pulses with temporal widths larger than the lateral resolution limit of the system, so that only the DoF parameter is evaluated. These solid and dashed pulses are placed at a considerably large dispersive distance from each other (larger than the DoF of the conventional temporal InI system), as shown in Fig. 9(a). According to the discussions in the previous section, we expect that in the conventional temporal InI system the out-of-focus pulse, i.e., the pulse placed at a dispersive distance outside the DoF range, becomes blurred (loses its resolution). To obtain the results, we reconstructed the image at the dispersive distances corresponding to the solid (Fig. 9(b)) and dashed (Fig. 9(c)) pulses for the conventional uniform lens array. The low resolution of these results stems from both pulses lying outside the DoF of the conventional array, on either side of it. In contrast, the reconstruction results at the dispersive distances corresponding to the solid (Fig. 9(d)) and dashed (Fig. 9(e)) pulses for the non-uniform TL array show good resolution, demonstrating the wide DoF of this structure, which covers the dispersive distances of both pulses. Therefore, the DoF region has been widened in the non-uniform TL array structure, as expected from the discussions in the previous section.
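Intuitively, the widened DoF of the non-uniform array is the union of the in-focus dispersion intervals contributed by each focal-GDD group; merging overlapping intervals sketches this. The interval values below are illustrative, not taken from the simulation.

```python
def merged_dof(intervals):
    """Merge overlapping in-focus dispersion intervals (lo, hi), in ps^2,
    and return the total covered dispersive depth range."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            # Overlaps the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return sum(hi - lo for lo, hi in merged)

# Two focal-GDD groups whose individual DoF intervals overlap cover a
# wider range together than either alone:
total = merged_dof([(5.0, 17.8), (11.0, 28.9)])
```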


Fig. 9. Simulation results for conventional and non-uniform temporal InI system. (a) input pulses with their corresponding GDD values. (b) and (c) reconstruction results for conventional system and (d) and (e) for non-uniform TL array.


To quantify this improvement in the DoF range, as shown in Fig. 10, we have compared this region for the conventional and non-uniform temporal InI systems. To do this, the fidelity and resolution of the reconstruction results at different depth distances have been evaluated based on the MSE between the reconstructed pulse and the input pulse. The MSE is plotted versus the dispersive depth distance for the conventional and non-uniform temporal InI structures in Fig. 10. As can be seen in this figure, the depth range with acceptable lateral resolution increases in the non-uniform structure from 12.8 $\textrm {ps}^2$ to 23.9 $\textrm {ps}^2$, an almost twofold improvement in the DoF region.
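Extracting a DoF width from an MSE-versus-depth curve like that of Fig. 10 amounts to thresholding at 0.3 around the best-focus depth. The sketch below does this for a toy quadratic defocus curve (the curve itself is made up for demonstration, not simulation data).

```python
import numpy as np

def dof_range(depths, mse_values, threshold=0.3):
    """Width of the contiguous depth interval around best focus where the
    reconstruction MSE stays below `threshold` (the acceptance criterion
    used above)."""
    depths = np.asarray(depths)
    ok = np.asarray(mse_values) < threshold
    if not ok.any():
        return 0.0
    best = int(np.argmin(mse_values))       # best-focus sample
    lo = hi = best
    while lo > 0 and ok[lo - 1]:            # grow left while in focus
        lo -= 1
    while hi < len(ok) - 1 and ok[hi + 1]:  # grow right while in focus
        hi += 1
    return float(depths[hi] - depths[lo])

# Toy defocus curve: MSE grows quadratically away from the focus at 20 ps^2.
d = np.linspace(0.0, 40.0, 81)
mse = 0.0025 * (d - 20.0) ** 2
width = dof_range(d, mse)
```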


Fig. 10. Comparison of DoF regions for conventional (uniform) and non-uniform TL array structures.


5. Conclusion

In this paper, we first analyzed a temporal counterpart of the spatial integral imaging technique, with which the depth information of a complex multi-dispersion pulse can be retrieved. Two structures were then proposed to improve the DoF and the depth resolution. In the first proposed structure, the depth resolvability is improved by curving the TL array; the obtained improvement has been proven analytically and confirmed through numerical simulations. The second proposed structure uses non-uniform focal GDDs of the TLs to enlarge the DoF region; the obtained improvement in the DoF range was likewise confirmed by numerical simulations. The simulation results showed a DoF region wider by a factor of 1.87 with the non-uniform TL array structure and a 2.5-fold improvement in depth resolvability with the proposed curved temporal InI structure.

Funding

Sharif University of Technology; Iran National Science Foundation (98018266).

Disclosures

The authors declare no conflicts of interest.

References

1. Z. Geng, B. Corcoran, C. Zhu, and A. James Lowery, “Time-lenses for time-division multiplexing of optical OFDM channels,” Opt. Express 23(23), 29788 (2015). [CrossRef]  

2. J. C. Chan, A. Mahjoubfar, C. L. Chen, and B. Jalali, “Context-aware image compression,” PLoS One 11(7), e0158201 (2016). [CrossRef]  

3. A. Mahjoubfar, D. V. Churkin, S. Barland, N. Broderick, S. K. Turitsyn, and B. Jalali, “Time stretch and its applications,” Nat. Photonics 11(6), 341–351 (2017). [CrossRef]  

4. K. Goda, K. K. Tsia, and B. Jalali, “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature 458(7242), 1145–1149 (2009). [CrossRef]  

5. G. Patera, D. Horoshko, and M. Kolobov, “Space-time duality and quantum temporal imaging,” Phys. Rev. A 98(5), 053815 (2018). [CrossRef]  

6. J. Shi, G. Patera, M. I. Kolobov, and S. Han, “Quantum temporal imaging by four-wave mixing,” Opt. Lett. 42(16), 3121 (2017). [CrossRef]  

7. F. Saltarelli, V. Kumar, D. Viola, F. Crisafi, F. Preda, G. Cerullo, and D. Polli, “Broadband stimulated Raman scattering spectroscopy by a photonic time stretcher,” Opt. Express 24(19), 21264 (2016). [CrossRef]  

8. D. R. Solli, J. Chou, and B. Jalali, “Amplified wavelength-time transformation for real-time spectroscopy,” Nat. Photonics 2(1), 48–51 (2008). [CrossRef]  

9. B. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994). [CrossRef]  

10. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens,” Opt. Lett. 14(12), 630 (1989). [CrossRef]  

11. B. H. Kolner, “Generalization of the concepts of focal length and f-number to space and time,” J. Opt. Soc. Am. A 11(12), 3229 (1994). [CrossRef]  

12. R. Salem, M. A. Foster, and A. L. Gaeta, “Application of space-time duality to ultrahigh-speed optical signal processing,” Adv. Opt. Photonics 5(3), 274 (2013). [CrossRef]  

13. T. Yaron, A. Klein, H. Duadi, and M. Fridman, “Temporal superresolution based on a localization microscopy algorithm,” Appl. Opt. 56(9), D24 (2017). [CrossRef]  

14. X. Zhao, S. Xiao, C. Gong, T. Yi, and S. Liu, “The implementation of temporal synthetic aperture imaging for ultrafast optical processing,” Opt. Commun. 405, 368–371 (2017). [CrossRef]  

15. C. V. Bennett and B. H. Kolner, “Aberrations in temporal imaging,” IEEE J. Quantum Electron. 37(1), 20–32 (2001). [CrossRef]  

16. Y. Du and X. Shu, “Pulse dynamics in all-normal dispersion ultrafast fiber lasers,” J. Opt. Soc. Am. B 34(3), 553 (2017). [CrossRef]  

17. A. F. J. Runge, N. G. R. Broderick, and M. Erkintalo, “Observation of soliton explosions in a passively mode-locked fiber laser,” Optica 2(1), 36 (2015). [CrossRef]  

18. N. Akhmediev, B. Kibler, F. Baronio, M. Belić, W.-P. Zhong, Y. Zhang, W. Chang, J.M. Soto-Crespo, P. Vouzas, P. Grelu, C. Lecaplain, K. Hammani, S. Rica, A. Picozzi, M. Tlidi, K. Panajotov, A. Mussot, A. Bendahmane, P. Szriftgiser, G. Genty, J. Dudley, A. Kudlinski, A. Demircan, U. Morgner, S. Amiraranashvili, C. Bree, G. Steinmeyer, C. Masoller, N. G. R. Broderick, A. F. J. Runge, M. Erkintalo, S. Residori, U. Bortolozzo, F. T. Arecchi, S. Wabnitz, C. G. Tiofack, S. Coulibaly, and M. Taki, “Roadmap on optical rogue waves and extreme events,” J. Opt. 18(6), 063001 (2016). [CrossRef]  

19. D. Solli, C. Ropers, P. Koonath, and B. Jalali, “Optical rogue waves,” Nature 450(7172), 1054–1057 (2007). [CrossRef]  

20. S. H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483 (2004). [CrossRef]  

21. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512 (2018). [CrossRef]  

22. A. Klein, T. Yaron, E. Preter, H. Duadi, and M. Fridman, “Temporal depth imaging,” Optica 4(5), 502 (2017). [CrossRef]  

23. Y. Kim, J.-H. Park, H. Choi, S. Jung, S.-W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421 (2004). [CrossRef]  

24. J.-S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. 28(20), 1924 (2003). [CrossRef]  

25. S.-C. Kim, C.-K. Kim, and E.-S. Kim, “Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique,” 3D Res. 2(2), 6 (2011). [CrossRef]  

26. M. Ghaneizad, Z. Kavehvash, and H. Aghajan, “Human detection in occluded scenes through optically inspired multi-camera image fusion,” J. Opt. Soc. Am. A 34(6), 856 (2017). [CrossRef]  

27. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004). [CrossRef]  

28. R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Enhanced depth of field integral imaging with sensor resolution constraints,” Opt. Express 12(21), 5237 (2004). [CrossRef]  

29. M. Martinez-Corral, B. Javidi, R. Martinez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Appl. Opt. 43(31), 5806 (2004). [CrossRef]  

30. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324 (2002). [CrossRef]  

31. Y. Kim, J.-H. Park, H. Choi, S. Jung, S.-W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004). [CrossRef]  

32. J.-S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. 28(20), 1924–1926 (2003). [CrossRef]  

33. Z. Kavehvash, K. Mehrany, and S. Bagheri, “Optimization of the lens-array structure for performance improvement of integral imaging,” Opt. Lett. 36(20), 3993 (2011). [CrossRef]  

34. Z. Kavehvash, K. Mehrany, and S. Bagheri, “Improved resolution three-dimensional integral imaging using optimized irregular lens-array structure,” Appl. Opt. 51(25), 6031 (2012). [CrossRef]  

35. G. Saavedra, R. Martinez-Cuenca, M. Martinez-Corral, H. Navarro, M. Daneshpanah, and B. Javidi, “Digital slicing of 3D scenes by Fourier filtering of integral images,” Opt. Express 16(22), 17154 (2008). [CrossRef]  

36. D.-H. Shin, B. Lee, and E.-S. Kim, “Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens,” Appl. Opt. 45(28), 7375 (2006). [CrossRef]  
