
Depth-of-field engineering in coded aperture imaging

Open Access

Abstract

Extending the depth-of-field (DOF) of an optical imaging system without affecting its other imaging properties has long been an important research topic. In this work, we propose a new general technique for engineering the DOF of an imaging system that goes beyond a simple extension of the DOF. By engineering the DOF we mean, in this study, that the inherent DOF can be extended to one, or to several, separated intervals of DOF with controlled start and end points. Practically, as a result of the DOF engineering, all objects in certain separated input subvolumes are imaged with the same sharpness, as if these objects were all in focus. Furthermore, the images from different subvolumes can be laterally shifted, each subvolume by a different shift, relative to their positions in the object space. By doing so, mutual hiding of images can be avoided. The proposed technique is introduced into a system of coded aperture imaging; in other words, the light from the object space is modulated by a coded aperture and recorded into a computer, in which the desired image is reconstructed from the recorded pattern. The DOF engineering is performed by designing a coded aperture composed of three diffractive elements. One element is a quadratic phase function dictating the start point of the in-focus axial interval, and the second element is a quartic phase function dictating the end point of this interval. The third element is a quasi-random coded phase mask, which enables the digital reconstruction. Multiplexing several sets of diffractive elements, each with a different set of phase coefficients, can yield various axial reconstruction curves. All the diffractive elements are displayed on a spatial light modulator, such that real-time DOF engineering according to the user's needs is enabled in the course of the observation. Experimental verifications of the proposed system with several examples of DOF engineering are presented, where the entire imaging of the observed scene is done by a single camera shot.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Depth-of-field (DOF) in imaging systems is commonly defined as the axial range between the nearest and the farthest objects that yield an image judged to be in focus. Extending the DOF without harming the image resolution is a natural goal in many optical imaging systems [1–4]. The DOF of a conventional imaging system depends on the ratio between the illumination wavelength and the square of the numerical aperture [5]. Decreasing the numerical aperture, or increasing the wavelength, increases the DOF of the optical system. However, both changes also increase the minimal object size that the system can resolve.

In order to overcome the challenge of extending the DOF without damaging the image resolution, various techniques have been proposed, such as image fusion [3,6,7], wavefront coding [8–11], and scattering-based methods [12]. In image fusion, multiple images of the same scene are acquired, where in each image a different slice of the scene is in focus. The recorded stack of images is combined into a single image in which all parts of the observed volume are in focus. Image fusion extends the DOF, but the technique involves various challenges, such as careful alignment, misregistration, and a complex fusion algorithm [13]. In the scattering-based method, extended DOF is achieved by accumulating the point spread functions (PSFs) recorded for different axial locations along the desired extended DOF [12]. In this technique, the DOF can be extended over a large depth, but a large PSF dataset needs to be recorded in order to generate the accumulated PSFs, making the calibration process time consuming.

Another technique for extending the DOF of imaging systems is wavefront coding [8–11]. In wavefront coding, the DOF is extended by optical filtering with a non-rotationally-symmetric phase mask and digital signal processing of the recorded pattern. In this technique, the optical system is modified by a phase mask such that the point spread function is insensitive to misfocus. The in-focus image is reconstructed by a digital deconvolution of the sampled intermediate image with the point spread function. Hence, the optical transfer function is insensitive to misfocus and an extended DOF is achieved [8]. However, this technique is sensitive to external and reconstruction noise [10]. In the aspects of using phase masks and digital post-processing, the presently proposed technique is similar to wavefront coding. However, the proposed method is more general in the sense that it enables general DOF engineering (DOFE) rather than only DOF extension.

Another class of techniques for extending the DOF uses engineered diffractive optical elements, such as light sword elements [14] and axicons [15,16]. Both methods create an on-axis focal segment, which results in an extended DOF. However, the light sword elements have a lower image resolution than conventional systems [14]. In the case of axicons, the field of view is relatively narrow due to the limited off-axis imaging capabilities of the axicon [15]. Apart from that, images created by axicons have poor contrast due to the relatively narrow spatial bandwidth of the system.

In this study, we engineer the DOF of an imaging system known as coded aperture correlation holography (COACH) [17]. COACH is a self-interference incoherent holography technique in which the interference occurs between a plane wave and light modulated by a coded phase mask. Both the plane and the modulated waves originate from the same object point, hence the terminology of self-interference holography is used for this family of incoherent holograms [18,19]. It is interesting to note that another self-interference incoherent holography system, implemented on a rotational shearing interferometer, has the property of quasi-infinite DOF [20]. However, DOFE, or even control over the length of the DOF, has not been demonstrated by the self-interference holographic technique of [20]. Returning to COACH, a short time after its invention it was realized that three-dimensional (3D) imaging can also be achieved by recording only the intensity of the chaotically modulated light beams. The interference between the modulated wave and the plane wave became unnecessary for 3D imaging, and hence the new technique has been termed interferenceless COACH (I-COACH) [21]. Further developments of this technology are I-COACH without refractive lenses [22], a single camera-shot I-COACH system [23], I-COACH with an extended field of view [24], and a nonlinear reconstruction method for single-shot imaging [25]. The I-COACH technique is also effective for synthetic aperture imaging [26,27], for improving the resolution of imaging systems [28–30], and for coherent imaging [31]. However, control over the axial properties of I-COACH is, to the best of our knowledge, demonstrated herein for the first time.

In this study, we propose a novel method of DOFE that controls the imaging of a 3D scene in the sense of imaging chosen subvolumes from the scene and ignoring others. To control a single subvolume out of the entire observed scene, we use a modified version of I-COACH in which a product of three independent phase masks is used as the system aperture. The three phase masks are quadratic, quartic, and quasi-random, each of which serves a different purpose, as detailed next. The quadratic phase function, known as the phase transformation of a spherical lens, determines the initial axial point of the subvolume in the imaging space. The radial quartic phase function (RQPF) [32,33] is responsible for the length of the imaged subvolume. The quasi-random function, termed coded phase mask (CPM) [17,19], is indirectly in charge of the imaging itself. Multiplexing L triplets of phase masks, each consisting of a diffractive spherical lens (DSL), an RQPF, and a CPM, enables one to engineer the DOF of the system for imaging L different subvolumes of the observed 3D scene.

This study offers a new concept of in-focus imaging of several user-selected subvolumes of the object space. Such a concept can be useful when several targets are expected to be at several a priori known subvolumes and there is a need to image them simultaneously and with high quality. The simultaneous imaging by single-shot recording is achieved thanks to the nonlinear reconstruction proposed in [25] and improved in [30]. In addition, a lateral separation between two subvolumes is demonstrated, such that mutual concealment of objects placed on the same sightline is avoided. Furthermore, losses of optical power due to the scattering nature of the CPM are minimized by constraining a sparse point response [29] at the sensor plane during the CPM synthesis. Finally, we avoid the tedious process of calibrating the system by recording the point spread holograms (PSHs) used to reconstruct the images. Unlike previous I-COACH systems, in which the PSHs were recorded in a guidestar process [21–30], in the present study the PSHs are generated computationally.

The article consists of four sections. The methodology of the proposed technique is described in the second section. The experimental procedure, results, and comparisons with other imaging methods are presented in the third section. The last section is devoted to the discussion and conclusions of this study.

2. Methodology

The scheme in Fig. 1 illustrates the optical configuration of the imaging system with an engineered DOF for a single subvolume of the entire object space. The object subvolume of length d is illuminated through lens L0 by a spatially incoherent and quasi-monochromatic light source. The light scattered from objects located inside the subvolume is incident on the phase mask displayed on a spatial light modulator (SLM). The SLM is positioned at a distance u from the front end of the subvolume. The phase mask on the SLM is a product of the three above-mentioned masks, DSL, RQPF and CPM. The light beyond the SLM is projected onto the sensor plane positioned at a distance v from the SLM.

Fig. 1. Scheme of the optical system for recording the object hologram.

The three phase masks are now discussed in detail. Based on considerations of signal-to-noise ratio (SNR) [34], the CPM is designed to project several randomly distributed dots on the camera plane for each object point positioned on the front plane of the object volume. The CPM is computed by a modified Gerchberg-Saxton algorithm (GSA) [34,35], schematically illustrated in Fig. 2. The GSA starts with a random phase mask and a uniform amplitude on the CPM plane. Next, the complex-valued CPM is Fourier transformed to the camera plane, in which the phase is left unchanged and the magnitude is constrained to be a pattern of sparse dots randomly distributed over a predefined area of size b×b. Following the inverse Fourier transform back to the CPM plane, the phase is again left unchanged and the magnitude is constrained to be 1 over the entire plane. The algorithm runs until the difference between the CPMs of two consecutive cycles is negligible. The predefined area on the sensor plane controls the amount of light scattered by the CPM. We define the scattering degree as σ=b/B, where b is the width of the predefined area and B is the maximum width of the camera plane. The other important parameter of the predefined pattern on the sensor plane is N, the number of dots randomly distributed over the sensor. The number of dots is chosen as the optimal balance between two conflicting desires: on one hand, we wish to minimize the background noise by taking a complicated pattern with a maximum number of dots, and on the other hand, we wish to maximize the SNR of the camera by minimizing N [34].

Fig. 2. Modified Gerchberg-Saxton algorithm for synthesizing the CPMs, where A is amplitude and Φ is phase corresponding to the complex value C.
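A minimal NumPy sketch of the modified GSA described above is given here. It follows the constraints of Fig. 2 (uniform magnitude on the CPM plane, a sparse pattern of N dots inside a b×b window on the sensor plane), but the fixed iteration count, grid size, and random seed are illustrative assumptions and not the exact settings or convergence test used in the experiments.

```python
import numpy as np

def synthesize_cpm(grid=1080, n_dots=10, sigma=0.339, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    # Target magnitude on the sensor plane: N dots randomly placed inside the
    # central b-by-b window, where b = sigma * B and B is the full sensor width.
    b = int(sigma * grid)
    lo = (grid - b) // 2
    target = np.zeros((grid, grid))
    target[rng.integers(lo, lo + b, n_dots), rng.integers(lo, lo + b, n_dots)] = 1.0

    # Start from a random phase and a unit magnitude on the CPM plane.
    cpm = np.exp(1j * 2 * np.pi * rng.random((grid, grid)))
    for _ in range(iterations):
        sensor = np.fft.fftshift(np.fft.fft2(cpm))       # CPM plane -> sensor plane
        sensor = target * np.exp(1j * np.angle(sensor))  # keep phase, impose sparse dots
        cpm = np.fft.ifft2(np.fft.ifftshift(sensor))     # sensor plane -> CPM plane
        cpm = np.exp(1j * np.angle(cpm))                 # keep phase, impose unit magnitude
    return np.angle(cpm)                                 # phase-only CPM
```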

The next phase mask displayed on the SLM is the DSL with a predefined focal length. The DSL has been used in several I-COACH designs [29,30,34], always to satisfy the Fourier relation between the CPM and the camera planes needed for the GSA. Specific to the current study of the DOFE, the focal length of the DSL determines the initial axial point of the imaged subvolume. In other words, the DSL creates an image of the front plane of the object volume when its focal length is chosen as f=(1/u+1/v)−1.

The third phase mask composed on the SLM with the CPM and the DSL is the RQPF, introduced here into I-COACH for the first time. The phase distribution of the RQPF is exp[i2π(r/p)4], where r is the radial coordinate on the SLM plane and p is the modulation parameter, which controls the length of the DOF. The RQPF extends the DOF of each dot on the camera plane and thereby determines the length of the imaged subvolume. In [32,33] the authors have shown that displaying an RQPF in front of a spherical lens, and illuminating the setup by a plane or a spherical wave, creates a sword-like light beam [32] along the optical axis. The term sword-like light beam means a light wave with quasi-constant intensity along a finite propagation interval and a beamlike shape in every transverse cross-section along this interval. The starting point of this beam is the focusing point created by the DSL at a distance $v = (1/f - 1/u)^{-1}$ from the SLM, and the length of the beam is determined by the modulation parameter of the RQPF. The two phase masks, the CPM and the DSL, together create a group of dots, which can be considered as a set of focal points. Therefore, integrating the RQPF as the third phase mask extends the DOF of each focal point of the set. The reason for choosing the RQPF for creating the sword beams is rooted in the McCutchen theorem [36]. This theorem states that for a spherical wave converging to some focal point and modulated by a radial function g(ρ), the intensity distribution along the optical axis is the magnitude square of the scaled one-dimensional Fourier transform of g(ρ), where the origin of the Fourier axis is at the focal point, ρ=r2 and r is the radial coordinate on the plane of g(ρ). Therefore, if the goal is to obtain an intensity as uniform as possible along the z axis, g(ρ) should be a phase function (in order to minimize the light absorption) whose Fourier transform is uniform along some interval. A quadratic phase function in ρ (quartic in r), g(ρ)=exp[i2πρ2/p4] = exp[i2π(r/p)4], satisfies these conditions, because the Fourier transform of an infinite quadratic phase function is another infinite quadratic phase function with a constant magnitude. In the present case of a finite quadratic phase function, the magnitude of its Fourier transform is approximately uniform along some finite interval, whose length is controlled by the parameter p. The interval of an extended DOF of length D is projected to the object space by the factor $M_T^{-2}$, where $M_T = v/u$ is the transverse magnification of the system. Thus, the length of the observed subvolume is $d = DM_T^{-2}$. Each of the object points, at a different depth inside the subvolume of length d, creates the same random pattern of in-focus dots. Therefore, all the object points inside the subvolume are reconstructed with the same quality regardless of their exact depth inside the subvolume. To conclude this point, the RQPF, when composed with the CPM and the DSL, extends the DOF of the coded aperture imaging system.
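For illustration, the sketch below composes the three phase functions into a single aperture: the quadratic DSL phase Q(−1/f) with $f = (1/u + 1/v)^{-1}$, the quartic RQPF phase 2π(r/p)4, and a CPM phase (here a random placeholder standing in for a GSA-synthesized mask). The numerical values (u=22.2 cm, v=21 cm, p=0.33 cm, λ=635 nm, 1080×1080 pixels of 8 µm) are those reported in Section 3 and are used only as an example; the modulo-2π addition is the composition actually displayed on the SLM, as described there.

```python
import numpy as np

wavelength = 635e-9            # m
pixel = 8e-6                   # m, SLM pixel pitch
n = 1080                       # SLM pixels used as the aperture
u, v, p = 0.222, 0.21, 0.0033  # m (values of the first configuration in Section 3)
f = 1.0 / (1.0 / u + 1.0 / v)  # DSL focal length, ~0.108 m (10.8 cm)

x = (np.arange(n) - n // 2) * pixel
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2                                  # r^2 on the SLM plane

phi_dsl = -np.pi * r2 / (wavelength * f)          # quadratic lens phase, Q(-1/f)
phi_rqpf = 2 * np.pi * (r2 / p**2)**2             # quartic phase, 2*pi*(r/p)^4
phi_cpm = 2 * np.pi * np.random.rand(n, n)        # placeholder quasi-random CPM phase

slm_phase = np.mod(phi_dsl + phi_rqpf + phi_cpm, 2 * np.pi)  # mask displayed on the SLM
```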
d(u,W,λ,p), the DOF length as a function of the object-SLM gap u, the diameter W of the SLM, the wavelength λ, and the modulation parameter p, can be calculated directly using the well-known rule that optical rays travel in the direction of the wavefront gradient [5,37]. To simplify the calculation, we concentrate on a single dot located at the origin of the camera plane. However, because the optical system is linear and space invariant, the calculation is valid for any dot on the camera plane. A point at the origin of the camera plane is created by a CPM of constant phase. According to Fig. 1, the complex amplitude right beyond the SLM with the two phase masks, the DSL and the RQPF, illuminated by a point source from z=−u is,

$$U(\bar{r},z) = a_o\,\exp[ikS(\bar{r},z)] = a_o\,\exp\!\left[\frac{i2\pi}{\lambda}\left(\frac{r^2}{2u} - \frac{r^2}{2f} + \frac{\lambda r^4}{p^4} + z\right)\right],$$
where k=2π/λ is the wave number. The ray vector of the light that originates from z=−u, passes through the edge of the SLM at a height of W/2 above the z axis, and propagates toward the sensor is,
$$\nabla S(\vec{r},z)\big|_{r = W/2} = \left(\frac{W}{2u} - \frac{W}{2f} + \frac{\lambda W^3}{2p^4}\right)\hat{r} + \hat{z}.$$

The tangent of θ, the angle between the marginal ray beyond the SLM and the z axis, satisfies the following two relations,

$$\tan(\theta) = \frac{-W}{2v} = \frac{W}{2u} - \frac{W}{2f} + \frac{\lambda W^3}{2p^4}.$$

To guarantee that each point inside the subvolume of length d is in focus, the ray emitted from the end of the subvolume (a distance u−d before the SLM) that passes through the center of the SLM should be focused on the camera plane, a distance v from the SLM. Assuming the quartic phase of order r4 is negligible near the SLM center, the end point should satisfy the imaging equation,

$$\frac{1}{u - d} + \frac{1}{v} = \frac{1}{f}.$$

The DOF length d is obtained as the solution of Eqs. (3) and (4), as follows,

$$d = \frac{\lambda u^2 W^2}{p^4 - \lambda u W^2}.$$
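As a quick numeric check, Eq. (5) reproduces the theoretical DOF values quoted in Section 3; the wavelength of 635 nm and the aperture width W=0.86 cm used below are the experimental values reported there.

```python
wavelength_cm = 635e-7  # 635 nm expressed in cm

def dof_length(u_cm, W_cm, p_cm, lam=wavelength_cm):
    """DOF length d of Eq. (5); all lengths are given in cm."""
    return lam * u_cm**2 * W_cm**2 / (p_cm**4 - lam * u_cm * W_cm**2)

print(dof_length(24.7, 0.86, 0.33))  # ~2.68 cm (quoted as 2.7 cm in Section 3)
print(dof_length(18.4, 0.86, 0.32))  # ~1.65 cm (quoted as 1.6 cm in Section 3)
```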

Next, we describe the processes of hologram recording and image reconstruction rigorously, in order to evaluate the effects of various parameters on the imaging properties of the system. The following analysis is based on the configuration of Fig. 1. The light diffracted from a point object with an amplitude of $\sqrt{I_u}$ located at $(\vec{r},z) = (-\vec{r}_u, -u)$ reaches the SLM with a complex amplitude of $\sqrt{I_u}\,C_1 L(\vec{r}_u/u)Q(1/u)$, where C1 is a complex constant, $L(\vec{q}/z) = \exp[i2\pi(q_x x + q_y y)/(\lambda z)]$ is a linear phase and $Q(w) = \exp[i\pi w(x^2 + y^2)/\lambda]$ is a quadratic phase function. The distribution of the phase mask on the SLM is $\exp[i2\pi\phi(\vec{r})]\cdot\exp[i2\pi(r/p)^4]\cdot Q(-1/f)$, where $\exp[i2\pi\phi(\vec{r})]$ corresponds to the CPM, $\exp[i2\pi(r/p)^4]$ corresponds to the RQPF and $Q(-1/f)$ is the DSL. Following the light propagation from the SLM, the intensity on the sensor plane is,

$$I(\vec{r}_0) = \left|\sqrt{I_u}\,C_1 L\!\left(\frac{\vec{r}_u}{u}\right) Q\!\left(\frac{1}{u}\right)\exp[i2\pi\phi(\vec{r})]\exp\!\left[i2\pi\left(\frac{r}{p}\right)^{\!4}\right] Q\!\left(\frac{-1}{f}\right) \ast Q\!\left(\frac{1}{v}\right)\right|^2,$$
where $\vec{r}_0$ is the transverse vector on the sensor plane and $\ast$ denotes a two-dimensional convolution. Based on previous studies [26], when the imaging condition is satisfied between the object and the sensor plane, the intensity distribution on the sensor is the magnitude square of the scaled two-dimensional (2D) Fourier transform of the phase distribution corresponding to the product CPM·RQPF·$L(\vec{r}_u/u)$, and it is given by,
$$I(\vec{r}_0) = I_o\left|\nu\!\left[\frac{1}{\lambda v}\right]\mathfrak{F}\left\{L\!\left(\frac{\vec{r}_u}{u}\right)\exp[i2\pi\phi(\vec{r})]\exp\!\left[i2\pi\left(\frac{r}{p}\right)^{\!4}\right]\right\}\right|^2 = I_{PSH}\!\left(\vec{r}_0 - \frac{v\vec{r}_u}{u}\right),$$
where Io is a constant, ${\mathfrak{F}}$ is the 2D Fourier transform, IPSH is the intensity of the PSH and the scaling operator ν[·] is defined by the equation ν[α] f(x)=f(αx).
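Under the Fourier relation of Eq. (7), the on-axis PSH (the case $\vec{r}_u = 0$, since the linear phase only shifts the pattern) can be approximated numerically as the squared magnitude of the 2D Fourier transform of the product CPM·RQPF, as sketched below. The FFT stands in for the scaled optical transform $\nu[1/(\lambda v)]\mathfrak{F}\{\cdot\}$; the omitted scaling only fixes the physical sampling interval on the sensor.

```python
import numpy as np

def compute_psh(phi_cpm, phi_rqpf):
    """Numerical I_PSH of Eq. (7); phi_cpm and phi_rqpf are 2D phase maps in radians."""
    aperture = np.exp(1j * (phi_cpm + phi_rqpf))                      # product CPM * RQPF
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))  # transform to the sensor plane
    return np.abs(field) ** 2                                         # recorded intensity I_PSH
```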

A 2D object illuminated by spatially incoherent light and located at z=−u can be considered as a collection of M uncorrelated object points given by,

$$I_{OBJ}(\vec{r}) = \sum_{m}^{M} a_m\,\delta(\vec{r} - \vec{r}_m).$$

Since, according to Eq. (7), the imaging system is linear and space invariant for intensity signals, the intensity distribution IOH (termed the object hologram) on the sensor plane due to the object is a sum of all the shifted point responses, as follows,

$${I_{OH}}({{{\vec{r}}_0}} )= \sum\limits_m^M {{a_m}{I_{PSH}}\left( {{{\vec{r}}_0} - \frac{v}{u}{{\vec{r}}_m}} \right)} = {I_{OBJ}}\left( {\frac{{{{\vec{r}}_0}}}{{{M_T}}}} \right) \ast {I_{PSH}}({{{\vec{r}}_0}} ),$$
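For simulation purposes, Eq. (9) can be evaluated directly as an FFT-based (circular) convolution of the magnified object intensity with the PSH; the short sketch below assumes both arrays share the same sensor-plane sampling grid.

```python
import numpy as np

def object_hologram(i_obj_scaled, i_psh):
    """I_OH of Eq. (9): i_obj_scaled is I_OBJ already scaled by M_T, i_psh is I_PSH."""
    return np.real(np.fft.ifft2(np.fft.fft2(i_obj_scaled) * np.fft.fft2(i_psh)))
```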

Next, we discuss the technique of nonlinear reconstruction (NLR) of the image from the object hologram [25]. In previous studies of I-COACH, the image reconstruction was done by a cross-correlation between the object hologram and a modified version of the PSH [21–24]. However, it is known that most of the image information of an object is encoded in the phase, rather than in the magnitude, of its Fourier spectrum [38]. Hence, in the NLR technique, the phase distribution of the object spectrum is extracted unchanged from the object hologram, whereas the magnitudes of the object hologram and the PSH are modified in order to increase the SNR of the image. Explicitly, the magnitudes of the Fourier transforms of the PSH and of the object hologram are raised to the powers r and o, respectively. For an object positioned in the input, the reconstructed image is given by,

$$\begin{aligned}I_{REC}^{} & = {{\mathfrak{F}}^{ - 1}}\{{{{|{\tilde{I}_{OH}^{}} |}^o}} \textrm{exp} [{i\arg ({\tilde{I}_{OH}^{}} )} ] {{{|{\tilde{I}_{PSH}^{}} |}^r}\textrm{exp} [{ - i\arg ({\tilde{I}_{PSH}^{}} )} ]} \}\\ &= {{\mathfrak{F}}^{ - 1}}\{{{{|{{\mathfrak{F}}\{{I_{OBJ}^{}({{{\vec{r}} / {{M_T}}}} )\ast I_{PSH}^{}} \}} |}^o}} \\ &\times \textrm{exp} [{i\arg ({{\mathfrak{F}}\{{I_{OBJ}^{}({{{\vec{r}} / {{M_T}}}} )\ast I_{PSH}^{}} \}} )} ] {{{|{\tilde{I}_{PSH}^{}} |}^r}\textrm{exp} [{ - i\arg ({\tilde{I}_{PSH}^{}} )} ]} \}\\ &= {{\mathfrak{F}}^{ - 1}}\{{{{|{\tilde{I}_{OBJ}^{}\tilde{I}_{PSH}^{}} |}^o}\textrm{exp} [{i\arg ({\tilde{I}_{OBJ}^{}} )+ i\arg ({\tilde{I}_{PSH}^{}} )} ]} \\ &\times {{{|{\tilde{I}_{PSH}^{}} |}^r}\textrm{exp} [{ - i\arg ({\tilde{I}_{PSH}^{}} )} ]} \}\\ &= {{\mathfrak{F}}^{ - 1}}\{{{{|{\tilde{I}_{OBJ}^{}} |}^o}{{|{\tilde{I}_{PSH}^{}} |}^{o + r}}\textrm{exp} [{i\arg ({\tilde{I}_{OBJ}^{}} )} ]} \}\cong I_{OBJ}^{}({{{\vec{r}} / {{M_T}}}} ),\quad \quad \; \end{aligned}$$
where $\tilde{I}_{PSH} = \mathfrak{F}\{ I_{PSH}\}$, $\tilde{I}_{OH} = \mathfrak{F}\{ I_{OH}\}$ and $\tilde{I}_{OBJ} = \mathfrak{F}\{ I_{OBJ}\}$. Eq. (10) indicates that parameters satisfying the relation r + o=0 theoretically reconstruct an image close (identical in the case $o = -r = 1$) to the object. However, experience [25,29] indicates that when the relation r + o=0 is satisfied, the obtained reconstructed images are relatively noisy. Therefore, to determine the optimal parameters in the present experiments, the values of o and r are varied from −1 to 1 with a step size of 0.1, and the entropy is calculated as a blind figure of merit. The reconstructed image having the lowest entropy value determines the optimal values of o and r. In order to calculate the entropy for the different o and r parameters, the reconstructed images are first normalized, and then the entropy is calculated from the normalized images. The entropy and normalization equations are,
$$\begin{aligned} &E({o,r} )={-} \sum\limits_k {\sum\limits_l {{\psi_{o,r}}(k,l)\log [{{\psi_{o,r}}(k,l)} ]} } ,\\ &{\psi _{o,r}}(k,l) = \frac{{{I_{REC,o,r}}({k,l} )}}{{\sum\limits_k {\sum\limits_l {{I_{REC,o,r}}({k,l} )} } }}, \end{aligned}$$
where E(o,r) is the entropy value for different o and r parameters, k and l are the matrix indices of the reconstructed image IREC, and ψo,r(k,l) is the normalized version of IREC.
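The following sketch implements the NLR of Eq. (10) together with the entropy criterion of Eq. (11), scanning o and r from −1 to 1 in steps of 0.1 as described above. The small epsilon added to the spectral magnitudes is only a numerical safeguard against zeros and is not part of the original formulation.

```python
import numpy as np

def nlr(i_oh, i_psh, o, r, eps=1e-12):
    """Nonlinear reconstruction of Eq. (10) from the object hologram and the PSH."""
    OH, PSH = np.fft.fft2(i_oh), np.fft.fft2(i_psh)
    spectrum = ((np.abs(OH) + eps) ** o) * np.exp(1j * np.angle(OH)) \
             * ((np.abs(PSH) + eps) ** r) * np.exp(-1j * np.angle(PSH))
    return np.abs(np.fft.ifft2(spectrum))

def entropy(image):
    """Entropy of Eq. (11), computed on the normalized reconstruction."""
    psi = image / image.sum()
    psi = psi[psi > 0]
    return -np.sum(psi * np.log(psi))

def best_nlr(i_oh, i_psh):
    """Scan o and r over [-1, 1] with a 0.1 step and keep the lowest-entropy result."""
    grid = np.round(np.arange(-1.0, 1.0 + 1e-9, 0.1), 1)
    _, o_best, r_best = min((entropy(nlr(i_oh, i_psh, o, r)), o, r)
                            for o in grid for r in grid)
    return o_best, r_best, nlr(i_oh, i_psh, o_best, r_best)
```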

3. Experimental procedure and results

The experiments with the proposed technique were carried out for three different scenarios of DOFE, all demonstrated on the modified I-COACH described in Section 2. In the first experiment we tested a DOF extension in a single subvolume. Inside the single subvolume, two objects were positioned with an axial gap between them, so that under direct conventional imaging, when one object was in focus the other was out of focus. In the modified I-COACH with the DOFE, on the other hand, both objects were reconstructed in focus simultaneously from a single-shot hologram. In the second experiment, a multi-volume configuration was tested, in which objects were placed in three different subvolumes along the optical axis. Two sets of three phase masks each were multiplexed on the SLM to demonstrate a single-shot reconstruction for objects inside two unconnected subvolumes out of the three. In the last experiment of this study, one object was positioned in a subvolume for which there was a set of three phase masks, and the other object was located in another subvolume for which the DOFE was not performed. This experiment demonstrates the selective imaging capability of the proposed system, where objects in selected subvolumes can be imaged, but objects in other subvolumes yield only faded noise instead of an in-focus image. The results of the first configuration are shown in the next subsection, whereas the results of the second and third configurations are described in subsection B.

  • A. Single volume imaging
The optical configuration for the experiment of the DOFE is shown in Fig. 3. The experimental setup contains two channels with two identical incoherent sources (Thorlabs LED635L, 170 mW, λ=635 nm, Δλ=15 nm) illuminating two different targets through lenses L0A and L0B, both of 20 cm focal length. In the first configuration, element 3 of group 1 (8 lp/mm) of the United States Air Force (USAF) resolution chart was mounted in both channels 1 and 2. The object with the digit ‘3’ was closest to the SLM, and the axial gap between the objects in the two channels was 2.5 cm. The SLM (Holoeye PLUTO, 1920×1080 pixels, 8 µm pixel pitch, phase-only modulation) was located at a distance of 22.2 cm from the object in channel 1. The distance between the SLM and the digital camera (Thorlabs DCC3260M, 5.86 µm pixel pitch) was 21 cm. Only 1080×1080 pixels of the SLM were used as the aperture.

Fig. 3. Experimental setup of the DOFE-based modified I-COACH. BS1 and BS2 – Beam splitters; SLM – Spatial light modulator; L0A, L0B – Refractive lenses; LED – Light emitting diode; u and u’ are the distances of two objects within the same subvolume.

The same setup becomes a direct imaging system when only a proper DSL is displayed on the SLM. For direct imaging, the objects in channels 1 and 2 were imaged on the sensor by displaying on the SLM, each time, a DSL of 10.8 cm and of 11.3 cm focal length, respectively. The direct images of each object captured by the camera are shown in Figs. 4(a) and 4(b), indicating that the axial separation between the two objects is large enough that both objects cannot be seen in focus simultaneously by direct imaging.

Fig. 4. (a) and (b) Direct images of objects for channels 1 and 2, respectively, (c) phase mask displayed on the SLM, (d) object hologram and (e) computed PSH for No=10 and optimal scattering degree σo=0.339.

For the modified I-COACH system with the DOFE, the phase mask displayed on the SLM was generated by the modulo-2π phase addition of the DSL, RQPF and CPM. The GSA generated the CPM, where the Fourier relation of the GSA between the SLM and the sensor plane for any point source at the object plane was satisfied experimentally by composing a DSL with a 10.8 cm focal length. The RQPF extended the DOF to a theoretical value of 2.7 cm, calculated by Eq. (5) with the parameters p=0.33 cm, u=24.7 cm, and W=0.86 cm. The phase mask displayed on the SLM and the object hologram captured for a scattering degree of 0.339 and 10 dots are shown in Figs. 4(c) and 4(d), respectively. These two parameters were found to be optimal in terms of maximum SNR and visibility for the given objects [34] and are used for all the CPMs of this study.

Usually in I-COACH, the image is reconstructed by cross-correlating the object hologram, shown in Fig. 4(d), with the PSH corresponding to the axial location of the object. In previous works on I-COACH [19,39], a library of PSHs was recorded from the intensity responses of a pinhole located at various axial locations. Recording the library increases the experimental complexity of the I-COACH technique, and any change of the experimental conditions may require re-recording a new library. In the present study, instead of recording the PSHs experimentally, they were created numerically in the computer. The data needed for the numerical generation of the PSH are the phase mask displayed on the SLM and the distances in the setup. An SLM with dimensions of 1080×1080 pixels and a pixel size of 8 µm was assumed for the numerical computation of the PSH. The numerically generated PSH was resized to compensate for the difference between the pixel sizes of the SLM and of the camera (pixel size of 5.86 µm). The numerical PSH generated for a scattering degree of σo=0.339 and No=10 dots is shown in Fig. 4(e). The object reconstruction was done by the nonlinear correlation of the PSH and the object hologram with the parameters o=0.5 and r=0.4, as described in Eq. (10).
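The resizing step mentioned above can be done with any standard resampling routine; the sketch below uses scipy.ndimage.zoom as one possible choice and leaves the exact zoom factor as a parameter, since it depends on the sampling interval of the simulated sensor plane relative to the 5.86 µm camera pixel pitch.

```python
from scipy.ndimage import zoom

def resize_psh(i_psh, sim_pitch_um, cam_pitch_um=5.86):
    """Resample the simulated PSH from its native pixel pitch to the camera pixel pitch."""
    return zoom(i_psh, sim_pitch_um / cam_pitch_um, order=1)
```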

The reconstructed image of the DOFE-based I-COACH for an RQPF with p=0.33 cm is shown in Fig. 5. The measured DOF is 2.5 cm, a value quite close to the theoretical DOF of 2.7 cm. Comparing Fig. 5 with the direct images [Figs. 4(a) and 4(b)], it is evident that by introducing the RQPF, the DOF of the modified I-COACH system can be extended to a larger axial distance than the DOF of a conventional imaging system with the same numerical aperture.

Fig. 5. Reconstructed images of the DOFE-based I-COACH.

A second example is demonstrated to validate the reliability of the proposed technique. USAF elements 5-6 of group 3 (12.7-14.25 lp/mm) were placed at a distance of 16.9 cm from the SLM, and USAF element 1 of group 3 (8 lp/mm) was located at a distance of 18.4 cm from the SLM. The distance between the SLM and the digital camera was 21 cm. The direct imaging results for both objects are shown in Figs. 6(a) and 6(b), indicating that the axial separation between the objects is large enough that both objects cannot be in focus simultaneously. Figure 6(c) demonstrates direct imaging using the RQPF composed with the DSL. Both images are seen better than the out-of-focus images of Figs. 6(a) and 6(b), but the images are still blurred with relatively poor contrast. As the SLM-to-sensor distance is not changed in this configuration, we assume that the optimal scattering degree and number of dots needed for imaging in this configuration are the same as those of the previous experiment. A DSL of 9.4 cm focal length and an RQPF with p=0.32 cm were displayed on the SLM. The PSH was computed numerically with the experimental parameters as described above. The reconstructed image of the objects, obtained by the nonlinear correlation between the PSH and the object hologram, is shown in Fig. 6(d), demonstrating a DOF of d=1.5 cm. This last result is close to the theoretical value of d=1.6 cm for p=0.32 cm, u=18.4 cm and W=0.86 cm, calculated using Eq. (5).

  • B. Multi-volume imaging

Fig. 6. (a) and (b) Direct images of objects in two axial positions. (c) Direct image with RQPF and DSL. (d) Reconstructed image of the DOFE-based modified I-COACH.

The imaging capability of the proposed system for more than one subvolume is described in this section. In the following example, the two subvolumes indicated in Fig. 7 are considered, and two imaging tasks are demonstrated. First, we show the in-focus imaging capability in subvolumes 1 and 2, where objects positioned outside these subvolumes are always out of focus. Second, a transverse separation between the images of subvolumes 1 and 2 is demonstrated. The transverse separation of images can be significant for objects located on the same sightline, which therefore cover each other. Subvolumes 1 and 2 are positioned at the intervals 16.9-18.5 cm and 22.2-24.7 cm from the SLM, respectively. Subvolume 1 corresponds to the second configuration of the previous subsection, whereas subvolume 2 corresponds to the first configuration of the previous subsection.

Fig. 7. Optical scheme of I-COACH for multi-volume imaging.

In the first case, our aim is to selectively reconstruct, from a single captured hologram, images belonging to only one of the two subvolumes, 1 or 2. As mentioned above, subvolumes 1 and 2 need different sets of parameters for the DSL, CPM and RQPF. In order to have one phase aperture on the SLM for both volumes, two different sets of three phase masks were spatially multiplexed. The scheme of mask multiplexing is shown in Fig. 8. As shown in the fifth line of Fig. 8, a binary circular grating with a ring width of 30 pixels was used for dividing the SLM area between the two sets of phase masks for subvolumes 1 and 2. The phase masks (each composed of the DSL, CPM and RQPF) corresponding to subvolumes 1 and 2 were displayed on the white and black rings, respectively. Hence, the final phase aperture has the properties of both subvolumes 1 and 2. Additional linear phase masks, shown in the first line of Fig. 8, were composed with the two sets. These linear phase functions are aimed at transversely separating the images of the objects from the two subvolumes. For the transverse separation of the images, a horizontal linear phase tilt of +0.9° was added to the phase mask corresponding to subvolume 1, whereas a linear phase tilt of −0.9° was added to the phase mask of subvolume 2. These linear phases deflect the light beyond the SLM in two opposite directions, and hence the images of objects from the two subvolumes are shifted in opposite directions.

Fig. 8. Multiplexing process of two sets of phase masks for multi-volume imaging.
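The multiplexing of Fig. 8 can be sketched as follows, under the parameters stated above (30-pixel ring width, ±0.9° horizontal tilts, 8 µm pixels, λ=635 nm). Here phase1 and phase2 stand for the two composite DSL+RQPF+CPM phase masks of subvolumes 1 and 2, and the assignment of the even rings to mask 1 is an illustrative choice rather than the exact layout of the experiment.

```python
import numpy as np

def multiplex(phase1, phase2, ring_width=30, tilt_deg=0.9,
              pixel=8e-6, wavelength=635e-9):
    """Ring-wise multiplexing of two composite masks with opposite linear phases."""
    n = phase1.shape[0]
    x = (np.arange(n) - n // 2) * pixel
    X, Y = np.meshgrid(x, x)
    R = np.sqrt(X**2 + Y**2)

    # Binary circular grating: alternate 30-pixel-wide rings select mask 1 or mask 2.
    rings1 = (np.floor(R / (ring_width * pixel)) % 2) == 0

    # Opposite horizontal linear phases deflect the two images in opposite directions.
    kx = 2 * np.pi * np.sin(np.deg2rad(tilt_deg)) / wavelength
    tilted1 = phase1 + kx * X
    tilted2 = phase2 - kx * X

    return np.mod(np.where(rings1, tilted1, tilted2), 2 * np.pi)
```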

To demonstrate the two-volume imaging, elements 5-6 of group 3 (12.7-14.25 lp/mm) and element 1 of group 3 (8 lp/mm) of the USAF chart were positioned in subvolumes 1 and 2, respectively. Two different PSHs corresponding to the two volumes were created numerically with the above-mentioned values of σo and No. The direct image of the objects with one image in focus (the one that arrives from subvolume 2) is shown in Fig. 9(a). The two images from the two subvolumes are positioned on the same sightline, and hence they overlap each other. The reconstruction results of the DOFE-based I-COACH for the objects of subvolumes 1 and 2 are shown in Figs. 9(b) and 9(c), respectively. These results are achieved by cross-correlation with the corresponding PSH of each subvolume. In the next experiment, the aim is to reconstruct the images of the objects present in both subvolumes by a single reconstruction. For this purpose, the same CPM was used for both subvolumes in the spatial multiplexing of the phase mask described in Fig. 8. Using the same CPM generates a similar dot pattern on the sensor plane for both subvolumes, and hence, from a single PSH, images of objects from both subvolumes can be reconstructed. The result of this experiment is shown in Fig. 9(d), demonstrating simultaneously in-focus and transversely separated images from the two unconnected subvolumes. The capability of the proposed method to reconstruct images from different volumes from a single hologram, as well as to transversely separate the images of the different volumes, is thus demonstrated.

Fig. 9. (a) Direct images with an overlap between the objects. (b) Reconstructed image with the PSH of subvolume 1, when the CPMs of the two subvolumes are different. (c) Same as (b), but with the PSH of subvolume 2. (d) Reconstructed image when the same CPM was used for both subvolumes.

In the last experiment, the aim is to demonstrate that objects located outside the two subvolumes 1 and 2 cannot be imaged in focus. First, element 1 of group 3 (8 lp/mm) and elements 5-6 of group 3 (12.7-14.25 lp/mm) of the USAF chart were positioned in subvolume 2 and in the gap between the two subvolumes, respectively. Two phase masks with two different CPMs for subvolumes 1 and 2 were multiplexed, and the object hologram was recorded. As seen in Fig. 10(a), reconstruction with the PSH corresponding to subvolume 2 yields the image of the object in subvolume 2, whereas the object between the subvolumes is not in focus. Figure 10(b) shows the reconstruction with the PSH of subvolume 1. None of the images is in focus in this case. Hence, from these reconstruction results, it can be concluded that the proposed technique properly images only objects from subvolumes for which the corresponding phase masks were engineered.

Fig. 10. One object was positioned at subvolume 2 and the other was at the gap between subvolumes 1 and 2. The reconstructed images when the object hologram is correlated with the PSH of (a) subvolume 2 and (b) subvolume 1.

4. Summary and Conclusion

We have demonstrated a new technique for engineering the DOF of an imaging system by integrating RQPFs into the coded aperture associated with the I-COACH system. The RQPF generates a light sword, which is actually a pseudo-nondiffracting beam having an almost constant intensity along a finite propagation distance and a beamlike shape in any transverse cross-section along this propagation distance. The finite propagation interval and the beamlike shape enabled by the RQPF make it possible to control the DOF of I-COACH systems. The modulation parameter associated with the RQPF controls the axial extent of the light sword and hence the length of the DOF. The phase mask of I-COACH displayed on the SLM is the modulo-2π addition of the phases of the RQPF, CPM and DSL. The CPM has been synthesized using a modified GSA to generate a sparse point response on the sensor plane. The sparse point response, with the selected number of dots and their density, yields optimal images in the sense of SNR and visibility [34]. The combination of RQPF, DSL and CPM yields a point response of several randomly distributed light swords. The DSL determines the location of the light swords in free space. The set of RQPF, DSL and CPM defines the DOF for a single subvolume of the object space. Multiplexing more than one set enables applying the DOFE to more than a single object subvolume. The imaging results under DOFE show a resolution similar to that of direct imaging, although not all the images are obtained with the maximal numerical aperture of the system. This effect may be explained by the nonlinear correlation involved in the imaging process. However, this hypothesis should be examined in future studies.

In the field of image sensors there is a well-known concept of smart imaging [40], which means that in addition to conventional imaging the system is capable of extracting application-specific information from the captured images. The DOFE proposed in this study adds the feature of smart imaging to imaging systems, not at the level of the electronic detection [40], but at the more basic level of designing the optical point spread function (PSF) of the system. However, the DOFE herein is different from PSF engineering [41] in the sense that a system with DOFE does not image the object directly to the system output. Instead, the DOFE synthesizes the PSF of several light swords in parallel, which later characterize the nature of the imaging. The operation in a mode of indirect imaging increases the options in the sense that different scenarios, not applicable in direct imaging, become possible with I-COACH. For example, with the same object hologram acquired once, one can reconstruct objects from several subvolumes with a certain PSH, but it is also possible to reconstruct objects from only one of the subvolumes with a different PSH. Another difference between the DOFE and PSF engineering of direct imaging systems is the imaging method. In direct imaging, the object function is convolved with the engineered PSF. In I-COACH, on the other hand, the images of an object are replicated randomly around the PSH dots and then correlated with the PSH in a nonlinear correlation that is aimed at increasing the quality of the reconstructed images.

In conclusion, the DOFE makes imaging systems more sophisticated and powerful. The system user can define the number, the depth and the location of each observed subvolume. Since the phase masks are displayed on a computer-controlled SLM, all the parameters of the observed subvolumes can be changed in real time. Subvolumes that the user wants to ignore can also be defined, and objects positioned on the same sightline can be shifted such that one image does not cover other images.

Funding

Israel Science Foundation (ISF 1669/16); Ministry of Science, Technology and Space; ATTRACT project funded by the EC under Grant Agreement (777222).

Disclosures

The authors declare no conflicts of interest.

References

1. A. S. Ambikumar, D. G. Bailey, and G. Sen Gupta, “Extending the DOF in microscopy: A review,” in International Conference Image and Vision Computing New Zealand (IEEE Computer Society, 2016).

2. M. Mino and Y. Okano, “Improvement in the OTF of a defocused optical system through the Use of shaded apertures,” Appl. Opt. 10(10), 2219–2225 (1971). [CrossRef]  

3. R. J. Pieper and A. Korpel, “Image processing for extended DOF,” Appl. Opt. 22(10), 1449–1453 (1983). [CrossRef]  

4. R. Narayanswamy, G. E. Johnson, P. E. X. Silveira, and H. B. Wach, “Extending the imaging volume for biometric iris recognition,” Appl. Opt. 44(5), 701–712 (2005). [CrossRef]  

5. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley-Interscience, Hoboken, N.J.2007).

6. S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recogn. Lett. 23(8), 985–997 (2002). [CrossRef]  

7. L. He, X. Yang, L. Lu, A. Ahmad, and G. Jeon, “A novel multi-focus image fusion method for improving imaging systems by using cascade-forest model,” J. Image Video Proc. 2020(1), 5 (2020). [CrossRef]  

8. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859–1866 (1995). [CrossRef]  

9. S. Tucker, W. T. Cathey, and E. Dowski Jr., “Extended DOF and aberration control for inexpensive digital microscope systems,” Opt. Express 4(11), 467–474 (1999). [CrossRef]  

10. V. N. Le, S. Chen, and Z. Fan, “Optimized asymmetrical tangent phase mask to obtain defocus invariant modulation transfer function in incoherent imaging systems,” Opt. Lett. 39(7), 2171–2174 (2014). [CrossRef]  

11. Y. Wu, L. Dong, Y. Zhao, M. Liu, X. Chu, W. Jia, X. Guo, and Y. Feng, “Analysis of wavefront coding imaging with cubic phase mask decenter and tilt,” Appl. Opt. 55(25), 7009–7017 (2016). [CrossRef]  

12. M. Liao, D. Lu, G. Pedrini, W. Osten, G. Situ, W. He, and X. Peng, “Extending the depth-of-field of imaging systems with a scattering diffuser,” Sci. Rep. 9(1), 7165 (2019). [CrossRef]  

13. M. Manviya and J. Bharti, “Image fusion survey: A comprehensive and detailed analysis of image fusion techniques,” Social Networking and Computational Intelligence, 649–659 (Springer, 2020).

14. G. Mikula, A. Kolodziejczyk, M. Makowski, C. Prokopowicz, and M. Sypek, “Diffractive elements for imaging with extended depth of focus,” Opt. Eng. 44(5), 058001 (2005). [CrossRef]  

15. Z. Zhai, S. Ding, Q. Lv, X. Wang, and Y. Zhong, “Extended depth of field through an axicon,” J. Mod. Opt. 56(11), 1304–1308 (2009). [CrossRef]  

16. A. Saikaley, B. Chebbi, and I. Golub, “Imaging properties of three refractive axicons,” Appl. Opt. 52(28), 6910–6918 (2013). [CrossRef]  

17. A. Vijayakumar, Y. Kashter, R. Kelner, and J. Rosen, “Coded aperture correlation holography–a new type of incoherent digital holograms,” Opt. Express 24(11), 12430–12441 (2016). [CrossRef]  

18. J. Hong and M. K. Kim, “Single-shot self-interference incoherent digital holography using off-axis configuration,” Opt. Lett. 38(23), 5196–5199 (2013). [CrossRef]  

19. J. Rosen, A. Vijayakumar, M. Kumar, M. R. Rai, R. Kelner, Y. Kashter, A. Bulbul, and S. Mukherjee, “Recent advances in self-interference incoherent digital holography,” Adv. Opt. Photonics 11(1), 1–66 (2019). [CrossRef]  

20. T. Nobukawa, Y. Katano, T. Muroi, and N. Ishii, “Bimodal incoherent digital holography for both three-dimensional imaging and quasi-infinite–depth-of-field imaging,” Sci. Rep. 9(1), 3363 (2019). [CrossRef]  

21. A. Vijayakumar and J. Rosen, “Interferenceless coded aperture correlation holography–a new technique for recording incoherent digital holograms without two-wave interference,” Opt. Express 25(12), 13883–13896 (2017). [CrossRef]  

22. M. Kumar, A. Vijayakumar, and J. Rosen, “Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses,” Sci. Rep. 7(1), 11555 (2017). [CrossRef]  

23. M. R. Rai, A. Vijayakumar, and J. Rosen, “Single camera shot interferenceless coded aperture correlation holography,” Opt. Lett. 42(19), 3992–3995 (2017). [CrossRef]  

24. M. R. Rai, A. Vijayakumar, and J. Rosen, “Extending the field of view by a scattering window in an I-COACH system,” Opt. Lett. 43(5), 1043–1046 (2018). [CrossRef]  

25. M. R. Rai, A. Vijayakumar, and J. Rosen, “Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 26(14), 18143–18154 (2018). [CrossRef]  

26. A. Bulbul, A. Vijayakumar, and J. Rosen, “Superresolution far-field imaging by coded phase reflectors distributed only along the boundary of synthetic apertures,” Optica 5(12), 1607–1616 (2018). [CrossRef]  

27. A. Bulbul, A. Vijayakumar, and J. Rosen, “Partial aperture imaging by systems with annular phase coded masks,” Opt. Express 25(26), 33315–33329 (2017). [CrossRef]  

28. M. R. Rai, A. Vijayakumar, and J. Rosen, “Superresolution beyond the diffraction limit using phase spatial light modulator between incoherently illuminated objects and the entrance of an imaging system,” Opt. Lett. 44(7), 1572–1575 (2019). [CrossRef]  

29. M. R. Rai and J. Rosen, “Resolution-enhanced imaging using interferenceless coded aperture correlation holography with sparse point response,” Sci. Rep. 10(1), 5033 (2020). [CrossRef]  

30. M. R. Rai, A. Vijayakumar, Y. Ogura, and J. Rosen, “Resolution enhancement in nonlinear interferenceless COACH with point response of subdiffraction limit patterns,” Opt. Express 27(2), 391–403 (2019). [CrossRef]  

31. N. Hai and J. Rosen, “Interferenceless and motionless method for recording digital holograms of coherently illuminated 3D objects by coded aperture correlation holography system,” Opt. Express 27(17), 24324–24339 (2019). [CrossRef]  

32. J. Rosen, B. Salik, and A. Yariv, “Pseudo-nondiffracting beams generated by radial harmonic functions,” J. Opt. Soc. Am. A 12(11), 2446–2457 (1995). [CrossRef]  

33. J. Rosen, B. Salik, and A. Yariv, “Pseudo-nondiffracting beams generated by radial harmonic functions: Erratum,” J. Opt. Soc. Am. A 13(2), 387 (1996). [CrossRef]  

34. M. R. Rai and J. Rosen, “Noise suppression by controlling the sparsity of the point spread function in interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 27(17), 24311–24323 (2019). [CrossRef]  

35. G. Yang, B. Dong, B. Gu, J. Zhuang, and O. K. Ersoy, “Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison,” Appl. Opt. 33(2), 209–218 (1994). [CrossRef]  

36. C. W. McCutchen, “Generalized aperture and the three-dimensional diffraction image,” J. Opt. Soc. Am. 54(2), 240–244 (1964). [CrossRef]  

37. M. Born and E. Wolf, “Principles of Optics - Electromagnetic Theory of Propagation, Interference and Diffraction of Light,” (6th Edition, Elsevier Ltd., 1980).

38. A. V. Oppenheim and J. S. Lim, “The importance of phase in signals,” Proc. IEEE 69(5), 529–541 (1981). [CrossRef]  

39. J. Rosen, A. Vijayakumar, M. R. Rai, S. Mukherjee, and A. Bulbul, “Review of 3D Imaging by Coded Aperture Correlation Holography (COACH),” Appl. Sci. 9(3), 605 (2019). [CrossRef]  

40. Q. Gao and O. Yadid-Pecht, “A low power CMOS imaging system with smart image capture and adaptive complexity 2D-DCT calculation,” J. Low Power Electron. Appl. 3(3), 267–278 (2013). [CrossRef]  

41. S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci. U. S. A. 109(3), 675–679 (2012). [CrossRef]  
