
White light three-dimensional imaging using a quasi-random lens

Open Access

Abstract

Coded aperture imaging (CAI) technology is a rapidly evolving indirect imaging method with extraordinary potential. In recent years, CAI based on chaotic optical waves has been shown to exhibit multidimensional, multispectral, and multimodal imaging capabilities with a signal to noise ratio approaching the range of lens-based direct imagers. However, most of the earlier studies used only narrowband illumination. In this study, CAI based on chaotic optical waves is investigated for white light illumination. A numerical study was carried out using scalar diffraction formulation and correlation optics, and the lateral and axial resolving powers for different spectral widths were compared. A binary diffractive quasi-random lens was fabricated using electron beam lithography, and the lateral and axial point spread holograms were recorded for white light. Three-dimensional imaging was demonstrated using thick objects consisting of two planes. An integrated sequence of signal processing tools such as non-linear filter, low-pass filter, median filter and correlation filter was applied to reconstruct images with an improved signal to noise ratio. A denoising deep learning neural network (DLNN) was trained using synthetic noisy images generated by the convolution of recorded point spread functions with virtual object functions under a wide range of aberrations and noises. The trained DLNN was found to further reduce the reconstruction noises.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Coded aperture imaging (CAI) technology dates to the 20th century, with the first reported studies in 1968 by Ables and Dicke [1,2]. In their studies, a single pinhole – the oldest coded aperture – was replaced by multiple pinholes, and a computational algorithm was applied to reconstruct the object information. Later in 1978, a simulative study using a uniformly redundant array approach was carried out and a deconvolution method was applied to reconstruct the object information [3]. A modified approach with multiple measurements, the joint uniformly redundant array, was investigated to improve the signal to noise ratio in 1997 [4]. CAI based spectral imaging was introduced in 2008 by the research group led by David Brady [5]. A sparse reconstruction method was applied to extract 2D spatial information and spectral information from a single camera shot. A phase coded aperture was implemented using an active device, namely a spatial light modulator (SLM), by Chi and George in 2011 [6]. Single-shot phase imaging with CAI was demonstrated using compressive Fresnel holography and coherent diffraction imaging [7]. Numerous research studies followed along the above directions.

In 2017, a CAI method called interferenceless coded aperture correlation holography (I-COACH) was introduced which is capable of 3D spatial imaging [8]. However, the reconstruction methods implemented in [8] were cross-correlation by matched and phase-only filters, which are not optimal reconstruction methods. Consequently, the above methods demanded multiple camera shots: three for transmission-type objects and at least 20 for reflection-type objects. Later in the same year, a single-shot multispectral imaging method [9] using a diffuser was demonstrated, similar to [5] but using a Wiener-type deconvolution [10]. A year later, single-shot 3D imaging using a diffuser was demonstrated with an advanced inverse algorithm, and an alternating direction method of multipliers was introduced for image recovery [11]. A cross-correlation based on a non-linear filter was proposed recently to incorporate single-shot capability in I-COACH [12]. Even though the non-linear filter was better than the phase-only filter, the reconstruction was efficient only for a limited field of view (FOV). With an extension of the FOV, the background noise problems present in the phase-only filter returned. Therefore, in addition to the non-linear correlation, the phase mask was engineered such that a random array of points, rather than a random intensity distribution, was generated at the sensor plane [13]. This approach improved the performance of I-COACH. Several studies followed on this topic, introducing additional signal processing tools to improve the signal to noise ratio [14].

In most of the studies, an active device such as a spatial light modulator (SLM) was used, which is expensive, has fewer pixels and has a larger pixel width. Moreover, it is difficult to install an SLM in rugged industrial applications. In our recent research, we have advanced CAI based on chaos on two fronts [15,16]. A quasi-random pinhole array was used as the optical chaotic wave generator, which converts the object wave into a chaotic intensity distribution. The proposed method was less expensive than SLMs and better than diffusers, which generally have a fixed scattering ratio. In the case of the random pinhole array, the locations of the pinholes were mathematically optimized such that a minimum background noise was obtained during cross-correlation. Secondly, a group of signal processing tools such as non-linear correlation, low-pass filter, median filter, etc., were applied simultaneously to achieve an improved signal to noise ratio. During these studies, depth-wavelength reciprocity and scaling factors were exploited and an extremely short calibration procedure with fewer resources such as light sources was carried out. In the previous studies [8–14], a prolonged calibration procedure was needed.

In [8–15], a spatially incoherent light source with a narrow spectral width was used. In a recent study, the indirect imaging method was extended to a spatially incoherent and temporally low coherent case where the dynamics of a spark were recorded [16]. In this manuscript, the indirect imaging method based on chaotic optical waves is extended to white light. Instead of a quasi-random array of pinholes, a two-level phase-only quasi-random lens (QRL) is designed and fabricated using electron beam lithography. The imaging characteristics are studied numerically and semi-synthetically using experimentally recorded lateral and axial point spread functions. The manuscript consists of six sections. The methodology is presented in the second section. In the third section, the design of a QRL is discussed. The numerical study of the imaging characteristics is presented in the fourth section. The fifth section contains the details of the fabrication procedure of the QRL, analysis of imaging characteristics and 3D imaging experiments. The conclusion and discussion are presented in the final section.

2. Methodology

The optical configuration of the indirect imaging system is shown in Fig. 1. A white light source critically illuminates an object using a refractive lens, and every object point receives broadband illumination. Considering a single point located at $({\overline {{r_o}} ,u} )$, a total amplitude of $\sqrt {\mathop \int \nolimits_{{\lambda _1}}^{{\lambda _2}} {I_o}(\lambda )d\lambda } $ reaches the plane of the QRL, where λ1 and λ2 are the spectral limits. To simplify the calculation, the response for a particular wavelength is calculated and, assuming complete spatial incoherence, the intensities corresponding to different wavelengths at the sensor are summed. The complex amplitude at the plane of the QRL is given as ${C_1}\sqrt {{I_o}(\lambda )} L({\overline {{r_o}} /u} )Q({1/u} )$, where C1 is a complex constant, $\overline {{r_o}} = ({{x_o},{y_o}} )$, and $L({\overline {{r_o}} /u} )= \textrm{exp}[{j2\pi ({{x_o}x + {y_o}y} )/({\lambda u} )} ]$ and $Q({1/u} )= \textrm{exp}[{j\pi ({{x^2} + {y^2}} )/({\lambda u} )} ]$ are the linear and quadratic phase factors respectively. The two-level binary QRL is a phase-only device with a phase function ΦQRL. The complex amplitude after the QRL is given as ${C_1}\sqrt {{I_o}(\lambda )} L({\overline {{r_o}} /u} )Q({1/u} )exp({ - j{\mathrm{\Phi }_{\textrm{QRL}}}} )$. The intensity pattern recorded at the sensor plane is given by a convolution with the quadratic phase function Q(1/v),

$${I_{\textrm{PSF}}}({\overline {{r_s}} ;\overline {{r_o}} ,u,\lambda } )= {\left|{{C_1}\sqrt {{I_o}(\lambda )} L\left( {\frac{{\overline {{r_o}} }}{u}} \right)Q\left( {\frac{1}{u}} \right)exp({ - j{\mathrm{\Phi }_{\textrm{QRL}}}} )\otimes Q\left( {\frac{1}{v}} \right)} \right|^2}, $$
which is the intensity recorded for wavelength λ; the total intensity recorded by the image sensor is then given as
$${I_{\textrm{PSF}}}({\overline {{r_s}} ;\overline {{r_o}} ,u} )= \mathop \int \nolimits_{{\lambda _1}}^{{\lambda _2}} {I_{\textrm{PSF}}}({\overline {{r_s}} ;\overline {{r_o}} ,u,\lambda } )d\lambda .$$

Fig. 1. Optical configuration of indirect imaging using a two-level quasi-random diffractive lens.

The above equation can be modified as

$${I_{\textrm{PSF}}}({\overline {{r_s}} ;\overline {{r_o}} ,u} )= {I_{\textrm{PSF}}}\left( {\overline {{r_s}} - \frac{v}{u}\overline {{r_o}} ;0,u} \right).$$

A two-dimensional object consisting of multiple points and located in the object plane can be represented as

$$o({\overline {{r_o}} } )= \mathop \sum \nolimits_i {a_i}\delta ({\overline {{r_o}} - \overline {{r_{o,i}}} } ).$$

The object intensity pattern at the sensor plane is the summation of the shifted and scaled point spread intensity functions given as

$$\; {I_o}({\overline {{r_s}} ,u} )= \mathop \sum \nolimits_i {a_i}{I_{\textrm{PSF}}}\left( {\overline {{r_s}} - \frac{v}{u}\overline {{r_{o,i}}} ;0,u} \right).$$

The image of the object is reconstructed by a cross-correlation between Io and IPSF

$$\begin{aligned} {I_\textrm{R}}({\overline {{r_\textrm{R}}} } )&= \int\!\!\!\int \mathop \sum \nolimits_i {a_i}{I_{\textrm{PSF}}}\left( {\overline {{r_s}} - \frac{v}{u}\overline {{r_{o,i}}} ;0,u} \right)I_{\textrm{PSF}}^\ast ({\overline {{r_s}} - \overline {{r_\textrm{R}}} ;0,u} )d\overline {{r_s}} \\ &= \mathop \sum \nolimits_i {a_i}\mathrm{\Lambda }\left( {\overline {{r_\textrm{R}}} - \frac{v}{u}\overline {{r_{o,i}}}} \right) \approx o\left( {\frac{{\overline {{r_\textrm{R}}} u}}{v}} \right),\end{aligned}$$
where Λ is a delta-like function with a maximum at the origin and negligible values elsewhere, and MT = v/u is the transverse magnification. For a thick object consisting of N planes, the object intensity pattern is the summation of the object intensity patterns at the different planes, ${I_o}({\overline {{r_s}} } )= \mathop \sum \nolimits_{k = 1}^N {I_o}({\overline {{r_s}} ,{u_k}} )$, which can be reconstructed at different depths using ${I_{\textrm{PSF}}}({\overline {{r_s}} ,{u_i}} )$ when i = k.
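The broadband PSF of Eqs. (1) and (2) can be simulated directly. The following is a minimal MATLAB sketch under the stated scalar-diffraction model; it assumes a QRL phase profile phiQRL designed at lam0 is available (e.g., from the GSA design of Section 3), treats the element as a thin relief whose phase scales as λ0/λ, and uses a Fresnel transfer-function propagator. The variable names are ours.

% Broadband PSF of the QRL for an on-axis point (Eqs. (1)-(2)).
% Assumes phiQRL is an N x N phase matrix (radians) designed at lam0.
N = 500; dx = 10e-6; u = 0.4; v = 0.4; lam0 = 500e-9;
x = (-N/2:N/2-1)*dx; [X,Y] = meshgrid(x,x);
fx = (-N/2:N/2-1)/(N*dx); [FX,FY] = meshgrid(fx,fx);
Ipsf = zeros(N);
for lam = 400e-9:10e-9:700e-9                  % flat spectrum, 400-700 nm
    Qu = exp(1i*pi*(X.^2+Y.^2)/(lam*u));       % diverging wave Q(1/u) from the point
    U1 = Qu .* exp(-1i*phiQRL*lam0/lam);       % field after the QRL (thin-element dispersion)
    H  = exp(-1i*pi*lam*v*(FX.^2+FY.^2));      % Fresnel transfer function over distance v
    U2 = ifft2(fft2(U1) .* ifftshift(H));      % propagation to the sensor, i.e., Eq. (1)
    Ipsf = Ipsf + abs(U2).^2;                  % incoherent sum over wavelengths, Eq. (2)
end
Ipsf = Ipsf / max(Ipsf(:));

For a thick object, the same loop can be repeated for each plane distance uk to build the PSF library used in the reconstruction.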

3. Design of QRL

Before deriving the expression for the phase of a QRL, it is necessary to understand its function. A QRL, like any other lens, collects light, but instead of focusing the light to a point, it focuses it onto an area within which the light is scattered. As with a conventional lens, an axial shift of the object or sensor plane introduces blurring for the QRL. The QRL, being a diffractive element, is also sensitive to changes in wavelength [17]. The QRL was first demonstrated for 3D imaging applications in lensless I-COACH [18]. However, the demonstration was carried out using an SLM and the design was performed using the Gerchberg-Saxton algorithm (GSA) [19] with the Fresnel approximation [20], which is known to have convergence problems. In this study, a modified approach is used in designing the QRL. In the first step, the conventional GSA is applied, which controls the first parameter, the scattering ratio σ. The scattering ratio is given as σ = p/P, where p is the length of the square-shaped region within which the light is collected, and P is the maximum length possible, as shown in Fig. 2(a). The schematic of the Fourier GSA is shown in Fig. 2(b). The area selected in this case has a square shape, but it can be of any shape depending upon the application. As per the GSA construction, the optical experiment requires a Fourier lens to connect the QRL and sensor planes. The initial complex amplitude matrix is constructed using a uniform amplitude matrix and a random phase matrix, which is Fourier transformed to obtain the complex amplitude at the sensor plane. At the sensor plane, the amplitude matrix is replaced by the desired matrix, with uniform amplitude within a predefined area and a value of 0 elsewhere. The phase matrix obtained at the sensor plane is retained in every iteration: the new complex amplitude matrix is inverse Fourier transformed and the process is repeated until the output converges to a matrix with non-changing values.

Fig. 2. (a) Optical configuration of optical Fourier transform and (b) schematic of GSA with Fourier transformation.

The phase of the QRL can be expressed as ${\mathrm{\Phi }_{\textrm{QRL}}} = {[{{\mathrm{\Phi }_n} + {\mathrm{\Phi }_\textrm{L}}} ]_{2\pi }}$, where Φn is the phase output from the GSA after the nth iteration and ΦL is the phase of the diffractive lens given as $- \pi ({{x^2} + {y^2}} )/({\lambda f} )$ with a focal length $f = {\left( {\frac{1}{u} + \frac{1}{v}} \right)^{ - 1}}$. In a high numerical aperture (NA) diffractive lens, the above approximation introduces spherical aberration, an effect studied in [21]. In the case of high-NA lenses, the lens must be designed without the geometrical approximation. Even though the scattering ratio is fixed in the GSA, the physical size of the pattern can be controlled further by the focal length f, as the pixel size in the sensor plane is given as λf/(PΔ), where Δ is the pixel size in the plane of the QRL. The images of Φn with σ = 0.12 after 50 iterations and ΦL for f = 50 cm (λ = 500 nm, Δ = 10 µm) are shown in Figs. 3(a) and 3(b) respectively. The phase image of the QRL is shown in Fig. 3(c). Manufacturing QRLs with such continuous phase profiles is challenging, and so the generated profile is binarised to two levels by changing values (ΦQRL≤π) and (ΦQRL>π) to 0 and π respectively, as shown in Fig. 3(d). The modulo-2π phase addition of Φn(x=0,y) and ΦL(x=0,y) to generate ΦQRL(x=0,y) and the binarisation are shown in Fig. 3(e). For the optical configuration shown in Fig. 1, the object and image distances can be varied as long as the focal length equation is satisfied.
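A compact MATLAB sketch of the design procedure described above follows the parameters of Fig. 3 (500 × 500 pixels, σ = 0.12, f = 50 cm, λ = 500 nm, Δ = 10 µm); the variable names are ours, and the square support window is one simple way of imposing the amplitude constraint.

% Fourier GSA for the scattering phase, then lens addition and binarisation.
N = 500; sigma = 0.12; p = round(sigma*N);     % support side: 60 pixels
lam = 500e-9; f = 0.5; dx = 10e-6;
x = (-N/2:N/2-1)*dx; [X,Y] = meshgrid(x,x);
win = zeros(N); c = N/2;
win(c-p/2+1:c+p/2, c-p/2+1:c+p/2) = 1;         % desired square amplitude support
phi = 2*pi*rand(N);                            % random initial phase
for n = 1:50                                   % 50 iterations, as in Fig. 3(a)
    A   = fftshift(fft2(exp(1i*phi)));         % forward transform to the sensor plane
    A   = win .* exp(1i*angle(A));             % replace amplitude, keep phase
    phi = angle(ifft2(ifftshift(A)));          % inverse transform, keep phase only
end
phiL   = -pi*(X.^2 + Y.^2)/(lam*f);            % diffractive lens phase
phiQRL = mod(phi + phiL, 2*pi);                % continuous QRL, Fig. 3(c)
phiBin = pi*(phiQRL > pi);                     % two-level QRL, Fig. 3(d)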

Fig. 3. Phase images of (a) quasi-random phase output from the GSA after 50th iteration, (b) diffractive lens with a focal length of 50 cm, (c) continuous QRL and (d) binary QRL. (e) Plots of the central line data with 200 pixels of (a)-(d).

4. Numerical analysis

The average size of the speckles generated in the far-field by scattering of light from a scatterer is given as ∼λz/D, where z is the distance between the scatterer and the observation plane and D is the diameter of the illumination or scatterer (whichever is lower), and is independent of σ [22]. In the near field, the speckle size decreases with an increase in σ and vice versa. In the current study, the diffractive element is not a regular scatterer but a QRL, which has both scattering and lens functions. Therefore, the intensity distributions at the focal plane of the QRL are in the far-field regime, and so the speckle size is independent of σ. Consequently, the lateral resolution is also independent of σ. The impact of σ is mainly on the background noise generated during the autocorrelation. It must be noted that the autocorrelation is carried out on a positive function, and so a background noise is generated during the autocorrelation process. A strong scatterer with a high σ scatters light uniformly to a large area on the sensor, and the autocorrelation of a large-area uniform intensity distribution generates a uniform background noise. A weak scatterer with a small σ scatters light to a small area, and the autocorrelation of a small-area intensity distribution results in a non-uniform background noise with the maxima of the background noise present around the autocorrelation peak. It is well known that information can be perceived better against a uniform background noise than against a non-uniform one. In addition to the above conditions, to achieve optimal efficiency, it is necessary to match the area of the intensity distribution to the area of the image sensor. A detailed study of the variation in the autocorrelation functions for near-field and far-field conditions for different values of σ is presented in a previous study [23].

The numerical analysis was carried out for the following simulation parameters: a matrix size of 500 × 500 pixels, a pixel size of 10 µm, a uniform response over the wavelength range of 400 nm to 700 nm, σ = 0.12 from the GSA, u = v = 0.4 m and f = 0.2 m. To obtain σ = 0.12 with a matrix length of P = 500 pixels, the length (in pixels) of the side of the square area within which the light is collected is p = σ×P = 0.12×500 pixels = 60 pixels. The indirect imaging method based on correlation optics relies on the linearity of the system. The object intensity distribution Io can be expressed as ${I_{\textrm{PSF}}} \otimes o$, where o is the object function and ‘${\otimes} $’ is a 2D convolutional operator. The reconstructed image using a matched filter is given as ${I_\textrm{R}} = {I_o}\ast {I_{\textrm{PSF}}}$, where ‘${\ast} $’ is the 2D correlation operator. The expression of the reconstructed image can be written as ${I_\textrm{R}} = o \otimes {I_{\textrm{PSF}}}\ast {I_{\textrm{PSF}}}$, where ${I_{\textrm{PSF}}}\ast {I_{\textrm{PSF}}}$ is the autocorrelation function Λ which samples the object function o as $o \otimes \Lambda $, and Λ is a delta-like function. In other words, Λ sets the lateral resolution of the system. Since the reconstruction is carried out using a cross-correlation, Λ cannot be smaller than twice the diffraction-limited spot size 1.22λu/D, where D is the diameter of the QRL. However, it was found that a phase-only filter [24] can yield a sharper reconstruction than a matched filter, and most recently a non-linear filter was found to reconstruct images with a high signal to noise ratio and a lateral resolving power reaching the diffraction-limited spot size [12].
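The matched-filter reconstruction defined above reduces to a Fourier-domain product. A minimal MATLAB sketch, assuming Io and Ipsf are recorded (or simulated) intensity matrices of equal size; subtracting the mean is a common practical step for suppressing the DC background of correlating positive functions, not a step prescribed in the text.

% Matched-filter reconstruction IR = Io * Ipsf via FFTs.
Io_ = Io  - mean(Io(:));                       % remove the DC term of the positive function
Ip_ = Ipsf - mean(Ipsf(:));
IR  = abs(fftshift(ifft2(fft2(Io_) .* conj(fft2(Ip_)))));
imagesc(IR); axis image; colormap gray;        % sampled object, o convolved with Lambda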

In the previous studies, only narrow-bandwidth light sources were used, and the influence of the spectral bandwidth has therefore not been investigated. In one of our recent studies [15], the intensity distributions obtained with chromatic filters of different spectral widths were compared, and it was shown that the narrowest case resulted in the highest spatial resolution, as expected. Here, the intensity distributions for a point object were simulated for spectral bandwidths Δλ of 1 nm, 50 nm, 100 nm and 200 nm with a minimum wavelength λmin = 400 nm for all cases. The images of the synthesized continuous and binarised QRLs are shown in Figs. 4(a) and 4(b) respectively. The images of the simulated intensity distributions for Δλ = 1, 50, 100 and 200 nm are shown in Figs. 4(c)–4(f) respectively. The images of the autocorrelation functions with a matched filter for Figs. 4(c)–4(f) are shown in Figs. 4(g)–4(j) respectively. As seen, the autocorrelation function becomes broader with an increase in the spectral bandwidth. The line data (x=0,y) of Figs. 4(g)–4(j) are plotted and compared in Fig. 4(k).

Fig. 4. Images of QRL in (a) continuous and (b) binary versions. Images of IPSF simulated for Δλ = (c) 1 nm, (d) 50 nm, (e) 100 nm and (f) 200 nm. Images of the autocorrelation functions of (c), (d), (e) and (f) are (g), (h), (i) and (j). (k) Plot of the line data (x=0,y) of (g)-(j).

Next, the axial PSF was simulated. The IPSF was simulated for different axial locations (0.3 m ≤ u ≤ 0.5 m) of the point object and cross-correlated with the IPSF (u = 0.4 m). The images of the intensity distributions simulated at u = 0.3, 0.4 and 0.5 m for Δλ = 1, 50, 100 and 200 nm are shown in Fig. 5(a). From the figures, it is seen that a QRL behaves like a diffractive lens with the best focus at the focal plane and blurs when there is an error in the axial location. The plot of the autocorrelation value IR(x=0,y=0) for object distances of 0.3 m to 0.5 m is shown in Fig. 5(b). It is seen that the axial PSF is broader for a larger spectral width than for a narrower one. From the above two numerical analyses, it is seen that a broadband illumination lowers the lateral resolution and increases the depth of focus. While the former is certainly a disadvantage, the latter can be useful for some applications.
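The axial scan of Fig. 5(b) can be reproduced with the same machinery. In the sketch below, qrlPSF is a hypothetical wrapper around the broadband PSF simulation given in Section 2 (taking the object distance, the wavelength range and the QRL phase, and returning the sensor intensity); it is our shorthand, not a built-in function.

% Axial response: correlate PSFs at scanned u with the reference at u = 0.4 m.
lams  = 400e-9:10e-9:600e-9;                   % e.g., the Delta-lambda = 200 nm case
uScan = 0.30:0.01:0.50;
Iref  = qrlPSF(0.40, lams, phiQRL);            % reference PSF at the focal condition
peak  = zeros(size(uScan));
for k = 1:numel(uScan)
    Iu      = qrlPSF(uScan(k), lams, phiQRL);
    C       = fftshift(ifft2(fft2(Iu) .* conj(fft2(Iref))));
    peak(k) = abs(C(end/2+1, end/2+1));        % correlation value IR(x=0, y=0)
end
plot(uScan, peak/max(peak));                   % normalised axial response, Fig. 5(b)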

Fig. 5. (a) Simulated PSFs for different spectral widths and axial locations and (b) plot of normalised intensity of IR(x=0,y=0) obtained by cross-correlating IPSF simulated for the axial locations of the point object u = 0.3 m to 0.5 m with IPSF simulated at u = 0.4 m.

The variation of the PSF with axial distance is quite interesting. The case of a single wavelength follows the behavior of a diffractive lens. At the image plane, the pattern is focused with sharp square edges. An axial aberration of ±0.1 m resulted in blurring with no sharp edges. A similar behavior is observed for Δλ = 50 nm. However, for the other cases, namely Δλ = 100 nm and 200 nm, the best focus appears to be at u = 0.3 m. Recalling the relation between the focal length and wavelength of a diffractive lens, the radius of the nth half-period zone is given as ${r_n} = \sqrt {nf\lambda } $. Rewriting this, we get $\frac{{r_n^2}}{n} = f\lambda $, so λf = K, where K is a constant. Any increase in wavelength results in a decrease in the focal length corresponding to that wavelength and vice versa. So, when the spectral width increases, the longer wavelengths experience a shorter focal distance. Consequently, at the image plane they are better focused than the shorter wavelengths. Therefore, an axial aberration of Δu = −0.1 m results in better focus for Δλ = 100 and 200 nm while blurring the other two cases.
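As a concrete check of this scaling (our arithmetic, using the simulation parameters of this section): for a design focal length of 0.2 m at λ0 = 500 nm,

$$f(\lambda )= f({{\lambda _0}} )\frac{{{\lambda _0}}}{\lambda },\qquad f({700\,\textrm{nm}} )= 0.2\,\textrm{m} \times \frac{{500}}{{700}} \approx 0.143\,\textrm{m},$$

and with the sensor fixed at v = 0.4 m, the conjugate object distance becomes $u = {({1/f - 1/v} )^{ - 1}} \approx 0.22\,\textrm{m}$, i.e., the red end of the spectrum is best focused for an object displaced toward shorter u, consistent with the trend in Fig. 5(a).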

5. Experiments

5.1 Fabrication of QRL

The QRL was fabricated using electron beam lithography with a RAITH150 Two. The QRL was designed for f = 5 cm to be used with u = v = 10 cm, σ = 0.12, λ = 600 nm and Lx = Ly = 5 mm. The design was carried out using MATLAB and the output binary matrix was saved in the tagged image file format with a resolution of 100 megapixels and a pixel width of 500 nm. The file was then converted into GDSII format using the trial version of the LinkCAD commercial software. The substrate used for fabrication was an Indium-Tin-Oxide (ITO) coated glass substrate with a thickness of 1.1 mm. The refractive indices of the resist and the substrate were n = 1.5. So, the resist thickness needed to suppress the 0th order diffraction term is calculated as λ/[2(n−1)] = 600 nm. The sample was cleaned in sequence using acetone and isopropyl alcohol for 5 minutes each in an ultrasonic bath, dried with N2 gas and baked at 180 °C for five minutes to remove the residual solvent and water. The edges of the substrate were masked using tape. The electron beam positive resist, PMMA 950K (A7), was spin-coated onto the substrate at 2000 RPM with a ramp of 500 RPM/s. After peeling off the masking tape, the substrate was baked at 180 °C for 2 minutes.

The RAITH150 Two was operated with a 10 kV acceleration voltage, a 120 µm aperture, a beam current of approximately 3 nA, a write field of 100 µm and a working distance of 10 mm. During sample loading, the masked area of the substrate where the ITO layer remained exposed was connected to a metal clip to provide a grounding path for the electrons generated during fabrication. The electron beam dose was 150 µC/cm2 and the writing time was ∼7 hours. The fabricated elements were developed in a methyl isobutyl ketone (MIBK) and isopropyl alcohol (IPA) solution in the ratio of 1:3 for 60 seconds, and rinsed in IPA and then in deionized distilled water. The optical microscope images of the fabricated elements are shown in Figs. 6(a)–6(f). Repeated magnification of the same point in Figs. 6(a)−6(d) shows no stitching errors. The dark regions are the areas not resolved by the microscope at 5× magnification and with limited light transmission. The surface profiler results indicated a resist thickness of ∼700 nm. The optical microscope images of the outermost regions shown in Figs. 6(e) and 6(f) show that even the outermost part of the element was fabricated successfully, and the duty ratio of 50% indicates optimized values of the electron beam dose, baking temperature and times, and development time.

Fig. 6. (a)-(d) Optical microscope images of the QRL with repeated magnifications centered at the same point, (e) and (f) Optical microscope images of the corner area of the QRL. The bright and dark regions indicate resist removed and present respectively. The smallest feature size was about 500 nm. Top row is from left to right and bottom row is from right to left.

The efficiency of binary diffractive elements can reach a maximum of 40%. Considering the error in the resist thickness and the Fresnel reflections at the air-glass and glass-air interfaces, an efficiency of >30% is expected based on previous studies on this topic [21]. However, unlike a regular Fresnel zone lens where only the light contributing to the focal point is useful, in the case of the QRL, any light whose intensity distribution lies within the sensor area can still contribute to the imaging.

5.2 Lateral and axial characteristics

The experimental setup is similar to that of Fig. 1. A white light source (Fiber-Lite DC-950, Dolan-Jenner Industries, full width at half maximum Δλ ∼ 270 nm) was used for illumination. A pinhole with a diameter of 100 µm was critically illuminated using a refractive lens with a focal length of 10 cm. The fabricated QRL was mounted at 10 cm from the pinhole, and the image sensor (Thorlabs DCU223M, 1024 pixels × 768 pixels, pixel size = 4.65 µm) was mounted at 10 cm from the QRL. The measured spectral profile of the source is shown in Fig. 7. Even though the spectral profile is known, the spectrum measured at the sensor is influenced both by the spectral response of the sensor and by the response of the diffractive element. The spectral response of the Thorlabs image sensor is shown in Fig. 7. The efficiency of a two-level diffractive element at the 1st diffraction order is given by the Fourier coefficients $\eta (\lambda )= {\left|{\frac{2}{\pi }sin\left( {\frac{{\Phi (\lambda )}}{2}} \right)} \right|^2}$, where $\Phi (\lambda )= \frac{{2\pi }}{\lambda }t[{n(\lambda )- 1} ]$, $n(\lambda )= 1.488 + 0.002898{\lambda ^{ - 2}} + 0.0001579{\lambda ^{ - 4}}$ (λ in µm) and dn/dλ = −0.037585 µm−1 [25].
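A short MATLAB sketch of this efficiency calculation, using the dispersion relation quoted above (λ in micrometres) and the measured resist thickness of ∼700 nm; the wavelength grid and plotting choices are ours.

% First-order efficiency of the two-level QRL versus wavelength (cf. Fig. 7).
lam = (0.40:0.005:0.70)';                          % wavelength (um)
t   = 0.7;                                         % measured resist thickness (um)
n   = 1.488 + 0.002898./lam.^2 + 0.0001579./lam.^4;
Phi = (2*pi./lam) .* t .* (n - 1);                 % phase step of the binary relief
eta = abs((2/pi)*sin(Phi/2)).^2;                   % peaks at ~0.405 when Phi = pi
plot(lam*1e3, eta/max(eta));                       % normalised curve, as in Fig. 7
xlabel('Wavelength (nm)'); ylabel('Normalised \eta');

Multiplying this curve by the measured source spectrum and the sensor response gives the combined profile shown in Fig. 7.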

Fig. 7. Plot of the normalized spectral response for the source, sensor, QRL and the combination.

From the above relation, the change in the efficiency with respect to wavelength for a fabricated diffractive structure can be calculated. The efficiency variation with respect to wavelength is plotted in Fig. 7. The maximum value possible as per the above equation for η is 0.4, but due to normalization the maximum value is changed to 1. The cumulative effect of the above individual responses engineers the overall spectral response, which is given by the product of the three profiles, as shown in Fig. 7. As seen in Fig. 7, even though the spectrum of the source is broad, the spectral response function of the sensor and the efficiency variation of the QRL sculpt the spectral profile, resulting in a relatively narrow spectral profile. This narrow profile is obtained without including the spectral responses of the other refractive lenses used for illumination. Consequently, the speckle distribution and the lateral resolution given by the autocorrelation function are expected to be narrow for the source-QRL-sensor configuration. The images of the IPSF for white light and for red light (λ ∼630 nm and Δλ = 10 nm) recorded by the image sensor are shown in Figs. 8(a) and 8(b) respectively. To have a reliable comparison, a red filter corresponding to (λ ∼630 nm and Δλ = 10 nm) was introduced in the path of the white light rather than implementing a second channel. The autocorrelation function using a phase-only filter, given as ${I_\textrm{R}} = {I_{\textrm{PSF}}}\ast {\tilde{I}_{\textrm{PSF}}}$, where ${\tilde{I}_{\textrm{PSF}}} = {\Im ^{ - 1}}\{{exp[{j \cdot arg({\Im \{{{I_{\textrm{PSF}}}} \}} )} ]} \}$, is applied to both Figs. 8(a) and 8(b) and the results are shown in Figs. 8(c) and 8(d) respectively. The line data (x=0,y) of Figs. 8(c) and 8(d) are normalised and plotted in Fig. 8(e), which shows that the spectral tailoring effects have significantly sharpened the autocorrelation function. The axial characteristics were studied next by shifting the location of the point object axially, recording IPSF (9 cm ≤ u ≤ 11 cm) and cross-correlating it with IPSF (u = 10 cm). The plot of IR(x=0,y=0) as a function of u is shown in Fig. 8(f). The width is approximately 2 mm, while the axial resolving power for λ = 600 nm and the current optical configuration is ∼2λ/NA2 = 1.9 mm.

Fig. 8. Experimentally recorded images of IPSF of (a) white light and (b) red light. Images of the autocorrelation function with a phase-only filter for (c) (a) and (d) (b) respectively. (e) Plot of the normalised line data (x=0,y) of (c) and (d). (f) Plot of normalised intensity of IR(x=0,y=0) obtained by cross-correlating IPSF recorded for the axial locations of the point object u = 9 cm to 11 cm with IPSF recorded at u = 10 cm.

5.3 3D imaging

In the first step, semi-synthetic 3D imaging is demonstrated using three virtual objects ‘S,’ ‘U,’ and ‘T,’ in the ‘Bahnschrift’ font style, representing Swinburne University of Technology. As the IPSF of the system has been recorded at many axial planes, it is possible to conduct a virtual 3D imaging experiment within those axial boundaries that is as close as possible to reality. For the virtual experiment, the three planes u = 10, 10.5 and 11 cm are considered. Recalling the fundamental principle of linearity in a spatially incoherent imaging system, the 3D object’s intensity distribution can be represented as I3D = IPSF(u=10 cm)${\otimes} $OS + IPSF(u=10.5 cm)${\otimes} $OU + IPSF(u=11 cm)${\otimes} $OT. The images of IPSF(u=10 cm), IPSF(u=10.5 cm) and IPSF(u=11 cm) are shown in Figs. 9(a)–9(c) respectively. The images of the objects ‘S,’ ‘U,’ and ‘T,’ are shown in Figs. 9(d)–9(f) respectively. The corresponding intensity distributions synthesized for the objects OS, OU and OT are shown in Figs. 9(g)–9(i) respectively. The reconstructed images of the total intensity distribution of Figs. 9(g)–9(i) along the three planes are shown in Figs. 9(j)–9(l) respectively. The total intensity distribution obtained by the sum of the normalized intensity distributions corresponding to the three planes is shown in Fig. 9(m). It must be noted that the above semi-synthetic demonstration is superior to a purely numerical study. In a numerical study, all components are synthesized within software. In the proposed approach, the experimental PSF is used for the entire simulation, and so the synthesized object intensity distributions and reconstructions are expected to be closer to reality than a regular numerical simulation.
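The semi-synthetic synthesis and plane-by-plane reconstruction can be sketched in a few MATLAB lines. Here psf1, psf2, psf3 and objS, objU, objT stand for the loaded matrices of Figs. 9(a)-(c) and 9(d)-(f); circular (FFT-based) convolution is used for brevity, which is adequate while the distributions stay well inside the frame.

% Build I3D as the sum of PSF-convolved plane objects, then reconstruct one plane.
convf = @(a,b) real(fftshift(ifft2(fft2(a).*fft2(b))));    % FFT-based 2D convolution
I3D  = convf(psf1,objS) + convf(psf2,objU) + convf(psf3,objT);
IR10 = abs(fftshift(ifft2(fft2(I3D).*conj(fft2(psf1)))));  % focus on the u = 10 cm plane
imagesc(IR10); axis image; colormap gray;                  % cf. Fig. 9(j)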

Fig. 9. Experimentally recorded IPSFs at u = (a) 10 cm, (b) 10.5 cm and (c) 11 cm. Virtual 2D objects at u = (d) 10 cm, (e) 10.5 cm and (f) 11 cm. Object intensity distributions simulated at u = (g) 10 cm, (h) 10.5 cm and (i) 11 cm. The reconstruction results of the total intensity distribution shown in (m) at u = (j) 10 cm, (k) 10.5 cm and (l) 11 cm.

The 3D imaging was carried out using USAF objects (numeric digit 6 and numeric digit 4) located at u = 10 cm and 10.6 cm, and Io was recorded. The images of the IPSFs recorded at the above two locations u = 10 cm and 10.6 cm, and the Io recorded for the USAF objects, are shown in Figs. 10(a)–10(c) respectively. The first semi-synthetic study was carried out based on the linearity property of the imaging system. An incoherent imaging system is linear in intensity, not in complex amplitude. This peculiar property allows imaging experiments to be carried out in novel ways. In most holography experiments, 3D imaging is demonstrated using two thin objects mounted at two different distances in two optical channels and combined into one using a beam combiner such as a beam splitter [8,12,13,15]. This two-channel demonstration often demands a higher optical power, as part of the light is lost at the beam splitter. But is it necessary to carry out such two-channel experiments? It is surely necessary in the case of coherent imaging and holography systems where the light from the two channels is interfered. If A1 and A2 are the two optical fields at the image sensor from the two channels, then the recorded intensity distribution in the case of a coherent system is ${I_{S - C}} = {|{{A_1} + {A_2}} |^2}$, whereas in the case of an incoherent system ${I_{S - I}} = {|{{A_1}} |^2} + {|{{A_2}} |^2}.$ So, in the case of an incoherent system, there is only intensity addition at the sensor. This property of incoherent systems enables recording the intensities for two planes at two instants of time and with an improved light budget. Instead of adding the intensities at the sensor, the intensity patterns recorded by the sensor are digitally summed in the computer. The light from the white light source in our case was not sufficient to power two optical channels simultaneously. So, we have exploited the above property of incoherent imaging systems to record the 3D object consisting of two planes. The intensity pattern for the two-plane object was synthesized by summing the two intensity distributions recorded at different times at u = 10 cm and 10.6 cm for the objects ‘6’ and ‘4’ respectively. In recent studies, a non-linear reconstruction method has shown significant improvement in the signal to noise ratio [12,15,16]. The reconstruction by non-linear correlation is given as ${I_R} = |{{{\cal F}^{ - 1}}\{{{{|{{{\tilde{I}}_{\textrm{PSF}}}} |}^\alpha }exp[{i\; arg({{{\tilde{I}}_{\textrm{PSF}}}} )} ]{{|{{{\tilde{I}}_o}} |}^\beta }exp[{ - i\; arg({{{\tilde{I}}_o}} )} ]} \}} |$, where the parameters α and β are tuned between −1 and 1 until a minimum entropy, given as $S({p,q} )={-} \sum \sum \phi ({m,n} )log[{\phi ({m,n} )} ]$, is obtained, where $\phi ({m,n} )= |{C({m,n} )} |/\mathop \sum \nolimits_M \mathop \sum \nolimits_N |{C({m,n} )} |$, (m,n) are the indexes of the correlation matrix, and C(m,n) is the correlation distribution. The reconstruction results of the non-linear correlation of Io for white light for the above two objects using the phase-only filter (α = 0, β = 1) are shown in Figs. 10(d) and 10(e). The reconstruction results using the non-linear filter (α = 0, β = 0.7) in combination with a median filter are shown in Figs. 10(f) and 10(g). The reconstruction results with the above condition and a correlation filter are shown in Figs. 10(h) and 10(i).
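A minimal MATLAB sketch of the non-linear reconstruction and the entropy-guided search over (α, β) described above; Io and Ipsf are the recorded intensity matrices, and the 0.1 step of the grid search is our choice.

% Non-linear filter with entropy minimisation over alpha and beta.
FIp = fft2(Ipsf); FIo = fft2(Io);
aFp = abs(FIp) + eps; aFo = abs(FIo) + eps;        % guard against zeros for negative exponents
nlr = @(a,b) abs(fftshift(ifft2( aFp.^a .* exp( 1i*angle(FIp)) ...
                              .* aFo.^b .* exp(-1i*angle(FIo)) )));
Smin = inf;
for a = -1:0.1:1
    for b = -1:0.1:1
        C   = nlr(a,b);
        phi = C(:)/sum(C(:));                      % normalised distribution phi(m,n)
        S   = -sum(phi(phi>0).*log(phi(phi>0)));   % entropy S(p,q) defined above
        if S < Smin, Smin = S; IR = C; best = [a b]; end
    end
end
% (alpha, beta) = (0, 1) reproduces the phase-only filter of Figs. 10(d)-(e);
% (0, 0.7) was used, together with a median filter, for Figs. 10(f)-(g).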

Fig. 10. Experimentally recorded images of the IPSFs at u = (a) 10 cm and (b) 10.6 cm. (c) Image of the total intensity distribution experimentally recorded for the USAF objects located at u = 10 cm and 10.6 cm. Reconstruction results using (d),(e) phase-only filter, (f),(g) non-linear filter and median filter, (h),(i) non-linear filter, median filter and correlation filter and (j),(k) non-linear filter, median filter, correlation filter and denoising neural network. Scale – 1 mm.

To further improve the reconstruction results, the output from the combined filters is fed into a deep learning neural network (DLNN), ‘DnCNN’ from the Deep Learning Toolbox of the MATLAB software [26]. The DnCNN consists of 59 layers composed of convolutional layers, rectified linear unit (ReLU) activation functions and batch normalization units. The number of layers was optimised to obtain a reasonable training duration within the processing limit of the CPU (Intel Core i5-8250U, 1.6 GHz and 1.8 GHz). The learning rate was set to 0.001 with 20 epochs, and an adaptive moment estimation (Adam) optimizer was selected. Since the residual noise in Figs. 10(h) and 10(i) is Gaussian-like with some blurring, the DLNN was trained using the results obtained from the three filters for different USAF objects such as numbers and grating lines. The training objects were made diverse by including reconstruction results with an axial aberration Δu of up to ±2 mm and a Δβ of ±0.1, and their combinations with a Gaussian noise of ∼0.001 to 0.2. A total of 126 training objects were used. The total training time was 2009 minutes and 19 seconds. After the training step, the result obtained after applying the three filters is given as input to the DLNN and the noise-reduced output is generated. The results obtained from the DLNN are shown in Figs. 10(j) and 10(k). They are relatively sharper, with the features appearing more defined, and a reduction in background noise is seen. The advantage of the approach is that the training did not require any experimental data of the object but only the PSF library. The availability of the PSF library enables the synthesis of the entire training set close to the experimental data without the need for performing the experiment. The aberrations due to axial location errors, scattering noises, etc., can be simulated by both pre- and post-processing of the synthesized object holograms.
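A sketch of the denoising step using MATLAB's Deep Learning Toolbox is given below. The datastore path is a hypothetical placeholder, and denoisingImageDatastore with a Gaussian noise range is one convenient way of emulating the synthetic noisy training pairs described above; dnCNNLayers returns the 59-layer DnCNN referred to in the text.

% Train a DnCNN denoiser on synthetic reconstructions, then apply it.
imds   = imageDatastore('trainingRecons/');             % hypothetical folder of clean images
dnds   = denoisingImageDatastore(imds, 'GaussianNoiseLevel', [0.001 0.2]);
layers = dnCNNLayers();                                 % 59 layers: conv + BN + ReLU blocks
opts   = trainingOptions('adam', 'InitialLearnRate', 1e-3, 'MaxEpochs', 20);
net    = trainNetwork(dnds, layers, opts);
IRdn   = denoiseImage(IR, net);                         % denoised reconstruction, cf. Figs. 10(j)-(k)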

6. Summary and discussion

Coded aperture imaging methods based on reconstruction by cross-correlation are often implemented with a narrow spectral width [8,15]. In the cases that used broadband sources, a physical spectral filter was often used [27–29], and in other cases a comb of temporally low coherent sources was used [15,16]. As discussed in the manuscript, the resolving power of coded aperture based indirect imagers depends on the correlation lengths, which in turn depend on the smallest speckle that can be obtained at the sensor. The speckle sizes in chaos-based coded apertures are highly sensitive to the spectral bandwidth, which was analysed in [15]. It was shown that the size of the speckle is directly proportional to the spectral bandwidth. There have been only a few studies on white light with coded apertures, which again showed that the localization effects in the intensity decrease with an increase in the spectral width.

In this study, broadband white light was used for the illumination of the samples, and a quasi-random lens was used for beam modulation, which, along with the sensor, engineered the input spectrum into a narrower profile. The simulation and experimental studies revealed that the white light illumination broadened the speckles, reduced the localization effects and increased the correlation lengths, resulting in a decrease in the lateral and axial resolutions. The presence of the fabricated diffractive element affected the system in three ways: it narrowed the spectral response, introduced chromatic aberrations, and increased the overall efficiency. While the narrowing of the spectral width improved the localization effects by assigning different diffraction efficiency values to different wavelengths, the chromatic aberrations negatively affected the speckle localization. For example, if an intensity maximum for a particular wavelength occurs at a location on the sensor, then due to chromatic aberrations the maxima for other wavelengths form in its vicinity and add up, resulting in a broader profile. Therefore, for white light imaging, a refraction-dominated element such as a freeform optical device, which is sensitive to changes in distance but not wavelength, is preferred [30].

The application of the non-linear filter together with the median filter, correlation filter and DLNN enabled 3D imaging with a reasonable signal to noise ratio in comparison to the matched filter, phase-only filter and even the non-linear filter alone. Coded aperture imaging methods offer a useful platform for implementing a DLNN, as it is possible to generate the training set using the recorded PSFs rather than from real experiments. In the current study, a pinhole size of 100 µm was used, which is another reason for the low lateral and axial resolution. In future studies, the larger pinhole will be replaced by a smaller one, with tight focusing using a microscope objective lens.

In summary, coded aperture based indirect imagers have an information bandwidth limit which is governed by the imaging configuration, the space-bandwidth product, the number of pixels and gray levels of the sensor, the number of spectral channels and the number of planes of the 3D object. Further studies are needed to fully understand the information bandwidth limits and to develop computational optical techniques and modulation methods to improve the performance of CAI based imagers. We believe that the proposed method of white light imaging with CAI can compete with existing broadband holography techniques such as integral holography and FINCH [31,32].

Funding

Australian Research Council (LP190100505).

Acknowledgements

The work was carried out at the Nanolab facility of Swinburne University of Technology. VA thanks Prof. Joseph Rosen, BGU for the useful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. G. Ables, “Fourier transform photography: a new method for X-ray astronomy,” Publ. Astron. Soc. Aust. 1(4), 172–173 (1968). [CrossRef]  

2. R. H. Dicke, “Scatter-hole cameras for X-rays and gamma rays,” Astrophys. J. 153(2), L101–L106 (1968). [CrossRef]  

3. E. E. Fenimore and T. M. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17(3), 337–347 (1978). [CrossRef]  

4. A. Busboom, H. D. Schotten, and H. E. -Boll, “Coded aperture imaging with multiple measurements,” J. Opt. Soc. Am. A 14(5), 1058–1065 (1997). [CrossRef]  

5. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef]  

6. W. Chi and N. George, “Optical imaging with phase-coded aperture,” Opt. Express 19(5), 4294–4300 (2011). [CrossRef]  

7. R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39(22), 6466–6469 (2014). [CrossRef]  

8. A. Vijayakumar and J. Rosen, “Interferenceless coded aperture correlation holography–a new technique for recording incoherent digital holograms without two-wave interference,” Opt. Express 25(12), 13883–13896 (2017). [CrossRef]  

9. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]  

10. A. Vijayakumar, T. Katkus, S. Lundgaard, D. Linklater, E. P. Ivanova, S. H. Ng, and S. Juodkazis, “Fresnel incoherent correlation holography with single camera shot,” Opto-Electron. Adv. 3(8), 200004 (2020). [CrossRef]  

11. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

12. M. R. Rai, A. Vijayakumar, and J. Rosen, “Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 26(14), 18143–18154 (2018). [CrossRef]  

13. M. R. Rai and J. Rosen, “Noise suppression by controlling the sparsity of the point spread function in interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 27(17), 24311–24323 (2019). [CrossRef]  

14. C. Liu, T. Man, and Y. Wan, “Optimized reconstruction with noise suppression for interferenceless coded aperture correlation holography,” Appl. Opt. 59(6), 1769–1774 (2020). [CrossRef]  

15. A. Vijayakumar, S. H. Ng, J. Maksimovic, D. Linklater, E. P. Ivanova, T. Katkus, and S. Juodkazis, “Single shot multispectral multidimensional imaging using chaotic waves,” Sci. Rep. 10(1), 13902 (2020). [CrossRef]

16. A. Vijayakumar, S. H. Ng, T. Katkus, and S. Juodkazis, “Spatio-spectral-temporal imaging of fast transient phenomena using a random array of pinholes,” Adv. Photonics Res. 2, 2000032 (2021). [CrossRef]

17. A. Vijayakumar and J. Rosen, “Spectrum and space resolved 4D imaging by coded aperture correlation holography (COACH) with diffractive objective lens,” Opt. Lett. 42(5), 947–950 (2017). [CrossRef]  

18. M. Kumar, A. Vijayakumar, and J. Rosen, “Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses,” Sci. Rep. 7(1), 11555 (2017). [CrossRef]  

19. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35(2), 227–246 (1972).

20. V. Anand, T. Katkus, D. P. Linklater, E. P. Ivanova, and S. Juodkazis, “Lensless Three-Dimensional Quantitative Phase Imaging Using Phase Retrieval Algorithm,” J. Imaging 6(9), 99 (2020). [CrossRef]  

21. A. Vijayakumar and S. Bhattacharya, “Characterization and correction of spherical aberration due to glass substrate in the design and fabrication of Fresnel zone lenses,” Appl. Opt. 52(24), 5932–5940 (2013). [CrossRef]  

22. J. C. Dainty, ed., Laser speckle and related phenomena (Springer-Verlag, New York, 1984).

23. S. Mukherjee, A. Vijayakumar, and J. Rosen, “Spatial light modulator aided noninvasive imaging through scattering layers,” Sci. Rep. 9(1), 17670 (2019). [CrossRef]  

24. J. L. Horner and P. D. Gianino, “Phase-only matched filtering,” Appl. Opt. 23(6), 812–816 (1984). [CrossRef]  

25. A. Vijayakumar and S. Bhattacharya, “Design of multifunctional diffractive optical elements,” Opt. Eng. 54(2), 024104 (2015). [CrossRef]  

26. Matlab Mebin “Deep neural network based noise removal – CNN (DeepLearning),” (https://www.mathworks.com/matlabcentral/fileexchange/84445-deep-neural-network-based-noise-removal-cnn-deeplearning), MATLAB Central File Exchange (2021).

27. J. Liu, C. Zaouter, X. Liu, S. A. Patten, and J. Liang, “Coded-aperture broadband light field imaging using digital micromirror devices,” Optica 8(2), 139–142 (2021). [CrossRef]  

28. K. Monakhova, K. Yanny, N. Aggarwal, and L. Waller, “Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array,” Optica 7(10), 1298–1307 (2020). [CrossRef]  

29. T. Sakuyama, T. Funatomi, M. Iiyama, and M. Minoh, “Diffraction-Compensating Coded Aperture for Inspection in Manufacturing,” IEEE Trans. Ind. Inf. 11(3), 782–789 (2015). [CrossRef]  

30. M. Bawart, S. Bernet, and M. Ritsch-Marte, “Programmable freeform optical elements,” Opt. Express 25(5), 4898–4906 (2017). [CrossRef]  

31. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48(34), H120–H136 (2009). [CrossRef]  

32. P. Bouchal and Z. Bouchal, “Concept of coherence aperture and pathways toward white light high-resolution correlation imaging,” New J. Phys. 15(12), 123002 (2013). [CrossRef]
