Optica Publishing Group

Optimized sensing of sparse and small targets using lens-free holographic microscopy

Open Access

Abstract

Lens-free holographic microscopy offers sub-micron resolution over an ultra-large field-of-view (>20 mm²), making it suitable for bio-sensing applications that require the detection of small targets at low concentrations. Various pixel super-resolution techniques have been shown to enhance resolution and boost signal-to-noise ratio (SNR) by combining multiple partially-redundant low-resolution frames. However, it has been unclear which technique performs best for small-target sensing. Here, we quantitatively compare SNR and resolution in experiments using no regularization, cardinal-neighbor regularization, and a novel implementation of sparsity-promoting regularization that uses analytically-calculated gradients from Bayer-pattern image sensors. We find that sparsity-promoting regularization enhances the SNR by ~8 dB compared to the other methods when imaging micron-scale beads with surface coverages up to ~4%.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

19 September 2018: Typographical corrections were made to the body text.

1. Introduction

Biosensors have numerous applications in fields such as disease screening and detection [1], environmental monitoring [2], and food processing control [3]. Maximally sensitive biosensors are capable of detecting individual analyte particles, which can include human cells [4–8], bacteria [9], viruses [10–14], exosomes [15], proteins [16–18], or DNA [19,20]. It is often a challenge to individually detect such small particles. It is also desirable for biosensors to be compact and cost-effective to enable point-of-care use. Point-of-care biosensors have gained significant attention due to their immediacy, convenience, and accessibility in medical testing. Lens-free holographic microscopy is a sensitive and portable approach, making it an attractive biosensor solution when paired with automated image processing routines [21–25]. Furthermore, it offers a field-of-view typically larger than 20 mm² (here, 28 mm²) together with sub-micron resolution, which facilitates a large dynamic range in terms of analyte concentration.

Figure 1 is an illustration of lens-free holographic microscopy. A sample on a transparent substrate lies in between a light source and a CCD or CMOS image sensor. Typically, the sample-sensor distance d is between 10 µm and 1 mm and the distance between sample and light source is 5-20 cm, rendering unity system magnification. In a sparse sample, some light will be scattered by objects in the sample while most of the light will remain unperturbed, creating an inline hologram at the image sensor plane. The recorded hologram can be reconstructed computationally to provide an image of the sample plane. Micro-scale targets such as bacteria and cells can be imaged directly while nanoscale targets like viruses, DNA strands, and exosomes can be detected indirectly via functionalized microsphere beads [16,19] using lens-free holographic microscopy.


Fig. 1 LFHM schematic. A 4×4 LED array is used to sequentially illuminate a sample from slightly different angles, and a bandpass filter is inserted between the LED array and the sample to improve the temporal coherence of the system. A sample is inserted ~30 cm above the LED array and d ≈ 800 µm below the CMOS sensor.


The ability to detect such small particles depends primarily on the signal-to-noise ratio (SNR) of these particles in the final image relative to the background. While SNR can be enhanced through sample preparation techniques such as wetting films or nanolenses [9–11,26–30], computational methods such as those investigated in this Article can be applied with or without sample preparation enhancements. In addition to its SNR, the system’s optical resolution also affects its ability to sense individual particles, especially when the spacing between objects is small. The resolution of a single lens-free holographic microscope (LFHM) image is limited by the pixel size of the image sensor, which is typically no smaller than ~1 µm, even with state-of-the-art color image sensors. The pixel super-resolution (PSR) [24,31–33] technique has been shown to improve resolution and boost SNR by acquiring multiple images. A shifting light source, or multiple light sources, is used to illuminate a sample from slightly different angles so that multiple partially redundant low-resolution (LR) holograms are captured and used to synthesize a high-resolution (HR) hologram. Assisted by the PSR technique, the resolution of a LFHM is no longer limited by pixel size, and instead may be limited by diffraction, coherence, or hologram SNR.

Synthesizing a HR hologram from multiple partially redundant LR holograms can be treated as an optimization problem, where a regularized least-squares optimization routine can be used to estimate the HR hologram. Different regularization methods have been proposed to reduce noise in either the HR hologram or its reconstruction, including minimizing the variation in neighboring pixels [33,34], minimizing the high-frequency content of the hologram [24], minimizing the total variation in the reconstruction [35–37], optimizing for sparsity in the reconstruction after a basis transform [38–41], and optimizing for natural sparsity in the reconstruction [32,42,43]. Despite these many studies of regularized optimization in holography, there is a lack of quantitative experimental results for which method performs best in the specific case of small-target sensing, which is particularly relevant for practical bio-sensing applications of LFHMs.

A logical choice for reconstructing small targets with the best SNR and resolution is to optimize for natural sparsity in the reconstruction domain. Regularization based on the l1 norm of the image is known to promote sparsity [44–52]. This approach is used extensively in compressive sensing, where essential signal information is recovered with fewer measurements than required by the Shannon-Nyquist theorem [37,44,45]. Fournier et al. have recently studied how a form of sparsity-based regularization can be used to improve resolution and enhance field of view in lens-free holographic imaging [32]. However, their analysis is primarily in the simulation domain, and they do not quantify SNR or resolution in their experimental results.

In this Article, we quantitatively compare peak SNR (PSNR) and resolution in experimental PSR reconstructions using three different methods: no regularization (None), cardinal-neighbor (CN) regularization which smoothens the hologram, and regularization promoting sparsity in the reconstruction domain (Sp). In addition to the quantitative experimental PSNR and resolution measurements, other unique aspects of our work include (1) the derivation of specialized functions to handle upsampling and downsampling of the Bayer pattern used in color image sensors, which have smaller pixel size than commercially available monochrome sensors; and (2) analytical derivation of Sp cost function gradients, which improves computational speed and accuracy. We also investigate how our results depend on the sizes and concentrations of the small targets being sensed.

2. Methods

2.1 Experimental methods

Our LFHM is composed of particularly simple, off-the-shelf hardware: an LED array (Adafruit 1487), a CMOS color image sensor (IDS UI-3592LE-C), and a bandpass filter (Thorlabs FL532-3). Figure 1 illustrates our LFHM. The center 16 (4×4) green LEDs on the LED array are used to illuminate a sample which is placed on a transparent substrate about 30 cm above the LED array and about 800 µm below the CMOS sensor. A 532-nm bandpass filter with 3-nm bandwidth is inserted in the optical path to improve the temporal coherence of the LEDs. The active area of each LED on the array is around 250 µm in diameter. The spatial coherence of the LEDs is sufficient that no pinholes are required in our LFHM [53]. We choose a color image sensor due to its affordability and small pixel size, which is only 1.25 µm. On this color image sensor, 50% of the pixels are green while 25% of the pixels are red and 25% are blue. Only the green pixels in the hologram are used, hence the effective pixel size without any form of PSR would be 1.77 µm (√2 times larger than the native 1.25 µm pixel size). Our LFHM has a field of view equal to the active area of the CMOS sensor, which is 4.605 × 6.140 mm², approximately 28.27 mm².
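The effective green-pixel pitch and field of view quoted above follow from simple geometry; as a quick numerical check (an illustrative sketch, not the authors' code — the green pixels of a Bayer sensor sit on a checkerboard, so their nearest-neighbor spacing is the pixel diagonal):

```python
import math

native_pitch = 1.25                 # µm, native Bayer pixel pitch
# Green pixels form a quincunx (checkerboard) grid, so their
# nearest-neighbor spacing is sqrt(2) times the native pitch.
effective_pitch = native_pitch * math.sqrt(2)

sensor_area = 4.605 * 6.140         # mm^2, active area of the CMOS sensor
```

This reproduces the 1.77 µm effective pitch and ~28.27 mm² field of view stated in the text.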

Four types of samples are imaged with our LFHM: 5-µm diameter polystyrene (PS) microspheres, 2-µm diameter PS microspheres, 1-µm diameter PS microspheres, and a 1951 USAF resolution target. The 2-µm and 1-µm samples also contained a low concentration of 5-µm beads that serve as pixel shift estimate landmarks in the samples, which will be explained further in Section 2.2. All samples are dry without immersion media.

To prepare a 1% 5-µm diameter PS microsphere sample, 10 µL of stock microsphere suspension (Magsphere Inc. PS005UM) is mixed with 990 µL of water (Fisher Scientific) at 2500 revolutions per minute (rpm) for one minute utilizing a vortex mixer (Thermo Scientific). The suspension is washed three times to remove buffer chemicals. During each washing step, first, the mixed suspension is centrifuged (Heathrow Scientific) at 8000 rpm for two minutes so that the microspheres settle to the bottom of a centrifuge tube; then, the top 900 µL of the supernatant is pipetted out and replaced with 900 µL of water; next, the solution is vortex mixed. Finally, 1 µL of the washed microsphere suspension is dispensed on a clean and plasma treated glass slide and left to dry. Five different concentrations (0.1%, 0.33%, 1%, 3.33%, and 10%) of 5-µm bead samples were prepared using similar procedures. A 0.1% 2-µm diameter PS microsphere sample can be prepared in a similar fashion with 1 µL of 2-µm diameter PS microsphere stock suspension (Bangs Laboratories Inc. PS05N), 10 µL of 1% 5-µm diameter washed microsphere suspension, and 989 µL of water. A 1% 1-µm diameter PS microsphere sample can be prepared in a similar fashion with 10 µL of 1-µm diameter PS microsphere suspension (Molecular Probes F8768), 10 µL of 1% 5-µm diameter washed microsphere suspension, and 980 µL of water.

For each sample, a hologram is captured and saved by the CMOS sensor for each of the 16 LEDs as it is switched on for one second. The 16 LR holograms are then used to synthesize a HR hologram, which can be reconstructed using the angular spectrum method and fast Fourier transforms (FFTs) to obtain an image of the sample [53].

2.2 Computational methods

We use PSR to produce a HR hologram from multiple partially redundant LR holograms of a sample captured by the CMOS sensor with slightly different illumination angles, partially based on the approach published by Hardie et al. [33]. Non-integer pixel displacements between the LR holograms provide additional information about the scene, making PSR possible. After synthesizing a HR hologram of the scene, the pixel super-resolved hologram can be back-propagated to the sample plane to reconstruct an image of the sample.

In this LFHM, 16 LR holograms are captured to perform PSR with a PSR factor of 4, resulting in a HR pixel size of 0.3125 µm. One of the LR holograms is designated as the “home” hologram. 2D cross-correlations between the home hologram and every other LR hologram give coarse pixel shift estimates. Since the LR holograms are sampled at the CMOS sensor pixel size, these coarse estimates are integer multiples of the LR pixel size. Subpixel shifts between the home hologram and each LR hologram can then be computed from a least-squares linear fit of the coarse shift estimates, which exploits the regular spacing of the LEDs. Edge effects due to the different shifts in the LR holograms are properly cropped out.
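The shift-estimation step can be sketched in NumPy (the authors' pipeline uses MATLAB; this illustrative sketch assumes circular shifts and a hypothetical `led_xy` array of LED grid positions, and is not the paper's implementation):

```python
import numpy as np

def estimate_subpixel_shifts(home, frames, led_xy):
    """Coarse integer shifts via FFT cross-correlation against the home
    frame, refined by a least-squares linear fit of shift versus LED
    position (the LEDs sit on a regular grid, so the true shifts vary
    linearly across the array)."""
    F_home = np.conj(np.fft.fft2(home))
    coarse = []
    for frame in frames:
        xc = np.fft.ifft2(F_home * np.fft.fft2(frame)).real
        peak = np.unravel_index(np.argmax(xc), xc.shape)
        # wrap peaks past the array midpoint to negative shifts
        coarse.append([p if p <= n // 2 else p - n
                       for p, n in zip(peak, xc.shape)])
    # linear model: shift ~ A @ coeff with rows A = [x_led, y_led, 1]
    A = np.column_stack([np.asarray(led_xy, float), np.ones(len(frames))])
    coeff, *_ = np.linalg.lstsq(A, np.asarray(coarse, float), rcond=None)
    return A @ coeff    # fitted (generally non-integer) shift estimates
```

The fitted model returns non-integer values even though each individual cross-correlation peak is quantized to whole LR pixels.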

The process of synthesizing a HR hologram from multiple partially redundant LR holograms can be modeled as an optimization problem minimizing a cost function that is defined as the total error between the HR hologram estimate projected onto shifted LR grids and all experimentally-measured LR holograms. The flowchart of the process is shown in Fig. 2. We denote the HR hologram estimate as $\hat{z}$, the total number of LR pixels in one LR hologram as $M$, the total number of LR holograms captured as $p$, the total number of HR pixels in the hologram estimate as $N$, the pixel values of the complete set of $p$ captured LR holograms as $s$, and the contribution of the $r$th HR pixel in $z$ projected to the $m$th LR pixel in $s$ as $w_{m,r}$. Then the HR hologram estimate can be expressed as:

\[ \hat{z} = \operatorname*{argmin}_{z} \; C(z), \tag{1} \]
where C(z) is the cost function. In the case of no regularization,


Fig. 2 Flowchart of the regularized PSR process for synthesizing and reconstructing a HR hologram from multiple LR hologram frames.


\[ C_{\text{None}}(z) = \frac{1}{2} \sum_{m=1}^{pM} \left( s_m - \sum_{r=1}^{N} w_{m,r}\, z_r \right)^{2}, \tag{2} \]

and $\sum_{r=1}^{N} w_{m,r}\, z_r$ is the $m$th projected LR pixel from the HR hologram estimate [33]. The cost function $C_{\text{None}}(z)$ is minimized for a HR hologram estimate when its projected LR pixels match the measured LR pixels. In our case, these projections must take the CMOS sensor Bayer pattern into consideration. While the projections are represented by matrix multiplication in Eq. (2), we implement them here using computationally-efficient image processing techniques (see Appendix A).
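As a minimal illustration of the unregularized cost, the following sketch replaces the full Bayer-aware projection weights with integer HR-pixel shifts and uniform block averaging (function names and simplifications are ours, not the paper's):

```python
import numpy as np

def project_to_lr(z_hr, shift, factor=4):
    """Project the HR hologram estimate onto one shifted LR grid.
    Simplified sketch: integer circular HR-pixel shifts and uniform
    block averaging; the real projection also handles subpixel shifts
    and masks the non-green Bayer pixels (Appendix A)."""
    shifted = np.roll(z_hr, shift, axis=(0, 1))
    h, w = shifted.shape
    return shifted.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

def cost_none(z_hr, lr_frames, shifts, factor=4):
    """Eq. (2): half the total squared error between the measured LR
    frames and the projections of the HR estimate."""
    return 0.5 * sum(np.sum((s - project_to_lr(z_hr, sh, factor)) ** 2)
                     for s, sh in zip(lr_frames, shifts))
```

The cost vanishes exactly when the LR frames are consistent projections of the HR estimate, and grows with any mismatch.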

A regularization term that penalizes noise and enhances SNR can be added to the cost function and tuned by a weighting factor $\kappa$. As $\kappa$ increases, the HR hologram estimate $\hat{z}$ generally becomes smoother. However, an excessively large $\kappa$ can wash key features out of the image. In this Article, three regularization methods are investigated: no regularization (None), cardinal-neighbor (CN) regularization that penalizes random fluctuations in HR pixels, and sparse-reconstruction (Sp) regularization that promotes sparsity in the reconstruction plane.

Hardie et al. presented the CN regularization method in [33]; the associated cost function is:

\[ C_{\text{CN}}(z) = C_{\text{None}}(z) + \frac{\kappa}{2} \sum_{u=1}^{N} \left( \sum_{v=1}^{N} \alpha_{u,v}\, z_v \right)^{2}, \tag{3} \]
where we select
\[ \alpha_{u,v} = \begin{cases} 1 & \text{for } u = v, \\ -\tfrac{1}{4} & \text{for } \operatorname{dist}(z_u, z_v) = 1, \\ 0 & \text{otherwise}, \end{cases} \tag{4} \]
and dist(zu,zv) denotes the Cartesian distance between HR pixel zu and HR pixel zv.

To use the assumption of sparsity as a priori knowledge to enhance SNR, we implement the Sp regularization method by defining its associated cost function as:

\[ C_{\text{Sp}}(z) = C_{\text{None}}(z) + \kappa \left\lVert\, |P(z;-d)| - B^{2} \,\right\rVert_{\ell_1} = C_{\text{None}}(z) + \kappa \sum_{u=1}^{N} \left|\, |P_u(z;-d)| - B^{2} \,\right|, \tag{5} \]
where $\ell_1$ denotes the $\ell_1$ norm, $d$ denotes the separation between the sample plane and the image sensor plane, $P(z;-d)$ is an operator representing the back-propagation of light over the distance $d$ [53], $|P_u(z;-d)|$ denotes the $u$th pixel of the amplitude of the reconstructed image, and $B^{2}$ stands for the background brightness of the reconstructed field. With the cost function in Eq. (5), sparsity is enforced in the background-subtracted reconstructed image of the sample plane.

For all of the regularization methods, the gradient descent optimization approach is utilized to update the estimated HR pixels according to the gradient of the cost function with respect to the HR pixels in the previous iteration:

\[ \hat{z}_k^{\,n+1} = \hat{z}_k^{\,n} - \varepsilon_n\, g_k(\hat{z}^{\,n}), \tag{6} \]
for pixel number $k = 1, 2, \ldots, N$ and iteration number $n = 0, 1, 2, \ldots$, where $\varepsilon_n$ represents the step size at the $n$th iteration. The iterative loop is initiated by interpolating the LR “home” hologram to obtain an initial HR hologram estimate $\hat{z}^{0}$. The cost function gradient $g_k(\hat{z}^{n})$ can be expressed as:
\[ g_k(\hat{z}^{\,n}) = \frac{\partial C(\hat{z}^{\,n})}{\partial z_k}. \tag{7} \]
For no regularization and CN, the cost function gradients are explicitly shown in [33]. For Sp regularization, we analytically derive the cost function gradient in Appendix B, where we also show how it can be rapidly computed using fast Fourier transforms rather than time-consuming matrix multiplications or series computations.

The step size $\varepsilon_n$ must be selected so that it is large enough to reach convergence within a reasonable number of iterations, yet small enough to avoid divergence. For no regularization and the CN method, an optimal $\varepsilon_n$ can be computed as in [33] by setting the partial derivative of $C(\hat{z}^{\,n+1})$ with respect to $\varepsilon_n$ equal to zero. For the Sp regularization method, the optimal $\varepsilon_n$ does not have an analytical solution using the same method, and we determine $\varepsilon_n$ by a backtracking line search method [54].
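A standard Armijo backtracking line search of the kind cited in [54] can be sketched as follows (the parameter values are common defaults, not the authors' settings):

```python
import numpy as np

def backtracking_step(cost, z, g, eps0=1.0, beta=0.5, c=1e-4, max_halvings=50):
    """Armijo backtracking line search: start from eps0 and shrink the
    step by beta until the cost decreases by at least c * eps * ||g||^2
    along the descent direction -g."""
    c0 = cost(z)
    g2 = float(np.vdot(g, g).real)
    eps = eps0
    for _ in range(max_halvings):
        if cost(z - eps * g) <= c0 - c * eps * g2:
            break
        eps *= beta
    return eps
```

Because the sufficient-decrease condition is checked against the actual cost, this works even when no closed-form optimal step exists, as is the case for the Sp cost function.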

Once the PSR regularized gradient descent optimization routine is terminated, the HR estimate z^ of the hologram can be back-propagated over a distance d to the sample plane to reconstruct a HR image of the sample Eimg(x,y) as in [53]:

\[ E_{\text{img}}(x, y) = P(\hat{z}; -d) = F_{2D}^{-1}\left\{ F_{2D}\{\hat{z}(x, y)\}\, H(\xi, \eta; -d) \right\}. \tag{8} \]
\[ H(\xi, \eta; d) = \begin{cases} e^{\,i 2 \pi d \sqrt{n_r^{2}/\lambda^{2} - \xi^{2} - \eta^{2}}} & \text{for } (\xi^{2} + \eta^{2}) < n_r^{2}/\lambda^{2}, \\ 0 & \text{for } (\xi^{2} + \eta^{2}) \geq n_r^{2}/\lambda^{2}, \end{cases} \tag{9} \]
$F_{2D}$ and $F_{2D}^{-1}$ represent the 2D Fourier transform and 2D inverse Fourier transform, $i$ is the unit imaginary number, $\xi$ and $\eta$ represent the spatial frequencies, $n_r$ denotes the index of refraction, and $\lambda$ stands for the wavelength of light.
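The angular spectrum reconstruction maps directly onto FFT-based code. A NumPy sketch follows (the grid spacing `dx` and all function names are illustrative, not from the paper):

```python
import numpy as np

def angular_spectrum_H(shape, dx, wavelength, d, n_r=1.0):
    """Transfer function H(xi, eta; d) of Eq. (9), built on the FFT
    frequency grid; evanescent components are set to zero."""
    fy = np.fft.fftfreq(shape[0], dx)
    fx = np.fft.fftfreq(shape[1], dx)
    eta, xi = np.meshgrid(fy, fx, indexing="ij")
    k2 = (n_r / wavelength) ** 2
    f2 = xi ** 2 + eta ** 2
    H = np.zeros(shape, dtype=complex)
    band = f2 < k2
    H[band] = np.exp(1j * 2 * np.pi * d * np.sqrt(k2 - f2[band]))
    return H

def propagate(field, dx, wavelength, d, n_r=1.0):
    """Eq. (8): P(z; d) = F2D^-1{ F2D{z} H(xi, eta; d) };
    a negative d back-propagates toward the sample plane."""
    H = angular_spectrum_H(field.shape, dx, wavelength, d, n_r)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

When no spatial frequencies are evanescent, propagating forward and then backward over the same distance recovers the original field, which makes a convenient self-test.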

For small-target sensing applications, the SNR of the small targets is crucial for quick and accurate detection. We found that the amplitude image of the sample $|E_{\text{img}}(x,y)|$ performs well for the small-target sensing applications studied here, although phase images may perform better when used in conjunction with nanolenses or wetting films [10,13]. The twin-image artifacts of the reconstructed images are left untreated throughout this Article because these artifacts are minimal for small targets. The amplitude image of the sample $|E_{\text{img}}(x,y)|$ typically presents beads as dark spots on a bright background. Here we invert the amplitude image after subtracting its mode in order to quantify SNR performance, so the amplitude images appear as bright spots on a dark background, similar to dark-field microscopy. Then the amplitude image is divided by the standard deviation of its background noise to obtain a SNR map of the sample. For each small target, we define its peak signal-to-noise ratio (PSNR) [55] as the ratio between the maximum pixel value over the neighborhood occupied by the small target, $|E_{\text{img}}|_{\text{target}}^{\max}$, and the standard deviation of the reconstructed HR image:

\[ \text{PSNR} = 20 \log_{10}\!\left[ \frac{ |E_{\text{img}}|_{\text{target}}^{\max} }{ \sqrt{ \sum_{k=1}^{N} \left( |E_{\text{img},k}| - \overline{|E_{\text{img}}|} \right)^{2} / (N-1) } } \right]. \tag{10} \]

For a region of interest, multiple small targets may be found by thresholding the amplitude image of the sample using the standard routines found in the MATLAB image processing toolbox. The PSNR can be evaluated for each small target, and the average PSNR (APSNR) over all particles in the region of interest is used to compare different regularization methods and decide which method performs best for small-target sensing applications. For all small particle samples, the reconstructed region of interest is 600 × 600 HR pixels, which corresponds to ~0.035 mm². This relatively small area was chosen to facilitate easy comparisons with the small field of view of traditional optical microscope images.
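The per-target PSNR of Eq. (10) reduces to a few lines; in this sketch, `target_mask` (an assumed input, not defined in the paper) selects the neighborhood of one detected particle:

```python
import numpy as np

def psnr_db(recon_amp, target_mask):
    """PSNR of one small target, Eq. (10): peak amplitude inside the
    target neighborhood over the (N-1)-normalized standard deviation of
    the whole reconstructed amplitude image, in dB."""
    peak = recon_amp[target_mask].max()
    noise = recon_amp.std(ddof=1)      # matches the (N-1) denominator of Eq. (10)
    return 20.0 * np.log10(peak / noise)
```

Averaging this value over every thresholded particle in the region of interest yields the APSNR used in Section 3.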

3. Results and discussion

3.1 Regularization methods and APSNR

For the CN and Sp regularization methods, the regularization weight κ is swept on a logarithmic scale to investigate how the regularization method and weight influence APSNR. In the case of no regularization, APSNR is independent of κ. The dependence of APSNR on κ for the various regularization methods is shown in Figs. 3(a)–3(c) for a 0.33% 5-µm diameter PS microsphere sample, a 0.1% 2-µm diameter PS microsphere sample, and a 1% 1-µm diameter PS microsphere sample. At each regularization weight, the number of gradient descent optimization iterations is limited to 30 to ensure short run times. The maximum APSNR among the 30 iterations at each regularization weight is plotted in Fig. 3. Compared to using no regularization, the CN regularization method only slightly enhances APSNR, while the Sp method significantly enhances APSNR for all three particle sizes. For all samples and regularization methods, particle locations and sizes are confirmed by comparing the HR reconstruction images obtained with the optimum κ to images captured by a conventional 40X objective with numerical aperture of 0.75 in an optical microscope. The comparisons are shown in Figs. 4(a)–4(c) for a 0.33% 5-µm diameter PS microsphere sample, a 0.1% 2-µm diameter PS microsphere sample, and a 1% 1-µm diameter PS microsphere sample, respectively. Figures 3 and 4 demonstrate that the CN method performs only slightly better than no regularization in enhancing SNR, while the Sp method enhances SNR greatly for 5-µm, 2-µm, and 1-µm particles. Therefore, the Sp method performs the best at enhancing SNR for low-concentration small-target sensing applications.


Fig. 3 Effect of regularization on average peak SNR (APSNR). APSNR is independent of κ for no regularization (None). The sample concentrations for the three different bead sizes are (a) 0.33% for the 5-µm beads, (b) 0.1% for the 2-µm beads, and (c) 1% for the 1-µm beads. Error bars (and gray band) are equal to +/− one standard deviation in PSNR for all beads measured in each sample. Since APSNR is shown in logarithmic scale, a few error bars in (c) extend beyond the plot.



Fig. 4 Reconstructed SNR images of sparse samples using different regularization methods. From left to right, the four columns correspond to a conventional microscope image, holographic reconstruction with no regularization, the CN method, and the Sp method. Each reconstruction was generated using its corresponding optimum regularization weight κ, and the images are linearly scaled such that the minimum value in each image is zero and the maximum value is one. The three rows correspond to (a) a 0.33% 5-µm diameter PS microsphere sample with reconstruction distance d = 802 µm, (b) a 0.1% 2-µm diameter PS microsphere sample with reconstruction distance d = 781 µm, and (c) a 1% 1-µm diameter PS microsphere sample with reconstruction distance d = 701 µm. All scale bars are 25 µm.


The smallest detectible particle can be estimated by logarithmically extrapolating the experimental SNR down to 9 dB, which we have found to be the approximate limit for visual detection of particles against background noise. Using power laws based on either a cubic dependence (as would be expected for Rayleigh particles [28]) or a power dependence of 1.8 extrapolated experimentally from the 1-µm and 2-µm data, the smallest detectable particle is expected to be between 453 and 626 nm. We note that these are only rough estimates based on extrapolation from a small number of data points. The detection of significantly smaller particles (sub-50 nm) can be accomplished using additional sample preparation procedures such as nanolens growth in conjunction with the proposed computational algorithm [11].

One disadvantage of the Sp method is its computation speed: it requires about twice as much time per iteration as the CN method and typically 4–20 times as many iterations to achieve optimal APSNR, resulting in roughly an order of magnitude more total computation time. A desktop computer with an Intel i7-7700K processor and 16 GB of memory is used to obtain the results in Table 1. GPUs can be used to reduce computation time noticeably and are shown to be >40× faster than CPUs for similar types of holographic reconstruction tasks [56].

3.2 Regularization methods and resolution

A 1951 USAF resolution target is used to investigate how the regularization methods affect the resolution of the reconstructed images. Figure 5 shows how the regularization weight κ affects resolution for the None, CN, and Sp methods. For the CN method, when 1 ≤ κ ≤ 100, the resolution is similar to that without any regularization, as can be seen in Figs. 5(b), 5(c), and 5(e), while for κ ≥ 200, resolution is sacrificed. For the Sp method, resolution is sacrificed even at small κ. Since the resolution target is not sparse in nature, the Sp method is expected to perform relatively poorly, which we see experimentally in Figs. 5(d) and 5(e).


Fig. 5 Regularization methods and resolution. (a–d) Resolution performance of different regularization methods compared to an image captured by a conventional 100X objective for a 1951 USAF resolution target. The patterned nature of the background noise stems from the Bayer pattern on the image sensor. (e) Impact of regularization weight on resolution using no regularization, CN, and Sp methods with a 1951 USAF resolution target. Horizontal elements and vertical elements are characterized separately. Resolution is determined visually, so a possible deviation of one element should be considered. Resolution does not depend on κ for the None method.


3.3 How sparse is sparse?

Our sparse-reconstruction (Sp) regularization method takes advantage of sparsity in a sample. In Section 3.1 we showed that the Sp method can enhance the SNR for sparse samples, while in Section 3.2 we showed that it can degrade resolution with dense samples. In this section, we investigate how sparse is sparse: at what level of surface coverage does the performance of the Sp method degrade? Five different surface coverage levels of 5-µm diameter PS microspheres are used to investigate the restrictions of the Sp method, based on five different initial solution concentrations (see Section 2.1): 0.1%, 0.33%, 1%, 3.33%, and 10%. As the sample surface coverage increases, the APSNR of the 5-µm diameter PS microspheres decreases for both the CN and Sp regularization methods, as shown in Figs. 6(a) and 6(b). This decrease in APSNR for both methods can be explained by the breakdown of the standard assumption in in-line holography that the wave scattered by the object is much weaker than the reference wave transmitted unperturbed through the sample.


Fig. 6 Effect of sample concentration on APSNR performance for 5-µm diameter microspheres. The regularization methods used are (a) the CN method and (b) the Sp method.


The HR images reconstructed using different regularization methods can be compared to images captured by a 40X objective with 0.75 NA to verify the locations and sizes of the objects, as shown in Fig. 7. Specifically, for the 3.33% volumetric concentration, which corresponds to a surface coverage of 3.85%, the Sp method starts to fail to preserve the shape information for the clusters of the 5-µm beads, although APSNR enhancement is still observed. In general, we find that samples with surface coverage larger than ~4% can be considered dense and the ability of Sp to preserve important features degrades.


Fig. 7 Reconstructed SNR images of moderately sparse samples of 5-µm beads using different regularization methods. The reconstructions are shown using the regularization weight that results in optimum APSNR for that sample. The images are normalized based on their corresponding minimum and maximum values so that they range from 0 to 1. The four rows correspond to varying microsphere concentrations. All scale bars are 25 µm.


3.4 A guide on choosing regularization methods

Combining the results from Sections 3.1–3.3, a general guide on choosing regularization methods for different sample types is shown in Fig. 8. For sparse samples with ~1 µm target size, sparse-reconstruction with κ ~ 1 is recommended. For sparse samples with a target size of a few microns, sparse-reconstruction with κ ~ 10 would be a good starting point. For samples with surface coverage above 4%, cardinal-neighbor with κ ~ 100 is recommended. Note that the upper range of this figure (18%) is the approximate coverage of the 1951 USAF resolution target. For samples with unknown feature size or surface coverage, we recommend starting with cardinal-neighbor with κ ~ 100.


Fig. 8 Recommended reconstruction methods for different sample types.


4. Conclusion

Sparsity is often encountered in small-target low-concentration bio-sensing. This a priori knowledge can be used to enhance SNR. For samples with small-target surface coverage below 4%, sparse-reconstruction regularization performs the best at enhancing SNR in general and can enhance SNR by ~8 dB compared to no regularization and cardinal-neighbor regularization. For denser samples, sparse-reconstruction regularization can enhance the SNR of some small-targets while sacrificing resolution and potentially losing some important features of the sample.

Appendix A Rapid computation of cost functions

The three cost functions used in this Article are defined in terms of finite summations, which are in some cases equivalent to matrix multiplications. While mathematically accurate, these are not the most computationally efficient approaches to computing the cost functions. For example, the no-regularization cost function in Eq. (2) includes the term $\sum_{r=1}^{N} w_{m,r}\, z_r$, which represents the $m$th projected low-resolution (LR) pixel from the high-resolution (HR) hologram. Rather than individually computing a large $N \times pM$ matrix of weights $w_{m,r}$ and multiplying by $z$, we recognize that there are at most $\sqrt{N/M} + 1$ unique weights for each shift direction (horizontal and vertical) for each LR image, where the $+1$ accounts for non-integer pixel shifts. Hence, to compute the projection from the HR image to the LR image, we compute this relatively small number of unique weights, and then the weighted superposition of the corresponding sets of pixels from the HR image. Finally, the unused red and blue pixels in the LR Bayer pattern are set to zero.

The cardinal-neighbor (CN) cost function is shown in Eq. (3). Rather than compute an $N \times N$ matrix of weights $\alpha_{u,v}$, we compute the inner summation as a convolution between the HR image and the kernel
\[ \begin{pmatrix} 0 & -1/4 & 0 \\ -1/4 & 1 & -1/4 \\ 0 & -1/4 & 0 \end{pmatrix}. \tag{11} \]
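The convolution form of the CN penalty can be sketched as follows, using circular shifts for boundary handling (a simplification we assume here for brevity):

```python
import numpy as np

def cn_penalty(z, kappa):
    """CN regularization term of Eq. (3), computed by convolving the HR
    image with the cardinal-neighbor kernel instead of forming an N x N
    weight matrix. Circular boundary handling is assumed."""
    lap = z - 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
    return 0.5 * kappa * float(np.sum(lap ** 2))
```

Each HR pixel is compared to the average of its four cardinal neighbors, so a perfectly smooth (constant) image incurs zero penalty while random pixel-to-pixel fluctuations are penalized.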
For sparse-reconstruction (Sp), the cost function is given in Eq. (5) and depends on the $\ell_1$ norm of the vector
\[ h(z) = |P(z;-d)| - B^{2}, \tag{12} \]
where $|P(z;-d)|$ denotes the amplitude of the complex reconstructed (back-propagated) HR image, with elements given by
\[ |P_u(z;-d)| = \left[ P_u(z;-d)\, P_u^{*}(z;-d) \right]^{1/2}, \tag{13} \]
and the background brightness of the reconstructed field can be computed from the mode of the pixel value histogram:
\[ B^{2} = \operatorname{mode}\{ |P(z;-d)| \}. \tag{14} \]
As given in [53], $P(z;-d)$ can be expressed in terms of Fourier transforms as:
\[ P(z;-d) = F_{2D}^{-1}\left\{ F_{2D}\{z\}\, H(\xi,\eta;-d) \right\}, \tag{15} \]
where $H(\xi,\eta;d)$ is shown in Eq. (9). We compute the 2D Fourier transforms using FFTs.

Appendix B Analytical derivation of sparse cost function gradient

In [33], the cost function gradients for no regularization and CN regularization were derived, and our approaches in their computation are based on image projections and convolutions, similar to those described in Appendix A. For the sparse-reconstruction regularization method, the cost function gradient with respect to the kth HR pixel can be expressed as:

\[ g_k(z) = \frac{\partial C_{\text{Sp}}(z)}{\partial z_k} = \frac{\partial C_{\text{None}}(z)}{\partial z_k} + \kappa \sum_{u=1}^{N} \frac{\partial |h_u(z)|}{\partial z_k}. \tag{16} \]
Below, we analytically derive the last term in Eq. (16), which eliminates the need for computing finite-difference approximations to the gradient when using gradient descent optimization, significantly enhancing computational speed and accuracy.

We begin by establishing the following Fourier transform identities, which will be helpful. They are presented here in terms of continuous transforms, but also hold true for the discrete transforms used in computation.

\[ \left[ F_{2D}^{-1}\{V(x,y)\}(\xi,\eta) \right]^{*} = F_{2D}\{V^{*}(x,y)\}(\xi,\eta). \tag{17} \]
\[ \left[ F_{2D}^{-1}\{U(\xi,\eta)\}(x,y) \right]^{*} = F_{2D}^{-1}\{U^{*}(\xi,\eta)\}(-x,-y). \tag{18} \]
\[ F_{2D}^{-1}\{U(-\xi,-\eta)\}(x,y) = F_{2D}^{-1}\{U(\xi,\eta)\}(-x,-y). \tag{19} \]
In the following, when there are no variables given after a transform, it is assumed that the natural variables are used: (ξ,η) for forward Fourier transforms, and (x,y) for inverse Fourier transforms. Using Eq. (15), (17), (18), and (19); the knowledge that z is composed of real values because it corresponds to a hologram intensity; and that H(ξ,η;±d) is an even function in the frequency domain (ξ,η), then P*(z;d) can be expressed as:
P*(z;d)=[F2D1{z}H(ξ,η;d)]*=F2D1{F2D{z}(ξ,η)H(ξ,η;d)}(x,y)=P(z;d).
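Because this conjugation identity underpins the rest of the derivation, it is easy to sanity-check numerically. The sketch below is an assumption-laden toy (unit-free pitch, wavelength, and distance chosen so that all sampled frequencies propagate); it verifies that conjugating the back-propagated field equals forward propagation, exploiting the evenness of $H$ on the FFT frequency grid.

```python
import numpy as np

def transfer(shape, pitch, lam, dist):
    """Even angular-spectrum transfer function H(xi, eta; d) on the FFT grid."""
    fy = np.fft.fftfreq(shape[0], d=pitch)
    fx = np.fft.fftfreq(shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / lam**2 - FX**2 - FY**2
    out = np.zeros(shape, dtype=complex)
    out[arg > 0] = np.exp(1j * 2 * np.pi * dist * np.sqrt(arg[arg > 0]))
    return out

rng = np.random.default_rng(0)
z = rng.random((64, 64))          # real-valued hologram intensity
pitch, lam, d = 1.0, 0.5, 250.0   # illustrative units (e.g., microns)

# P(z; -d) and P(z; +d) via Eq. (15)
P_minus = np.fft.ifft2(np.fft.fft2(z) * transfer(z.shape, pitch, lam, -d))
P_plus = np.fft.ifft2(np.fft.fft2(z) * transfer(z.shape, pitch, lam, +d))

# Eq. (20): for real z and even H, conjugating the back-propagation
# equals forward propagation.
assert np.allclose(np.conj(P_minus), P_plus)
```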
The pixel values in the back-propagated amplitude image of Eq. (13) can now be expressed as

$$|P_u(z;-d)| = \left[P_u(z;-d)\,P_u(z;d)\right]^{1/2}. \tag{21}$$

The derivative in the last term of Eq. (16) can then be expanded as

$$\frac{\partial |h_u(z)|}{\partial z_k} = \mathrm{sign}(h_u(z))\,\frac{\partial h_u(z)}{\partial z_k} = \mathrm{sign}(h_u(z))\,\frac{\partial}{\partial z_k}\left[|P_u(z;-d)| - B^2\right] = \mathrm{sign}(h_u(z))\,\frac{P_u(z;-d)\,\dfrac{\partial P_u(z;d)}{\partial z_k} + P_u(z;d)\,\dfrac{\partial P_u(z;-d)}{\partial z_k}}{2\left[P_u(z;-d)\,P_u(z;d)\right]^{1/2}}. \tag{22}$$

To compute Eq. (22), we need to compute $\partial P_u(z;-d)/\partial z_k$ and $\partial P_u(z;d)/\partial z_k$. The former can be expanded as

$$\frac{\partial P_u(z;-d)}{\partial z_k} = \frac{\partial}{\partial z_k}\left[\mathcal{F}_{2D}^{-1}\{\mathcal{F}_{2D}\{z\}\,H(\xi,\eta;-d)\}\right](x_u,y_u) = \sum_{a=1}^{N}\sum_{b=1}^{N}\delta_{kb}\,e^{-i2\pi(x_b\xi_a + y_b\eta_a)}\,H(\xi_a,\eta_a;-d)\,e^{i2\pi(x_u\xi_a + y_u\eta_a)} = \sum_{a=1}^{N} e^{-i2\pi(x_k\xi_a + y_k\eta_a)}\,H(\xi_a,\eta_a;-d)\,e^{i2\pi(x_u\xi_a + y_u\eta_a)} = \mathcal{F}_{2D}^{-1}\{H(\xi,\eta;-d)\}(x_u - x_k,\, y_u - y_k), \tag{23}$$

which is a shifted inverse Fourier transform of $H(\xi,\eta;-d)$. Because $H(\xi,\eta;\pm d)$ is even in the frequency coordinates $(\xi,\eta)$, we can also write this result as

$$\frac{\partial P_u(z;\pm d)}{\partial z_k} = \mathcal{F}_{2D}^{-1}\{H(\xi,\eta;\pm d)\}(x_k - x_u,\, y_k - y_u). \tag{24}$$
Combining Eqs. (22) and (24), the last term in Eq. (16) can be expanded as

$$\kappa\sum_{u=1}^{N}\frac{\partial|h_u(z)|}{\partial z_k} = \kappa\sum_{u=1}^{N}\frac{\mathrm{sign}(h_u(z))}{2\left[P_u(z;-d)\,P_u(z;d)\right]^{1/2}}\Big[P_u(z;-d)\,\mathcal{F}_{2D}^{-1}\{H(\xi,\eta;d)\}(x_k-x_u,\,y_k-y_u) + P_u(z;d)\,\mathcal{F}_{2D}^{-1}\{H(\xi,\eta;-d)\}(x_k-x_u,\,y_k-y_u)\Big]$$

$$= \frac{\kappa}{2}\Bigg[\sum_{u=1}^{N}\mathrm{sign}(h_u(z))\,\frac{P_u(z;-d)}{|P_u(z;-d)|}\,\mathcal{F}_{2D}^{-1}\{H(\xi,\eta;d)\}(x_k-x_u,\,y_k-y_u) + \sum_{u=1}^{N}\mathrm{sign}(h_u(z))\,\frac{P_u(z;d)}{|P_u(z;-d)|}\,\mathcal{F}_{2D}^{-1}\{H(\xi,\eta;-d)\}(x_k-x_u,\,y_k-y_u)\Bigg]$$

$$= \frac{\kappa}{2}\Big[\mathrm{sign}(h(z))\,e^{\,i\arg(P(z;-d))} ** \mathcal{F}_{2D}^{-1}\{H(\xi,\eta;d)\} + \mathrm{sign}(h(z))\,e^{-i\arg(P(z;-d))} ** \mathcal{F}_{2D}^{-1}\{H(\xi,\eta;-d)\}\Big], \tag{25}$$

where $\arg(\cdot)$ is the complex argument corresponding to a "phase image," $**$ represents 2D convolution, and the multiplication between the vectors is performed element-wise. Applying the convolution theorem to Eq. (25),

$$\kappa\sum_{u=1}^{N}\frac{\partial|h_u(z)|}{\partial z_k} = \frac{\kappa}{2}\,\mathcal{F}_{2D}^{-1}\Big\{\mathcal{F}_{2D}\big\{\mathrm{sign}(h(z))\,e^{\,i\arg(P(z;-d))}\big\}\,H(\xi,\eta;d) + \mathcal{F}_{2D}\big\{\mathrm{sign}(h(z))\,e^{-i\arg(P(z;-d))}\big\}\,H(\xi,\eta;-d)\Big\}. \tag{26}$$
Using Eqs. (12), (15), and (26), the cost function gradient for the Sp method in Eq. (16) can be computed efficiently with FFTs.
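As a sanity check of Eq. (26), the analytic gradient of the sparse term can be compared against centered finite differences. The sketch below is a hypothetical NumPy implementation: the pitch, wavelength, distance, and $\kappa$ are illustrative, and a fixed background level stands in for the histogram mode $B^2$ (the derivation treats $B^2$ as constant with respect to $z$).

```python
import numpy as np

def transfer(shape, pitch, lam, dist):
    """Even angular-spectrum transfer function H(xi, eta; d) on the FFT grid."""
    fy = np.fft.fftfreq(shape[0], d=pitch)
    fx = np.fft.fftfreq(shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / lam**2 - FX**2 - FY**2
    out = np.zeros(shape, dtype=complex)
    out[arg > 0] = np.exp(1j * 2 * np.pi * dist * np.sqrt(arg[arg > 0]))
    return out

def sparse_grad(z, Hm, Hp, B2, kappa):
    """Eq. (26): analytic gradient of kappa * sum_u |h_u(z)| via FFTs."""
    P = np.fft.ifft2(np.fft.fft2(z) * Hm)   # P(z; -d), Eq. (15)
    s = np.sign(np.abs(P) - B2)             # sign(h(z))
    ph = np.exp(1j * np.angle(P))           # exp(i arg P(z; -d))
    g = np.fft.ifft2(np.fft.fft2(s * ph) * Hp + np.fft.fft2(s * np.conj(ph)) * Hm)
    return (kappa / 2.0) * g.real           # gradient w.r.t. real-valued z

rng = np.random.default_rng(1)
z = rng.random((16, 16))
pitch, lam, d, kappa = 1.0, 0.5, 50.0, 0.1
B2 = 0.0  # fixed background level for this toy (held constant, as in the derivation)
Hm, Hp = transfer(z.shape, pitch, lam, -d), transfer(z.shape, pitch, lam, +d)

g = sparse_grad(z, Hm, Hp, B2, kappa)

# Centered finite differences on the sparse cost term reproduce Eq. (26).
cost = lambda zz: kappa * np.sum(np.abs(
    np.abs(np.fft.ifft2(np.fft.fft2(zz) * Hm)) - B2))
eps = 1e-6
for (i, j) in [(0, 0), (3, 7), (10, 5)]:
    zp, zm = z.copy(), z.copy()
    zp[i, j] += eps
    zm[i, j] -= eps
    assert abs((cost(zp) - cost(zm)) / (2 * eps) - g[i, j]) < 1e-5
```

The payoff is the cost: the analytic form uses a constant number of FFTs per gradient evaluation, whereas finite differences would require one reconstruction per HR pixel.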

Funding

This work was supported by the University of Arizona and the Arizona Technology and Research Initiative Fund.

References

1. P. Mehrotra, “Biosensors and their applications - A review,” J. Oral Biol. Craniofac. Res. 6(2), 153–159 (2016). [CrossRef]   [PubMed]  

2. O. Mudanyali, C. Oztoprak, D. Tseng, A. Erlinger, and A. Ozcan, “Detection of waterborne parasites using field-portable and cost-effective lensfree microscopy,” Lab Chip 10(18), 2419–2423 (2010). [CrossRef]   [PubMed]  

3. L. D. Mello and L. T. Kubota, “Review of the use of biosensors as analytical tools in the food and drink industries,” Food Chem. 77(2), 237–256 (2002). [CrossRef]  

4. J. Musayev, C. Altiner, Y. Adiguzel, H. Kulah, S. Eminoglu, and T. Akin, “Capturing and detection of MCF-7 breast cancer cells with a CMOS image sensor,” Sens. Actuators A Phys. 215, 105–114 (2014). [CrossRef]  

5. T. Saeki, M. Hosokawa, T. K. Lim, M. Harada, T. Matsunaga, and T. Tanaka, “Digital cell counting device integrated with a single-cell array,” PLoS One 9(2), e89011 (2014). [CrossRef]   [PubMed]  

6. T. Tanaka, Y. Sunaga, K. Hatakeyama, and T. Matsunaga, “Single-cell detection using a thin film transistor photosensor with micro-partitions,” Lab Chip 10(24), 3348–3354 (2010). [CrossRef]   [PubMed]  

7. Q. Wei, E. McLeod, H. Qi, Z. Wan, R. Sun, and A. Ozcan, “On-chip cytometry using plasmonic nanoparticle enhanced lensfree holography,” Sci. Rep. 3(1), 1699 (2013). [CrossRef]   [PubMed]  

8. G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010). [CrossRef]   [PubMed]  

9. C. P. Allier, G. Hiernard, V. Poher, and J. M. Dinten, “Bacteria detection with thin wetting film lensless imaging,” Biomed. Opt. Express 1(3), 762–770 (2010). [CrossRef]   [PubMed]  

10. O. Mudanyali, E. McLeod, W. Luo, A. Greenbaum, A. F. Coskun, Y. Hennequin, C. P. Allier, and A. Ozcan, “Wide-field optical detection of nanoparticles using on-chip microscopy and self-assembled nanolenses,” Nat. Photonics 7(3), 247–254 (2013). [CrossRef]   [PubMed]  

11. E. McLeod, T. U. Dincer, M. Veli, Y. N. Ertas, C. Nguyen, W. Luo, A. Greenbaum, A. Feizi, and A. Ozcan, “High-throughput and label-free single nanoparticle sizing based on time-resolved on-chip microscopy,” ACS Nano 9(3), 3265–3273 (2015). [CrossRef]   [PubMed]  

12. A. P. Reddington, J. T. Trueb, D. S. Freedman, A. Tuysuzoglu, G. G. Daaboul, C. A. Lopez, W. C. Karl, J. H. Connor, H. Fawcett, and M. S. Ünlu, “An interferometric reflectance imaging sensor for point of care viral diagnostics,” IEEE Trans. Biomed. Eng. 60(12), 3276–3283 (2013). [CrossRef]   [PubMed]  

13. G. G. Daaboul, A. Yurt, X. Zhang, G. M. Hwang, B. B. Goldberg, and M. S. Ünlü, “High-throughput detection and sizing of individual low-index nanoparticles and viruses for pathogen identification,” Nano Lett. 10(11), 4727–4731 (2010). [CrossRef]   [PubMed]  

14. A. Ray, M. U. Daloglu, J. Ho, A. Torres, E. Mcleod, and A. Ozcan, “Computational sensing of herpes simplex virus using a cost-effective on-chip microscope,” Sci. Rep. 7(1), 4856 (2017). [CrossRef]   [PubMed]  

15. J. Su, “Reply to “Comment on ‘Label-free single exosome detection using frequency-locked microtoroid optical resonators,’”,” ACS Photonics 3(4), 718 (2016). [CrossRef]  

16. Y. Bourquin, J. Reboud, R. Wilson, Y. Zhang, and J. M. Cooper, “Integrated immunoassay using tuneable surface acoustic waves and lensfree detection,” Lab Chip 11(16), 2725–2730 (2011). [CrossRef]   [PubMed]  

17. J. Su, A. F. Goldberg, and B. M. Stoltz, “Label-free detection of single nanoparticles and biological molecules using microtoroid optical resonators,” Light Sci. Appl. 5(1), e16001 (2016). [CrossRef]   [PubMed]  

18. M. R. Monroe, G. G. Daaboul, A. Tuysuzoglu, C. A. Lopez, F. F. Little, and M. S. Unlü, “Single nanoparticle detection for multiplexed protein diagnostics with attomolar sensitivity in serum and unprocessed whole blood,” Anal. Chem. 85(7), 3698–3706 (2013). [CrossRef]   [PubMed]  

19. F. Colle, D. Vercruysse, S. Peeters, C. Liu, T. Stakenborg, L. Lagae, and J. Del-Favero, “Lens-free imaging of magnetic particles in DNA assays,” Lab Chip 13(21), 4257–4262 (2013). [CrossRef]   [PubMed]  

20. Q. Wei, W. Luo, S. Chiang, T. Kappel, C. Mejia, D. Tseng, R. Y. L. Chan, E. Yan, H. Qi, F. Shabbir, H. Ozkan, S. Feng, and A. Ozcan, “Imaging and sizing of single DNA molecules on a mobile phone,” ACS Nano 8(12), 12725–12733 (2014). [CrossRef]   [PubMed]  

21. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18(1), 77–102 (2016). [CrossRef]   [PubMed]  

22. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef]   [PubMed]  

23. W. Bishara, U. Sikora, O. Mudanyali, T.-W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11(7), 1276–1279 (2011). [CrossRef]   [PubMed]  

24. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef]   [PubMed]  

25. E. McLeod, W. Luo, O. Mudanyali, A. Greenbaum, and A. Ozcan, “Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses,” Lab Chip 13(11), 2028–2035 (2013). [CrossRef]   [PubMed]  

26. O. Mudanyali, W. Bishara, and A. Ozcan, “Lensfree super-resolution holographic microscopy using wetting films on a chip,” Opt. Express 19(18), 17378–17389 (2011). [CrossRef]   [PubMed]  

27. Y. Hennequin, C. P. Allier, E. McLeod, O. Mudanyali, D. Migliozzi, A. Ozcan, and J.-M. Dinten, “Optical detection and sizing of single nanoparticles using continuous wetting films,” ACS Nano 7(9), 7601–7609 (2013). [CrossRef]   [PubMed]  

28. E. McLeod, C. Nguyen, P. Huang, W. Luo, M. Veli, and A. Ozcan, “Tunable vapor-condensed nanolenses,” ACS Nano 8(7), 7340–7349 (2014). [CrossRef]   [PubMed]  

29. E. McLeod and A. Ozcan, “Nano-imaging enabled via self-assembly,” Nano Today 9(5), 560–573 (2014). [CrossRef]   [PubMed]  

30. Z. Göröcs, E. McLeod, and A. Ozcan, “Enhanced light collection in fluorescence microscopy using self-assembled micro-reflectors,” Sci. Rep. 5(1), 10999 (2015). [CrossRef]   [PubMed]  

31. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]  

32. C. Fournier, F. Jolivet, L. Denis, N. Verrier, E. Thiebaut, C. Allier, and T. Fournel, “Pixel super-resolution in digital holography by regularized reconstruction,” Appl. Opt. 56(1), 69–77 (2017).

33. R. Hardie, K. Barnard, J. Bognar, E. Armstrong, and E. Watson, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Opt. Eng. 37(1), 247–261 (1998). [CrossRef]  

34. S. Sotthivirat and J. A. Fessler, “Penalized-likelihood image reconstruction for digital holography,” J. Opt. Soc. Am. A 21(5), 737–750 (2004). [CrossRef]   [PubMed]  

35. N. K. Bose, S. Lertrattanapanich, and J. Koo, “Advances in superresolution using L-curve,” in ISCAS 2001. The 2001 IEEE International Symposium on Circuits and Systems (Cat. No.01CH37196), vol. 2, pp. 433–436 (2001) [CrossRef]  

36. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6(10), 506–509 (2010). [CrossRef]  

37. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17(15), 13040–13049 (2009). [CrossRef]   [PubMed]  

38. Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6(1), 37862 (2016). [CrossRef]   [PubMed]  

39. S. Bettens, H. Yan, D. Blinder, H. Ottevaere, C. Schretter, and P. Schelkens, “Studies on the sparsifying operator in compressive digital holography,” Opt. Express 25(16), 18656–18676 (2017). [CrossRef]   [PubMed]  

40. F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J.-L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923–8940 (2018). [CrossRef]   [PubMed]  

41. Y. Rivenson, A. Rot, S. Balber, A. Stern, and J. Rosen, “Recovery of partially occluded objects by applying compressive Fresnel holography,” Opt. Lett. 37(10), 1757–1759 (2012). [CrossRef]   [PubMed]  

42. L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, and D. Trede, “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. 34(22), 3475–3477 (2009). [CrossRef]   [PubMed]  

43. J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6(1), 24681 (2016). [CrossRef]   [PubMed]  

44. E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]  

45. Y. Rivenson, A. Stern, and J. Rosen, “Compressive multiple view projection incoherent holography,” Opt. Express 19(7), 6109–6118 (2011). [CrossRef]   [PubMed]  

46. Y. Rivenson, A. Stern, and B. Javidi, “Overview of compressive sensing techniques applied in holography [Invited],” Appl. Opt. 52(1), A423–A432 (2013). [CrossRef]   [PubMed]  

47. M. Elad, M. A. T. Figueiredo, and Y. Ma, “On the role of sparse and redundant representations in image processing,” Proc. IEEE 98(6), 972–982 (2010). [CrossRef]  

48. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]  

49. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006). [CrossRef]  

50. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

51. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

52. A. F. Coskun, I. Sencan, T.-W. Su, and A. Ozcan, “Lensless wide-field fluorescent imaging on a chip using compressive decoding of sparse objects,” Opt. Express 18(10), 10510–10523 (2010). [CrossRef]   [PubMed]  

53. E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79(7), 076001 (2016). [CrossRef]   [PubMed]  

54. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University, 2004).

55. Z. Xiong, I. Engle, J. Garan, J. E. Melzer, and E. McLeod, “Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy,” in Optical Diagnostics and Sensing XVIII: Toward Point-of-Care Diagnostics (International Society for Optics and Photonics, 2018), Vol. 10501, p. 105010E.

56. S. O. Isikman, I. Sencan, O. Mudanyali, W. Bishara, C. Oztoprak, and A. Ozcan, “Color and monochrome lensless on-chip imaging of Caenorhabditis elegans over a wide field-of-view,” Lab Chip 10(9), 1109–1112 (2010). [CrossRef]   [PubMed]  



Figures (8)

Fig. 1. LFHM schematic. A 4x4 LED array is used to sequentially illuminate a sample from slightly different angles, and a bandpass filter is inserted between the LED array and the sample to improve the temporal coherence of the system. The sample is inserted ~30 cm above the LED array and d ≈ 800 µm below the CMOS sensor.

Fig. 2. Flowchart of the regularized PSR process for synthesizing and reconstructing an HR hologram from multiple LR hologram frames.

Fig. 3. Effect of regularization on average peak SNR (APSNR). APSNR is independent of κ for no regularization (None). The sample concentrations for the three different bead sizes are (a) 0.33% for the 5-µm beads, (b) 0.1% for the 2-µm beads, and (c) 1% for the 1-µm beads. Error bars (and the gray band) equal +/− one standard deviation in PSNR for all beads measured in each sample. Because APSNR is shown on a logarithmic scale, a few error bars in (c) extend beyond the plot.

Fig. 4. Reconstructed SNR images of sparse samples using different regularization methods. From left to right, the four columns correspond to a conventional microscope image, holographic reconstruction with no regularization, the CN method, and the Sp method. Each reconstruction was generated using its corresponding optimum regularization weight κ, and the images are linearly scaled so that the minimum value in each image is zero and the maximum value is one. The three rows correspond to (a) a 0.33% 5-µm-diameter PS microsphere sample with reconstruction distance d = 802 µm, (b) a 0.1% 2-µm-diameter PS microsphere sample with d = 781 µm, and (c) a 1% 1-µm-diameter PS microsphere sample with d = 701 µm. All scale bars are 25 µm.

Fig. 5. Regularization methods and resolution. (a-d) Resolution performance of different regularization methods compared to an image captured with a conventional 100X objective for a 1951 USAF resolution target. The patterned nature of the background noise stems from the Bayer pattern on the image sensor. (e) Impact of regularization weight on resolution using the None, CN, and Sp methods with a 1951 USAF resolution target. Horizontal elements and vertical elements are characterized separately. Resolution is determined visually, so a possible deviation of one element should be considered. Resolution does not depend on κ for the None method.

Fig. 6. Effect of sample concentration on APSNR performance for 5-µm-diameter microspheres. The regularization methods used are (a) the CN method and (b) the Sp method.

Fig. 7. Reconstructed SNR images of moderately sparse samples of 5-µm beads using different regularization methods. The reconstructions are shown using the regularization weight that yields the optimum APSNR for each sample. The images are normalized to their corresponding minimum and maximum values so that they range from 0 to 1. The four rows correspond to varying microsphere concentrations. All scale bars are 25 µm.

Fig. 8. Recommended reconstruction methods for different sample types.
