
Optical-tweezing-based linear-optics nanoscopy

Open Access

Abstract

Previous works reported that linear optics could be used to observe sub-wavelength features with a conventional optical microscope. Yet, the ability to reach a sub-200 nm resolution with visible light remains limited. We present a novel, widely applicable method in which particle trapping is employed to overcome this limit. The combination of the light scattered by the sample and by the trapped particles encodes super-resolution information, which we decode by post-acquisition image processing, with the trapped particle locations predetermined. As a first proof of concept, our method successfully resolved characteristic sample features down to 100 nm. Improved performance is achieved when the fluorescence of the trapped particles is employed. Further improvement may be attained with trapped particles of a smaller size.

© 2016 Optical Society of America

1. Introduction

In recent decades, multiple methods have been developed in an effort to break Abbe’s classical imaging resolution limit [1]. The limiting resolution, according to Abbe, is R = λ/(2NA), where NA is the numerical aperture of the lens and λ is the illumination wavelength. It was realized early on that the Abbe limit can be overcome by information-theory methods [2]. In particular, the resolution of an optical system can be enhanced by encoding the spatial information into other degrees of freedom, such as the time domain [3]. In these methods, the data are encoded, then transmitted and collected using the classical Abbe-limited propagating waves; the spatial information is then decoded to achieve super-resolution [4–6].
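As a concrete illustration of the scale set by this limit (a minimal sketch; the 514 nm wavelength and NA = 1.4 correspond to the imaging configuration of Section 2.1), the Abbe resolution can be evaluated directly:

```python
# Abbe resolution limit R = lambda / (2 * NA).
# 514 nm and NA = 1.4 match the confocal configuration of Section 2.1;
# any other wavelength / numerical-aperture pair can be substituted.
wavelength_nm = 514.0
numerical_aperture = 1.4

abbe_limit_nm = wavelength_nm / (2.0 * numerical_aperture)
print(f"Abbe limit: {abbe_limit_nm:.0f} nm")  # ~184 nm, well above the ~100 nm features targeted here
```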

While a major improvement in resolution has been achieved by increasing the effective NA in techniques such as the 4Pi and the I5M [7,8], folding the spatial information into either the time or the wavelength domain served as the basis for the structured illumination approaches [9–11]. Importantly, these methods, as well as the techniques employing a synthetic aperture, such as super-resolution lens-less imaging [12–14], are still limited by diffraction. Therefore, further enhancement of the resolution is hard to achieve. Additional progress has been attained by employing non-linear labels [15], as in stimulated emission depletion (STED) [16], ground state depletion (GSD) microscopy [17], saturated patterned excitation microscopy (SPEM) [18], photoactivated localization microscopy (PALM) [19], and stochastic optical reconstruction microscopy (STORM) [20], where fluorophores switched on and off sequentially in time allow resolutions on the scale of tens of nanometers, and even better [21], to be achieved.

Very recently, a different approach, which does not require switchable-state fluorophores, yet at the same time is not limited by the aperture size, has been developed [22,23] and employed for super-resolution microscopy [24]. In this approach, subwavelength nanoparticles are suspended next to the object to improve the resolution. The electromagnetic field originating from each lateral spatial frequency of a sample propagates in free space with a different phase argument. High spatial frequency features that surpass the diffraction limit correspond to an imaginary propagation argument, which leads to an exponential decay over a distance of the order of the illuminating wavelength. These evanescent waves can be detected by a sensor located sufficiently close to the sample, as done in near-field scanning optical microscopy (NSOM) [25]. Unlike in NSOM, where this goal is achieved by an ultra-sharp tip in the vicinity of the object, in the work of Gur et al. [24] those waves are encoded into the non-decaying light scattered by the nanoparticles, which undergo Brownian motion close enough to the high frequency features. Thus, the encoding, coupled to the Abbe-limited propagating waves, can be detected by a far-field imaging system. Knowing the precise shape and location of the nanoparticles, the subwavelength features can be decoded and resolved.
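To make the decay argument explicit, consider the standard angular-spectrum relation (a textbook expression, included here only for clarity): a field component with lateral spatial frequency $k_x$ and free-space wavenumber $k_0 = 2\pi/\lambda$ propagates as

$$ E(x,z) \propto e^{i k_x x}\, e^{i k_z z}, \qquad k_z = \sqrt{k_0^2 - k_x^2}. $$

For sub-diffraction features, $k_x > k_0$, so $k_z = i\sqrt{k_x^2 - k_0^2}$ and the wave decays as $e^{-z\sqrt{k_x^2 - k_0^2}}$, i.e. within a distance of the order of the wavelength or less from the sample.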

Importantly, Gur et al. [24] employed freely diffusing nanoparticles, which scan the sample by passive Brownian motion. Scanning the sample by random motion requires excessively long measurement times. This fact limits the time resolution and dramatically increases the chances of radiation damage, with the sample exposed to illumination for an extended period of time. While increasing the spatial density of nanoparticles may solve some of these issues, it significantly increases the complexity of the subsequent analysis, at the stage of information decoding. Also, an increased concentration of nanoparticles may have an adverse effect on some of the samples, particularly in biology. Moreover, for subwavelength resolution to be achieved, the nanoparticles have to be at a distance of less than the illumination wavelength from the sample to encode the evanescent waves; this condition is only rarely met by a nanoparticle undergoing free diffusion. Finally, particle motion next to the surface of the sample is disturbed by hydrodynamic phenomena, as well as by particle-surface interactions; as a result, the particle, in most cases, would either irreversibly stick to the sample or be repelled away from its surface. These facts significantly limit the applicability of the passive-diffusion method.

In this paper, we demonstrate that scanning the sample with optically-trapped nanoparticles allows the outlined limitations to be overcome, achieving resolutions at the scale of 100 nm and better, well beyond the Abbe limit. In our setup, a confocal microscope is combined with holographic optical tweezers (HOT), which trap individual 100 nm nanoparticles and move them across the sample. Optical tweezers (OT) were previously used to scan the topography of solid samples with an optically-trapped bead, providing an alternative to classical atomic force microscopy (AFM) and reaching relief resolutions beyond the diffraction limit [26]. OT have also been used in the past to overcome camera digitization issues [27]. Nevertheless, no attempt has been made to employ active nanoparticle scanning of samples to achieve optical imaging with resolutions beyond the Abbe limit. The HOT capability to control particle positions in three dimensions [28] allows controllable scanning of the sample, at particle-sample distances [29] smaller than the optical wavelength. With such scanning, the acquisition time of the present super-resolution method is considerably reduced. Here we also exploit another advantage of HOT scanning: unlike other, more complex hardware solutions, it employs multiple beams in parallel through simple programming; this fact reduces the imaging time even further. An additional reduction of the imaging times and an improvement of the signal-to-noise ratio may be achieved in the future by optimizing the beam configuration [30].

Furthermore, the scan geometry in our method is adjustable in real time, to best fit the dimensions and the size of the object at hand, allowing the beam damage to sensitive samples to be minimized. Finally, OT are widely used in parallel with optical microscopy for particle manipulation [31], so that our super-resolution technique does not necessarily require the construction of a specialized new setup, making it affordable and widely applicable [32–34].

2. Materials and methods

2.1 Experimental setup

The experimental system is illustrated in Fig. 1. The green dashed line describes the optical imaging path, where a confocal microscope (Nikon A1R, mounted on a Nikon Ti-E frame) scans the sample with a 514 nm laser (OBIS, Coherent), employing a 100X oil-immersed objective (Nikon Plan Apo 100x/1.40 or E Plan 100x/1.25). All data were acquired in the unidirectional resonant scanning mode, at a frame rate of 15 fps. For alignment purposes, a CCD camera (Nikon DS-fi1) was used, mounted on a different port of the Ti-E; in principle, a similar setup could be used for non-confocal super-resolution imaging, provided that bright field illumination is present. The red dashed line describes the trapping path, where a 1064 nm beam from a CW laser source (MATRIX 1064 CW, Coherent, Santa Clara, CA) passes through a half-wave plate, adjusting its polarization direction with respect to the spatial light modulator (SLM, Hamamatsu X10468-03). The waist of the beam is adjusted with a beam expander to best fit the 12 × 16 mm² LCD panel of the SLM. The SLM modulates the phase according to a computer-generated hologram (CGH). An additional beam expander scales the beam waist to fill an oil-immersed objective (Nikon Plan Apo λ 100x/1.45), focusing the beam on the sample. The phase-modulated beam forms multiple optical trap foci, with their positions predetermined [31] by the CGH. During data acquisition, the sample can be moved, where necessary, by a motorized stage (NEWPORT MFA-PPD). The HOT locations, the sample position, and all the steps of image acquisition are automated and mutually coordinated.


Fig. 1 Experimental system. Imaging is done from the upper side using a confocal microscope. OT is done from below using a 1064 nm laser, an SLM, and an inverted objective.


Data collection by the confocal microscope was carried out in the reflectance mode, with no optical wavelength filter inserted. In addition, a combination of the reflectance and fluorescence modes, where an optical filter enables one channel to record at 595 ± 50 nm while another channel records the reflectance, was used for some of the fluorescent samples. Both modes allow for an improvement in optical resolution.

2.2 Experiment procedure

Prior to the imaging, a static configuration of Gaussian beam traps has been formed. The sample is brought towards the traps by means of the motorized stage. The Gaussian beams trap the particles in the lateral direction, while the trap, the radiation pressure, and the coverslip wall confine them in the axial direction. By using a dilute suspension and choosing a laser power such that the rate of particle trapping was slow, we could capture a single nanoparticle in each of the traps. The main step of the imaging process is to scan the sample with the trapped nanoparticles. This fine scan, fitted to the sample shape, was done by translating the trap locations, employing the holographic SLM. The Brownian motion of the nanoparticles within the optical potential wells competes with the trap stiffness, controlled by the power of the trapping laser. Therefore, acquiring multiple images for a given configuration and position of the traps yields new data, owing to the slight offsets of the nanoparticles from the centers of the traps.

2.3 Super-resolution algorithm

In our setup the particles are moved behind the sample, while the reflected signal is measured. Therefore, we use a variation of the algorithms described in the literature [33,34]. In particular, we define s(x) as the high resolution sample, g(x,ε) as a particle located at point ε, ĝ(x,εn) as the reconstruction of that particle, P(x) as the diffraction-limited resolution function, and s1 as the binary image of our sample:

$$ s_1(x) = \begin{cases} 1, & s(x) > 0 \\ 0, & \text{otherwise} \end{cases} $$
For each image n, where the location of a trapped particle is εn, the high resolution image can be written as:
$$ HR(x,\varepsilon_n) = s(x) + g(x,\varepsilon_n) - s_1(x)\,g(x,\varepsilon_n) $$
The low resolution image can then be written as:
$$ LR(x,\varepsilon_n) = HR(x,\varepsilon_n) \otimes P(x) $$
We compute ĝ(x,εn), multiply it by LR, and sum over all images, in each of which the position of the particle is different:
$$ r(x) = \int LR(x,\varepsilon_n)\,\hat{g}(x,\varepsilon_n)\,d\varepsilon_n = \int \left\{ \int \left[ s(x') + g(x',\varepsilon_n) - s_1(x')\,g(x',\varepsilon_n) \right] P(x-x')\,dx' \right\} \hat{g}(x,\varepsilon_n)\,d\varepsilon_n $$
r(x) includes three terms:
$$ \int \left[ \int \left[ s(x') + g(x',\varepsilon_n) - s_1(x')\,g(x',\varepsilon_n) \right] \hat{g}(x,\varepsilon_n)\,d\varepsilon_n \right] P(x-x')\,dx' = \nu(x)\int s(x')\,P(x-x')\,dx' + \int \left[ \int g(x',\varepsilon_n)\,\hat{g}(x,\varepsilon_n)\,d\varepsilon_n \right] P(x-x')\,dx' - \int \left[ \int g(x',\varepsilon_n)\,\hat{g}(x,\varepsilon_n)\,d\varepsilon_n \right] s_1(x')\,P(x-x')\,dx' $$
where $\nu(x) = \int \hat{g}(x,\varepsilon_n)\,d\varepsilon_n$.

The first term is the low resolution image multiplied by ν(x). The last two terms can be written as:

$$ \int \bar{s}_1(x') \left[ \int g(x',\varepsilon_n)\,\hat{g}(x,\varepsilon_n)\,d\varepsilon_n \right] P(x-x')\,dx' $$
where $\bar{s}_1(x') = 1 - s_1(x')$.

Substituting $z \equiv \varepsilon_n + x$, we obtain:

$$ \int \bar{s}_1(x') \left[ \int g\big(x'+z-(\varepsilon_n+x),\,\varepsilon_n\big)\,\hat{g}\big(z-\varepsilon_n,\,\varepsilon_n\big)\,dz \right] P(x-x')\,dx' = \int \bar{s}_1(x') \left[ \int g(z+x'-x,\,0)\,\hat{g}(z,\,0)\,dz \right] P(x-x')\,dx' = \int \bar{s}_1(x')\,D(x-x')\,P(x-x')\,dx' = \bar{s}_1(x) \otimes \left[ D(x)\,P(x) \right] $$
where $D(x-x') = \int g(z+x'-x,\,0)\,\hat{g}(z,\,0)\,dz$.

D(x) is the correlation between the actual appearance of the particle g(x,ε) and its reconstruction ĝ(x,εn). In Eq. (8), the binary image of the sample is convolved with a new kernel, D(x)P(x). Since the scanning nanoparticle is smaller than P(x), we can treat D(x)P(x) as a constant k(x) multiplied by P(0), so that Eq. (8) yields:

$$ k(x)\,p(0) - k(x)\,p(0)\,s_1(x) $$
Introducing this result into Eq. (5), we obtain:

$$ r(x) = k(x)\,p(0) - k(x)\,p(0)\,s_1(x) + \nu(x)\,LR $$

Thus, for the binary image of the sample s1 to be obtained, the low resolution image multiplied by ν(x) should be subtracted from r(x); next, the result has to be shifted by a constant and normalized.

Importantly, unlike in the aforementioned algorithms [35,36], where a random pattern was overlaid on the sample to increase the resolution, we scan the sample with just one (or a few) individual particles. Therefore, the correlation D is not a perfect δ-function and should be calculated according to the particle size and a model of the reflectance from the nanoparticles.

In addition, the nanoparticle positions determined by the trap are typically inhomogeneously distributed in space, with some regions of the sample scanned more densely than the others. Therefore, a scan density distribution function J(x) must be introduced into Eq. (5):

$$ r(x) = \int LR(x,\varepsilon_n)\,\hat{g}(x,\varepsilon_n)\,J(x)\,d\varepsilon_n $$

This issue can be avoided by eliminating all images corresponding to overlapping εn, though it results in a significant loss of useful data. A better solution is to divide all images by ν(x); however, possible slight deviations between g(x,ε) and ĝ(x,εn) give rise to artificial circular features around the nanoparticle locations.
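A minimal numerical sketch of this reconstruction recipe is given below; the array names, the Gaussian model for ĝ, and the use of the frame-averaged image as LR are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np

def reconstruct(lr_frames, particle_centers, sigma_px):
    """Correlate each low-resolution frame with the reconstructed particle image,
    sum the products, subtract the nu(x)*LR term, then shift and normalize (Section 2.3)."""
    lr_frames = np.asarray(lr_frames, dtype=float)        # shape: (n_frames, ny, nx)
    rows, cols = np.indices(lr_frames.shape[1:])

    r = np.zeros(lr_frames.shape[1:])                     # accumulates LR_n(x) * g_hat(x, eps_n)
    nu = np.zeros(lr_frames.shape[1:])                    # accumulates g_hat(x, eps_n) -> nu(x)
    for frame, (cy, cx) in zip(lr_frames, particle_centers):
        g_hat = np.exp(-((rows - cy) ** 2 + (cols - cx) ** 2) / (2.0 * sigma_px ** 2))
        r += frame * g_hat
        nu += g_hat

    # r(x) = k(x)p(0) - k(x)p(0)s1(x) + nu(x)LR: subtract nu(x)*LR, shift, normalize.
    # (Dividing by nu(x) instead would equalize the scan density, at the cost of
    #  ring-like artifacts around the particle locations, as noted above.)
    residual = r - nu * lr_frames.mean(axis=0)
    residual -= residual.min()
    if residual.max() > 0:
        residual /= residual.max()
    return 1.0 - residual                                 # estimate of the binary sample s1
```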

2.4 Practical application of the algorithm and physical considerations

For the algorithm to work, the positions of the scanning particles relative to the sample have to be accurately determined. For that purpose, we first employ the corrected version [37] of the classical particle tracking algorithm of Crocker and Grier [38], yielding the rough coordinates of the particles. Then, the intensity of each particle is fitted to a Gaussian, yielding an even more accurate estimate of the center position and the waist. Situations where more than one particle was inside the trap could easily be distinguished and discarded digitally during the post-processing of the captured images. The nanospheres are charged and coated so that they repel each other during their random motion inside the trap, which is controlled by the trap stiffness. We use this fact to distinguish between the particles. In particular, the movement of the particles inside the trap results in illumination variations that are readily detected. Moreover, when two (or more) spherical particles are trapped at an unresolvable distance from each other, their combined shape appears eccentric in confocal microscopy. All images where the scanning nanoparticle intensity distribution appears elliptical (aspect ratio > 4) are therefore discarded. In addition, we have also employed post-processing, correlation-based positioning algorithms [39] to correct for slight vibrations of the imaging setup. We validate our particle tracking approach by numerical simulations, where pixellation effects, pixel response, finite bit depth, and 120 nm random vibrations are taken into account. In our simulations, we demonstrate that particle tracking accuracies of a third of a pixel (10 nm) are achieved. In addition, we also carry out simulations of the whole process of super-resolution imaging, validating its capabilities (see Section 3.1).
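The refinement and filtering steps can be sketched as follows; the window size, the elliptical-Gaussian model, and the hypothetical rough_center input are illustrative assumptions, not the exact routine used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian used to refine a rough particle position."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

def refine_center(image, rough_center, win=7, max_aspect=4.0):
    """Fit a Gaussian in a window around the rough center (from the initial tracking step);
    return the sub-pixel center, or None if the spot is elliptical (aspect ratio > 4),
    which indicates merged particles and leads to the frame being discarded."""
    cy, cx = (int(round(c)) for c in rough_center)
    patch = image[cy - win:cy + win + 1, cx - win:cx + win + 1].astype(float)
    y, x = np.mgrid[-win:win + 1, -win:win + 1]
    p0 = (patch.max() - patch.min(), 0.0, 0.0, 2.0, 2.0, patch.min())
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    _, x0, y0, sx, sy, _ = popt
    aspect = max(abs(sx), abs(sy)) / max(min(abs(sx), abs(sy)), 1e-9)
    if aspect > max_aspect:
        return None
    return (cy + y0, cx + x0)    # refined (row, col) position, in pixels
```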

When the image quality is poor, which makes locating the nanoparticle centers difficult, the fluorescence mode can be used for particle tracking, with the imaging of the sample carried out in the reflectance mode. The signals from the reflectance and the fluorescence channels are acquired simultaneously. In the fluorescence images, only the nanoparticles are visible, so that the computerized location of their centers is straightforward. Importantly, no fluorescent staining of the sample is required by our technique; only the nanoparticles are labeled, so that the damage to sensitive biological samples is minimized.

The current experimental configuration images the reflection contours of the sample, projected onto the camera plane. It does not reveal the axial profiles. The beam pushes the particles to within less than a wavelength of the sample surface. However, imaging samples whose height profile changes by more than the laser wavelength (514 nm in the current implementation) over a distance shorter than the size of the scanning particle (100 nm in the current implementation) is challenging for the current method. This should be considered an axial geometrical limitation on the samples.

Equation (8) demonstrates that the super-resolution image is convolved with the autocorrelation function of the scanning particles. Assuming ideal localization and Gaussian particle modelling, the obtained resolution is limited by the diameter of the scanning particle. While reducing the trapped particle size would increase the resolution achieved by our method, it also dictates an increase in the laser power necessary to make stiffer traps, especially for dielectric particles [40]. However, it has been demonstrated earlier [41] that metallic nanoparticles are good candidates for stable optical trapping at reasonable laser irradiation [26,29,42]. In particular, a force of ~0.5 pN was obtained for gold nanoparticles of 100 nm in diameter, with a laser power of 135 mW.

The main physical considerations that determine the lateral resolution of an optical trap are the laser power, the particle diameter, and the polarizability of the particles [26]. Remarkably, our method may actually benefit from the trap stiffness being low, with the trapped particles undergoing large lateral fluctuations (as long as the particles do not escape), as described in Section 2.2. To benefit from these fluctuations, the motion of the particles must be slower than the image acquisition time.
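The magnitude of these fluctuations can be estimated from the equipartition theorem, ½κ⟨x²⟩ = ½k_BT; the stiffness values in the short sketch below are hypothetical examples, not measured values for our traps.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # room temperature, K

# Hypothetical trap stiffnesses in N/m (1 pN/um = 1e-6 N/m), for illustration only.
for kappa in (1e-5, 1e-6, 1e-7):
    # Equipartition: 0.5 * kappa * <x^2> = 0.5 * k_B * T
    x_rms_nm = math.sqrt(k_B * T / kappa) * 1e9
    print(f"kappa = {kappa:.0e} N/m -> RMS lateral excursion ~ {x_rms_nm:.0f} nm")

# Softer traps let the particle explore a larger area around the trap center,
# providing the additional scan positions exploited in Section 2.2.
```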

2.5 Samples

Two types of samples were used to test the setup. In the first type, gold nanolines were formed on fused silica cover slips by e-beam lithography. The samples consisted of horizontal and vertical lines, with line widths varying between 100 and 250 nm and spacings varying from 80 to 480 nm. Figure 2 presents an atomic force microscopy (AFM) scan of a typical sample of this type.


Fig. 2 An AFM image of a typical gold nanolines sample. (a) A wide area AFM scan. (b) A height line profile of the zoomed area [marked by a red rectangle in section (a)], taken along the nanolines normal (shown by a horizontal line in the inset). The measured nanoline width was 190 nm from edge to edge, and the spacing between the lines was 105 nm (from edge to edge), as marked by the vertical red lines.


The second type of sample consists of randomly deposited nanowires. To prepare these samples, an aqueous suspension of gold nanowires is drawn by capillary forces into a 0.1 × 2 × 50 mm³ borosilicate Vitrocom® capillary. The gold nanowires are 6 μm long and 50 nm wide (A14-6000-CTAB-5, Nanopartz Inc., Loveland, CO). To fix the nanowires to the surface of the capillary, we evaporate the water from the suspension, keeping it for 45 minutes at 105 °C.

To allow scanning of the samples by nanoparticles, we introduce an aqueous suspension of gold nanospheres (100 nm in diameter; either plain or fluorescently labeled) into both types of samples. The sample is then hermetically sealed with epoxy glue and fixed to a supporting glass, which we fasten to the computer-controlled XYZ stage of the optical system.

3. Results and discussion

3.1 Simulations

Simulations of a sub-diffraction target were carried out in Matlab to test the effects of vibrations on the functionality of the algorithm. A high resolution binary target was simulated as shown in Fig. 3(a), where two lines, 50 nm in width and 2 μm in length, are separated by 140 nm. The pixel size is 7.7 nm. Round particles, 100 nm in diameter, were simulated at random locations. Then, the image was smeared, with all spatial frequencies above the diffraction limit of a 514 nm laser removed. Note that here we simulate a coherent light source, such as the one we use in our experiments. Using a noncoherent light source may improve the results by a factor of two. To simulate a pixel size of 31 nm, typical for our confocal imaging, we integrate the signal over areas of 31 × 31 nm², normalizing the result to exactly 4096 gray levels for each pixel. The resulting low resolution image is shown in Fig. 3(b). Note that the two lines, which are clearly visible in Fig. 3(a), merge together in Fig. 3(b). In addition, the intensity is uneven along the lines in Fig. 3(b), peaking at both the right and the left edges. The image of a single particle, used for sample scanning, is located far from the two lines and does not interfere with their image in Fig. 3(b).
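The degradation pipeline described above can be sketched as follows; the coherent cutoff NA/λ, the NA value, and the simple binning scheme are our assumptions for illustration, not necessarily identical to the Matlab implementation.

```python
import numpy as np

def degrade(high_res, px_hr_nm=7.7, px_lr_nm=31.0, wavelength_nm=514.0, NA=1.4):
    """Low-pass filter a high-resolution binary target at the (coherent) diffraction
    limit, bin it to the confocal pixel size, and quantize to 4096 gray levels."""
    ny, nx = high_res.shape

    # 1. Remove spatial frequencies above the coherent cutoff NA / lambda.
    fy = np.fft.fftfreq(ny, d=px_hr_nm)                  # spatial frequencies, cycles/nm
    fx = np.fft.fftfreq(nx, d=px_hr_nm)
    f_cut = NA / wavelength_nm
    mask = (fx[None, :] ** 2 + fy[:, None] ** 2) <= f_cut ** 2
    low_pass = np.real(np.fft.ifft2(np.fft.fft2(high_res) * mask))

    # 2. Integrate the signal over 31 x 31 nm^2 confocal pixels (~4 x 4 binning).
    b = int(round(px_lr_nm / px_hr_nm))
    ny_c, nx_c = (ny // b) * b, (nx // b) * b
    binned = low_pass[:ny_c, :nx_c].reshape(ny_c // b, b, nx_c // b, b).sum(axis=(1, 3))

    # 3. Normalize to exactly 4096 gray levels (12-bit detector).
    binned -= binned.min()
    return np.round(4095.0 * binned / binned.max()).astype(np.uint16)
```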


Fig. 3 Simulation test images preparation. High resolution binary images of 50 nm lines spaced 140 nm apart (a) are prepared. Binary nanoparticles are added to the high resolution images, and the results are filtered and pixellated (b), to mimic the appearance of these images under the typical low resolution conditions of the confocal imaging system. A scanning nanoparticle is shown as well, near the center of the image.


This simulation is only carried out over a small region of interest, as shown in Fig. 4(a). While the low resolution imaging cannot resolve the two lines, as seen in Fig. 4(a), the simulation of the super-resolution method on the same area, scanned with a nanoparticle, clearly resolves them, as shown in Fig. 4(b), where no vibrations are introduced. Note that the algorithm managed to separately resolve the original two lines; also, note the even intensity at the edges. The same process was repeated with random vibrations of 120 nm amplitude. Although the edges have deteriorated due to the vibrations, the simulation is clearly able to resolve the two lines [see Fig. 4(c)]. A mean error of 0.2 pixels, with a 0.3 pixel variance, was calculated for the accuracy of the nanoparticle localization by the algorithm, indicating a satisfactory match between the located positions and the simulated locations of the nanoparticles.


Fig. 4 (a) Low resolution image of the region of interest scanned by a simulated nanoparticle. (b) A reconstruction, employing our super resolution algorithm. (c) A reconstruction of the same sample with random vibrations of 120 nm amplitude introduced. The scanned area is marked by a yellow rectangle.


3.2 Experimental results

To demonstrate the achievement of subwavelength resolution, we test our resolution employing nanowires and lithographically-fabricated line samples (see Materials and Methods). With the thickness of the nanowires (50 nm) being much smaller than our point spread function (PSF), the FWHM of the image yields the PSF width, hence the resolution. The fabricated line samples allow the resolution between separate features to be tested, as an additional verification that our method is able to overcome the diffraction limit. The lateral resolution, defined by the smallest distance Δx between the edges of two adjacent features, corresponds to a spatial frequency of 1/(100 nm). Although the width of the lines in the fabricated sample gives rise to a lower spatial frequency, resolving the separation between the lines [at 1/(100 nm)] would validate the achievement of super-resolution.

First, we employ the imaging setup described in Fig. 1 to image the gold nanowires. Four scanning nanospheres are localized inside four separate computer-holography-controlled optical traps. The number of particles used determines the time needed to scan an entire region of interest. Using the SLM has the advantage of producing many traps with simple programming, unlike other hardware solutions [43]. We used four traps as a demonstration of this ability. The distance between the traps was 6 μm. The power of each trap was 50-100 mW, allowing the nanosphere to remain within the trap for a time period ranging between several seconds and several minutes. Under ideal conditions, where no laser power is lost, our current setup allows the number of separate traps to exceed 100. While high power trapping confined the particles for longer time periods, it could also glue the scanning particle to the sample surface, damaging sensitive samples. We take advantage of the coverslip serving as a heatsink, reducing the thermal damage and other possible disturbances due to heating (such as a reduction in the viscosity of the solution) [44]. Hence, the power levels we used are adequate for our samples.

First, we locate an adequate region of the sample in an image obtained by a conventional wide field confocal scan. Next, we carry out rapid scanning of a much narrower region of interest (ROI), 512 × 512 pixels in size, at a typical digital resolution of 31 nm/pix. We optimize the power of the exciting laser, the pinhole size, and the channel gain to maximize the signal to noise ratio (SNR) and avoid detector saturation. In particular, we observe that for a successful implementation of the super-resolution algorithm, the SNR of the scanning particles has to be maximized. Counterintuitively, when the SNR is too low, the performance of our algorithm (and the eventual image resolution) can actually be improved by intentional smearing of the raw images, such as convolution with a Gaussian kernel. Clearly, a significant improvement of the SNR can be achieved by imaging the scanning nanoparticles in the fluorescence mode, rather than in reflectance (see Materials and Methods). Moreover, while in the fluorescence mode SNR values as low as 1.15 still led to meaningful results, accurate particle tracking proved impossible in the reflectance mode for SNR < 2. An additional improvement of the particle tracking accuracy can potentially be achieved by increasing the imaging rate. At a sufficiently fast imaging rate, the correlation of particle locations in consecutive images is high, allowing the particle tracking algorithm to use these correlations to improve the tracking accuracy [45]. The minimal number of frames needed to fully cover an area A is A/(ApN), where Ap is the cross-section area of a particle and N is the total number of scanning particles employed. In reality, noise, random particle fluctuations, and the finite accuracy of the trap require larger frame numbers. Because the precision of the trapped particle location is ~1/3 of the digital pixel, adding more images increases the SNR. We experimentally cover an area of 2 x 2.2 µm² with sub-diffraction resolution, using 350 frames with a single trap. With the previously reported passive super-resolution imaging [24], more than 500 frames are needed for a 90% coverage of a region using 200 nm particles, at a particle density of 1.2 µm⁻². Our confocal frame rate of 15 fps currently limits such area coverage to 23.3 seconds per image, which is comparable to the time needed to achieve a 60 nm resolution in STORM. Using a non-confocal setup could dramatically reduce the timescale to several seconds, comparable to the recently-proposed speed-enhanced STORM techniques [46,47]. At these very high frame rates, the 60 Hz frame rate of the current SLM device should also be taken into account; however, other SLM devices, reaching up to 1 kHz, may be employed in such a case.

The resolution enhancement obtained for the nanowires with our algorithm employed in a non-fluorescent regime is demonstrated in Fig. 5, where (b) is the original confocal low-resolution (LR) image and (a) is the SR reconstruction. The resolution is significantly improved in the region marked in yellow in (a), where the SR scanning was carried out. As a result, the nanowire appears thinner in this region. The intensity profiles of both the SR and the LR images are shown in Fig. 5(c), together with the corresponding Gaussian fits. The profiles were obtained by averaging the intensity distributions measured along the dashed lines in Fig. 5(a-b). Note that the LR data in Fig. 5(c) were averaged over the same number of images as used for the SR algorithm. Remarkably, the fitted waist is 80 nm (FWHM = 160 nm) for the SR algorithm and 126 nm (FWHM = 250 nm) for the LR data. While both values are still larger than the physical thickness of the nanowires (50 nm), a significant resolution enhancement is achieved with our technique.


Fig. 5 An enhancement of resolution, as obtained for a nanowire, in a non-fluorescent mode. (a) A reconstructed SR image of the nanowire, as obtained from the confocal LR images. The regions where the SR scanning was carried out appear as white blobs. (b) The average of all confocal images employed for the SR reconstruction. Section (c) demonstrates the distribution of intensity I(x) along the dashed lines in (a) and (b). Note that I(x) is an average over all the 7 dashed lines in (a-b). Gaussian fits to I(x) are shown as solid lines.


To demonstrate the ability to separately resolve two adjacent features, we applied our approach to an e-beam fabricated nanolines sample (described in Section 2.5). Again, a significant enhancement of resolution is achieved, as demonstrated in Fig. 6, where (a) and (c) are the original confocal LR images and (b) is obtained by our SR algorithm. Here, as in the previous paragraph, we do not make use of particle fluorescence; yet, the resolution is still improved. As for the nanowires, we average the intensity profiles along the dashed lines in (b) and (c). The resulting intensity distribution I(x) is shown in Fig. 6(d), for both the LR (blue) and the SR (red) data. Note the much deeper minimum of I(x) in the space between the nanolines; the SR intensity drops there by more than a factor of two, compared to its maximal values. Clearly, the SR algorithm produces a better FWHM measure, with the nanoline width and the nanoline spacing obtained as 180 nm and 110 nm, respectively. The fit to a sum of two Gaussians is shown in Fig. 6(d) (black) and the nanoline spacing was calculated as the space between the Gaussian centers, minus the FWHM value. The corresponding (FWHM) nanoline width in a conventional confocal measurement is 220 nm, with the interline spacing almost completely masked by the low resolution, further emphasizing the capabilities of our algorithm. Remarkably, the AFM scans on a similar sample measure a nanoline width of 190 nm and a line spacing of 105 nm (see Fig. 2), very close to the values obtained by our SR technique.


Fig. 6 Our SR algorithm significantly improves the visibility of closely-separated nanolines, fabricated by e-beam lithography. The wide LR confocal image of the sample is shown in (a), with the red square marking the actual area where we apply our algorithm. The SR image is demonstrated in (b). Note that the separate nanolines are clearly resolved, while they are smeared together in the conventional confocal LR image [shown in (c)]. The corresponding intensity profiles [along the dashed lines in (b) and (c)] are shown in (d), where the red dashes are the result of SR imaging.


Our technique does not require fluorescent staining of the samples, which is a great advantage over the most common SR approaches. However, the scanning nanospheres used in our technique can be fluorescently labeled. With the scanning nanospheres imaged in both the fluorescence and the reflectance channels, their SNR is improved, leading to an improvement of their tracking accuracy; the optical resolution is consequently enhanced. We demonstrate this remarkable resolution enhancement in Fig. 7, where the conventional confocal images are also shown for the sake of comparison. While the SNR of the nanoparticles was as low as 1.2, we could still accurately determine the particle center locations based on the fluorescence signal. Therefore, in these settings, the SR-imaged FWHM of the nanowire was as low as 100 nm in Fig. 7(a,c) and only slightly larger for another nanowire, shown in Fig. 7(d,f). Note the very sharp image of the nanowire in Fig. 7(d), where the SR scanning was applied in the vicinity of the red-dashed lines. Importantly, Fig. 7(d) also demonstrates an additional advantage of our technique, compared to the passive Brownian scanning methods [24]. While in the passive methods the full image must necessarily be scanned by the nanospheres, our technique allows the scanned area to be adapted to the dimensions and size of the ROI within the sample, potentially minimizing the imaging time and the radiation damage.


Fig. 7 Fluorescent imaging of scanning nanoparticles leads to a further resolution enhancement for the nanowires. Two different nanowires are shown in (a-c) and (d-f). Sections (b) and (e) show the conventional confocal images of the samples. The regions where the SR resolution scanning was carried out appear as white blobs in (a) and as slightly darker blobs in (d). Note the dramatic enhancement of resolution in (d), where the nanowire appears much thinner in the SR-scanned region. The corresponding intensity profiles for the two nanowires are shown in (c) and (f), where the intensity profiles were averaged over the dashed lines in (a,b) and (d,e), respectively.


Our results demonstrate the success of our method when holographic beam manipulation is employed. Importantly, other beam shapes, such as a dense random distribution of intensity maxima, may be used to trap particles and scan the sample. The pattern can be moved using an SLM, or by other means such as a galvanometric mirror or an acousto-optic deflector. Alternatively, the random pattern can be changed many times (although it may then take some time for the particles to be captured at the new locations). Although random speckles are easier to generate, using random distributions has several disadvantages: it offers less control over the trapped particles, it lacks the ability to fit the scan area to the region of interest, it requires higher laser powers, and it makes the experiments more challenging.

The simplest solution that does not require an SLM is to generate a random speckle field with a simple diffuser. Beam steering could easily be done by an azimuthal rotation of the diffuser. Changing the pattern to a different random one could be done by a simple translation of the diffuser. Some special statistical characteristics of the speckles, discussed elsewhere [48], may further contribute to the success of this method. Optical manipulation using speckle fields has already been demonstrated in the past [49,50]. However, the characterization of such random traps is still incomplete. While in the current work we establish the basis for our method employing well-characterized beam shaping by an SLM, a future implementation of this method employing a random speckle field may be possible.

4. Conclusions

We have demonstrated that active scanning of the sample by HOT-trapped nanospheres allows the high spatial frequency features, below the diffraction limit, to be encoded into the classical diffraction-limited waves and detected by a far-field imaging system. This method improves the resolution of confocal images, allowing features down to the size of the scanning particles to be resolved. Our method is faster, more robust, and more controllable compared to the passive scanning techniques. Also, it allows for full flexibility in choosing the size and the shape of the ROI within the samples, minimizing the radiation damage and the imaging time. Our technique does not require sample staining, which is a significant advantage compared to most other SR technologies, where sample labeling by switchable fluorescent markers complicates the preparation of biological samples.

While sample staining is not used, employing fluorescently-labeled scanning particles allows very good results to be obtained even under poor SNR conditions. The suggested setup could be integrated with HOT-based microscale fabrication systems, for the fabricated structures [31] to be controlled in real time by super-resolution imaging. The proposed biocompatible and noninvasive SR imaging thus opens new directions for both fundamental scientific research and future applicative engineering by nano-architecture.

Appendix Experimental considerations related to the SLM specifications

Generating multiple traps with an SLM and steering these traps in space adds many technical issues that have to be considered in the experimental implementation of the current method. The generated SLM hologram is pixelated and has discrete levels of phase retardation. Steering the traps using a blazed grating hologram with q phase levels and a period of Λ pixels affects the trap intensities. The first-order diffraction efficiency η is given by [51]:

$$ \eta = \frac{\mathrm{sinc}^2\!\left(q^{-1}\right)\,\mathrm{sinc}^2\!\left(\Lambda^{-1}\right)}{\mathrm{sinc}^2\!\left(\mathrm{lcm}[q,\Lambda]^{-1}\right)}, $$
where lcm[q, Λ] is the least common multiple of q and Λ.
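This expression can be evaluated directly, as in the short sketch below; the normalized-sinc convention and the specific q and Λ values are illustrative assumptions.

```python
import math

def sinc2(x):
    """sinc^2 with the normalized convention sinc(x) = sin(pi*x)/(pi*x)."""
    if x == 0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

def first_order_efficiency(q, period):
    """First-order diffraction efficiency of a blazed grating displayed with
    q phase levels and a period of `period` SLM pixels (formula above)."""
    lcm = q * period // math.gcd(q, period)
    return sinc2(1.0 / q) * sinc2(1.0 / period) / sinc2(1.0 / lcm)

# Example: 256 phase levels; the worst case is the shortest period, Lambda = 2.
for period in (2, 4, 16, 64):
    print(period, round(first_order_efficiency(256, period), 3))
```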

The maximal angular shift that can be achieved is αmax = λ/(2 · pixel pitch), where λ is the wavelength. Note that the worst efficiency value is obtained at αmax, corresponding to Λ = 2. When choosing the number of traps and their densities, both the maximal angular shift and the actual irradiance profile on the sample have to be taken into account. In order to obtain the maximum range of high diffraction efficiency, the ratio that should be maximized is [51]:

$$ R_\eta = \frac{\lambda\,f_{\mathrm{obj}}}{2\,m\cdot\text{pixel pitch}}, $$
where fobj is the focal length of the objective and m is the optical magnification of the SLM on the back focal plane of the objective.

The theoretical value of the minimal displacement of an optical trap that can be achieved with an SLM is [52]:

$$ \alpha_{\min} \approx \frac{2\pi\lambda\,f_{\mathrm{obj}}}{D\,N_{\mathrm{pix}}\,q}, $$
where D is the back focal plane area, and Npix is the pixel number. Experimentally, common SLM setups can reach minimal trap displacements of a few nanometers.
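A quick numerical check of these expressions is sketched below; all parameter values are hypothetical but typical, and only the 1064 nm trapping wavelength matches the setup described above.

```python
import math

# Hypothetical but typical SLM/objective parameters; only the trapping wavelength
# (1064 nm) matches the setup of Section 2.1.
wavelength = 1064e-9        # m
pixel_pitch = 20e-6         # m, assumed SLM pixel pitch
f_obj = 2e-3                # m, assumed objective focal length
m = 0.5                     # assumed magnification of the SLM onto the back focal plane
D = 5e-3                    # m, assumed back focal plane aperture
N_pix = 600                 # assumed number of SLM pixels across the aperture
q = 256                     # phase levels

alpha_max = wavelength / (2.0 * pixel_pitch)                      # maximal steering angle (rad)
R_eta = wavelength * f_obj / (2.0 * m * pixel_pitch)              # range of high diffraction efficiency
alpha_min = 2.0 * math.pi * wavelength * f_obj / (D * N_pix * q)  # theoretical minimal trap displacement

print(f"alpha_max = {alpha_max*1e3:.1f} mrad")
print(f"R_eta     = {R_eta*1e6:.1f} um")
print(f"alpha_min = {alpha_min*1e9:.3f} nm")
```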

Another important consideration is the presence of higher replay orders, caused by the pixelated hologram in accordance with the Nyquist sampling criterion. Finally, the diffraction efficiency and the replay orders are affected differently by the fill factor [53]. Thus, the fill factor should be optimized to reduce the replays and increase the diffraction efficiency.

While the above considerations limit the beam shaping by an SLM, note that an SLM can also be used to correct optical aberrations and astigmatism, improving the shape of the traps [54].

Acknowledgments

Acknowledgment is made to the Kahn Foundation and to ISF #1668/10 for the purchase of equipment.

References and links

1. E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arch. für Mikroskopische Anat. 9(1), 413–418 (1873).

2. G. T. Di Francia, “On Resolving Power and Information,” J. Opt. Soc. Am. 46, 72_1–72 (1956).

3. W. Lukosz, “Optical Systems with Resolving Powers Exceeding the Classical Limit II,” J. Opt. Soc. Am. 57(7), 932 (1967).

4. J. L. Harris, “Resolving Power and Decision Theory,” J. Opt. Soc. Am. 54(5), 606 (1964).

5. Z. Zalevsky and D. Mendlovic, Optical Superresolution (Springer, 2003).

6. I. J. Cox and C. J. R. Sheppard, “Information Capacity and Resolution in an Optical System,” J. Opt. Soc. Am. A 3(8), 1152 (1986).

7. L. Shao, B. Isaac, S. Uzawa, D. A. Agard, J. W. Sedat, and M. G. L. Gustafsson, “I5S: Wide-Field Light Microscopy with 100-nm-scale Resolution in Three Dimensions,” Biophys. J. 94(12), 4971–4983 (2008).

8. S. Hell and E. H. K. Stelzer, “Fundamental Improvement of Resolution with a 4Pi-confocal Fluorescence Microscope using Two-photon Excitation,” Opt. Commun. 93(5-6), 277–282 (1992).

9. A. Shemer, Z. Zalevsky, D. Mendlovic, N. Konforti, and E. Marom, “Time Multiplexing Superresolution Based on Interference Grating Projection,” Appl. Opt. 41(35), 7397–7404 (2002).

10. M. G. L. Gustafsson, “Surpassing the Lateral Resolution Limit by a Factor of Two Using Structured Illumination Microscopy,” J. Microsc. 198(2), 82–87 (2000).

11. J. T. Frohn, H. F. Knapp, and A. Stemmer, “True Optical Resolution Beyond the Rayleigh Limit Achieved by Standing Wave Illumination,” Proc. Natl. Acad. Sci. U.S.A. 97(13), 7232–7236 (2000).

12. V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic Aperture Superresolution with Multiple Off-axis Holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006).

13. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic Aperture Fourier Holographic Optical Microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006).

14. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic Aperture-based On-chip Microscopy,” Light Sci. Appl. 4(3), e261 (2015).

15. S. W. Hell, “Microscopy and its Focal Switch,” Nat. Methods 6(1), 24–32 (2009).

16. S. W. Hell and J. Wichmann, “Breaking the Diffraction Resolution Limit by Stimulated Emission: Stimulated-emission-depletion Fluorescence Microscopy,” Opt. Lett. 19(11), 780–782 (1994).

17. S. W. Hell and M. Kroug, “Ground-state-depletion Fluorscence Microscopy: A Concept for Breaking the Diffraction Resolution Limit,” Appl. Phys. B Lasers Opt. 60(5), 495–497 (1995).

18. R. Heintzmann, T. M. Jovin, and C. Cremer, “Saturated Patterned Excitation Microscopy-a Concept for Optical Resolution Improvement,” J. Opt. Soc. Am. A 19(8), 1599–1609 (2002).

19. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution,” Science 313(5793), 1642–1645 (2006).

20. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit Imaging by Stochastic Optical Reconstruction Microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006).

21. E. Rittweger, K. Y. Han, S. E. Irvine, C. Eggeling, and S. W. Hell, “STED Microscopy Reveals Crystal Colour Centres with Nanometric Resolution,” Nat. Photonics 3(3), 144–147 (2009).

22. Z. Zalevsky, E. Saat, S. Orbach, V. Mico, and J. Garcia, “Exceeding the Resolving Imaging Power using Environmental Conditions,” Appl. Opt. 47(4), A1–A6 (2008).

23. Z. Zalevsky, E. Fish, N. Shachar, Y. Vexberg, V. Micó, and J. Garcia, “Super-resolved Imaging with Randomly Distributed, Time- and Size-varied Particles,” J. Opt. A, Pure Appl. Opt. 11(8), 085406 (2009).

24. A. Gur, D. Fixler, V. Micó, J. Garcia, and Z. Zalevsky, “Linear Optics Based Nanoscopy,” Opt. Express 18(21), 22222–22231 (2010).

25. E. Betzig, J. K. Trautman, T. D. Harris, J. S. Weiner, and R. L. Kostelak, “Breaking the Diffraction Barrier: Optical Microscopy on a Nanometric Scale,” Science 251(5000), 1468–1470 (1991).

26. L. Friedrich and A. Rohrbach, “Surface Imaging Beyond the Diffraction Limit with Optically Trapped Spheres,” Nat. Nanotechnol. 10(12), 1064–1069 (2015).

27. J. P. Staforelli, E. Vera, J. M. Brito, P. Solano, S. Torres, and C. Saavedra, “Superresolution Imaging in Optical Tweezers using High-speed Cameras,” Opt. Express 18(4), 3322–3331 (2010).

28. S. C. Chapin, V. Germain, and E. R. Dufresne, “Automated Trapping, Assembly, and Sorting with Holographic Optical Tweezers,” Opt. Express 14(26), 13095–13100 (2006).

29. P. M. Hansen, V. K. Bhatia, N. Harrit, and L. Oddershede, “Expanding the Optical Trapping Range of Gold Nanoparticles,” Nano Lett. 5(10), 1937–1942 (2005).

30. A. Ilovitsh, E. Preter, N. Levanon, and Z. Zalevsky, “Time Multiplexing Super Resolution using a Barker-Based Array,” Opt. Lett. 40(2), 163–165 (2015).

31. M. Yevnin, D. Kasimov, Y. Gluckman, Y. Ebenstein, and Y. Roichman, “Independent and Simultaneous Three-dimensional Optical Trapping and Imaging,” Biomed. Opt. Express 4(10), 2087–2094 (2013).

32. H. Shpaisman, D. B. Ruffner, and D. G. Grier, “Light-driven Three-dimensional Rotational Motion of Dandelion-shaped Microparticles,” Appl. Phys. Lett. 102(7), 071103 (2013).

33. V. Emiliani, D. Cojoc, E. Ferrari, V. Garbin, C. Durieux, M. Coppey-Moisan, and E. Di Fabrizio, “Wave Front Engineering for Microscopy of Living Cells,” Opt. Express 13(5), 1395–1405 (2005).

34. D. G. Grier and Y. Roichman, “Holographic Optical Trapping,” Appl. Opt. 45(5), 880–887 (2006).

35. Z. Zalevsky, S. Gaffling, J. Hutter, L. Chen, W. Iff, A. Tobisch, J. Garcia, and V. Mico, “Passive Time-multiplexing Super-resolved Technique for Axially Moving Targets,” Appl. Opt. 52(7), C11–C15 (2013).

36. J. García, Z. Zalevsky, and C. Ferreira, “Superresolved Imaging of Remote Moving Targets,” Opt. Lett. 31(5), 586–588 (2006).

37. P. J. Lu, M. Shutman, E. Sloutskin, and A. V. Butenko, “Locating Particles Accurately in Microscope Images Requires Image-processing Kernels to be Rotationally Symmetric,” Opt. Express 21(25), 30755–30763 (2013).

38. J. C. Crocker and D. G. Grier, “Methods of Digital Video Microscopy for Colloidal Studies,” J. Colloid Interface Sci. 179(1), 298–310 (1996).

39. Y. Beiderman, A. D. Amsel, Y. Tzadka, D. Fixler, V. Mico, J. Garcia, M. Teicher, and Z. Zalevsky, “A Microscope Configuration for Nanometer 3-D Movement Monitoring Accuracy,” Micron 42(4), 366–375 (2011).

40. A. Rohrbach, “Stiffness of Optical Traps: Quantitative Agreement Between Experiment and Electromagnetic Theory,” Phys. Rev. Lett. 95(16), 168102 (2005).

41. K. Svoboda and S. M. Block, “Optical Trapping of Metallic Rayleigh Particles,” Opt. Lett. 19(13), 930–932 (1994).

42. F. Hajizadeh and S. N. S. Reihani, “Optimized Optical Trapping of Gold Nanoparticles,” Opt. Express 18(2), 551–559 (2010).

43. D. G. Grier, “A Revolution in Optical Manipulation,” Nature 424(6950), 810–816 (2003).

44. E. J. G. Peterman, F. Gittes, and C. F. Schmidt, “Laser-Induced Heating in Optical Traps,” Biophys. J. 84(2), 1308–1316 (2003).

45. A. Ilovitsh and Z. Zalevsky, “Super Resolved Passive Imaging of Remote Moving Object on top of Sparse Unknown Background,” Appl. Opt. 53(28), 6340–6343 (2014).

46. T. Ilovitsh, Y. Danan, A. Ilovitsh, A. Meiri, R. Meir, and Z. Zalevsky, “Superresolved Labeling Nanoscopy Based on Temporally Flickering Nanoparticles and the K-factor Image Deshadowing,” Biomed. Opt. Express 6(4), 1262–1272 (2015).

47. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, “Faster STORM using Compressed Sensing,” Nat. Methods 9(7), 721–723 (2012).

48. O. Wagner, A. Schwarz, A. Shemer, C. Ferreira, J. García, and Z. Zalevsky, “Superresolved Imaging Based on Wavelength Multiplexing of Projected Unknown Speckle Patterns,” Appl. Opt. 54(13), D51 (2015).

49. V. G. Shvedov, A. V. Rode, Y. V. Izdebskaya, A. S. Desyatnikov, W. Krolikowski, and Y. S. Kivshar, “Selective Trapping of Multiple Particles by Volume Speckle Field,” Opt. Express 18(3), 3137–3142 (2010).

50. G. Volpe, L. Kurz, A. Callegari, G. Volpe, and S. Gigan, “Speckle Optical Tweezers: Micromanipulation with Random Light Fields,” Opt. Express 22(15), 18159–18167 (2014).

51. A. van der Horst and N. R. Forde, “Calibration of Dynamic Holographic Optical Tweezers for Force Measurements on Biomaterials,” Opt. Express 16(25), 20987–21003 (2008).

52. C. Schmitz, J. Spatz, and J. Curtis, “High-precision Steering of Multiple Holographic Optical Traps,” Opt. Express 13(21), 8678–8685 (2005).

53. K. L. Tan, S. T. Warr, I. G. Manolis, T. D. Wilkinson, M. M. Redmond, W. A. Crossland, R. J. Mears, and B. Robertson, “Dynamic Holography for Optical Interconnections. II. Routing Holograms with Predictable Location and Intensity of each Diffraction Order,” J. Opt. Soc. Am. A 18(1), 205–215 (2001).

54. Y. Roichman, A. Waldron, E. Gardel, and D. G. Grier, “Optical Traps with Geometric Aberrations,” Appl. Opt. 45(15), 3425–3429 (2006).
