
Enhancing diffractive multi-plane microscopy using colored illumination


Abstract

We present a method to increase the number of simultaneously imaged focal planes in diffractive multi-plane imaging. We exploit the chromatic properties of diffraction by using multicolor LED illumination and demonstrate time-synchronous imaging of up to 21 focal planes. We discuss the possibilities and limits given by the use of a liquid crystal spatial light modulator to display the diffractive patterns. The method is suitable for wide-field transmission and reflection microscopy.

© 2013 Optical Society of America

1. Introduction

Time-synchronous high resolution microscopic imaging of three-dimensional sample volumes is desirable in a number of applications that involve the investigation of dynamic specimens. Examples are the tracking of proteins or vesicles in living cells [1–3] or the detection of fluorescence events used for super-localization methods [4–7]. Because the high numerical aperture (NA) of microscope objectives provides not only high lateral but also high axial resolution, the depth of field in wide-field imaging is typically reduced to approximately two micrometers for high-NA lenses. This high optical sectioning capability is a useful property in many cases, yet prevents acquiring volumetric sample information with a single recording.

Many problems in volumetric imaging can be tackled using sequential scanning techniques, where a series of snapshots is taken while the sharply imaged focal plane is swept through the sample. The scanning can be achieved by moving the sample stage, the objective lens, or, which is typically much faster, by implementing remote focusing [8]. Although such scanning methods can achieve axial scanning rates in the range of kHz, they cannot provide truly synchronous frame acquisition. Holographic approaches using incoherent [9] and coherent light [10–13] can be employed, although methods of the latter class are not suitable for fluorescence and typically do not provide the high optical sectioning capability that is often desired. Furthermore, the required coherence of the laser is often the origin of unwanted interference artifacts. PSF-engineering methods [14–16] make it possible to extend the axial field of view to several microns while maintaining the localization precision given by the high NA optics. These methods are especially suited for imaging a sparse ensemble of fluorophores but less suitable for complicated sample structures. Multi-plane imaging can also be achieved using conventional beam splitters [1, 3, 17], however at the cost of increased complexity of the optical pathway. For larger refocusing distances, these techniques also introduce noticeable spherical aberrations [8].

An interesting approach to image a multitude of axial planes simultaneously was proposed by Blanchard and Greenaway in 1999 [18]. The method utilizes a diffractive element that can be thought of as an off-axis Fresnel zone plate. When illuminated with a collimated beam, such a grating will split the light into diffraction orders that show not only different propagation angles but also different degrees of divergence. Consequently, when integrated into the optical pathway of a microscope, it can produce laterally shifted images of multiple axial planes.

In the past decade, this approach has been further developed. The utilization of volume holograms has been demonstrated [19, 20]. More complex diffractive patterns that feature up to nine diffraction orders have been realized on a liquid crystal spatial light modulator (LC-SLM) [21]. Ways of compensating the dispersion caused by such gratings have been reported [22–24], which opened the door for this technique to fluorescence imaging. Very recently, the method has been refined by including high-NA (spherical) rather than low-NA (parabolic) refocusing [24], a feature which had already been suggested by Blanchard and Greenaway [18] and enables aberration-free refocusing when imaging under high NA.

An interesting question in the context of this multiplex method is what actually limits the number of simultaneously imageable axial planes. In practice, this number will be restricted either by the available signal or by the resolution of the SLM, if one chooses to use a dynamic device to display the diffractive pattern. Whereas the former constraint will be dominant in fluorescence imaging, the latter is likely to set the limit in non-fluorescence wide-field imaging, as long as a sufficiently bright illumination source is provided.

Here we present a way to increase the number of imaged axial planes using colored illumination and RGB detection. Using an LC-SLM, high-power LEDs and a consumer electronics CMOS camera, we demonstrate the synchronous imaging of as many as 21 different planes in a single snapshot. Our diffractive pattern design enables aberration-free refocusing like the methods described in Refs. [8, 24] and furthermore corrects spherical aberrations that arise from imaging through the refractive index mismatch between immersion medium and sample. This is important when high NA oil immersion or dry lenses are used to image into aqueous media.

Finally, we derive some limits of this technique that are relevant when using pixelated SLMs, where the patterns must be sampled correctly to avoid aliasing effects. Using the Nyquist criterion, we derive numbers for key properties such as the maximal refocusing length as well as the maximal field of view of the created sub-images.

2. Experimental results

Figure 1 shows the set-up we used for the experimental demonstration. The light from three colored LEDs (Luminus SST-90-R-F11-HF100, SST-90-G-F11-JG200, SSR-90-B-R11-KF300), which was spectrally narrowed by bandpass filters to (458 ± 0.8) nm, (532 ± 1.5) nm and (633 ± 1.5) nm, was combined using dichroic mirrors and sent through the condenser to illuminate the sample at an effective NA of around 0.5. The light powers coupled into the condenser lens were 8 mW, 7 mW and 20 mW for the red, green and blue wavelengths, respectively. The objective back aperture was imaged onto the panel of an LC-SLM (PLUTO system from Holoeye Photonics AG); the relay optics are not shown in the drawing. We computed a diffractive pattern that acts like a “one-to-seven” beam splitter. Each of the seven output beams shows the same diffraction angle with respect to the optical axis but a different azimuthal angle and degree of divergence, which can be chosen freely and corresponds to a specific value of defocus in the sample volume. While arranging the output beams in a square geometry (as for instance in Refs. [21, 24]) would be the natural choice given the square pixel shape, we experimentally found a slightly better intensity uniformity among the output beams for our circular geometry.
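To illustrate the structure of such a pattern, the following numpy sketch composes a “one-to-seven” phase hologram from seven carrier gratings with a common period but different azimuthal directions (the circular geometry described above), each carrying its own defocus term. Taking the phase of the naive superposition, as done here, gives uneven beam intensities; the pattern actually used in the experiments was optimized with a weighted Gerchberg-Saxton algorithm (Section 3). The grid size, carrier frequency and defocus values below are assumptions for this sketch, not the experimental parameters.

import numpy as np

N = 1080                                    # pattern diameter in pixels
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
rho = np.hypot(x, y) / (N / 2)              # normalized pupil radius
pupil = rho <= 1.0

carrier = 2 * np.pi / 8                     # phase ramp per pixel (8 px period, assumption)
defocus = np.linspace(-60, 60, 7)           # peak defocus phase per beam in rad (assumption)

field = np.zeros((N, N), dtype=complex)
for m in range(7):
    a = 2 * np.pi * m / 7                   # azimuthal direction of the m-th order
    tilt = carrier * (x * np.cos(a) + y * np.sin(a))
    lens = defocus[m] * rho**2              # paraxial defocus for brevity; the real
                                            # pattern uses spherical high-NA terms (Sec. 3)
    field += np.exp(1j * (tilt + lens))

hologram = np.angle(field) * pupil          # phase-only pattern in [-pi, pi]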


Fig. 1 Left: experimental set-up; chromatic dependence of diffraction leads to color-dependent focus values and thus to an increased number of planes that can be imaged simultaneously; the sketch shows only the simple case of a binary off-axis Fresnel zone lens, which channels the light mainly into two diffraction orders. Right: optimized hologram used for experiments; a section is magnified to resolve its fine structure.


The diffraction pattern on the SLM (shown on the right of Fig. 1) produced seven output beams per color. The zero-order image in the center suffers from aberrations that arise from the uneven SLM surface. Since these aberrations cannot be corrected, we exclude the zero-order image from our further considerations. Consequently, 21 different sub-images with different focus values are created on the camera (Fig. 2(b)). The focus values of the seven output beams in our experiments were designed such that the axial distance between neighboring planes is roughly constant. Exactly equispaced planes are not achievable, because only the plane positions of a single color can be chosen freely; the plane positions of the remaining two colors follow from the corresponding wavelength ratios. In our case the green plane positions were chosen to make the entire set of focal planes as homogeneously distributed as possible. For fine tuning these focus values it was also necessary to consider the longitudinal chromatic aberration of the whole imaging system with respect to 532 nm, which was estimated from images to be −0.5 μm for 633 nm and −1.0 μm for 460 nm. For detection we employed a consumer electronics CMOS camera (Canon 5D Mark II) controlled via LabVIEW. The images are stored in RAW format and contain a red, a green and a blue channel. The sensitivities of these channels at our illumination wavelengths were measured, and the obtained data were used to remove crosstalk between them. The camera features a Bayer sensor with about 21 megapixels in total. This number is shared between the different colors, with twice as many “green” pixels as “red” or “blue” ones. We down-sampled the higher native resolution of the green image by a factor of two in order to match the resolutions of the red and blue images. Despite the reduction, the sample plane was still sufficiently sampled (5.2 pixels per μm). A positive side effect of the reduction was an increased signal-to-noise ratio of the green image.
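The following sketch illustrates how the 21 plane positions follow from the 7 designed green focus values. In first diffraction order, the refocusing power of a displayed diffractive lens scales with wavelength, so dz_color = dz_green · (λ/λ_green) is assumed here as the wavelength-ratio rule mentioned above; the longitudinal chromatic aberration quoted in the text is added as a fixed offset. The seven green design values are illustrative placeholders, not the paper's actual values.

import numpy as np

lam_g = 532.0                                         # design wavelength (nm)
wavelengths = {"blue": 458.0, "green": 532.0, "red": 633.0}
chrom_offset_um = {"blue": -1.0, "green": 0.0, "red": -0.5}   # offsets from the text

dz_green_um = np.linspace(-5.0, 5.0, 7)               # placeholder green design values

# scale each color's focus values by its wavelength ratio and add the offset
planes = {c: dz_green_um * (lam / lam_g) + chrom_offset_um[c]
          for c, lam in wavelengths.items()}

all_planes = np.sort(np.concatenate(list(planes.values())))
print(f"covered range: {all_planes[-1] - all_planes[0]:.1f} um, "
      f"mean spacing: {np.diff(all_planes).mean():.2f} um")

With these placeholder values the 21 planes span roughly 12 μm with a mean spacing of about 0.6 μm, consistent with the distribution shown in Fig. 2(a).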


Fig. 2 Results from the imaging of a clubmoss spore; (a) distribution of imaged planes; (b) raw image at the camera; (c) frames extracted from the multi-color image ( Media 1).


Figure 2 shows results from the imaging of a clubmoss spore immersed in glycerol with a 40×, 0.75 NA objective. Figure 2(a) shows the axial positions of all 21 planes. They cover a range of about 12 μm with a mean axial distance of 0.6 μm. The colors of the bars indicate the recording wavelengths. The colored image (b) represents the raw image as delivered by the camera, and the table (c) contains the final images (Media 1). The acquisition time for the images in Fig. 2 was one second, which is too long for capturing dynamic events. With high-power LED illumination we achieved a reduction of the acquisition time to approximately 30 ms without noticeable degradation of the image quality. This time could be shortened further by choosing higher camera gain settings or by using laser illumination in conjunction with a despeckling technique such as a fast rotating diffuser.

A fair assessment of the image quality achievable with the multi-focus technique requires a comparison with an established standard technique such as sequential image recording combined with focal plane scanning. To this end, a sample volume consisting of PMMA beads in agarose gel was imaged in two different ways: once with the single-shot method, and once frame by frame using a piezo-actuated objective mount for refocusing. Figure 3 summarizes the results. Figure 3(a) compares the contrast of a single frame of the multicolor stack with an independently taken image of the same sample plane. Both images were recorded with green light. The intensity profiles along a line through both images are comparable, which indicates that a single image of the multicolor stack is not, or at most negligibly, influenced by any of the other images that are taken simultaneously. Figure 3(b) shows axial sections through both volumes. The single-shot stack was post-processed by interpolation to 20 equidistant planes with a plane spacing of 0.63 μm. Note that the sequentially recorded stack has a smaller plane spacing (0.5 μm). Overall, the sections offer comparable information, although some differences are noticeable: varying contrast between individual frames of the multicolor stack as well as slight transverse misalignments between adjacent frames. The latter effect is caused by the software which builds the image stack from the raw image.


Fig. 3 Comparison of a single-shot multicolor recording with a sequentially recorded image stack (using green illumination); the sample consists of PMMA beads in agarose gel; (a) transverse intensity profiles along the same line through the sample (marked in the inset Fig.); the frame taken from the multicolor-stack (red plot) and an independently taken acquisition of the same focal plane (blue plot) show similar contrasts; (b) axial sections through the sequentially acquired stack (upper image) and the single-shot multicolor recording; both images show comparable information; slight contrast differences and misalignments between the individual frames of the single-shot recording are noticeable.


3. Pattern design and aberration correction

Our diffractive patterns were calculated using a weighted Gerchberg-Saxton algorithm [25] that was modified to enable aberration-free refocusing by using high-NA (spherical) rather than low-NA (parabolic) lens terms, as well as to correct for aberrations that arise from a potential refractive index mismatch between immersion medium and sample. The latter effect, for instance, blurs images when an oil immersion or dry lens is used to focus deep into an aqueous sample. Because the aberration magnitude scales linearly with the focusing depth (like the lens term), the compensation is automatically correct for each color. The same algorithm has recently been applied to the direct laser writing of microstructures in glass and crystals using ultra-short laser pulses [26].
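A minimal sketch of a weighted Gerchberg-Saxton loop in the spirit of Ref. [25] is given below: it finds a phase-only pattern that distributes light evenly over M output beams, each defined by its own tilt-plus-defocus phase phi_m. The high-NA lens terms and the aberration-correction terms described in this section would simply be folded into phi_m; a paraxial defocus is used here for brevity, and the grid size and target phases are illustrative assumptions.

import numpy as np

def wgs(phi_targets, pupil, iterations=30):
    """Weighted Gerchberg-Saxton for a phase-only multi-beam hologram."""
    M = len(phi_targets)
    w = np.ones(M)                                    # per-beam weights
    holo = np.angle(sum(np.exp(1j * p) for p in phi_targets))
    for _ in range(iterations):
        # complex overlap of the current hologram with each target beam
        V = np.array([np.sum(np.exp(1j * (holo - p)) * pupil)
                      for p in phi_targets])
        w *= np.abs(V).mean() / np.abs(V)             # boost the weak beams
        holo = np.angle(sum(w[m] * V[m] / np.abs(V[m]) * np.exp(1j * phi_targets[m])
                            for m in range(M)))
    return holo

N = 256                                               # small grid for the demo
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
rho2 = (x**2 + y**2) / (N / 2)**2
pupil = rho2 <= 1.0
angles = np.linspace(0, 2 * np.pi, 7, endpoint=False)
defoci = np.linspace(-30, 30, 7)                      # peak defocus phases (rad)
phi_targets = [2 * np.pi * 0.1 * (x * np.cos(a) + y * np.sin(a)) + d * rho2
               for a, d in zip(angles, defoci)]
hologram = wgs(phi_targets, pupil)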

Further aberration correction was required to compensate for the slightly curved surface of the SLM panel. The aberration introduced by this curvature was dominated by astigmatism with a magnitude of about 0.5 rad RMS at 532 nm. This aberration was corrected for the green wavelength, which means that about +0.1 rad and −0.1 rad of astigmatism were left at the red and blue wavelengths, respectively. As mentioned earlier, the aberrations in the zero-order image cannot be corrected, which is one reason why this image was excluded from the image stack.
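The quoted residuals can be checked with a two-line computation, assuming that the phase imparted by the LC panel scales roughly inversely with wavelength (an assumption of this sketch): a correction of 0.5 rad RMS written for 532 nm then leaves a residual of 0.5 · (1 − 532/λ) rad RMS at wavelength λ.

# residual astigmatism at the red and blue wavelengths, assuming ~1/lambda
# scaling of the displayed LC phase
for lam_nm, label in [(633, "red"), (458, "blue")]:
    residual = 0.5 * (1 - 532 / lam_nm)
    print(f"{label}: {residual:+.2f} rad RMS")   # ~ +0.08 (red), -0.08 (blue)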

4. Limits for pixelated diffractive patterns

In the following we would like to investigate some fundamental limits of the technique that are set by a pixelated diffractive element. Related considerations can for instance be found in [2729]. It is evident that the available space-bandwidth product of such an element will limit the achievable field of view of a single sub-frame as well as the maximal achievable focus range. We restrict the discussion to a single wavelength and a single dimension.

We first address the question of the maximal field of view (FOV). For this it is helpful to note that the amount of information contained in the imaged FOV and in its Fourier transform (at the SLM surface) is the same. In image space, the information content can be represented by the FOV divided by the maximal sampling length that still allows one to reconstruct the entire image from the information given at the sampling points [30]. Since the maximal spatial frequency detected by the objective lens is NA/λ0 (with NA denoting the numerical aperture of the objective and λ0 the vacuum wavelength), this sampling length is, according to the Nyquist theorem, λ0/(2NA). We may call the quantity describing the information content N_PSF:

$$N_{\mathrm{PSF}} = \frac{\mathrm{FOV}}{\lambda_0/(2\,\mathrm{NA})}. \qquad (1)$$
In Fourier space, which is optically formed at the SLM plane, this quantity can be defined similarly. There it corresponds to the diameter of the light field divided by the sampling length defined by the NA of the lens in front of the SLM. Ideally, the size of the diffractive pattern is chosen to fill the entire SLM panel in order to use as many pixels as possible, and the telescope optics between objective and SLM are designed to image the objective back aperture exactly onto the diffractive pattern. We obtain the following relation:
$$N_{\mathrm{PSF}} = \frac{N\delta}{\lambda_0/(2\sin(\beta/2))}, \qquad (2)$$
where N denotes the diameter of the diffractive pattern in units of pixels and δ the side length of a pixel. The angle β is the opening angle of the light cone produced by the lens in front of the SLM (Fig. 4). This angle must be smaller than the maximal angle of diffraction α_max that can be produced by the SLM pattern (β ≤ α_max); otherwise the cones of the zero and first diffraction orders overlap, as do the corresponding images on the camera. This condition is necessary because it is difficult to cancel the zero diffraction order, which originates from non-ideal properties of the SLM. Methods to reduce the zero order are known [31, 32], but may not yet suppress it to the negligible level required for imaging applications as described here.


Fig. 4 Beam geometry in diffractive multi-plane imaging; each sub-image that is created by diffraction is represented by a separate light cone (here only two cones are shown for the sake of simplicity). In order to avoid overlapping of the cones one must either increase the diffraction angle α or narrow the FOV with a diaphragm in the intermediate image plane. The maximal possible diffraction angle αmax determines a maximum for the FOV.


The maximal diffraction angle α_max corresponds to the finest possible grating on the SLM, which has a period of two pixels. We thus have $\alpha_{\max} \approx \lambda_0/(2\delta)$. When inserted in Eq. (2), one obtains

$$N_{\mathrm{PSF}} \leq \frac{N}{2} \qquad (3)$$
as an upper limit for the number of resolved spots across the FOV. Apparently, the only crucial parameter is the number of pixels of the diffractive pattern. It is important to mention that this restriction exists only along the axis where the diffraction occurs; the orthogonal direction is not affected. Equation (3) was derived under the assumption of coherent illumination. For partially coherent illumination N_PSF is larger as a consequence of the effectively narrower PSF. When using the PLUTO device of Holoeye Photonics AG, which features 1080 rows, the maximal FOV along the direction of diffraction can thus contain up to 540 resolved spots for coherent illumination.

Based on this result we can also estimate an upper limit for the spectral bandwidth of the light. Assuming that dispersion related blurring should stay below the resolution length, we can formulate the following condition for the bandwidth:

$$\Delta\lambda_0 \leq \frac{\lambda_0}{N/2}. \qquad (4)$$
For the PLUTO modulator the maximal allowed relative bandwidth is around 0.2%, or 1 nm at a wavelength of 532 nm, for coherent illumination. For incoherent illumination the limit is stricter, whereas it relaxes for smaller FOVs.

Finally, a question of interest in the context of SLM-based refocusing concerns the accessible focusing range. It appears evident that N, i.e., the resolution of the device, but also the objective NA represent crucial parameters. Based on the Nyquist criterion we find the following limit for refocusing with an SLM (see Appendix for a derivation):

$$|\Delta z| \leq \left(\frac{1}{2} - \frac{1}{P}\right)\frac{N\lambda_0}{2\,\mathrm{NA}^2}\sqrt{n^2 - \mathrm{NA}^2}. \qquad (5)$$
Here, n is the refractive index of the immersion medium and sample, which are assumed to be the same, and P is the period (in units of pixels) of a grating that is added to the diffractive lens. These gratings, which are necessary to ensure that all sub-images of a multi-plane image are laterally separated on the camera sensor, unfortunately reduce the available focusing range. The range equals zero for a grating period of 2, since this is the finest possible grating and any additional modulation would violate the sampling criterion. Note that in the case of a refractive index mismatch between the sample and immersion media, the focus will show an additional shift because of refraction at the interface; the total focus shift can then be approximated by Δz · n2/n1, where n2 denotes the refractive index of the sample and n1 that of the immersion medium. One has to bear in mind that a refractive index mismatch also causes spherical aberrations, the correction of which was included experimentally but not in the considerations made here.
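As a numerical illustration, the short script below evaluates Eqs. (3)–(5) for the PLUTO parameters quoted in the text; the objective NA and the grating period P are example values, not taken from the experiments.

import numpy as np

N, lam0 = 1080, 532e-9          # pattern diameter (pixels), vacuum wavelength (m)
NA, n, P = 0.75, 1.33, 6        # objective NA, refractive index, grating period (examples)

n_psf = N / 2                                                            # Eq. (3)
dlam_max = lam0 / (N / 2)                                                # Eq. (4)
dz_max = (0.5 - 1 / P) * N * lam0 / (2 * NA**2) * np.sqrt(n**2 - NA**2)  # Eq. (5)

print(f"resolved spots across the FOV : {n_psf:.0f}")                # 540
print(f"bandwidth limit               : {dlam_max * 1e9:.2f} nm")    # ~0.99 nm (~0.2 %)
print(f"defocus limit |dz|            : {dz_max * 1e6:.0f} um")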

Figure 5 illustrates the available focusing range as a function of the objective NA for three different grating periods. The plot in Fig. 5(a) is split for better visualization: the left graph shows NA values ranging from 0.25 to 0.6, and the right graph from 0.6 to 1.3. The lines mark the onset of under-sampling and therefore the maximum value of |Δz| that can be set. The plots were calculated using the following parameters: N = 1080, n = 1.33 (focusing into water with a water immersion lens), and λ0 = 532 nm.


Fig. 5 Maps of accessible focus ranges: (a) The lines mark the maximal focal shifts that can be realized with pixelated diffractive lenses as a function of the objective NA. Three different values for the period (in pixels) of a grating lying on top of the lens have been considered, assuming a refractive index of n = 1.33; (b) accessible focusing range as function of the lateral image shift; if both lateral axes are considered, the focusing range defines a diamond-shaped volume (inset); objective pupil apodizations due to diffractive losses have been calculated for four different scenarios. The corresponding intensity distributions in the objective pupil are shown on the right, with the average diffraction efficiencies denoted in the pupil centers. (c) two examples for the distribution of multiple focal planes. The planes can be freely arranged within the diamond-shaped volume, but their projections onto the x- and y-axes must not overlap. The plane in the center of both examples is the zero diffraction order image which cannot be moved. All calculations presented in this figure assumed a wavelength of 532 nm, a hologram measuring 1080 pixels in diameter and matching refractive indices of sample and immersion medium.


Figure 5(b) presents similar information in an alternative way. Here, the accessible focusing range is shown as a function of the lateral image shift Δx that is caused by a superposed grating. Accessible focal values lie within the diamond-shaped areas. Without a grating (i.e., Δx = 0), the focusing range is maximal. It decreases linearly with increasing grating vector (and thus image shift) and finally reaches zero for the finest possible grating. The slopes which define the diamond-shaped areas can be calculated from Eq. (5) by expressing the grating period P as a function of the lateral image shift it produces in the focal plane: P = λ0f/(Δx δ). Three plots are shown for different objective lenses. For the 0.75 NA lens, the pupil apodization resulting from diffractive losses is shown for four different scenarios (numbered 1 to 4 in the figure). The diffraction patterns used in scenarios 2 to 4 are at the threshold of undersampling. The total efficiencies of the diffraction patterns are denoted in the pupil centers. The apodization functions were calculated by performing a fast Fourier transform of the pixelated diffraction patterns, followed by a multiplication with the sampled Fourier transform of the pixel shape (i.e., a square) and a subsequent inverse fast Fourier transform. Note that diffraction-related apodization generally does not show radial symmetry (e.g., scenario 3), which results in a correspondingly anisotropic point spread function. This anisotropy, however, is entirely removable by numerical post-processing, since the apodization function is known.
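A sketch of the diamond-shaped boundary of Fig. 5(b) is given below. Combining Eq. (5) with a relation between grating period and lateral image shift makes the boundary linear in |Δx|; the shift relation used here, Δx = λ0N/(2P·NA), follows from the normalized pupil coordinates of the Appendix and is an assumption of this sketch. The example NA is chosen to mimic the high-NA case of Fig. 5(c).

import numpy as np

N, lam0, NA, n = 1080, 532e-9, 1.2, 1.33      # example high-NA configuration

dz0 = N * lam0 / (4 * NA**2) * np.sqrt(n**2 - NA**2)   # Eq. (8): range at dx = 0
dx_max = lam0 * N / (4 * NA)                           # shift of the finest grating (P = 2)

dx = np.linspace(-dx_max, dx_max, 201)
dz_edge = dz0 * (1 - np.abs(dx) / dx_max)              # upper diamond edge; lower is -dz_edge

print(f"dz0 = {dz0 * 1e6:.0f} um, dx_max = {dx_max * 1e6:.0f} um")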

This kind of visualization can be helpful because it allows assessing the feasibility of a desired multi-plane configuration very quickly. It allows finding trade-offs between the number of sub-images, their FOV and their focus values in a graphical way. Figure 5(c) depicts two exemplary multi-plane configurations for a high-NA objective. The left one features two planes (three if the zero-order image is counted as well) with a FOV along the diffraction axis of 50 μm and focus values of +21 μm, 0 and −21 μm. The centers of the refocused planes lie on the dashed diamond-shaped line, which means that any further refocusing will lead to artifacts caused by an undersampled diffraction pattern. The example on the right has six planes (seven with the zero order) with an axial spacing of 11 μm and a FOV of 25 μm.

It is worth mentioning that the diamond-shaped regions are volumes rather than areas if the y-axis is considered as well. The shape of the volume is shown as an inset. These volumes can also be understood as the space within which a focal spot can be steered using the SLM in an optical tweezers or point-scanning imaging application.

5. Discussion

We have presented a diffractive method to achieve parallel imaging of multiple axial planes which employs color multiplexing to increase the information that can be captured in a single snapshot. Color multiplexing can be of particular interest in combination with etched diffractive patterns, which can generate a very high number of different axial planes and thus a large amount of information.

Our diffractive pattern design implements high NA refocusing and, at the same time, compensates aberrations introduced by focusing through a refractive index mismatch. We have investigated the limits of this technique, which are given by the resolution of the diffractive pattern, and provide a simple mathematical relation that allows assessing the feasibility of the SLM-based multi-focus approach for a given practical situation.

The method is suitable for transmission and reflection wide-field microscopy. Diffractive multi-plane imaging is also applicable to fluorescence imaging for bright fluorophores that allow narrowing the emission band to the limit derived in this paper. Naturally, multi-colored illumination does not offer any advantage there, because the wavelength of the imaged light is determined by the fluorophore.

Care has to be taken with samples showing significant wavelength-dependent absorption. In such cases, the image information will also be influenced by the natural color of the sample. In “severe” cases, i.e., when a large amount of information is contained in the color distribution within the sample, one thus obtains three sets of images, each showing seven different axial planes, rather than a single set showing the object in 21 axial planes. For the task of recording a single-shot 3D stack of a colored specimen, the axial shifts between the images of different colors are undesired. Here, the distribution of axial planes is ideally chosen to lie symmetrically around z = 0, which is different from the plane distribution depicted in Fig. 2. Chromatic shifts would then be less pronounced, and a 3D color stack could be interpolated from the available data more easily. It is also worth noting that for measuring the color-dependent absorption of a sample, the narrow-band three-color illumination used in our work is of limited use, as only three points of the spectral function are sampled.

The use of LC-SLMs for image multiplexing offers advantages, especially flexibility: diffractive patterns can be tailored and switched in real time. For instance, it would be possible to implement an “autofocus” feature that automatically adapts the defocus of a particular sub-image according to the movement of the structures it shows. Such a feature would make it possible to keep multiple freely swimming organisms in focus simultaneously.

On the other hand, limitations exist in terms of achievable defocus and field of view. A crucial parameter in this respect is the resolution of the SLM, which, however, has the potential to increase in future generations of light modulators. The polarization sensitivity of today’s devices is another undesired property, since it limits the achievable efficiency to a maximum of 50% for applications involving unpolarized light. This problem could be solved by using two SLMs. For transmissive modulators, a specific geometry also allows one to modulate unpolarized light [33].

6. Appendix

Derivation of Eq. (5)

For imaging with high-NA lenses, a diffractive pattern that performs defocusing should feature a spherical rather than a parabolic shape [8]. Using the normalized objective pupil radius ρ = r/(f·NA), where r is the radial coordinate and f the focal length of the objective lens, the defocus phase function for a focal shift Δz can be expressed as [26]:

$$D(\rho) = \frac{2\pi}{\lambda_0}\sqrt{n^2 - \mathrm{NA}^2\rho^2}\,\Delta z, \qquad (6)$$
where n is the refractive index of the immersion medium and the sample. Note that for the case of a refractive index mismatch between these media, the focus will show an additional shift because of the refraction at the interface. Efficient defocusing is achieved as long as this spherical function is correctly sampled by the SLM. This is fulfilled if the derivative of the above expression is smaller than M ·π/δ, where M is the magnification at which the objective pupil is imaged onto the SLM. In units of the normalized radius, the pixel size δ can be expressed as M · 2/N. The lens term of Eq. (6) has its steepest gradient at ρ = 1, i.e. at the fringe of the lens. The Nyquist criterion can therefore be expressed as follows:
$$\left|\frac{dD(\rho)}{d\rho}\right|_{\rho=1} \leq \frac{\pi N}{2}. \qquad (7)$$
Finally, an upper limit for defocusing with an SLM can be defined as:
$$|\Delta z| \leq \frac{1}{4}\frac{N\lambda_0}{\mathrm{NA}^2}\sqrt{n^2 - \mathrm{NA}^2}. \qquad (8)$$
Obviously, this range depends not only on the SLM resolution N, but also on the wavelength, the objective NA and the refractive index. It should be noted, however, that the diffraction efficiency of the lens will drop even before this sampling limit is reached. At the sampling threshold defined above, the diffractive lens shows an efficiency drop from practically 100% in its center to 41% at its boundary (i.e., the diffraction efficiency of a binary grating). For multi-focal plane imaging, many diffractive lenses have to be combined in a single diffraction pattern. Each of these lens functions must be superposed with an individual blazed grating such that the final images at the camera are laterally separated. Adding such a grating reduces the accessible defocus range. In this case the Nyquist criterion takes the following form:
$$\left|\frac{d}{d\xi}\bigl(D(\xi) + k\xi\bigr)\right|_{\xi=1} \leq \frac{\pi N}{2}. \qquad (9)$$
In the above equation we consider only the normalized Cartesian coordinate ξ = x/(f·NA) and assume that the x-axis is parallel to the grating vector k, with k > 0. k can be expressed in terms of the grating period P (in pixels): k = 2π/(P · 2/N) = πN/P. We then obtain the following defocusing limit:
$$|\Delta z| \leq \left(\frac{1}{2} - \frac{1}{P}\right)\frac{N\lambda_0}{2\,\mathrm{NA}^2}\sqrt{n^2 - \mathrm{NA}^2}. \qquad (10)$$
As expected, the refocusing range is zero for a grating period of 2, since this is the finest possible grating and any additional modulation would violate the sampling criterion. Also, in the limit P → ∞, the expression becomes identical to that of Eq. (8).
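The derivation can be verified symbolically. The sketch below (assuming sympy is available) differentiates the defocus phase of Eq. (6), inserts the grating term k = πN/P into the sampling bound of Eq. (9), and solves for the defocus limit, which should be equivalent to Eq. (10).

import sympy as sp

rho, lam0, NA, n, dz, Nn, P = sp.symbols('rho lambda_0 NA n Delta_z N P',
                                         positive=True)

D = 2 * sp.pi / lam0 * sp.sqrt(n**2 - NA**2 * rho**2) * dz   # Eq. (6)
slope = (-sp.diff(D, rho)).subs(rho, 1)       # |dD/drho| at rho = 1 (D' < 0 there)
k = sp.pi * Nn / P                            # blazed grating in normalized units

dz_limit = sp.solve(sp.Eq(slope + k, sp.pi * Nn / 2), dz)[0]
print(sp.simplify(dz_limit))
# -> (1/2 - 1/P) * N * lambda_0 * sqrt(n**2 - NA**2) / (2 * NA**2), i.e. Eq. (10)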

Acknowledgments

This work was supported by the ERC Advanced Grant 247024 catchIT.

References and links

1. P. Prabhat, S. Ram, E. Ward, and R. Ober, “Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions,” IEEE Trans. Nanobiosci. 3, 237–242 (2004).

2. I. Sbalzarini and P. Koumoutsakos, “Feature point tracking and trajectory analysis for video imaging in cell biology,” J. Struct. Biol. 151, 182–195 (2005).

3. T. M. Watanabe, T. Sato, K. Gonda, and H. Higuchi, “Three-dimensional nanometry of vesicle transport in living cells using dual-focus imaging optics,” Biochem. Bioph. Res. Co. 359, 1–7 (2007).

4. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nat. Meth. 5, 527–529 (2008).

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).

6. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91, 4258–4272 (2006).

7. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Meth. 3, 793–795 (2006).

8. E. J. Botcherby, R. Juskaitis, M. J. Booth, and T. Wilson, “Aberration-free optical refocusing in high numerical aperture microscopy,” Opt. Lett. 32, 2007–2009 (2007).

9. J. Rosen and G. Brooker, “Fresnel incoherent correlation holography (FINCH): a review of research,” Adv. Opt. Tech. 1, 151–169 (2012).

10. W. Xu, M. Jericho, H. Kreuzer, and I. Meinertzhagen, “Tracking particles in four dimensions with in-line holographic microscopy,” Opt. Lett. 28, 164–166 (2003).

11. P. Marquet, B. Rappaz, P. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30, 468–470 (2005).

12. P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital holography,” Opt. Express 13, 6738–6749 (2005).

13. J. Sheng, E. Malkiel, and J. Katz, “Digital holographic microscope for measuring three-dimensional particle distributions and motions,” Appl. Opt. 45, 3893–3901 (2006).

14. L. Holtzer, T. Meckel, and T. Schmidt, “Nanometric three-dimensional tracking of individual quantum dots in cells,” Appl. Phys. Lett. 90, 053902 (2007).

15. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. USA 106, 2995–2999 (2009).

16. M. D. Lew, S. F. Lee, M. Badieirostami, and W. E. Moerner, “Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects,” Opt. Lett. 36, 202–204 (2011).

17. E. Toprak, H. Balci, B. H. Blehm, and P. R. Selvin, “Three-dimensional particle tracking via bifocal imaging,” Nano Lett. 7, 2043–2045 (2007).

18. P. Blanchard and A. Greenaway, “Simultaneous multiplane imaging with a distorted diffraction grating,” Appl. Opt. 38, 6692–6699 (1999).

19. Y. Luo, P. J. Gelsinger-Austin, J. M. Watson, G. Barbastathis, J. K. Barton, and R. K. Kostuk, “Laser-induced fluorescence imaging of subsurface tissue structures with a volume holographic spatial-spectral imaging system,” Opt. Lett. 33, 2098–2100 (2008).

20. Y. Luo, I. K. Zervantonakis, S. B. Oh, R. D. Kamm, and G. Barbastathis, “Spectrally resolved multidepth fluorescence imaging,” J. Biomed. Opt. 16, 096015 (2011).

21. C. Maurer, S. Khan, S. Fassl, S. Bernet, and M. Ritsch-Marte, “Depth of field multiplexing in microscopy,” Opt. Express 18, 3023–3034 (2010).

22. P. Blanchard and A. Greenaway, “Broadband simultaneous multiplane imaging,” Opt. Commun. 183, 29–36 (2000).

23. Y. Feng, P. A. Dalgarno, D. Lee, Y. Yang, R. R. Thomson, and A. H. Greenaway, “Chromatically-corrected, high-efficiency, multi-colour, multi-plane 3D imaging,” Opt. Express 20, 20705–20714 (2012).

24. S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. Dahan, and M. G. L. Gustafsson, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Meth. 10, 60–63 (2013).

25. R. Di Leonardo, F. Ianni, and G. Ruocco, “Computer generation of optimal holograms for optical trap arrays,” Opt. Express 15, 1913–1922 (2007).

26. A. Jesacher and M. J. Booth, “Parallel direct laser writing in three dimensions with spatially dependent aberration correction,” Opt. Express 18, 21090–21099 (2010).

27. L. Golan, I. Reutsky, N. Farah, and S. Shoham, “Design and characteristics of holographic neural photo-stimulation systems,” J. Neural Eng. 6, 066004 (2009).

28. A. Jesacher and M. Ritsch-Marte, “Multi-focal light microscopy using liquid crystal spatial light modulators,” in International Symposium on Optomechatronic Technologies (ISOT) 2012 (2012), pp. 1–2.

29. P. S. Salter, Z. Iqbal, and M. J. Booth, “Analysis of the three-dimensional focal positioning capability of adaptive optic elements,” Int. J. Optomechatronics 7, 1–14 (2013).

30. G. Di Francia, “Resolving power and information,” J. Opt. Soc. Am. 45, 497–501 (1955).

31. D. Palima and V. R. Daria, “Holographic projection of arbitrary light patterns with a suppressed zero-order beam,” Appl. Opt. 46, 4197–4201 (2007).

32. E. Ronzitti, M. Guillon, V. de Sars, and V. Emiliani, “LCoS nematic SLM characterization and modeling for diffraction efficiency optimization, zero and ghost orders suppression,” Opt. Express 20, 17843–17855 (2012).

33. G. Love, “Liquid-crystal phase modulator for unpolarized light,” Appl. Opt. 32, 2222–2223 (1993).

Supplementary Material (1)

Media 1: AVI (250 KB)
