
High throughput multichannel fluorescence microscopy with microlens arrays

Open Access

Abstract

We present a multichannel fluorescence microscopy technique for high throughput imaging applications. A microlens array with over 140,000 elements is used to image centimeter-scale samples at up to 18.1 megapixels per second. Large field-of-view multichannel fluorescent imaging is demonstrated in both sequential and parallel geometries. The extended dynamic range of this approach is also discussed.

© 2014 Optical Society of America

1. Introduction

Fluorescence microscopy is widely used in biological research to visualize morphology from the whole organism down to the cellular level [1]. In the field of high content screening (HCS), fluorescence microscopy followed by image analysis is used to quantify the reactions of cells to candidate drugs at varying dosages [2,3]. Typically this involves imaging microwell plates. At the time of writing, automated microscopes require at least ~1-2 seconds per imaging position [4], and are outfitted with cameras with up to 4.66 megapixels [5]. Such microscopes therefore achieve throughputs of ~4.66 megapixels per second (Mpx/s) at best. For 7.3mm square wells imaged at 0.5μm/pixel, this corresponds to ~73 minutes per 96-well plate. The imaging throughput represents a bottleneck to drug discovery efforts, and methods that improve upon it would be greatly advantageous.
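As a rough check, the ~73 minute figure follows directly from the stated well size, sampling density and camera throughput. The short calculation below is a back-of-the-envelope sketch under those stated assumptions only, not a specification of any particular instrument.

```python
# Back-of-the-envelope check of the ~73 minutes per plate quoted above,
# assuming 7.3mm square wells sampled at 0.5um/pixel and a sustained
# best-case throughput of 4.66 Mpx/s.
pixels_per_well = (7.3e-3 / 0.5e-6) ** 2     # 14,600 x 14,600 pixels
pixels_per_plate = 96 * pixels_per_well
seconds = pixels_per_plate / 4.66e6
print(f"{seconds / 60:.0f} minutes per 96-well plate")   # ~73 minutes
```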

A conventional automated wide-field microscope builds up an image by stitching together 10³–10⁴ smaller fields-of-view (FOVs), each imaged by a microscope objective. After each small FOV is acquired, the sample must be moved by a distance equal to the linear dimension of the field of view before the next image can be taken. Additionally, the position of the microscope objective must be adjusted so that the subsequent image is in focus. Consequently, most of the image acquisition time is spent on mechanical movement rather than on photon collection.

Recently, arrayed and structured illumination imaging platforms that aim to reduce imaging time have been introduced [6–12]. The general theme is to illuminate the sample with an array of focal spots that act as a group of scanning microscopes. Photons are collected with a camera at high speed, increasing the information throughput. Crucially, all of these methods allow for photon collection while the sample is moving – which is impossible with a regular wide field microscope due to motion blur. Continuously imaging during sample movement relaxes the demands for fast stage accelerations over large distances.

These focal spot scanning techniques have been demonstrated in a brightfield configuration by employing a holographic mask to shape an illuminating laser beam into an array of focal spots [6,10]. These demonstrations create images where light absorption is the contrast mechanism. Similar systems have also been used to image fluorescence where the focal spot array is created by the Talbot effect generated by a microlens array illuminated by a laser beam [7,8]. The work in this paper is the latest and most advanced implementation of our previously reported large FOV microscope that uses a microlens array to directly focus a laser beam into a grid of focal spots [11,12]. Most significantly, this is the first multi-wavelength demonstration of our high throughput microscopy technique.

We demonstrate acquisition of high-resolution fluorescence images free of the ghosting and streaking artifacts that are often seen in other multi-spot microscopy systems (Fig. 3 in [7], Fig. 5 (b1) in [8] and Fig. 6 in [10]). Ghosting and low contrast can be particularly problematic in systems where the focal spot grid is created by diffractive effects, because any residual laser excitation outside of the focal spots contributes to the background, degrading the signal-to-noise ratio and resolution and decreasing overall system efficiency. Our system’s focusing mechanism is entirely refractive, which results in high efficiency and a high signal-to-background ratio.

2. System setup and characterization

Our high throughput imaging approach eliminates mechanical dead time by continuously imaging the sample as it is being scanned in two dimensions [11,12]. The optical layout of our approach is shown in Fig. 1. A multi-wavelength excitation laser (Laserglow, 473/532/658nm) emits a beam with an output power of up to 150mW per channel. The beam is expanded by a 50x microscope objective and reflected into the optical path by a quad-band dichroic mirror (Chroma zt405/473/532/660rpc). A 125μm pitch hexagonal grid microlens array (MLA) splits the laser beam into an array of focal spots. Because the laser beam is not collimated when it hits the microlens array, the microlenses at the periphery will create focal spots at a field angle of up to 2.5°. This is indicated by the slight tilt of the focal spots created by the microlens array in Fig. 1.


Fig. 1 Schematic of the extended dynamic range microlens microscope. A microscope objective (OBJ) expands a laser beam (LB) that is focused into an array of focal spots by a microlens array (MLA) on a fluorescent sample (S). The sample sits on a raster scanning piezo stage (PS) and the MLA is imaged onto a camera (CAM) by a single lens reflex (SLR) lens. A quad-band dichroic mirror (DM) reflects the laser lines (473/532/658nm) while passing fluorescence wavelengths; an emission filter (EF) provides additional wavelength filtering. Inset i) Raw image of the MLA as recorded by the camera. Scale bar is 1 mm. Inset ii) Zoom-in of the green outlined area in i). Representative N = 9 superpixels are outlined in red; N = 1 superpixels are outlined in blue. Inset iii) Representative focal spots created by the MLA with red (R), green (G) and blue (B) lasers at field angles of 0° and 2.5°. Scale bar is 2μm.


Each microlens in the array functions as an independent point scanning microscope by collecting the fluorescence emitted at its focal spot and relaying it back to a CMOS camera (Basler acA2000-340km) via a 50mm focal length, f/1.4 single lens reflex (SLR) camera lens with an aperture set to f/8. The SLR lens is placed 400mm from the microlens array, equal to the separation between the focal plane of the expansion microscope objective and the microlens array.

The fluorescence collected by each microlens exits its aperture at the same angle as the laser illumination at that given position in the microlens array. Consequently, the fluorescence converges to the center of the SLR aperture. This geometry is critical for avoiding vignetting from microlenses towards the periphery of the array. For example, if the illuminating laser beam were collimated, some of the signal from non-central microlenses would miss the physical aperture of the SLR lens. This is a similar concept to the de-scanning mirrors in a point scanning confocal microscope [13]. Without de-scanning the fluorescence collected by the microscope objective, the signal in a confocal microscope would miss the confocal pinhole for non-zero field angles (i.e. away from the center of the FOV).
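As an aside, the ~2.5° maximum field angle quoted earlier is consistent with this geometry. The check below assumes that the beam diverges from a point 400mm behind the MLA (the stated objective-focal-plane-to-MLA separation) and that the outermost microlenses used sit roughly 17.5mm off axis (half of the ~3.5cm imaged width); both numbers are our inference, for illustration only.

```python
import math

# Hypothetical geometric check of the ~2.5 deg maximum field angle, assuming a
# divergence point 400mm behind the MLA and microlenses used up to ~17.5mm off
# axis. These values are inferred from the text, not measured here.
divergence_distance_mm = 400.0
max_off_axis_mm = 17.5
field_angle_deg = math.degrees(math.atan(max_off_axis_mm / divergence_distance_mm))
print(f"maximum field angle ~ {field_angle_deg:.1f} deg")   # ~2.5 deg
```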

A fluorescence filter between the SLR lens and the camera filters the wavelength range appropriate for the fluorophore being employed. The SLR placement (400mm from the MLA) results in a ~7x de-magnified image of the microlens array at the camera sensor plane. The fluorescence distribution on the camera sensor is an array of bright spots (Fig. 1, (i) and (ii)), with each corresponding to an image of a microlens. The brightness of each microlens imaged in this way is proportional to the fluorescence excited from the sample at its focal spot. The set of camera sensor pixels corresponding to the image of each microlens can therefore be thought of as playing the same role as the point detector of a conventional scanning confocal microscope.

The camera acquires a video at 200 frames per second (fps) with the camera gain set to its minimal value. The sample is raster scanned under the focal spot array as the video is recorded. The sample sits on a closed loop piezoelectric stage (Newport NPXY200SG) which is driven by a sawtooth wave to yield a speed of 100μm/s along the x-direction, resulting in a sampling density of 0.5μm per camera frame. A slow linear ramp of 0.37μm/s is applied along the y-direction. The image of the portion of the sample gathered by each microlens (a sub-FOV) is assembled by summing the pixel values of that microlens’s image in each frame of the raw video and reorganizing the resulting time series into a two-dimensional image. This image has dimensions of 135μm x 118μm and a pixel size of 0.5μm. Fluctuations in laser intensity are mitigated by dividing by the normalized average of all microlens intensities from each camera frame. A large FOV image is created by stitching together all of the sub-FOVs on a hexagonal lattice using nonlinear blending to minimize seams [14].
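The sub-FOV assembly can be summarized with a short sketch. The snippet below is illustrative only (the authors' processing code is not published here): it assumes the raw frames are available as 8-bit arrays, approximates the per-frame laser normalization by the frame mean, and uses the stated scan parameters (0.5μm sampling, 135μm x 118μm sub-FOV, i.e. 270 x 236 samples).

```python
import numpy as np

# Illustrative reconstruction of one microlens sub-FOV from the raw video.
# frames: iterable of 2D uint8 camera frames; superpixel_slice: (row_slice,
# col_slice) selecting the pixels belonging to this microlens's image.
N_X, N_Y = 270, 236          # samples per x sweep, number of scan lines

def subfov_from_video(frames, superpixel_slice):
    values = []
    for f in frames:
        frame = f.astype(np.float64)
        signal = frame[superpixel_slice].sum()     # summed superpixel value
        values.append(signal / frame.mean())       # crude laser-fluctuation correction
    values = np.asarray(values)[: N_X * N_Y]
    return values.reshape(N_Y, N_X)                # one row per x sweep of the stage
```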

We use standard optical lithography followed by reflow to fabricate hexagonally packed MLA masters that are subsequently replicated into an optical adhesive (NOA 61) on a glass slide [11,12]. This replicated MLA is used for imaging experiments. The microlenses in this work have diameters of 122μm and sags of 11.7μm. The entire array measures 4.5cm x 4.5cm and contains more than 140,000 microlenses. We observe no measurable axial chromatic aberration: the MLA has a focal length of 248μm±2μm (NA = 0.24) for all three laser wavelengths.
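These specifications are mutually consistent, as the short check below illustrates; it simply recomputes the lens count, numerical aperture and working distance from the quoted geometry (hexagonal packing assumed) rather than re-measuring anything.

```python
import math

# Consistency check of the quoted MLA geometry (nominal values only).
pitch_mm, array_mm = 0.125, 45.0
focal_um, sag_um, lens_radius_um = 248.0, 11.7, 61.0    # 122um diameter lenses

cell_area = (math.sqrt(3) / 2) * pitch_mm ** 2          # area per lens on a hexagonal grid
n_lenses = array_mm ** 2 / cell_area                    # ~150,000 (>140,000 stated)
na = math.sin(math.atan(lens_radius_um / focal_um))     # marginal-ray NA ~ 0.24
wd_um = focal_um - sag_um                               # working distance = 236.3um
print(f"~{n_lenses:,.0f} lenses, NA ~ {na:.2f}, Wd = {wd_um:.1f} um")
```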

The resolution of our system is determined by the focal spot size. The focal spots created by the MLA do not reach the diffraction limit owing to spherical aberration inherent in the reflow molding fabrication technique [15]. We measure the full width at half maximum (FWHM) values of the microlens focal spots by imaging the focal plane with a microscope objective with a numerical aperture (NA) of 0.80 and a magnification of 100×. Gaussian fits are applied to the images of the focal spots (Fig. 1, (iii)), for field angles of 0° and 2.5° (the maximum field angle used in this work). A focal stack is acquired by moving the microlens array slowly through the focus of the microscope objective. We then extract the frame with the highest peak pixel value and use this image for focal spot characterization. The focal length is determined experimentally by recording the distance between this focal spot plane and the base of the microlens. Thus, the working distance Wd of the microlens array is equal to the focal length minus the sag: Wd = 248μm - 11.7μm = 236.3μm.

The small field angle has almost no discernible effect on the focal spot size and shape. The focal spot FWHMs at 0° field angle are 1.44μm±0.03μm, 1.30μm±0.03μm and 1.20μm±0.03μm for red, green and blue excitation lasers, respectively. At a field angle of 2.5°, the FWHMs are 1.47μm±0.03μm (red), 1.30μm±0.03μm (green) and 1.22μm±0.03μm (blue). The uncertainty is a 95% confidence bound for the Gaussian FWHM parameter when fit to an experimentally measured focal spot. The FWHM values are slightly larger than the FWHMs for a diffraction-limited 0.24 NA lens approximated by a Gaussian focal spot: 1.37μm, 1.10μm and 0.98μm, respectively.
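For completeness, the sketch below shows one way such a FWHM and its confidence bound could be extracted from a line profile through a measured focal spot using a Gaussian fit; the function names and the use of a 1D profile are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Gaussian fit to a 1D profile through a measured focal spot.
def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_from_profile(profile, pixel_size_um):
    x = np.arange(profile.size) * pixel_size_um
    p0 = [profile.max() - profile.min(), x[np.argmax(profile)], 0.5, profile.min()]
    popt, pcov = curve_fit(gaussian, x, profile, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
    fwhm_95 = 2.0 * np.sqrt(2.0 * np.log(2.0)) * 1.96 * np.sqrt(pcov[2, 2])  # ~95% bound
    return fwhm, fwhm_95
```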

Fluorescent samples are not completely flat over large areas. In HCS applications, fluorescent samples typically reside in microwell plates. For our system to have applicability in drug discovery labs that employ HCS, it is necessary that the microwell plate flatness be sufficient to ensure that the height variation within the FOV is less than the depth of field of the system. Figure 2(a) shows a contact profilometry trace of the bottom of an Ibidi 96-well plate. Over a 3cm line trace, the variation in well height is ~15μm. The Fig. 2(a) insets show a focus stack of a FOV containing wrinkled 5.3μm diameter Nile Red beads (535/575nm excitation/emission peaks), imaged with the green excitation channel (532nm) of the microlens microscope. Subjectively, the image quality variation over the 15μm range is nearly imperceptible. We quantify the imaging quality by imaging isolated sub-resolution 500nm Nile Red fluorescent beads (Invitrogen), yielding the point spread function (PSF) of the green channel of the system. The modulation transfer function (MTF) for each imaging depth is calculated by taking the magnitude of the Fourier transform of the measured PSF [16]. In Fig. 2(b) we plot the MTF and PSF for imaging depths 6μm above, 9μm below and at the plane of best focus. As per convention, the MTFs are normalized to unity. Corroborating the observation that the imaging quality varies minimally over a 15μm depth of field, the MTF shows almost no change over this range. Interestingly, at low spatial frequencies, the MTF of our system lies above that of a diffraction-limited 0.24 NA circular aperture wide field system. This is likely due to apodization of the microlens apertures by the steep angle of the lens at the periphery [15]. The result is increasing reflection losses towards the edge of the microlenses. This effect manifests itself as low-pass filtering. Note that this does not mean that our microlenses are more efficient at imaging low spatial frequencies than a diffraction-limited system. Because the MTF plot is normalized, the correct interpretation is that the low spatial frequency signal is boosted relative to the high spatial frequency signal when compared to the diffraction-limited case.
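The MTF calculation itself is straightforward; a minimal sketch is given below, assuming the measured PSF is available as a small 2D array with a known pixel size. This is the standard |FFT(PSF)| construction described above, normalized to unity, and is not specific to our instrument.

```python
import numpy as np

# Minimal MTF-from-PSF sketch: magnitude of the Fourier transform of the
# measured PSF, normalized to 1 at zero spatial frequency.
def mtf_from_psf(psf, pixel_size_um):
    psf = psf.astype(np.float64)
    psf /= psf.sum()
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    mtf /= mtf.max()
    # Spatial frequency axis (cycles/um), assuming a square PSF image.
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=pixel_size_um))
    return freqs, mtf
```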


Fig. 2 a) Blue curve: Contact profilometry height trace along a 3cm line on a 96-well plate. Greyed out vertical bars are sections of plastic support on the well plate where there is no fluorescent sample. Insets: FOV of 5μm diameter fluorescent microspheres (Nile Red, green channel) at the focal distances indicated by the dotted horizontal lines. Positions are relative to the best focus plane. Negative distances are closer to the MLA and positive distances are farther away from the MLA. Scale bar is 20μm. b) Modulation transfer functions (MTFs) for the green laser channel at −6 (red), 0 (green) and 9μm (blue) from the best focus plane. Diffraction limited MTFs for 0.24 NA wide field and confocal microscopes with circular apertures are shown in purple and cyan, respectively. All MTFs are normalized to a maximum value of 1. Insets: Image of a 500nm bead (the PSF) placed at −6 (red), 0 (green) and 9μm (blue) from the best focus plane. Scale bar is 2μm.


3. Sequential multichannel imaging

To demonstrate the high throughput capabilities of our system, we fill 16 (4 x 4) wells of a 96-well plate (Ibidi) with a mixture of Dragon Green (Bangs Labs), Nile Red (Invitrogen) and Flash Red (Bangs Labs) fluorescent beads. The Dragon Green and Flash Red beads are nominally 7.3μm in diameter and have excitation/emission peaks at 480/520nm, and 660/690nm, respectively. The Nile Red beads have excitation/emission peaks at 535/575nm, are nominally 5.3μm in diameter and have been aged so that their surface is wrinkled, yielding high spatial frequency features. Three separate images of the 4 x 4 well area are acquired with laser wavelengths of 473nm (blue), 532nm (green) and 658nm (red). The camera exposure time is 3.5ms/frame and the approximate optical power in each focal spot is 0.13μW, 0.031μW and 0.037μW for blue, green and red lasers, respectively. For 473nm excitation, a 10nm FWHM bandpass fluorescence emission filter centered at 500nm is used. Long pass filters with cut on wavelengths at 575nm and 664nm are used for excitation wavelengths of 532nm and 658nm, respectively. These excitation wavelength and emission filter combinations separate the beads into their constituent populations without the need for spectral unmixing.

Figure 3 shows a 3-channel image of all 16 wells, along with magnified regions of four wells (Figs. 3(b)–3(e)). The red/green/blue color channels of the presented images correspond to the images obtained with red/green/blue lasers, respectively. Color channels are aligned by finding the position of their cross-correlation minimum and then correcting for this offset. Wrinkling of the green-colored beads is evident, while the blue- and red-colored beads have smooth surfaces. All areas of the 4 x 4 well region are in focus owing to the large depth of field of the microlenses, which accommodates both intra- and interwell height variations. The 4 x 4 well region has an area of 12.25cm², corresponding to 90,550 microlenses. At a frame rate of 200 fps this is a total pixel throughput of 18.1 Mpx/s. A portion of this data corresponds to the plastic support regions between wells (Fig. 2(a)), which contain no fluorescent sample. Also, a small amount of overlap (10μm) between neighboring sub-FOVs is necessary for the stitching process. After accounting for these two factors, the final data set for each color consists of 16 images each 16,000 x 16,000 pixels in size. The acquisition time is 320 s, corresponding to a net pixel throughput of 12.8 Mpx/s.
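The throughput bookkeeping is summarized below; the numbers are those stated above and the calculation is included only to make the raw versus net throughput distinction explicit.

```python
# Pixel-throughput bookkeeping for the 4 x 4 well acquisition.
n_lenses, fps = 90_550, 200                     # one sample per microlens per frame
raw_mpx_s = n_lenses * fps / 1e6                # 18.1 Mpx/s raw throughput

final_pixels = 16 * 16_000 * 16_000             # stitched output per color channel
net_mpx_s = final_pixels / 320 / 1e6            # 12.8 Mpx/s over the 320 s acquisition
print(f"raw: {raw_mpx_s:.1f} Mpx/s, net: {net_mpx_s:.1f} Mpx/s")
```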


Fig. 3 a) Three channel image of a 16 well section of a 96-well plate, filled with fluorescent microspheres. Red, green and blue fluorescent microspheres are excited with red, green and blue laser lines, respectively. Red and blue microspheres have a nominal diameter of 7.3μm. Green microspheres have high spatial frequency features due to wrinkling and have a nominal diameter of 5μm. White boxes denote locations of (b)-(e). Scale bar is 8 mm. b) to e): Zoom-in of regions indicated in (a). Scale bars are 25μm.


We demonstrate the tissue imaging capability of our system by imaging a 16μm thick slice of mouse kidney (Life Technologies FluoCells Prepared Slide #3), sealed between a microscope slide and a #1.5 coverslip. The tissue section is stained with Alexa Fluor 488 and Alexa Fluor 568 Phalloidin, while the fluorescent stain 4',6-diamidino-2-phenylindole (DAPI) is also present but not imaged. Laser excitation wavelengths of 473nm (focal spot power = 0.20μW) and 532nm (focal spot power = 0.55μW) are used to excite the Alexa Fluor 488 and 568 fluorophores, respectively, whose emission is filtered with long pass filters with cut-on wavelengths of 500nm and 575nm. The camera integration time is set to 4.9ms/frame. As some of the tissue features stained with different fluorophores tend to be co-localized, we align the channels by finding their cross correlation maximum and correcting for that shift. The resulting aligned image is shown in Fig. 4(a), with the Alexa Fluor 488 (568) channel in green (red). The sample has an extent of roughly 0.5cm x 1.0cm. At a moderate zoom level, the brush border and glomeruli can be identified (Fig. 4(b)). At full image size (Fig. 4(c)), fine features such as filamentous actin and microtubules are visible. Figure 4 (Media 1) shows the sample at various magnifications, zooming in from the full field view in Fig. 4(a) and finishing with an image of the region in Fig. 4(c).


Fig. 4 a) Full field view of a dual channel image of a mouse kidney slice. Scale bar is 3mm. b) Zoomed-in view of boxed region in (a). Scale bar is 50μm. c) Further magnified view of boxed region in (b). Scale bar is 10μm. Media 1 is an animation starting with image (a), zooming into region (c) in the sample.


4. Parallel dual-channel imaging

Not all fluorescent samples are large enough to take full advantage of the large FOV imaged by our microlens microscope. For example, the throughput of the sequential multichannel system in the previous section is only roughly 0.75 Mpx/s when imaging the small mouse kidney sample. This throughput is modest because the sample is small, meaning that not all of the microlenses within the array are used. To increase throughput for small samples, we employ a parallelized system. The fluorescent signal from the sample is split into low-pass (500nm < λ < 532nm) and high-pass (λ > 575nm) spectral components, and both are imaged simultaneously onto the camera. Spectral decomposition is achieved by inserting a long-pass dichroic mirror (edge λ = 532nm) between the quad-band dichroic mirror and SLR lens in Fig. 1. A pair of broadband mirrors is positioned on either side of the long-pass dichroic mirror. The broadband mirrors are tilted at small (and opposite) angles with respect to the long-pass dichroic, sending the fluorescence signals from the short- and long-pass components into the SLR lens at opposing angles. Emission filters are inserted at 45° with respect to each of the broadband mirrors to cut out any residual laser illumination. This arrangement of a long-pass dichroic filter, pair of broadband mirrors and emission filters will be termed a spectral splitting module (SSM).

The spectral components of the fluorescent signal arrive at the SLR lens at opposing angles. As a result, the SLR lens images two copies of the microlens apertures onto the camera sensor – one for each spectral component. The setup is shown schematically in Fig. 5. We use long-pass filters (λ > 500nm and λ > 575nm) as emission filters in the SSM, but in general any pair of filters such as bandpass filters may be used provided that the signal level of each spectral copy is comparable.


Fig. 5 Parallel multichannel microscopy with a microlens array (MLA). The excitation geometry (not shown) is identical to that in Fig. 1. Fluorescence from the sample (S) - which sits on a piezo stage (PS) - enters a spectral splitting module (SSM) after the quad-band dichroic mirror (not shown, see Fig. 1). The SSM consists of: a long-pass dichroic mirror (LPDM [edge λ = 532nm]); a pair of broadband mirrors (M1 & M2); a pair of long-pass emission filters (LP1 [λ > 575nm] & LP2 [λ > 500nm]). The SSM relays long- and short-pass spectral copies at opposing angles into the SLR lens, yielding spatially separated spectral images (inset) on the camera (CAM) sensor. The spectral copies are shown in false color in the inset (actual camera sensor is greyscale only).


The kidney section shown in Fig. 4 is imaged again, this time using the parallel dual-channel setup described above, using 473nm laser excitation. The resulting image is shown in Fig. 6. This parallel geometry image matches closely with the sequentially acquired image; the two are compared in Fig. 6(b). There is some mixing of the two spectral channels because of the emission filters used. Nevertheless, the localization of the fluorescent stains to their appropriate cellular structures is evident. Spectral bleeding can be mitigated or avoided altogether by using bandpass filters within the SSM that are appropriate for the given fluorophores.


Fig. 6 a) Dual-channel mouse kidney slice image, acquired in the parallel configuration. Red color is the λ > 575nm channel and green color is the 500nm < λ < 532nm channel. Scale bar is 2 mm. b) Magnified view of boxed region in (a). Inset: Corresponding area of the mouse kidney slice when channels are imaged sequentially (data set from Fig. 4). Scale bar is 50μm.


Parallel acquisition of two spectral channels results in a direct doubling of effective pixel throughput to 1.5 Mpx/s. This parallel configuration is not limited to small samples, however. Most microwell plates consist of loosely packed rectangular arrays of circular wells. One could duplex two channels on the sensor by interleaving the spectral copies of the rectangular array of wells to form a quasi hexagonally packed array of wells, thereby doubling the throughput. This interleaving approach is possible with any regularly arrayed substrate with less than a 50% packing fraction.
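The packing-fraction condition is easy to check for a given plate. The sketch below uses typical round-well 96-well dimensions (9mm well pitch, ~6.4mm well diameter) as assumed example values; they are not taken from this work.

```python
import math

# Hypothetical interleaving check: two spectral copies of a rectangular array
# of circular wells can be interleaved if the packing fraction is below 0.5.
pitch_mm, well_diameter_mm = 9.0, 6.4            # assumed typical round-well plate
packing_fraction = math.pi * well_diameter_mm ** 2 / (4 * pitch_mm ** 2)
print(f"packing fraction ~ {packing_fraction:.2f}")   # ~0.40 < 0.5, interleaving possible
```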

5. Dynamic range

As discussed, our method records the fluorescence collected by each microlens by integrating the pixels associated with its image on the camera sensor. Interestingly, this extends the dynamic range of our imaging technique to exceed that of each pixel of the camera. Recall from Fig. 1, (ii) that each microlens aperture is imaged to an n x n pixel region on the camera sensor. Therefore, even though the camera records an 8-bit video, the conglomeration of the N = n2 pixels assigned to each microlens can take on values from 0 to 255 x N. We will call this combination of N pixels a “superpixel”. The extent of each microlens (122μm), the extent of the camera sensor pixels (5.5μm) and the ~7x demagnification provided by the SLR lens result in the superpixel comprising N = 9 pixels.
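The value N = 9 follows directly from these numbers, as the short check below illustrates.

```python
# Why each superpixel spans N = 9 camera pixels: a 122um microlens aperture,
# demagnified ~7x by the SLR lens, covers roughly a 3 x 3 block of 5.5um pixels.
lens_diameter_um, demagnification, pixel_pitch_um = 122.0, 7.0, 5.5
extent_px = lens_diameter_um / demagnification / pixel_pitch_um
print(f"microlens image spans ~{extent_px:.1f} px -> 3 x 3 = 9 pixel superpixel")
```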

The point spread function (PSF) created by the SLR lens on the camera sensor is comparable in size to the image of a microlens, leading to an image of a microlens that resembles a Gaussian spot. As a result, the center of the image of a microlens will saturate its camera pixel before a pixel on the periphery of the microlens image saturates. This restricts off center microlens pixels to smaller dynamic ranges than the center. Thus, unsaturated superpixel values are restricted to values smaller than 255 x 9 = 2295.

To quantify the dynamic range of our system, we record a movie of microlenses relaying fluorescence in the following configuration. The piezo stage is not scanned and the SLR lens aperture is set to f/8. The microlens focal spots are brought into focus so that they excite fluorescence at the sample. We record 300 frames at 200 fps. Laser fluctuations are mitigated by dividing each frame by the average intensity of all microlenses in the frame and then multiplying by the highest average microlens intensity of all of the frames. The signal-to-noise ratio (SNR) of a superpixel i is calculated by dividing the mean superpixel value μi by its standard deviation σi. The SNR curve for an N = 1 superpixel is calculated using only the central pixel in each superpixel (blue boxes, Fig. 1, (ii)). For the N = 9 case we sum all 9 pixels that make up the image of each microlens and then calculate the SNR using this summed value (red boxes, Fig. 1, (ii)). In order to avoid pixel value saturation, we do not include any superpixels for which any pixel reaches a value of 255 during the movie.
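A schematic version of this measurement is sketched below. It is illustrative only: the video is assumed to be an 8-bit stack recorded with the stage parked, the laser normalization is approximated using whole-frame averages, and the variable names are ours.

```python
import numpy as np

# Illustrative SNR measurement per superpixel from a static (non-scanned) video.
# video: T x H x W uint8 stack; superpixels: list of (row_slice, col_slice).
def superpixel_snr(video, superpixels, n=9):
    video = video.astype(np.float64)
    frame_means = video.mean(axis=(1, 2))                       # laser-fluctuation proxy
    video = video * (frame_means.max() / frame_means)[:, None, None]
    snrs = []
    for rows, cols in superpixels:
        block = video[:, rows, cols]                            # T x 3 x 3 region
        if block.max() >= 255:                                  # skip saturated superpixels
            continue
        trace = block.reshape(block.shape[0], -1)
        trace = trace.sum(axis=1) if n == 9 else trace[:, trace.shape[1] // 2]
        snrs.append(trace.mean() / trace.std())                 # SNR_i = mu_i / sigma_i
    return np.asarray(snrs)
```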

Figure 7 shows a comparison between the SNR curves for superpixels of size N = 1 and N = 9 pixels. The N = 9 curve shows lower SNR than the N = 1 curve due to the increased read noise arising from the use of 9 pixels. This property is captured by a model that accounts for shot noise on the signal aI and camera pixel read noise nr: SNR = aI/√(aI + nr²), where I is the signal intensity in grey level units (e.g. 0 to 255 for N = 1) [11]. We fit this model to the data in Fig. 7, resulting in best-fit parameters of (a, nr) = (34.21, 13.64) and (31.00, 41.97) for N = 1 and N = 9 superpixels, respectively. For the N = 9 fit we only use data points where every pixel in the superpixel has a grey level above 0. Pixels within a superpixel that contain no signal do not contribute to the overall noise because the read noise is below one grey level. As expected, we find that the read noise for an N = 9 superpixel is 3.1-fold (≈√9) higher than for the N = 1 case, because noise sources within a superpixel add in quadrature.
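Fitting the model is a one-liner with a standard least-squares routine; the sketch below assumes arrays of mean superpixel intensities and measured SNR values produced as described above, and the initial guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

# Shot-noise plus read-noise SNR model and an illustrative fit to (I, SNR) data.
def snr_model(I, a, nr):
    return a * I / np.sqrt(a * I + nr ** 2)

def fit_snr_model(intensity, snr):
    popt, _ = curve_fit(snr_model, intensity, snr, p0=[30.0, 10.0])
    return popt                                   # best-fit (a, nr)
```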


Fig. 7 Signal-to-noise ratio (SNR) curves of the microlens imaging system. White circles and red dots are SNR curves for superpixel sizes of N = 1 and N = 9 pixels, respectively. The dashed (solid) curve is a fit to the N = 1 (N = 9) data of the form SNR = aI/√(aI + nr²), where a and nr are fitting parameters and I is the pixel intensity (grey level). Dynamic range extension occurs for intensities (grey levels) lying between the vertical dashed lines.


At the top end of the dynamic range, the SNR is dictated by shot noise. Because an N = 9 configuration can collect more photons before saturation, the maximum SNR is increased. We measure an 87% larger SNR for N = 9 (SNR = 174) than for N = 1 (SNR = 93) at the top of the dynamic range.

The dynamic range is also improved with the N = 9 superpixel configuration. The largest unsaturated signal for N = 9 takes a grey value of 1190 while the fitting results in Fig. 7 imply that the noise floor (SNR = 1) is at a grey level of 1.37. The dynamic range of an N = 9 superpixel is therefore 1190/1.37 = 868 (58.77 dB). For an N = 1 superpixel, the dynamic range is limited to 255 (48.13 dB) by 8-bit digitization. The dynamic range for an N = 9 superpixel is increased 3.4-fold (868/255) over the intrinsic dynamic range of the camera, from 8 bits to 9.76 bits. The increase in dynamic range comes with the caveat that it inherently occurs at higher signal levels – the extra bits come at the top end of dynamic range, as indicated by the dynamic range extension region in Fig. 7.
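The dynamic-range arithmetic, using only the numbers quoted above, is reproduced below for clarity.

```python
import math

# Dynamic range of an N = 9 superpixel from the measured saturation level and
# the fitted noise floor quoted above.
max_unsaturated, noise_floor = 1190.0, 1.37
dr = max_unsaturated / noise_floor                       # ~868
print(f"DR = {dr:.0f} ({20 * math.log10(dr):.2f} dB, {math.log2(dr):.2f} bits)")
# Compare: a single 8-bit pixel gives 255 (48.13 dB, 8 bits).
```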

For weakly fluorescent samples, imaging can become read noise dominated [13]. A superpixel of size N contributes √N times more read noise than a single pixel, modifying the SNR by a factor of 1/√N in this regime. Therefore, when sensitivity is paramount, it is optimal to reduce the magnification in order to direct all the photons from a single microlens to as few pixels as possible (small N). However, the magnification must be kept large enough to ensure that the images of the microlenses are still resolvable on the camera. The magnification can thus be tailored according to the nature of the sample in order to maximize either sensitivity or dynamic range.
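The trade-off can be made concrete with the fitted model from above. The sketch below evaluates the read-noise-limited SNR for different superpixel sizes, using the fitted N = 1 read noise (13.64 grey levels) as an example input; it restates the model rather than reporting any new measurement.

```python
import numpy as np

# Read-noise scaling with superpixel size N: the N single-pixel read noises add
# in quadrature, so at low signal the SNR falls roughly as 1/sqrt(N).
def snr_with_superpixel(aI, nr_single, N):
    return aI / np.sqrt(aI + N * nr_single ** 2)

low_signal = 5.0                                  # example weak signal aI (grey levels)
for N in (1, 9):
    print(N, round(snr_with_superpixel(low_signal, 13.64, N), 2))   # ratio ~ 1/3
```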

6. Conclusions

We have demonstrated an extended dynamic range MLA-based high throughput multichannel fluorescence imaging system. Two geometries are presented: sequential and parallel multichannel acquisition. The microscope uses a MLA to image over a large FOV, thereby reducing the mechanical dead time inherent in commercial high throughput imagers. We sequentially image three channels of fluorescent beads in a microwell plate at a rate of 3 wells per minute per channel at resolutions up to 1.20μm. The total pixel throughput of our system is 18.1 Mpx/s. When the fact that much of the microwell plate consists of plastic support regions (that do not contain beads) and the stitching overhead are taken into account, the pixel throughput is 12.8 Mpx/s. This is roughly 3x faster than commercial microscopes. Even higher speeds are possible with continuous samples with no dead space [12]. We also demonstrate sequential and parallel multichannel tissue imaging, indicating our system’s applicability in arrayed tissue imaging. We anticipate that this could be used for a variety of applications in biological research requiring high throughput imaging, for example for labeled brain tissue sections (Brainbow [17]) in neuroscience. The parallel dual-channel geometry could potentially be used to make up for blank space in the well plate by interleaving two different spectral copies of the well plate onto a single image sensor, thereby doubling the pixel throughput.

Acknowledgments

This work was supported partly by the National Science Foundation (grant number ECCS-1201687), and partly by Thermo Fisher Scientific. Fabrication was carried out in the Harvard Center for Nanoscale Systems (CNS), which is supported by the NSF.

References and links

1. R. Pepperkok and J. Ellenberg, “High-throughput fluorescence microscopy for systems biology,” Nat. Rev. Mol. Cell Biol. 7(9), 690–696 (2006).

2. P. Lang, K. Yeow, A. Nichols, and A. Scheer, “Cellular imaging in drug discovery,” Nat. Rev. Drug Discov. 5(4), 343–356 (2006).

3. M. Bickle, “The beautiful cell: high-content screening in drug discovery,” Anal. Bioanal. Chem. 398(1), 219–226 (2010).

4. Olympus ScanR specifications website, http://www.olympus-europa.com/microscopy/en/microscopy/components/component_details/component_detail_21320.jsp. Accessed 22 January 2014.

5. Molecular Devices ImageXpress Micro XLS specifications website, http://www.moleculardevices.com/Products/Instruments/High-Content-Screening/ImageXpress-Micro.html. Accessed 28 January 2014.

6. J. Wu, X. Cui, G. Zheng, Y. M. Wang, L. M. Lee, and C. Yang, “Wide field-of-view microscope based on holographic focus grid illumination,” Opt. Lett. 35(13), 2188–2190 (2010).

7. S. Pang, C. Han, M. Kato, P. W. Sternberg, and C. Yang, “Wide and scalable field-of-view Talbot-grid-based fluorescence microscopy,” Opt. Lett. 37(23), 5018–5020 (2012).

8. S. Pang, C. Han, J. Erath, A. Rodriguez, and C. Yang, “Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging,” Opt. Express 21(12), 14555–14565 (2013).

9. S. A. Arpali, C. Arpali, A. F. Coskun, H. H. Chiang, and A. Ozcan, “High-throughput screening of large volumes of whole blood using structured illumination and fluorescent on-chip imaging,” Lab Chip 12(23), 4968–4971 (2012).

10. B. Hulsken, D. Vossen, and S. Stallinga, “High NA diffractive array illuminators and application in a multi-spot scanning microscope,” J. Eur. Opt. Soc. Rapid Publ. 7, 12026 (2012).

11. A. Orth and K. B. Crozier, “Microscopy with microlens arrays: high throughput, high resolution and light-field imaging,” Opt. Express 20(12), 13522–13531 (2012).

12. A. Orth and K. B. Crozier, “Gigapixel fluorescence microscopy with a water immersion microlens array,” Opt. Express 21(2), 2361–2368 (2013).

13. J. B. Pawley, Handbook of Biological Confocal Microscopy, 3rd ed. (Springer, 2006).

14. S. Preibisch, S. Saalfeld, and P. Tomancak, “Globally optimal stitching of tiled 3D microscopic image acquisitions,” Bioinformatics 25(11), 1463–1465 (2009).

15. F. T. O’Neill and J. T. Sheridan, “Photoresist reflow method of microlens production Part II: Analytic models,” Optik 113(9), 405–420 (2002).

16. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill International Editions, 1996), Chap. 2.

17. D. Cai, K. B. Cohen, T. Luo, J. W. Lichtman, and J. R. Sanes, “Improved tools for the Brainbow toolbox,” Nat. Methods 10(6), 540–547 (2013).

Supplementary Material (1)

Media 1: MOV (4164 KB)     
