Optica Publishing Group

Single-exposure quantitative phase imaging in color-coded LED microscopy

Open Access

Abstract

We demonstrate single-shot quantitative phase imaging (QPI) on a color-coded LED microscopy (cLEDscope) platform. The light source of a conventional microscope is replaced by a circular LED pattern that is trisected into subregions of equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for the color leakage that may be encountered when implementing our method with consumer-grade LEDs and image sensors: most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, which introduces significant error into our phase estimation. We describe the correction scheme for this color-leakage issue and demonstrate the improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented through images of calibrated phase samples and cellular specimens.

© 2017 Optical Society of America

1. Introduction

Optical microscopy is crucial for observing the details of materials and biological samples [1]. In typical label-free microscopy, image contrast arises from light absorbance or scattering induced by the sample, so high-contrast images cannot be obtained for translucent biological samples such as unlabeled cells. Phase-contrast microscopy provides enhanced image contrast for such weakly absorbing and scattering samples by rendering the interference of the scattered and unscattered light into an intensity alteration [2,3]. Differential interference contrast microscopy also enables visualization of such objects by transforming the interference of spatially separated beams passing through the sample into an intensity variation [4].

However, the intensity measured by these methods is not linear in the optical phase, and therefore may not be directly utilized for quantitative studies. In quantitative phase microscopy [5], the optical path length of the specimen is measured quantitatively, allowing for various quantitative studies such as the measurement of cellular mechanical properties [6,7] and the dynamic transport of intracellular structures [8]. As such, diverse QPI techniques have been developed, which may be categorized into interferometric and noninterferometric methods. The interferometric methods include a variety of techniques based on phase-shifting interferometry [9–13], digital holography [14,15], Hilbert phase microscopy [16], diffraction phase microscopy [17], and some methods with low-coherence interferometry [18–20]. Noninterferometric QPI methods include techniques based on the transport-of-intensity equation [21–23] and wavefront sensors (e.g., pyramid [24] and partitioned-aperture wavefront sensors [25]), to name a few. The QPI techniques based on wavefront sensors in [24] and [25], in particular, enable single-shot QPI by simultaneously acquiring four images that correspond to four oblique detection directions. The acquired images are then used to compute phase-gradient images in two orthogonal directions. Yet, these methods require custom optics for Fourier-space division in the detection path, and do not exploit the full pixel resolution provided by the image sensor; the sensor area is divided into four subregions, so that each region measures the image corresponding to one quadrant of the detection Fourier space.

LED microscopy is a simple and cost-effective means of phase imaging [26–29]. Based on controllable, patterned illumination, LED microscopy has demonstrated multi-contrast imaging capability [27,30,31], and has been utilized to produce quantitative phase-gradient and phase images in combination with computational algorithms [26,28,29,32]. Using multiple LED illuminations, high-resolution images can be obtained over a large field of view via Fourier ptychographic approaches [33,34]. Recently, Tian et al. [29] and Claus et al. [35] demonstrated phase retrieval based on the weak object transfer function in the platforms of LED and extreme-ultraviolet (EUV) microscopes, which facilitates phase reconstruction with arbitrary source and pupil configurations. However, multiple measurements are still required for phase estimation with high accuracy [29]. Tian et al. noted that for a certain source or pupil configuration, there may be regions of missing spatial frequencies. Thus, QPI with four image acquisitions based on different LED patterns was performed to capture the spatial frequency information without any missing zones [29].

The cLEDscope is one of the LED microscopy techniques that utilize color multiplexing to improve imaging throughput. In a previous demonstration [27], a color LED array in the source plane was configured such that a single image acquisition and subsequent computation yielded bright-field, dark-field, and differential phase-contrast (DPC) images simultaneously. That work also demonstrated QPI using two images with different LED patterns designed to obtain DPC images along the x and y directions [27].

Here, we demonstrate single-exposure QPI in the platform of cLEDscope. The LED source pattern is configured as a circle that is trisected in equal angles with red, blue, and green colors. Image acquisition with a color image sensor and computation based on weak object transfer functions (WOTFs) allow for the measurement of the amplitude and phase distributions of transparent biological specimens in a single shot. We describe our WOTF-based computational model and phase estimation algorithm for such configuration. Phase measurement accuracy is validated with imaging of precalibrated silica microspheres, and the single-shot phase imaging capability is demonstrated with the static and time-lapse imaging of unlabeled cells.

2. Setup and modeling

2.1 Optical setup

Figure 1 depicts a schematic of cLEDscope. In cLEDscope, a color LED array replaces the light source of a conventional microscope. Patterned LED illumination and computation with the images in red (R), green (G), and blue (B) colors realize multi-contrast imaging [27].


Fig. 1 Schematic of cLEDscope for single-exposure QPI. A color LED array is placed ~100 mm away from the specimen plane (S). The LED illumination pattern is trisected into subregions with equal area. Each region is assigned red, green, and blue colors. Light passing through a transparent specimen is collected by an objective lens (OBJ), and subsequently detected by a color image sensor. Computation with R, G, and B images in combination with weak object transfer functions results in quantitative phase images. S: specimen plane, TL: tube lens.


For single-exposure QPI, a color LED array (32 × 32 RGB LED matrix panel with 4-mm pitch, Adafruit, USA) was placed ~100 mm from the specimen plane (S) so that it was located approximately at the Fourier plane of the specimen plane. The circular LED pattern was trisected into subregions of equal area, and each region was assigned to the red (R), green (G), or blue (B) color [Fig. 1]. The size of the illumination pattern was set to be larger than the pupil size of the optical microscope (1.25 × the pupil size, in our case).

The light transmitted through a transparent specimen was collected by a microscope objective (NA 0.45, 20 × , Nikon, Japan) and then directed to a color camera via the tube lens (TL) to form the image at the sensor. The image was recorded by the color camera (IOI Flare 2M-360CL, IO Industries, Canada) at a frame rate of 30 fps. The output of the camera was converted to an RGB image using the gradient-corrected linear interpolation method [36]. We assessed other interpolation methods (e.g., linear and bicubic interpolation), and found that the cLEDscope imaging performance was not markedly influenced by the employed interpolation method (Sec. 3.4). The R, G, and B images were then extracted from the interpolated color image and used to retrieve phase information based on the methods described in Secs. 2.2–2.3.

The spatial resolution of the cLEDscope was measured by imaging a 1951 USAF resolution target (R1DS1N, Thorlabs, Inc., USA). The full-width at half-maximum (FWHM) resolution was estimated to be 0.63 μm, which matched the theoretical estimate to within 7%. Note that the cLEDscope employs partially coherent illumination, and thus provides a spatial-frequency cutoff corresponding to twice the detection NA.

2.2 Computational model

Our phase estimation is based on the weak object transfer function as described by Streibl [37] and Claus et al. [35]. A thin biological specimen (e.g., cells) can be regarded as a weakly scattering object, so that its transmission function t(r) can be written as

\[ t(\mathbf{r}) = e^{-\mu(\mathbf{r}) + i\phi(\mathbf{r})} \approx 1 - \mu(\mathbf{r}) + i\phi(\mathbf{r}) \tag{1} \]
where \(\mathbf{r}\) denotes the spatial coordinates in the object plane, and \(\mu(\mathbf{r})\) and \(\phi(\mathbf{r})\) represent the absorption and phase of the specimen, respectively. The condition for the weak object approximation has been studied by Lu et al. [38], who found that the measurement error is smaller than 5% if the phase delay is smaller than 0.74π. We separately performed the same analysis for the color-coded LED illumination, and found that the measurement error is smaller than 5% if the phase delay is smaller than 0.64π rad.
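As a minimal numerical illustration of this approximation (the μ and ϕ values below are illustrative, not taken from the paper), one can compare the exact transmission function with its first-order expansion:

```python
import numpy as np

# Compare t(r) = exp(-mu + i*phi) with its weak-object (first-order)
# approximation 1 - mu + i*phi at a single point.
# Illustrative values for a weakly absorbing, weakly scattering specimen:
mu, phi = 0.05, 0.2

t_exact = np.exp(-mu + 1j * phi)
t_weak = 1 - mu + 1j * phi

# Relative error of the first-order expansion
rel_err = abs(t_exact - t_weak) / abs(t_exact)
print(f"relative error: {rel_err:.3f}")
```

The expansion error grows with the phase delay, in line with the phase-delay bounds discussed above.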

In our method, the specimen is illuminated with a different illumination pattern for each color in a single exposure; we thus denote the source illumination of color l as \(S_l\) (l = R, G, B). A circular pupil, determined by the numerical aperture of the detection objective, is used for all image acquisitions. Under the source illumination of color l, the intensity in the Fourier plane can be expressed as [29,35]

\[ \tilde{I}_l(\mathbf{u}) = \tilde{D}_l\,\delta(\mathbf{u}) + \tilde{H}_l^{\mathrm{abs}}\,\tilde{\mu}_l(\mathbf{u}) + \tilde{H}_l^{\mathrm{ph}}\,\tilde{\phi}_l(\mathbf{u}) \tag{2} \]
where \(\mathbf{u}\) denotes the spatial frequency coordinates, \(\tilde{D}_l\) is the background representing the component that does not interact with the sample, and \(\tilde{\mu}_l\), \(\tilde{\phi}_l\), \(\tilde{H}_l^{\mathrm{abs}}\), and \(\tilde{H}_l^{\mathrm{ph}}\) denote the absorption, phase, and their corresponding transfer functions, respectively. The expressions for \(\tilde{D}_l\), \(\tilde{H}_l^{\mathrm{abs}}\), and \(\tilde{H}_l^{\mathrm{ph}}\) are given by [29,35]
\[ \tilde{D}_l = \int \tilde{S}_l(\mathbf{u})\,\bigl|\tilde{P}(\mathbf{u})\bigr|^2\, d^2\mathbf{u} \tag{3} \]
\[ \tilde{H}_l^{\mathrm{abs}}(\mathbf{u}) = -\Bigl[\bigl(\tilde{S}_l\tilde{P}^{*}\bigr) \star \tilde{P} + \tilde{P} \star \bigl(\tilde{S}_l\tilde{P}^{*}\bigr)\Bigr](\mathbf{u}) \tag{4} \]
\[ \tilde{H}_l^{\mathrm{ph}}(\mathbf{u}) = i\Bigl[\bigl(\tilde{S}_l\tilde{P}^{*}\bigr) \star \tilde{P} - \tilde{P} \star \bigl(\tilde{S}_l\tilde{P}^{*}\bigr)\Bigr](\mathbf{u}) \tag{5} \]
where \(\star\) represents the two-dimensional cross-correlation, and \(\tilde{P}(\mathbf{u})\) is the pupil function.
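A small sketch of how these transfer functions can be evaluated numerically, implementing the cross-correlations of Eqs. (4)–(5) with FFTs. The grid size, circular pupil, and half-circle source below are illustrative choices, not the paper's parameters:

```python
import numpy as np

N = 128
u = np.fft.fftfreq(N)
UX, UY = np.meshgrid(u, u)
R = np.hypot(UX, UY)

P = (R <= 0.2).astype(float)    # circular pupil (top-hat, illustrative)
S_half = P * (UX > 0)           # asymmetric half-circle source

def xcorr(a, b):
    # circular cross-correlation: sum_n a*(n) b(n+m), via the FFT theorem
    return np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b))

def wotf(S, P):
    f = xcorr(S * np.conj(P), P)    # (S P*) star P
    g = xcorr(P, S * np.conj(P))    # P star (S P*)
    return -(f + g), 1j * (f - g)   # H_abs (Eq. 4), H_ph (Eq. 5)

H_abs, H_ph = wotf(S_half, P)
H_abs_sym, H_ph_sym = wotf(P, P)    # fully symmetric source
print(np.abs(H_ph_sym).max(), np.abs(H_ph).max())
```

For a source with the same symmetry as the pupil, the phase transfer function vanishes, which is why asymmetric (e.g., trisected) illumination patterns are needed for phase contrast.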

We now assume that the specimen is not dispersive, meaning that its refractive index does not vary significantly with wavelength. We further assume that the absorption scales inversely with wavelength, i.e., \(\tilde{\mu}(\mathbf{u}) \propto 1/\lambda\). For most biological specimens, this approximation is valid for the wavelength range considered here, as can be inferred from [39]. Using these assumptions, Eq. (2) can be rewritten as

\[ \tilde{I}_l(\mathbf{u}) = \tilde{D}_l\,\delta(\mathbf{u}) + \frac{\lambda_0}{\lambda_l}\,\tilde{H}_l^{\mathrm{abs}}\,\tilde{\mu}(\mathbf{u}) + \frac{\lambda_0}{\lambda_l}\,\tilde{H}_l^{\mathrm{ph}}\,\tilde{\phi}(\mathbf{u}) \tag{6} \]
where \(\tilde{\mu}(\mathbf{u})\) and \(\tilde{\phi}(\mathbf{u})\) represent the absorption and phase at the measurement wavelength \(\lambda_0\) in the Fourier space. The measurement wavelength \(\lambda_0\) can be any of the three colors; in our case, the green color (λ = 530 nm) was chosen. In deriving Eq. (6), we used the relations \(\lambda_l\tilde{\phi}_l(\mathbf{u}) = \lambda_0\tilde{\phi}(\mathbf{u})\) and \(\lambda_l\tilde{\mu}_l(\mathbf{u}) = \lambda_0\tilde{\mu}(\mathbf{u})\). We then define the DPC image at color l as
\[ \tilde{I}_l^{\mathrm{DPC}}(\mathbf{u}) = \frac{2\tilde{I}_l(\mathbf{u}) - \tilde{I}_m(\mathbf{u}) - \tilde{I}_n(\mathbf{u})}{\tilde{I}_l(\mathbf{u}) + \tilde{I}_m(\mathbf{u}) + \tilde{I}_n(\mathbf{u})} \tag{7} \]
where l, m, and n represent the colors (red, green, and blue) with \(l \neq m \neq n\). Note that the LED patterns of the three colors have equal area, so that \(\tilde{D}_R = \tilde{D}_G = \tilde{D}_B\). Therefore, the background term is cancelled out in the numerator of Eq. (7). It should be noted that this DPC computation is different from those in other studies [26,27,29,38,40] and is employed to enable background cancellation for our unique illumination scheme. Equation (7) can then be reformulated in matrix form as
\[ A\,X = B \tag{8} \]
where
\[ A = \begin{pmatrix} \tilde{H}_{R\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u}) & \tilde{H}_{R\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u}) \\ \tilde{H}_{G\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u}) & \tilde{H}_{G\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u}) \\ \tilde{H}_{B\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u}) & \tilde{H}_{B\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u}) \end{pmatrix}, \qquad X = \begin{pmatrix} \tilde{\mu}(\mathbf{u}) \\ \tilde{\phi}(\mathbf{u}) \end{pmatrix}, \qquad B = \begin{pmatrix} \tilde{I}_R^{\mathrm{DPC}}(\mathbf{u}) \\ \tilde{I}_G^{\mathrm{DPC}}(\mathbf{u}) \\ \tilde{I}_B^{\mathrm{DPC}}(\mathbf{u}) \end{pmatrix}, \]
with the DPC amplitude and phase transfer functions \(\tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u})\) and \(\tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u})\) at color l given by
\[ \tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u}) = \frac{2\frac{\lambda_0}{\lambda_l}\tilde{H}_l^{\mathrm{abs}}(\mathbf{u}) - \frac{\lambda_0}{\lambda_m}\tilde{H}_m^{\mathrm{abs}}(\mathbf{u}) - \frac{\lambda_0}{\lambda_n}\tilde{H}_n^{\mathrm{abs}}(\mathbf{u})}{\tilde{D}_l + \tilde{D}_m + \tilde{D}_n} \tag{9} \]
\[ \tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u}) = \frac{2\frac{\lambda_0}{\lambda_l}\tilde{H}_l^{\mathrm{ph}}(\mathbf{u}) - \frac{\lambda_0}{\lambda_m}\tilde{H}_m^{\mathrm{ph}}(\mathbf{u}) - \frac{\lambda_0}{\lambda_n}\tilde{H}_n^{\mathrm{ph}}(\mathbf{u})}{\tilde{D}_l + \tilde{D}_m + \tilde{D}_n} \tag{10} \]
Note that for a weakly scattering object, the absorption can be assumed to be negligible, and was thus omitted in the evaluation of the denominator of Eq. (7) [29,38]. Equation (8) indicates that the absorption and phase information of a specimen in the spatial frequency domain can be obtained by directly inverting matrix A. However, direct inversion of A often does not produce a correct estimate because some elements of A may be zero or extremely small. We therefore seek the solution to the least-squares problem
\[ \min_{X}\; \bigl\| A X - B \bigr\|^2 + \alpha^2 \bigl\| \beta X \bigr\|^2 \tag{11} \]
Here, α is the regularizer, and β is a diagonal matrix of weighting factors introduced to apply different regularization to the absorption and the phase. The Tikhonov solution to Eq. (11) can then be obtained as
\[ \begin{pmatrix} \mu(\mathbf{r}) \\ \phi(\mathbf{r}) \end{pmatrix} = \mathcal{F}^{-1}\Bigl\{ \bigl( A^{H} A + \alpha^{2} \beta^{H} \beta \bigr)^{-1} A^{H} B \Bigr\} \tag{12} \]
where \(\mathcal{F}^{-1}\) and the superscript H represent the inverse Fourier transform and the conjugate transpose, respectively. Note that Eq. (12) retrieves both the absorption and the phase of the specimen using two regularization terms, and differs from the reconstruction scheme in [29].
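The pipeline of Eqs. (7)–(12) can be sketched as follows. The random intensities and transfer functions are placeholders for measured R/G/B frames and the H matrices of Eqs. (9)–(10); the regularization values follow those quoted in Sec. 3. This is a sketch under those assumptions, not the authors' MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
# Placeholder R, G, B intensity images (stand-ins for measured frames)
I = {c: rng.uniform(0.5, 1.5, (N, N)) for c in "RGB"}

# Eq. (7): per-pixel DPC images (2*I_l - I_m - I_n = 3*I_l - total);
# the common background term cancels. Their FFTs form the vector B.
total = I["R"] + I["G"] + I["B"]
B = np.stack([np.fft.fft2((3 * I[l] - total) / total) for l in "RGB"], -1)[..., None]

# Placeholder 3x2 stack of DPC transfer functions at each frequency
A = rng.standard_normal((N, N, 3, 2)) + 1j * rng.standard_normal((N, N, 3, 2))

alpha = 0.9 * 10**-1.5
beta = np.diag([1.0, 9e-2])            # separate weights for absorption/phase

# Eq. (12): Tikhonov solution, solved independently at each frequency
AH = np.conj(np.swapaxes(A, -1, -2))   # A^H
lhs = AH @ A + alpha**2 * (beta.T @ beta)
X = np.linalg.solve(lhs, AH @ B)

mu = np.fft.ifft2(X[..., 0, 0]).real   # absorption map
phi = np.fft.ifft2(X[..., 1, 0]).real  # quantitative phase map
print(mu.shape, phi.shape)
```

NumPy's batched linear algebra solves the 2 × 2 normal equations at all N × N spatial frequencies in one call.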

2.3 Color-leakage correction

Commercially available color LEDs typically exhibit broad spectral emissions. Moreover, for most color image sensors, the spectral response of a color channel is not completely isolated from those of the other channels. Hence, light of a certain color may leak into and be detected by other color channels. In [27], to minimize this issue, the full pupil of our cLEDscope was allocated to the red and blue colors, which are spectrally well separated. The leakage from the R to B color, or vice versa, was found to be smaller than 9% in that case, resulting in a phase estimation error smaller than 5%. The leakage of the green LED light into the other detector channels was not significant owing to the dark-field illumination. In the present scheme, on the other hand, single-shot QPI is achieved by using an illumination pattern that is trisected in equal angles and allocated to the R, G, and B colors. Since the red, green, and blue LED lights are all within the pupil, and a significant portion of the green light spectrally overlaps with the blue and red detection channels, the color leakage becomes much more significant. In our case, approximately 22% of the green LED light could be detected in the blue color channel. This color leakage results in incorrect DPC images, leading to errors in our phase estimation.

In order to mitigate the phase error caused by the color leakage, a correction method was devised. In the presence of color leakage, the signal measured in a color channel can be expressed as the sum of the light of the desired color and the leaked light of the other colors. In other words, the measured signals in the R, G, and B channels can be written as

\[ \begin{pmatrix} I_R^{\mathrm{CCD}} \\ I_G^{\mathrm{CCD}} \\ I_B^{\mathrm{CCD}} \end{pmatrix} = \begin{pmatrix} R_{RR} & R_{GR} & R_{BR} \\ R_{RG} & R_{GG} & R_{BG} \\ R_{RB} & R_{GB} & R_{BB} \end{pmatrix} \begin{pmatrix} I_R^{\mathrm{IMG}} \\ I_G^{\mathrm{IMG}} \\ I_B^{\mathrm{IMG}} \end{pmatrix} \tag{13} \]
where \(I_l^{\mathrm{CCD}}\) and \(I_l^{\mathrm{IMG}}\) are the signal measured in the l channel of the detector and the light intensity of color l incident on the detector, respectively (l = R, G, B). The element \(R_{kl}\) represents the response of the color-l channel to the LED light of color k (l, k = R, G, B). In practice, this element can easily be found by measuring the signal in the l channel of the image sensor under LED illumination of color k only. Once the detector response matrix is experimentally obtained, the light intensity at each color can be accurately recovered as
\[ \begin{pmatrix} I_R^{\mathrm{IMG}} \\ I_G^{\mathrm{IMG}} \\ I_B^{\mathrm{IMG}} \end{pmatrix} = \begin{pmatrix} R_{RR} & R_{GR} & R_{BR} \\ R_{RG} & R_{GG} & R_{BG} \\ R_{RB} & R_{GB} & R_{BB} \end{pmatrix}^{-1} \begin{pmatrix} I_R^{\mathrm{CCD}} \\ I_G^{\mathrm{CCD}} \\ I_B^{\mathrm{CCD}} \end{pmatrix} \tag{14} \]
For accurate phase estimation, we performed color-leakage correction for the acquired color image, and used the results to obtain a quantitative phase image using the method described in Sec. 2.2.
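As a sketch of this correction, the snippet below applies Eqs. (13)–(14) to a hypothetical calibrated response matrix; the entries are illustrative, except that the ~22% green-to-blue leakage mirrors the figure quoted above:

```python
import numpy as np

# Hypothetical detector response matrix (rows: R, G, B channels;
# columns: responses to the R, G, B LEDs). Values are illustrative.
M = np.array([[1.00, 0.05, 0.01],   # R channel
              [0.06, 1.00, 0.08],   # G channel
              [0.01, 0.22, 1.00]])  # B channel: ~22% leakage from green

I_true = np.array([0.8, 1.0, 0.6])  # hypothetical incident intensities
I_meas = M @ I_true                 # what the sensor reports, Eq. (13)
I_rec = np.linalg.solve(M, I_meas)  # corrected intensities, Eq. (14)
print(I_rec)
```

In practice, the correction is applied per pixel to the demosaiced RGB image before the DPC computation.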

3. Results

3.1 Phase retrieval accuracy

We first assessed the phase measurement accuracy of our cLEDscope. Silica microspheres with a diameter (d) of 5 μm (44054, Sigma-Aldrich Co., St. Louis, MO, USA) were immersed in an index-matching gel (0608, Cargille Laboratories, Cedar Grove, NJ, USA) and placed between a microscope coverslip and a glass slide. cLEDscope imaging was then performed to obtain the quantitative phase distribution of the specimen. The phase delay induced by the microspheres is estimated to be \(\Delta\phi = (2\pi/\lambda)(n_s - n_{\mathrm{gel}})\,d = -1.18\) rad. Here, the refractive indices of the microsphere (\(n_s\)) and the index-matching gel (\(n_{\mathrm{gel}}\)) were 1.44 and 1.46 at a wavelength of 0.53 μm, as found in [41] and the manufacturer's datasheet.
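The expected phase delay can be checked with a one-line computation using the values quoted above:

```python
from math import pi

# Expected phase delay of a silica sphere in index-matching gel:
# delta_phi = (2*pi/lambda) * (n_s - n_gel) * d
lam = 0.53             # wavelength, um
n_s, n_gel = 1.44, 1.46
d = 5.0                # sphere diameter, um

dphi = (2 * pi / lam) * (n_s - n_gel) * d
print(f"expected phase delay: {dphi:.3f} rad")
```

The negative sign reflects the lower index of the spheres relative to the surrounding gel.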

Figure 2(a) shows the phase image of the microspheres with color-leakage correction. For phase estimation, a regularizer \(\alpha = 0.9\times10^{-1.5}\) and a diagonal matrix \(\beta = \mathrm{diag}(1,\ 9\times10^{-2})\) were used. The phase information along the dashed line in the inset of Fig. 2(a) is shown as a black solid line in Fig. 2(b). The phase delay at the center of the microsphere relative to the glass slide was measured to be −1.17 rad. Without color-leakage correction, it was measured to be −0.58 rad [dashed line in Fig. 2(b)]. One can see that the application of the color-leakage correction significantly improved the phase accuracy. For the color-leakage-corrected image, the difference between the cLEDscope measurement and the estimated phase was −0.01 rad, which corresponds to <1% of the estimated phase value. The discrepancy may be partly attributed to uncertainties in the size of the microspheres and in our estimation of the center wavelength.


Fig. 2 (a) cLEDscope quantitative phase image of silica microspheres, obtained with the color-leakage correction algorithm. (b) Phase distributions along the dashed line in (a) for the color-leakage-corrected (solid line) and uncorrected (dashed line) cases. Application of the color-leakage correction algorithm improves the phase estimation accuracy. The scale bar represents 50 μm.


3.2 Phase imaging of live cells

We then performed cLEDscope phase imaging of unlabeled cellular specimens. For phase measurement, we used the same regularizer and diagonal matrix as in the microsphere experiments (\(\alpha = 0.9\times10^{-1.5}\) and \(\beta = \mathrm{diag}(1,\ 9\times10^{-2})\)). The image acquisition was performed at 30 fps. The computation for the phase estimation was performed in MATLAB on a desktop computer (Intel i5-3570, 8 GB RAM, Windows 7 64-bit), and the computation time was measured to be 8 s. Figure 3 shows representative DPC and phase images of immortalized human keratinocytes (HaCaT cell line), human adipose-derived stem cells (hADSC, Invitrogen, Carlsbad, CA), and human red blood cells (RBCs). Shown in the three columns to the left are the DPC images for the three colors, evaluated as in Eq. (7). The images in the last column are the reconstructed phase images. A detailed visualization of the intracellular structures could be obtained from the DPC and quantitative phase images. The arrows in Fig. 3(a) indicate the nucleoli of the HaCaT cells. The oval disk shapes of the RBCs are also clearly observed in Fig. 3(c).


Fig. 3 Single-exposure DPC and quantitative phase images of immortalized human keratinocytes (a), human adipose-derived stem cells (b), and human red blood cells (c). Columns (I), (II), and (III) show the DPC images for the R, G, and B colors, and column (IV) shows the quantitative phase images of the specimens. The scale bar represents 25 μm.


We note that the negative phase background in the proximity of the cell layers in Figs. 3(b) and 3(c) could result from the limitations of the weak object approximation and from focusing errors. In our model, it is assumed that the specimen is illuminated by plane waves emitted from the LEDs, and that the amplitude and phase perturbations due to the specimen are small. However, the light from each LED may not be a plane wave in the experiment because of system aberrations and focusing errors, resulting in intensity inhomogeneity across the field of view. The phase background due to this inhomogeneous intensity distribution can be minimized by the DC-term subtraction and normalization described by Eq. (7), but it cannot be completely removed, especially for out-of-focus specimens. Furthermore, the linear model in Eq. (2) relies on the first-order Born approximation, in which the magnitude of the light scattered by the specimen is much smaller than that of the incident light, i.e., the specimen is weakly scattering. The stem cells, however, exhibit large phase alterations and scattering at the cell boundaries. Recently, Jenkins et al. [42] proposed a WOTF-based reconstruction method that is less restrictive than the first-order Born approximation. Similar approaches may be employed to correct for such artifacts.

We also performed time-lapse imaging of human sperm cells to demonstrate the single-exposure, real-time phase imaging capability of our cLEDscope. The cells were residual samples that were acquired and processed at the Severance Yonsei Hospital, Seoul, South Korea. Figures 4(a)–4(d) show a snapshot of the DPC images for R, G, and B and the quantitative phase image of the sperm cells, respectively. The distinctive shapes of the sperm heads and tails are clearly visualized. Visualization 1 shows the dynamic movement of the sperm cells recorded by the cLEDscope.


Fig. 4 Snapshot of time-lapse images of human sperm cells. (a), (b), and (c) show the DPC images for the R, G, and B colors; (d) presents the quantitative phase image. Images were acquired at a frame rate of 30 fps with an image size of 512 × 512 pixels. The scale bar represents 25 μm. See Visualization 1.


3.3 Phase error due to the dispersive property of the specimen

Since the cLEDscope measures the phase distribution of a specimen from a single image acquired with color-coded illumination, the result may be influenced by the dispersive property of the specimen. In order to assess the phase error caused by sample dispersion, we performed phase imaging of HaCaT cells with monochromatic and color-coded LED illuminations, and then compared the measurement results. For both illumination cases, the same color image sensor was utilized; in the monochromatic case, green LEDs and the green channels of the image sensor were used for phase imaging. Figures 5(a) and 5(b) show the employed LED illumination patterns and the corresponding phase images. Notice that the cLEDscope facilitates phase imaging in a single shot, whereas four images are required in the monochromatic case. Figure 5(c) shows the difference between the two measurements. The root-mean-squared difference was found to be 0.25 rad, which corresponds to 7.8% of the maximum phase value. A difference in optical focus between the two illumination cases may partly account for the discrepancy.


Fig. 5 Comparison of quantitative phase images based on monochromatic and color-coded LED illuminations. Note that four image acquisitions are required for monochromatic illumination, whereas the cLEDscope performs phase imaging in a single shot. The LED illumination patterns and corresponding phase images are presented in (a) and (b). The difference map between the images of (a) and (b) is shown in (c). The scale bar represents 25 μm.


3.4 Effect of color interpolation on cLEDscope imaging performance

In our cLEDscope imaging, we perform gradient-corrected linear interpolation [36] to convert a Bayer-pattern-coded image into the corresponding RGB image. The RGB image is then used for the subsequent color-leakage correction and phase estimation. We assessed other color interpolation schemes to examine the effect of the employed interpolation scheme on the cLEDscope imaging performance. Three color interpolation methods were evaluated: gradient-corrected linear, linear, and bicubic interpolation. We first examined the phase measurement accuracy with the three interpolation methods. Shown in the top row of Figs. 6(a)–6(c) are the cLEDscope quantitative phase images of 5-μm silica microspheres (44054, Sigma-Aldrich Co., St. Louis, MO, USA) immersed in the index-matching gel (0608, Cargille Laboratories, Cedar Grove, NJ, USA), obtained with the gradient-corrected linear (a), linear (b), and bicubic (c) interpolation methods, respectively. Note that the color-interpolated images were used for the subsequent color-leakage correction and phase estimation. We measured the phase distributions along the dashed lines in the insets of the corresponding phase images. It can be seen from the bottom row of Figs. 6(a)–6(c) that the phase measurements obtained with the three interpolation methods differ by less than 5%, suggesting that the color interpolation method does not significantly influence the phase accuracy.


Fig. 6 (Top) cLEDscope quantitative phase images of 5-μm silica microspheres obtained with the gradient-corrected linear (a), linear (b), and bicubic (c) interpolation methods. The microspheres were immersed in the index-matching gel. The scale bar denotes 50 μm. (Bottom) Phase distributions along the dashed lines in the insets of the corresponding images.


We then measured and compared the spatial resolutions obtained with the three interpolation methods. A 1951 USAF resolution target (R1DS1N, Thorlabs, Inc., USA) was imaged by the cLEDscope, and the raw image was converted into RGB images by use of the three interpolation schemes. Shown in Figs. 7(a1)–7(c1) are the intensity images obtained with the gradient-corrected linear, linear, and bicubic interpolation methods, respectively. In all the images, the smallest features in groups 6 and 7 can be clearly discerned. The spatial frequency of the lines of element 6 in group 7 is 228 lp/mm. The images in Figs. 7(a2)–7(c2) are magnified views of the regions indicated in Figs. 7(a1)–7(c1), respectively. The intensity distributions along the dashed lines are presented in Figs. 7(a3)–7(c3). We evaluated the FWHMs of their first-order derivatives. The measured lateral resolutions were found to be 0.63 μm, 0.72 μm, and 0.69 μm, respectively, indicating that the differences in spatial resolution were smaller than 15%. These results suggest that the color interpolation method does not significantly influence the cLEDscope imaging performance.
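The FWHM-from-edge-derivative procedure can be sketched as follows; the synthetic error-function edge (with a 0.63-μm FWHM line-spread function built in) is an illustrative stand-in for a measured edge profile:

```python
import numpy as np
from math import erf, sqrt, log

# Build a synthetic edge response: the integral of a Gaussian line-spread
# function (LSF) whose FWHM is 0.63 um, sampled every 10 nm.
fwhm_true = 0.63
sigma = fwhm_true / (2 * sqrt(2 * log(2)))
x = np.linspace(-3, 3, 601)   # position, um
edge = np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])

# Differentiate the edge response to recover the LSF, then measure its FWHM.
lsf = np.gradient(edge, x)
half = lsf.max() / 2
above = x[lsf >= half]
fwhm = above[-1] - above[0]
print(f"estimated FWHM: {fwhm:.2f} um")
```

The recovered width matches the built-in LSF width to within the 10-nm sampling step.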


Fig. 7 (a1)–(c1) cLEDscope intensity images of a 1951 USAF resolution target acquired with the gradient-corrected linear, linear, and bicubic color interpolation methods, respectively. (a2)–(c2) Magnified views of the regions indicated by the dashed rectangles in the corresponding images. The scale bar denotes 10 μm. (a3)–(c3) Intensity distributions along the dashed lines in (a2)–(c2).


4. Discussion

QPI has been demonstrated in other forms of LED microscopy. For example, QPI was demonstrated in Fourier ptychographic microscopy [43], and DPC-based phase imaging has also been achieved using an LED array [27,29]. These methods, however, require multiple image acquisitions or iterative image processing to realize QPI with high accuracy. Claus et al. recently presented a strategy for QPI using arbitrary illumination and pupils in the EUV regime. In principle, one can perform single-shot QPI based on the schemes described in [35], but single-image acquisition with a particular source and pupil function may suffer from missing spatial frequency components. Therefore, the acquisition of multiple images is still desired to improve phase accuracy. Our method, by contrast, utilizes color multiplexing to obtain multiple DPC measurements in a single acquisition. It thus alleviates the missing spatial-frequency issue and is capable of single-shot QPI. In using color multiplexing, the dispersive nature of a specimen may introduce errors into our phase estimation. However, as demonstrated in Sec. 3.3, most biological cellular specimens are only weakly dispersive, leading to a phase error smaller than 7.8% compared with the monochromatic case.

The presented method is distinct from that of [27], although the employed optical setup is the same. The QPI scheme in [27] required two images acquired with different LED patterns designed to obtain DPC images in the x and y directions, and phase estimation was performed based on phase-gradient transfer functions (PGTFs). Here, we demonstrated single-shot quantitative phase imaging based on a new color-coded illumination using a trisected LED pattern with each region assigned to the red, green, or blue color. Use of this particular illumination pattern enables the acquisition of three DPC images in different directions in a single shot. The absorption and phase extraction algorithms were derived from the corresponding weak object transfer functions (WOTFs), not from the PGTFs. As noted previously by Tian et al. [29], the WOTF inverse model compensates for the reduced contrast at high spatial frequencies, and therefore provides robust amplitude and phase extraction for a weakly scattering specimen. We also noted that most commercially available LED arrays and image sensors do not offer spectrally isolated emissions and pixel responses, respectively. To resolve this color-leakage issue, we devised a correction scheme that improves the phase measurement accuracy. The single-shot, real-time quantitative phase imaging capability of our method was then demonstrated with a movie showing the dynamic movements of sperm cells. In [27], real-time single-shot QPI was not possible unless an expensive, high-speed color image sensor synchronized with the LED array was employed.

Our reconstruction scheme employs two regularization factors to estimate the absorption and phase of a specimen. The phase accuracy and image contrast of the QPI depend on these parameters, and the optimal parameters can vary with the specimen. This could be a drawback in comparison with other QPI methods. In our case, we performed phase imaging of calibrated phase samples, i.e., silica microspheres, and found that \(\alpha = 0.9\times10^{-1.5}\) and \(\beta = \mathrm{diag}(1,\ 9\times10^{-2})\) were the values that resulted in a phase error smaller than 5% and an image SNR higher than 20. The image SNR was evaluated as the ratio of the largest phase value in the image (i.e., the center of the microspheres) to the standard deviation of the phase values in the background (i.e., the glass slide) over 100 × 100 pixels. In order to investigate the sensitivity of the phase measurement and image SNR to the regularization parameters, we examined the phase errors and image SNRs while varying the regularization factors. The phase error and image SNR were found to vary within 2% and 13%, respectively, as long as the parameters varied within 20% of our chosen regularization factors.

In comparison with other QPI methods, one of the prominent features of our method is its simplicity of implementation. In our demonstration, QPI could be achieved in a conventional microscope simply by replacing the light source with a color LED array. Other configurations can also be conceived. For example, a multi-colored filter with spectral segments as described in Fig. 1 can be inserted immediately behind white light sources such as lamps and white-light LEDs. Indeed, single-shot QPI based on a color-multiplexed filter with a different spectral arrangement has recently been demonstrated [44]. In other configurations, a color filter can be inserted in the pupil plane of the detection path, with a white light source in the source plane. The acquired color images can then be processed in a similar manner, as described in Sec. 2.

Our method obtains the phase information of a specimen from intensity images in the R, G, and B colors. For interferometric phase-imaging setups, employing a reference beam with higher intensity typically increases the signal-to-noise ratio (SNR), thereby improving the phase sensitivity. In our case, phase estimation is based on the three intensity images in the R, G, and B colors, and so the SNRs of the corresponding intensity images determine the phase detection limit. The noise-equivalent phase is determined by noise sources such as the intensity noise of the light source, shot noise, and camera readout noise. To quantify the phase detection limit of our setup, we acquired 100 cLEDscope phase images of a flat microscope glass slide and obtained a standard-deviation phase map. The averaged standard deviation over the field of view was measured to be ~0.048 rad. Use of a current-regulated light source and a cooled color image sensor is expected to improve the phase stability.

Our cLEDscope is also a viable platform for field-portable phase microscopy. Smartphones and webcams are inexpensive consumer-grade imaging devices equipped with color CMOS image sensors. Hence, the cLEDscope can readily be implemented as a portable QPI system based on these mobile devices, using an appropriate illuminator with the specified spectral segments. Such a portable cLEDscope could then serve as a low-cost diagnostic platform, for example, for malaria and waterborne parasite detection in resource-limited settings.

Funding

National Research Foundation of Korea (NRF) (NRF-2015R1A5A1037668 and NRF-2015R1A1A1A05001548); Korea Ministry of Environment (KME) as Geo-Advanced Innovative Action Project (2015000540008)

Acknowledgment

The authors would like to thank Seunghee Oh, Prof. Hyungsuk Lee, Kisuk Yang, Prof. Seung Woo Cho and Prof. Sang-Guk Lee for providing the cell samples.

References and links

1. J. Mertz, Introduction to Optical Microscopy (Roberts, 2010).

2. F. Zernike, “How I discovered phase contrast,” Science 121(3141), 345–349 (1955).

3. C. Burch and J. Stock, “Phase-contrast microscopy,” J. Sci. Instrum. 19(5), 71–75 (1942).

4. G. Nomarski, “Differential microinterferometer with polarized light,” Phys. Radium 16, 9–13 (1955).

5. M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” Prog. Opt. 57, 133–217 (2012).

6. Y. Park, C. A. Best, K. Badizadegan, R. R. Dasari, M. S. Feld, T. Kuriabova, M. L. Henle, A. J. Levine, and G. Popescu, “Measurement of red blood cell mechanics during morphological changes,” Proc. Natl. Acad. Sci. U.S.A. 107(15), 6731–6736 (2010).

7. Y. Park, C. A. Best, T. Kuriabova, M. L. Henle, M. S. Feld, A. J. Levine, and G. Popescu, “Measurement of the nonlinear elasticity of red blood cell membranes,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 83(5 Pt 1), 051925 (2011).

8. Z. Wang, L. Millet, V. Chan, H. Ding, M. U. Gillette, R. Bashir, and G. Popescu, “Label-free intracellular transport measured by spatial light interference microscopy,” J. Biomed. Opt. 16(2), 026019 (2011).

9. P. Hariharan, B. F. Oreb, and T. Eiju, “Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm,” Appl. Opt. 26(13), 2504–2506 (1987).

10. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997).

11. G. Popescu, L. P. Deflores, J. C. Vaughan, K. Badizadegan, H. Iwai, R. R. Dasari, and M. S. Feld, “Fourier phase microscopy for investigation of biological structures and dynamics,” Opt. Lett. 29(21), 2503–2505 (2004).

12. N. Warnasooriya and M. K. Kim, “LED-based multi-wavelength phase imaging interference microscopy,” Opt. Express 15(15), 9239–9247 (2007).

13. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19(2), 1016–1026 (2011).

14. E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. 24(5), 291–293 (1999).

15. C. J. Mann, P. R. Bingham, V. C. Paquit, and K. W. Tobin, “Quantitative phase imaging by three-wavelength digital holography,” Opt. Express 16(13), 9753–9764 (2008).

16. T. Ikeda, G. Popescu, R. R. Dasari, and M. S. Feld, “Hilbert phase microscopy for investigating fast dynamics in transparent systems,” Opt. Lett. 30(10), 1165–1167 (2005).

17. G. Popescu, T. Ikeda, R. R. Dasari, and M. S. Feld, “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31(6), 775–777 (2006).

18. C. G. Rylander, D. P. Davé, T. Akkin, T. E. Milner, K. R. Diller, and A. J. Welch, “Quantitative phase-contrast imaging of cells with phase-sensitive optical coherence microscopy,” Opt. Lett. 29(13), 1509–1511 (2004).

19. C. Joo, T. Akkin, B. Cense, B. H. Park, and J. F. de Boer, “Spectral-domain optical coherence phase microscopy for quantitative phase-contrast imaging,” Opt. Lett. 30(16), 2131–2133 (2005).

20. M. A. Choma, A. K. Ellerbee, C. Yang, T. L. Creazzo, and J. A. Izatt, “Spectral-domain phase microscopy,” Opt. Lett. 30(10), 1162–1164 (2005).

21. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73(11), 1434–1441 (1983).

22. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49(1), 6–10 (1984).

23. T. E. Gureyev and K. A. Nugent, “Rapid quantitative phase imaging using the transport of intensity equation,” Opt. Commun. 133(1-6), 339–346 (1997).

24. I. Iglesias, “Pyramid phase microscopy,” Opt. Lett. 36(18), 3636–3638 (2011).

25. A. B. Parthasarathy, K. K. Chu, T. N. Ford, and J. Mertz, “Quantitative phase imaging using a partitioned detection aperture,” Opt. Lett. 37(19), 4062–4064 (2012).

26. L. Tian, J. Wang, and L. Waller, “3D differential phase-contrast microscopy with computational illumination using an LED array,” Opt. Lett. 39(5), 1326–1329 (2014).

27. D. Lee, S. Ryu, U. Kim, D. Jung, and C. Joo, “Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging,” Biomed. Opt. Express 6(12), 4912–4922 (2015).

28. C. Zuo, J. Sun, J. Zhang, Y. Hu, and Q. Chen, “Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix,” Opt. Express 23(11), 14314–14328 (2015).

29. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394–11403 (2015).

30. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36(20), 3987–3989 (2011).

31. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19(10), 106002 (2014).

32. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015).

33. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

34. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014).

35. R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Opt. Express 23(20), 26672–26682 (2015).

36. H. S. Malvar, L. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” in 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (2004), pp. 485–488.

37. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2(2), 121–127 (1985).

38. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24(22), 25345–25361 (2016).

39. M. T. Cone, J. D. Mason, E. Figueroa, B. H. Hokr, J. N. Bixler, C. C. Castellanos, G. D. Noojin, J. C. Wigle, B. A. Rockwell, V. V. Yakovlev, and E. S. Fry, “Measuring the absorption coefficient of biological materials using integrating cavity ring-down spectroscopy,” Optica 2(2), 162–168 (2015).

40. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34(13), 1924–1926 (2009).

41. J. Lim, K. Lee, K. H. Jin, S. Shin, S. Lee, Y. Park, and J. C. Ye, “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23(13), 16933–16948 (2015).

42. M. H. Jenkins and T. K. Gaylord, “Quantitative phase microscopy via optimized inversion of the phase optical transfer function,” Appl. Opt. 54(28), 8566–8579 (2015).

43. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013).

44. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS One 12(2), e0171228 (2017).

Supplementary Material (1)

Visualization 1 (MP4, 541 KB): Dynamic movements of human sperm cells.


Figures (7)

Fig. 1. Schematic of the cLEDscope for single-exposure QPI. A color LED array is placed ~100 mm away from the specimen plane (S). The LED illumination pattern is trisected into subregions of equal area, and each region is assigned a red, green, or blue color. Light passing through a transparent specimen is collected by an objective lens (OBJ) and subsequently detected by a color image sensor. Computation with the R, G, and B images in combination with weak object transfer functions yields quantitative phase images. S: specimen plane; TL: tube lens.

Fig. 2. (a) cLEDscope quantitative phase image of silica microspheres, obtained with the color-leakage correction algorithm. (b) Phase distributions along the dashed line in (a) for the color-leakage corrected (solid line) and uncorrected (dashed line) cases. Application of the color-leakage correction algorithm improves the phase estimation accuracy. The scale bar represents 50 μm.

Fig. 3. Single-exposure DPC and quantitative phase images of immortalized human keratinocytes (a), human adipose-derived stem cells (b), and human red blood cells (c). Columns (I), (II), and (III) show the DPC images for the R, G, and B colors, and column (IV) shows the quantitative phase images of the specimens. The scale bar represents 25 μm.

Fig. 4. Snapshot of time-lapse images of human sperm cells. (a), (b), and (c) show the DPC images for the R, G, and B colors; (d) presents the quantitative phase image. Images were acquired at a frame rate of 30 fps with an image size of 512 × 512 pixels. The scale bar represents 25 μm. See Visualization 1.

Fig. 5. Comparison of quantitative phase images based on monochromatic and color-coded LED illumination. Note that four image acquisitions are required for monochromatic illumination, whereas the cLEDscope performs phase imaging in a single shot. The LED illumination patterns and corresponding phase images are presented in (a) and (b). The difference map between the images of (a) and (b) is shown in (c). The scale bar represents 25 μm.

Fig. 6. (Top) cLEDscope quantitative phase images of 5-μm silica microspheres obtained with the gradient-corrected linear (a), linear (b), and bicubic (c) interpolation methods. The microspheres were immersed in index-matching gel. The scale bar denotes 50 μm. (Bottom) Phase distributions along the dashed lines in the insets of the corresponding images.

Fig. 7. (a1)–(c1) cLEDscope intensity images of a 1951 USAF resolution target acquired with the gradient-corrected linear, linear, and bicubic color interpolation methods, respectively. (a2)–(c2) Magnified views of the regions indicated by the dashed rectangles in the corresponding images. The scale bar denotes 10 μm. (a3)–(c3) Intensity distributions along the dashed lines in (a2)–(c2).

Equations (14)

Equations on this page are rendered with MathJax.

(1) $t(\mathbf{r}) = e^{-\mu(\mathbf{r})+i\phi(\mathbf{r})} \approx 1 - \mu(\mathbf{r}) + i\phi(\mathbf{r})$

(2) $\tilde{I}_l(\mathbf{u}) = \tilde{D}_l\,\delta(\mathbf{u}) + \tilde{H}_l^{\mathrm{abs}}\,\tilde{\mu}_l(\mathbf{u}) + \tilde{H}_l^{\mathrm{ph}}\,\tilde{\phi}_l(\mathbf{u})$

(3) $\tilde{D}_l = \int \tilde{S}_l(\mathbf{u})\,\big|\tilde{P}(\mathbf{u})\big|^2\, d^2\mathbf{u}$

(4) $\tilde{H}_l^{\mathrm{abs}}(\mathbf{u}) = -\Big[\big(\tilde{S}_l(\mathbf{u})\,\tilde{P}^{*}(\mathbf{u})\big) \otimes \tilde{P}(\mathbf{u}) + \tilde{P}(\mathbf{u}) \otimes \big(\tilde{S}_l(\mathbf{u})\,\tilde{P}^{*}(\mathbf{u})\big)\Big]$

(5) $\tilde{H}_l^{\mathrm{ph}}(\mathbf{u}) = i\Big[\big(\tilde{S}_l(\mathbf{u})\,\tilde{P}^{*}(\mathbf{u})\big) \otimes \tilde{P}(\mathbf{u}) - \tilde{P}(\mathbf{u}) \otimes \big(\tilde{S}_l(\mathbf{u})\,\tilde{P}^{*}(\mathbf{u})\big)\Big]$

(6) $\tilde{I}_l(\mathbf{u}) = \tilde{D}_l\,\delta(\mathbf{u}) + \tilde{H}_l^{\mathrm{abs}}\,\frac{\lambda_0}{\lambda_l}\,\tilde{\mu}(\mathbf{u}) + \tilde{H}_l^{\mathrm{ph}}\,\frac{\lambda_0}{\lambda_l}\,\tilde{\phi}(\mathbf{u})$

(7) $\tilde{I}_l^{\mathrm{DPC}}(\mathbf{u}) = \big(2\tilde{I}_l(\mathbf{u}) - \tilde{I}_m(\mathbf{u}) - \tilde{I}_n(\mathbf{u})\big) \,\big/\, \big(\tilde{I}_l(\mathbf{u}) + \tilde{I}_m(\mathbf{u}) + \tilde{I}_n(\mathbf{u})\big)$

(8) $AX = B$

(9) $\tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{abs}}(\mathbf{u}) = \Big(2\tfrac{\lambda_0}{\lambda_l}\tilde{H}_l^{\mathrm{abs}}(\mathbf{u}) - \tfrac{\lambda_0}{\lambda_m}\tilde{H}_m^{\mathrm{abs}}(\mathbf{u}) - \tfrac{\lambda_0}{\lambda_n}\tilde{H}_n^{\mathrm{abs}}(\mathbf{u})\Big) \,\big/\, \big(\tilde{D}_l + \tilde{D}_m + \tilde{D}_n\big)$

(10) $\tilde{H}_{l\_\mathrm{DPC}}^{\mathrm{ph}}(\mathbf{u}) = \Big(2\tfrac{\lambda_0}{\lambda_l}\tilde{H}_l^{\mathrm{ph}}(\mathbf{u}) - \tfrac{\lambda_0}{\lambda_m}\tilde{H}_m^{\mathrm{ph}}(\mathbf{u}) - \tfrac{\lambda_0}{\lambda_n}\tilde{H}_n^{\mathrm{ph}}(\mathbf{u})\Big) \,\big/\, \big(\tilde{D}_l + \tilde{D}_m + \tilde{D}_n\big)$

(11) $\min_X \big|AX - B\big|^2 + \alpha^2 \big|\beta X\big|^2$

(12) $\begin{pmatrix} \mu(\mathbf{r}) \\ \phi(\mathbf{r}) \end{pmatrix} = \mathcal{F}^{-1}\left\{ \frac{A^{H} B}{A^{H} A + \alpha^2 \beta^{H} \beta} \right\}$

(13) $\begin{pmatrix} I_R^{\mathrm{CCD}} \\ I_G^{\mathrm{CCD}} \\ I_B^{\mathrm{CCD}} \end{pmatrix} = \begin{pmatrix} R_{RR} & R_{GR} & R_{BR} \\ R_{RG} & R_{GG} & R_{BG} \\ R_{RB} & R_{GB} & R_{BB} \end{pmatrix} \begin{pmatrix} I_R^{\mathrm{IMG}} \\ I_G^{\mathrm{IMG}} \\ I_B^{\mathrm{IMG}} \end{pmatrix}$

(14) $\begin{pmatrix} I_R^{\mathrm{IMG}} \\ I_G^{\mathrm{IMG}} \\ I_B^{\mathrm{IMG}} \end{pmatrix} = \begin{pmatrix} R_{RR} & R_{GR} & R_{BR} \\ R_{RG} & R_{GG} & R_{BG} \\ R_{RB} & R_{GB} & R_{BB} \end{pmatrix}^{-1} \begin{pmatrix} I_R^{\mathrm{CCD}} \\ I_G^{\mathrm{CCD}} \\ I_B^{\mathrm{CCD}} \end{pmatrix}$
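As a minimal sketch of the color-leakage correction expressed by the last two equations, assuming a pre-calibrated 3×3 crosstalk matrix and demosaicked sensor channels stacked into a (3, H, W) array (the function name and array layout are illustrative):

```python
import numpy as np

def correct_color_leakage(i_ccd, R):
    """Recover the intensity images produced by the R, G, and B
    illumination segments from the raw sensor channels by inverting
    the 3x3 crosstalk matrix. R[j, k] couples illumination color k
    into sensor channel j; i_ccd is a (3, H, W) channel stack."""
    R_inv = np.linalg.inv(R)
    # Apply the inverse matrix at every pixel simultaneously:
    # sum over the color axis of R_inv and the channel axis of i_ccd.
    return np.tensordot(R_inv, i_ccd, axes=1)
```

Forward-simulating the leakage with a known matrix and then applying the correction recovers the original channel images, which is a convenient sanity check when calibrating the matrix in practice.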