
Graded-field microscopy with white light


Abstract

We present a general imaging technique called graded-field microscopy for obtaining phase-gradient contrast in biological tissue slices. The technique is based on introducing partial beam blocks in the illumination and detection apertures of a standard white-light widefield transillumination microscope. Depending on the relative aperture sizes, one block produces phase-gradient contrast while the other reduces brightfield background, allowing a full operating range between brightfield and darkfield contrast. We demonstrate graded-field imaging of neurons in a rat brain slice.

©2006 Optical Society of America

1. Introduction

Widefield phase imaging is of general utility in the biological sciences. The most popular microscopy techniques make use of Zernike phase contrast (PC) [1, 2] or Nomarski differential interference contrast (DIC) [3, 4]. In both techniques, uniform white light transilluminates a sample. Phase variations provoked by the sample are converted to intensity variations by specialized optics in the light path and are then detected by a widefield camera. DIC differs from PC in that it is sensitive to phase gradients along a particular lateral direction and hence confers an apparent 3D relief to the sample, producing shadow effects as if the illumination were incident from an angle.

Alternative phase imaging techniques also produce a similar 3D relief appearance. These techniques undoubtedly originated from the observation, familiar to most microscopists, that DIC-like shadow effects can be produced simply by partially blocking the light path before or after the sample. When this partial block is placed near a Fourier plane relative to the sample plane, the sample illumination appears uniform and the resulting image bears a striking resemblance to a DIC image, despite the fact that no specialized optics are required whatsoever. Variations of this technique involve placing a partial block in the illumination path (sometimes referred to as oblique [5, 6] or anaxial [7] illumination, or Dodt contrast [8]), or in the detection path (often referred to as Schlieren imaging [9, 10, 11, 12, 13]). In its usual implementation, Schlieren imaging makes use of slit (i.e. coherent) illumination, in which case a half-block in the detection path produces phase contrast by a Hilbert transform [14]. A more elaborate variation of Schlieren imaging, known as Hoffman contrast, also involves slit illumination but uses a specialized partial beam "modulator" with three levels of opacity [15]. Generalizations of Schlieren imaging to incoherent illumination have also been studied [11].

We present a rigorous description that encompasses all of the above techniques by including the possibility of partial-blocks in both the illumination and detection apertures. While such a possibility was previously considered for fixed partial aperture blocks [6, 13], its analysis remained largely qualitative. Our description is general in that we allow the possibilities of partial-blocks of arbitrary offsets and fill-factors. Different offsets and fill-factors are found to lead to different degrees of contrast that range from fully brightfield to fully darkfield. As such, we refer to our general technique as “graded-field” contrast. For simplicity, we confine our analysis to square apertures and consider white light illumination of arbitrary spatial coherence. Finally, we present examples of graded-field imaging of neurons in unlabeled rat brain slices.

2. Formalism

The basic microscope layout is shown in Fig. (1). This is a conventional transillumination widefield microscope comprising Köhler illumination and afocal imaging. The illumination source, typically a lamp, is imaged onto the back focal plane of the condenser lens fa (equivalently, the source can be located exactly at the back focal plane rather than imaged onto it, as depicted in Fig. 1). We refer to the condenser back focal plane as the illumination aperture. The transilluminated light emanating from the sample, located at plane 1, is then imaged onto a CCD camera, located at plane 3. For simplicity, we consider unit magnification only, though our formalism may easily be generalized to arbitrary magnification. The back aperture of the objective lens fb is located at plane 2. We refer to the objective back aperture as the detection aperture. In both the illumination and detection apertures we introduce translatable light blocks that partially block these apertures to arbitrary fill factors. In Fig. (1) these blocks are introduced from the same side, along the transverse x axis. Again for simplicity, we consider square apertures, which allows us to treat the x and y axes independently. Henceforth, we will consider only the x axis, with the understanding that conventional brightfield imaging occurs along the y axis.

Our strategy will be to derive the mutual coherence incident on (superscript in) and emanating from (superscript out) the sample, followed by the intensity distribution incident on the CCD camera. We begin by noting that the field amplitudes in planes separated by a single lens are related by simple Fourier transforms. Planes 0 and 2 are conjugate to one another and will be referred to as Fourier planes relative to the sample (1) and image (3) planes (also conjugate to one another).

Fig. 1. Experimental setup: incoherent white light transilluminates a sample in a 6f (unit-magnification) imaging line. Partial beam blocks are introduced in the illumination (top) and/or detection (bottom) apertures.

If the illumination field amplitude at plane 0 (just before the partial block) is given by E0(x0), then the field amplitude incident on the sample is

$$E_1(x_1) = \int_{\alpha_1}^{\alpha_2} E_0(\xi_0)\, e^{i x_1 \xi_0}\, d\xi_0 \qquad (1)$$

where we have used the Fourier coordinate ξ0 = kx0/fa (k = 2π/λ, where λ is the average illumination wavelength) and α1,2 = ka1,2/fa, where a1 and a2 represent the physical limits of the illumination aperture as prescribed by the position of its partial beam block (i.e. a1,2 are spatial coordinates whereas α1,2 are Fourier coordinates). If the block is removed, then a1 → −a and a2 → a, meaning that the fully opened aperture is a square of spatial dimension 2a. We omitted the prefactor in front of the integral in Eq. (1) since this plays no role in our discussion.

We define the mutual coherence[16] by

$$J(x, x') = \overline{E(x)\, E^*(x')} \qquad (2)$$

where the overbar represents a time average. From this, we obtain

$$J_1^{\mathrm{in}}(x_1, x'_1) = \int_{\alpha_1}^{\alpha_2} d\xi_0 \int_{\alpha_1}^{\alpha_2} d\xi'_0 \, J_0(\xi_0, \xi'_0)\, e^{i(x_1 \xi_0 - x'_1 \xi'_0)} \qquad (3)$$

This is the mutual coherence function incident on the sample for an arbitrary illumination mutual coherence. For the particular case where the illumination is uniform in intensity but spatially incoherent we write[16]

$$J_0(\xi_0, \xi'_0) = \delta(\xi_0 - \xi'_0) \qquad (4)$$

We then obtain

$$J_1^{\mathrm{in}}(x_1, x'_1) = J_1^{\mathrm{in}}(x_{1d}) = \alpha_d\, e^{i \alpha_c x_{1d}}\, \operatorname{sinc}\!\left(\tfrac{1}{2}\alpha_d x_{1d}\right) \qquad (5)$$

where we have defined x1d = x′1 − x1 and

$$\alpha_d = \alpha_2 - \alpha_1, \qquad \alpha_c = \tfrac{1}{2}\left(\alpha_1 + \alpha_2\right) \qquad (6)$$

Throughout this paper, we will use the subscripts d to denote the difference between two variables and c to denote their average (or center). In other words, αd denotes the width of the illumination aperture and αc its offset (in Fourier coordinates).

Three comments can be made here. On examining Eq. (5) we note that the mutual coherence incident on the sample is independent of x1c, meaning that the intensity incident on the sample is spatially uniform independently of the aperture block position. This follows from the general principle of Köhler illumination. Moreover, because the aperture size is finite, the incident illumination is now partially coherent over an approximate range 2π/αd, as prescribed by the sinc function in Eq. (5). Finally, the effect of an offset in the aperture is to introduce a complex phase in the (otherwise real) incident mutual coherence. This phase will play a critical role in producing graded-field contrast.
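As an illustration of these points (not part of the original paper), the short Python sketch below evaluates Eq. (5) for an open and for a half-blocked illumination aperture; the aperture limits are arbitrary unitless Fourier coordinates, and the helper usinc implements the unnormalized sinc(u) = sin(u)/u convention used in the equations above. With no offset the mutual coherence is purely real, while the block introduces the complex phase just discussed.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

def illumination_coherence(x1d, alpha1, alpha2):
    """Mutual coherence J1_in(x1d) of Eq. (5) for an illumination aperture spanning
    [alpha1, alpha2] in Fourier coordinates (the block position sets alpha1)."""
    alpha_d = alpha2 - alpha1            # aperture width, Eq. (6)
    alpha_c = 0.5 * (alpha1 + alpha2)    # aperture offset, Eq. (6)
    return alpha_d * np.exp(1j * alpha_c * x1d) * usinc(0.5 * alpha_d * x1d)

x1d = np.linspace(-20.0, 20.0, 401)
J_open    = illumination_coherence(x1d, -1.0, 1.0)  # no block: real, coherence range ~ 2*pi/alpha_d
J_blocked = illumination_coherence(x1d,  0.0, 1.0)  # half block: a complex phase appears
print(np.max(np.abs(J_open.imag)), np.max(np.abs(J_blocked.imag)))  # ~0 vs nonzero
```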

We now derive the intensity distribution I 3(x 3) at the CCD camera given an arbitrary mutual coherence J1out(x 1,x′ 1) emanating from the sample. Following the same line of reasoning as above, we obtain

$$J_3(x_3, x'_3) = \int_{\beta_1}^{\beta_2} d\xi_2 \int_{\beta_1}^{\beta_2} d\xi'_2 \, J_2(\xi_2, \xi'_2)\, e^{i(x_3 \xi_2 - x'_3 \xi'_2)} \qquad (7)$$

where we have used the Fourier coordinates ξ2 = kx2/fb and β1,2 = kb1,2/fb, where b1 and b2 represent the physical (spatial) limits of the detection aperture as prescribed by the position of its partial beam block. As above, if the beam block is removed, then the detection aperture is a square of spatial dimension 2b.

Similarly

$$J_2(\xi_2, \xi'_2) = \int dx_1 \int dx'_1 \, J_1^{\mathrm{out}}(x_1, x'_1)\, e^{i(\xi_2 x_1 - \xi'_2 x'_1)} \qquad (8)$$

where the integration limits can be extended to infinity. Combining Eqs. (7) and (8) and integrating over the detection aperture coordinates, we find

$$J_3(x_3, x'_3) = \beta_d^2 \int dx_1 \int dx'_1\, J_1^{\mathrm{out}}(x_1, x'_1)\, e^{i\beta_c (x_{1d} + x_{3d})}\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d (x_1 + x_3)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d (x'_1 + x'_3)\right) \qquad (9)$$

Finally, we obtain

$$I_3(x_3) = \int dx_{1c} \int dx_{1d}\, G_{13}(x_3 + x_{1c}, x_{1d})\, J_1^{\mathrm{out}}(x_{1c}, x_{1d}) \qquad (10)$$

where we have introduced the transfer function

$$G_{13}(x_3 + x_{1c}, x_{1d}) = \beta_d^2\, e^{i\beta_c x_{1d}}\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \qquad (11)$$

and made use of the definition I3(x3) = J3(x3, x3). We have also recast the mutual coherence in the form $J_1(x_{1c}, x_{1d}) = \overline{E_1\!\left(x_{1c} - \tfrac{1}{2}x_{1d}\right) E_1^*\!\left(x_{1c} + \tfrac{1}{2}x_{1d}\right)}$.

The function G13 in Eq. (10) can be thought of as a kind of point spread function (PSF); however, instead of operating on an incoherent intensity, as a normal PSF does, it operates on a partially coherent intensity. As in the illumination case, an offset in the detection aperture introduces a complex phase in G13. As will be seen below, the interplay between this phase and the phase in Eq. (5) will ultimately determine the contrast level in our graded-field microscope.
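As a hedged numerical illustration, the sketch below tabulates G13 of Eq. (11) on a grid of (x3 + x1c, x1d); the grid extents and the aperture limits are arbitrary choices, not values from the paper. A centred detection aperture yields a purely real kernel, whereas an offset aperture produces the complex phase in x1d described above.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

def G13(X, x1d, beta1, beta2):
    """Partially coherent 'PSF' G13(X, x1d) of Eq. (11), with X = x3 + x1c, for a
    detection aperture spanning [beta1, beta2] in Fourier coordinates."""
    beta_d = beta2 - beta1
    beta_c = 0.5 * (beta1 + beta2)
    return (beta_d**2 * np.exp(1j * beta_c * x1d)
            * usinc(0.5 * beta_d * (X - 0.5 * x1d))
            * usinc(0.5 * beta_d * (X + 0.5 * x1d)))

X, x1d = np.meshgrid(np.linspace(-10.0, 10.0, 201), np.linspace(-10.0, 10.0, 201), indexing="ij")
G_open    = G13(X, x1d, -1.0, 1.0)   # centred aperture: purely real kernel
G_blocked = G13(X, x1d,  0.0, 1.0)   # half-blocked aperture: complex phase in x1d
print(np.max(np.abs(G_open.imag)), np.max(np.abs(G_blocked.imag)))  # ~0 vs nonzero
```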

So far we have derived the mutual coherence incident on the sample, and also the intensity distribution incident on the camera given an arbitrary output mutual coherence emanating from the sample. The remaining step is to establish the link between the mutual coherences incident on and emanating from the sample. This is done by assuming that the sample is thin and of arbitrary complex amplitude transmittivity t(x1), such that

$$E_1^{\mathrm{out}}(x_1) = t(x_1)\, E_1^{\mathrm{in}}(x_1) \qquad (12)$$

The aim of our microscope is to reveal phase variations in t(x 1). From the definition of mutual coherence, we obtain the relation

$$J_1^{\mathrm{out}}(x_{1c}, x_{1d}) = T(x_{1c}, x_{1d})\, J_1^{\mathrm{in}}(x_{1c}, x_{1d}) \qquad (13)$$

where we have introduced the mutual coherence transmittivity

$$T(x_{1c}, x_{1d}) = t\!\left(x_{1c} - \tfrac{1}{2}x_{1d}\right) t^*\!\left(x_{1c} + \tfrac{1}{2}x_{1d}\right) \qquad (14)$$

The link between J1in and J1out has therefore been established, at least formally. Equations (5), (10) and (13) are the main results of this section. These define the relation between the observed intensity I 3 at the camera and the phase variations in the sample manifest in T. This relationship will be explicitly derived below.

3. Symmetry properties

In our model the sample is entirely characterized by its complex amplitude transmittivity t(x 1), which we assume to be arbitrary and hence impose no conditions on its profile. By construction, however, the associated mutual coherence transmittivity obeys the following condition

$$T(x_{1c}, x_{1d}) = T(x_{1c}, -x_{1d})^* \qquad (15)$$

We can therefore decompose this complex transmittivity into its real and imaginary components such that T(x1c, x1d) = Tr(x1c, x1d) + iTi(x1c, x1d), and note that the real functions Tr(x1c, x1d) and Ti(x1c, x1d) are respectively even and odd in x1d.

Similarly, we decompose J 1(x 1d) into its real and imaginary components such that J 1(x 1d)=Jr (x 1d)+iJi (x 1d), where

$$J_r(x_{1d}) = \alpha_d \cos(\alpha_c x_{1d})\, \operatorname{sinc}\!\left(\tfrac{1}{2}\alpha_d x_{1d}\right) \qquad (16)$$
$$J_i(x_{1d}) = \alpha_d \sin(\alpha_c x_{1d})\, \operatorname{sinc}\!\left(\tfrac{1}{2}\alpha_d x_{1d}\right) \qquad (17)$$

and note that Jr (x 1d) and Ji (x 1d) are respectively even and odd in x 1d.

Finally, we decompose G 13(x 3+x 1c,x 1d) into its real and imaginary components

$$G_r(x_3 + x_{1c}, x_{1d}) = \beta_d^2 \cos(\beta_c x_{1d})\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \qquad (18)$$
$$G_i(x_3 + x_{1c}, x_{1d}) = \beta_d^2 \sin(\beta_c x_{1d})\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \qquad (19)$$

respectively even and odd in x 1d.

Our strategy will be to isolate only the even components in the integrand of Eq. (10). To this end, we further define the sample independent transfer functions

$$K_e(x_3 + x_{1c}, x_{1d}) = G_r(x_3 + x_{1c}, x_{1d})\, J_r(x_{1d}) - G_i(x_3 + x_{1c}, x_{1d})\, J_i(x_{1d}) \qquad (20)$$
$$K_o(x_3 + x_{1c}, x_{1d}) = G_i(x_3 + x_{1c}, x_{1d})\, J_r(x_{1d}) + G_r(x_3 + x_{1c}, x_{1d})\, J_i(x_{1d}) \qquad (21)$$

which are again respectively even and odd in x 1d. Through simple trigonometric relations we find

$$K_e(x_3 + x_{1c}, x_{1d}) = \cos\!\left((\alpha_c + \beta_c)\, x_{1d}\right) K(x_3 + x_{1c}, x_{1d}) \qquad (22)$$
$$K_o(x_3 + x_{1c}, x_{1d}) = \sin\!\left((\alpha_c + \beta_c)\, x_{1d}\right) K(x_3 + x_{1c}, x_{1d}) \qquad (23)$$

where

$$K(x_3 + x_{1c}, x_{1d}) = \alpha_d \beta_d^2\, \operatorname{sinc}\!\left(\tfrac{1}{2}\alpha_d x_{1d}\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \qquad (24)$$

Finally, keeping only the even terms in Eq. (10), we obtain

$$I_3(x_3) = \int\!\!\int \Big[ K_e(x_3 + x_{1c}, x_{1d})\, T_r(x_{1c}, x_{1d}) + K_o(x_3 + x_{1c}, x_{1d})\, T_i(x_{1c}, x_{1d}) \Big]\, dx_{1c}\, dx_{1d} \qquad (25)$$

This equation, together with Eqs. (22), (23), and (24), constitutes the main result of this section. In effect, Eq. (25) explicitly relates the sample transmittivity, encoded in T(x1c, x1d), to the final intensity detected by the CCD camera. The functions Ke(x3 + x1c, x1d) and Ko(x3 + x1c, x1d) play dual roles: they serve as transfer functions for the coordinate x1c and as window functions for the coordinate x1d. As transfer functions, they essentially image a sample point x1c to a detector point x3, with an attendant loss of resolution characterized by the width of K(x3 + x1c, x1d). As window functions their role is more subtle, in that they reveal non-local features in the sample transmittivity. The roles of Ke and Ko as window functions are shown in Fig. (2) for various states of aperture blocking.

A benefit of the coordinate transformation in Eq. (6) is that it effectively isolates the roles of the aperture widths and aperture offsets in Eqs. (22) and (23). The aperture widths appear only in K and govern image resolution. The aperture offsets, on the other hand, govern the relative weights of Ke and Ko. If neither aperture is offset (αc = βc = 0), then Ke → K and Ko → 0. This corresponds to a standard brightfield configuration, and the Ke component in Eq. (25) can be thought of as performing brightfield imaging. When aperture offsets are introduced, Ko begins to play a role at the expense of Ke. Because of its asymmetry in x1d, this Ko component can be thought of as performing gradient imaging. As we will see below, Ke produces amplitude contrast whereas Ko produces phase contrast.
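The sketch below (with illustrative aperture values only) evaluates the window functions Ke(0, x1d) and Ko(0, x1d) of Eqs. (22)-(24) for a few block configurations, in the spirit of Fig. 2, and checks their even and odd symmetry; as expected from Eq. (23), Ko vanishes identically when neither aperture is offset.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

def windows(x1d, alpha1, alpha2, beta1, beta2, X=0.0):
    """Window functions Ke(X, x1d) and Ko(X, x1d) of Eqs. (22)-(24), here at X = x3 + x1c."""
    a_d, a_c = alpha2 - alpha1, 0.5 * (alpha1 + alpha2)
    b_d, b_c = beta2 - beta1, 0.5 * (beta1 + beta2)
    K = (a_d * b_d**2 * usinc(0.5 * a_d * x1d)
         * usinc(0.5 * b_d * (X - 0.5 * x1d)) * usinc(0.5 * b_d * (X + 0.5 * x1d)))
    return np.cos((a_c + b_c) * x1d) * K, np.sin((a_c + b_c) * x1d) * K

x1d = np.linspace(-30.0, 30.0, 1201)    # symmetric grid containing x1d = 0
configs = {"no blocks":                 (-1.0, 1.0, -0.5, 0.5),
           "illumination half-blocked": ( 0.0, 1.0, -0.5, 0.5),
           "both half-blocked":         ( 0.0, 1.0,  0.0, 0.5)}
for name, ap in configs.items():
    Ke, Ko = windows(x1d, *ap)
    print(name,
          "| Ke even:", np.allclose(Ke, Ke[::-1]),
          "| Ko odd:", np.allclose(Ko, -Ko[::-1]),
          "| peak |Ko|:", round(float(np.max(np.abs(Ko))), 3))
```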

Fig. 2. Plots of the window functions Ke(0, x1d) and Ko(0, x1d) for different aperture block configurations (shown schematically on the left). Ke reveals amplitude fluctuations whereas Ko reveals phase-gradient fluctuations in the sample. B characterizes the net brightfield contribution, or background level, as defined by the integral of Ke (Eq. (26)). The argument x1d is normalized to be unitless.

4. Graded field tuning

A better appreciation of the effects of Ke and Ko is gained by considering their net contributions. We quantify these contributions by evaluating the integrals of Ke and Ko over x 1c and x 1d . It is clear that the integral of Ko vanishes because Ko is odd in x 1d. In general, however, the integral of Ke does not vanish and we define the net brightfield contribution by

$$B = \frac{\int\!\!\int K_e(x_3 + x_{1c}, x_{1d})\, dx_{1c}\, dx_{1d}}{\int\!\!\int K_{\mathrm{total}}(x_3 + x_{1c}, x_{1d})\, dx_{1c}\, dx_{1d}} \qquad (26)$$

where Ktotal corresponds to the total brightfield contribution when the aperture blocks are removed. In other words, the parameter B characterizes the extent to which the microscope contrast is graded. If B = 0 then the contrast is fully darkfield; if B = 1 then the contrast is fully brightfield. B can be tuned within this range simply by adjusting the aperture offsets. We note that the integrals over x1c in Eq. (26) are readily evaluated using:

$$\int \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) dx_{1c} = \frac{2\pi}{\beta_d}\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d x_{1d}\right) \qquad (27)$$

A pictorial representation of B is shown in Fig. (2). B can be thought of as the relative amount of ballistic (i.e. unscattered) illumination light that arrives at the detector. When the illumination and detection apertures are fully open, all the ballistic illumination arrives at the detector and B = 1. When beam blocks are introduced, B is reduced in proportion to the intersection of the aperture areas. Finally, when the beam blocks cover exactly half of their respective apertures, B = 0. In this case, there is no brightfield background and the contrast is purely darkfield. The illumination light that directly traverses the sample (i.e. ballistic light) is completely blocked, and the only light impinging on the detector arises from sample scattering.
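As a numerical check of this behaviour (again with illustrative aperture values), the sketch below evaluates B from Eq. (26), using Eq. (27) to perform the x1c integrals analytically so that only a single quadrature over x1d remains. The full apertures are assumed here to span [-1, 1] (illumination) and [-0.5, 0.5] (detection) in Fourier coordinates.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

def graded_field_B(alpha1, alpha2, beta1, beta2, A, Bfull):
    """Brightfield fraction B of Eq. (26). The x1c integral is done analytically with
    Eq. (27), leaving a single numerical integral over x1d. [alpha1, alpha2] and
    [beta1, beta2] are the (possibly blocked) apertures in Fourier coordinates;
    A and Bfull are the half-widths of the fully open apertures."""
    x = np.arange(-2000.0, 2000.0, 0.02)
    dx = 0.02
    a_d, a_c = alpha2 - alpha1, 0.5 * (alpha1 + alpha2)
    b_d, b_c = beta2 - beta1, 0.5 * (beta1 + beta2)
    num = a_d * b_d * np.sum(np.cos((a_c + b_c) * x) * usinc(0.5 * a_d * x) * usinc(0.5 * b_d * x)) * dx
    den = (2 * A) * (2 * Bfull) * np.sum(usinc(A * x) * usinc(Bfull * x)) * dx
    return num / den

print(round(graded_field_B(-1.0, 1.0, -0.5, 0.5, 1.0, 0.5), 3))  # no blocks: ~1.0
print(round(graded_field_B( 0.0, 1.0, -0.5, 0.5, 1.0, 0.5), 3))  # illumination half-blocked: ~0.5
print(round(graded_field_B( 0.0, 1.0,  0.0, 0.5, 1.0, 0.5), 3))  # both half-blocked: ~0.0
```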

5. Phase versus amplitude contrast

We distinguish phase versus amplitude contrast by explicitly separating the sample complex transmittivity t(x 1) into real and imaginary components

$$t(x_1) = 1 - q(x_1) + i\, p(x_1) \qquad (28)$$

where the real functions q(x 1) and p(x 1) respectively impart amplitude and phase variations to the transmitted light. We assume that the sample is thin, meaning both q(x 1) and p(x 1) are much smaller than 1. To first order, we then write

$$T_r(x_{1c}, x_{1d}) \approx 1 - q\!\left(x_{1c} - \tfrac{1}{2}x_{1d}\right) - q\!\left(x_{1c} + \tfrac{1}{2}x_{1d}\right) \qquad (29)$$
$$T_i(x_{1c}, x_{1d}) \approx p\!\left(x_{1c} + \tfrac{1}{2}x_{1d}\right) - p\!\left(x_{1c} - \tfrac{1}{2}x_{1d}\right) \qquad (30)$$

thereby segregating amplitude and phase variations into the real and imaginary parts of T(x 1c,x 1d) respectively.

The above relations suggest that the brightfield component Ke in Eq. (25) produces amplitude contrast while the gradient component Ko produces phase contrast. More precisely, Ke reveals q(x1c) only when B > 0 (if B = 0 then Ke reveals the local curvature in q(x1c), or the second derivative q″(x1c)). In contrast, Ko reveals the local slope in p(x1c), or the first derivative p′(x1c), thereby further refining its role as performing "phase gradient" imaging (the possibility of a partial aperture block to reveal phase gradients has been previously recognized [11]). Our phase-gradient imaging is sensitive to the sign of p′(x1c): positive phase gradients lead to an increase in detected intensity whereas negative phase gradients lead to a decrease in detected intensity. We note that in the case of darkfield imaging (B = 0) the detected intensity cannot be less than zero and our phase-gradient contrast becomes one-sided. That is, positive phase gradients are detected whereas negative phase gradients are not. In particular, one can verify that when B = 0 (i.e. αc = αd/2 and βc = βd/2) and the sample complex transmittivity is a pure phase (i.e. t(x1) = exp[iϕ(x1)]), the Ke and Ko integrals in Eq. (25) exactly cancel for negative phase gradients. However, once a small brightfield background is introduced by slightly opening one or both aperture blocks (B ≳ 0), negative phase gradients become observable and the image takes on a 3D relief appearance, much like a DIC image. Experimental examples of graded-field imaging are shown in Fig. (3).
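To illustrate Eq. (25) end to end, the following sketch simulates a one-dimensional weak pure-phase object, a Gaussian phase bump of 0.2 rad (an arbitrary choice), imaged with a half-blocked illumination aperture and an open detection aperture; for these assumed settings the brightfield parameter is roughly B ≈ 0.5. To first order Tr = 1 and Ti follows Eq. (30), so the computed image is a uniform brightfield background carrying a phase-gradient signal of opposite sign on the two flanks of the bump and essentially no signal at its centre.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

# Apertures (Fourier coordinates): illumination [-1, 1] half-blocked to [0, 1],
# detection [-0.5, 0.5] left open, giving a graded configuration with B ~ 0.5.
a_d, a_c = 1.0, 0.5
b_d, b_c = 1.0, 0.0
mu = a_c + b_c

def p(x):                       # weak Gaussian phase bump (0.2 rad, arbitrary)
    return 0.2 * np.exp(-(x / 8.0) ** 2)

# Grids for the double integral of Eq. (25); Tr = 1 and Ti follows Eq. (30)
# for a weak pure-phase object (q = 0).
x1c = np.arange(-80.0, 80.0, 0.5)
x1d = np.arange(-40.0, 40.0, 0.25)
C, D = np.meshgrid(x1c, x1d, indexing="ij")
dA = 0.5 * 0.25
Ti = p(C + 0.5 * D) - p(C - 0.5 * D)

x3 = np.arange(-20.0, 20.5, 0.5)
I3 = np.empty_like(x3)
for k, x in enumerate(x3):
    X = x + C
    K = (a_d * b_d**2 * usinc(0.5 * a_d * D)
         * usinc(0.5 * b_d * (X - 0.5 * D)) * usinc(0.5 * b_d * (X + 0.5 * D)))
    I3[k] = np.sum(np.cos(mu * D) * K * 1.0 + np.sin(mu * D) * K * Ti) * dA  # Eq. (25)

background = I3[0]                      # far from the bump: brightfield term only
contrast = (I3 - background) / background
for probe in (-8.0, 0.0, 8.0):          # two flanks and the centre of the bump
    print(probe, round(contrast[np.argmin(np.abs(x3 - probe))], 4))
```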

Fig. 3. Experimental images of pyramidal neurons in an acute rat hippocampus slice (imaging depth ~50 µm; slice thickness 400 µm; 15-day-old rat). Images were taken with no aperture blocks (fully brightfield configuration, panel A), with a partial beam block in the illumination aperture (panel B) or the detection aperture (panel C), and with partial beam blocks in both apertures (panel D). In this setup, the illumination aperture was larger than the detection aperture.

6. Image resolution

We recall that Ke and Ko serve as transfer functions for the coordinate x 1c and as window functions for the coordinate x 1d. Image resolution therefore depends on the ranges allowed for these coordinates. In general, these ranges are determined by the widths and offsets of the illumination and detection apertures, whose roles, according to Eqs. (22) and (23), can be analyzed separately.

We begin by analyzing the role of the aperture widths. Since these widths appear only in K (Eq. (24)), their role is the same for both Ke and Ko. We consider two limits that characterize qualitatively different states of illumination, which we refer to as incoherent (αd ≫ βd) and coherent (αd ≪ βd).

In the case αd ≫ βd (incoherent illumination) we can make the approximation

$$K(x_3 + x_{1c}, x_{1d}) \approx \alpha_d \beta_d^2\, \operatorname{sinc}\!\left(\tfrac{1}{2}\alpha_d x_{1d}\right) \operatorname{sinc}^2\!\left(\tfrac{1}{2}\beta_d (x_3 + x_{1c})\right) \qquad (31)$$

The roles of K as transfer and window functions have become separated here, and the ranges of x1d and x1c are roughly defined by |x1d| ≲ 2π/αd and |x3 + x1c| ≲ π/βd. Because K is localized to small x1d's, it effectively reduces to an incoherent PSF, whereby the observed light intensity at the detector position x3 depends only on the transmitted light intensity in the vicinity of the sample position x1c.
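A quick numerical check of this approximation, with arbitrarily chosen widths that are not taken from the paper, compares the full K of Eq. (24) with the separable form of Eq. (31); the printed relative error should be small when αd greatly exceeds βd.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

a_d, b_d = 20.0, 1.0   # incoherent limit: illumination aperture much wider than detection
X, D = np.meshgrid(np.linspace(-10.0, 10.0, 201), np.linspace(-3.0, 3.0, 201), indexing="ij")

K_full = (a_d * b_d**2 * usinc(0.5 * a_d * D)
          * usinc(0.5 * b_d * (X - 0.5 * D)) * usinc(0.5 * b_d * (X + 0.5 * D)))   # Eq. (24)
K_incoh = a_d * b_d**2 * usinc(0.5 * a_d * D) * usinc(0.5 * b_d * X) ** 2          # Eq. (31)

print(np.max(np.abs(K_full - K_incoh)) / np.max(np.abs(K_full)))  # small relative error here
```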

In the opposite case αd ≪ βd (coherent illumination) we can make the approximation

$$K(x_3 + x_{1c}, x_{1d}) \approx \alpha_d \beta_d^2\, \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} - \tfrac{1}{2}x_{1d}\right)\right) \operatorname{sinc}\!\left(\tfrac{1}{2}\beta_d \left(x_3 + x_{1c} + \tfrac{1}{2}x_{1d}\right)\right) \qquad (32)$$

The roles of K as a window and a transfer function remain inseparable. The ranges of x1d and x1c are now roughly defined by |x1d| ≲ 2π/βd and |x3 + x1c| ≲ π/βd, which are no longer independent. The observed light intensity at the detector position x3 depends on the distribution of transmitted light fields (as opposed to intensities) in the vicinity of the sample position x1c. We note that the limit αd ≪ βd effectively corresponds to slit illumination, as is the case for both Schlieren and Hoffman contrast techniques [10, 12, 15].

In both cases of incoherent and coherent illumination, the transfer-function resolution from sample position x1c to detector position x3 is determined by the detection aperture width βd alone. When βd becomes large, K is non-zero only when x3 ≈ −x1c, and the transfer-function resolution becomes high (note that the image is inverted relative to the object).

Final image resolution also depends on the resolution of the window function for the coordinate x1d, since ultimately this window function is responsible for revealing non-local transmission variations in the sample, such as phase gradients. The range allowed for |x1d| depends not only on the aperture widths, as discussed above, but also on the aperture offsets. If the aperture offsets are too large, then the cosine and sine terms in Eqs. (22) and (23) oscillate too rapidly, and both Ke and Ko are ineffective at producing contrast. If instead the aperture offsets are too small, then the sine term in Eq. (23) oscillates too slowly, and Ko is ineffective at producing phase-gradient contrast. As a rule of thumb, graded-field microscopy is effective at revealing phase gradients when αc + βc ≈ ηαd for incoherent illumination, or αc + βc ≈ ηβd for coherent illumination, where η ranges between 1/2 and 1 (a larger η leads to greater suppression of brightfield background). A simple conclusion may be drawn from this rule of thumb: if only a single aperture block is used to reveal phase gradients, then for incoherent illumination it is more effective to partially block the illumination aperture (as corroborated by Fig. (3)), whereas for coherent illumination it is more effective to partially block the detection aperture. In both cases, the effect of a second aperture block is then to suppress brightfield background. An advantage of incoherent over coherent illumination is that it allows higher phase-gradient resolution, since it makes use of a larger illumination aperture. Correspondingly, this is an advantage of graded-field microscopy, which allows the use of a large illumination aperture, over Schlieren or Hoffman contrast microscopies, both of which use slit illumination.
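One way to explore this rule of thumb numerically is sketched below (an illustration introduced here, not a procedure from the paper): with the detection aperture held fully open, the illumination block is advanced across its aperture, and for each position two simple figures of merit are evaluated at the field centre, namely a residual brightfield weight (the integral of Ke(0, x1d)) and a linear phase-gradient response (the first moment of Ko(0, x1d)). Both figures of merit are constructions assumed for this sketch, expressed in the same arbitrary Fourier units used above.

```python
import numpy as np

def usinc(u):
    return np.sinc(u / np.pi)  # unnormalized sinc(u) = sin(u)/u

# Detection aperture fixed fully open at [-0.5, 0.5]; the illumination aperture
# [-1, 1] is blocked from the left so that only [c, 1] remains open.
b_d, b_c = 1.0, 0.0
D = np.arange(-200.0, 200.0, 0.02)
dD = 0.02

for c in (-1.0, -0.5, 0.0, 0.25, 0.5):
    a_d, a_c = 1.0 - c, 0.5 * (1.0 + c)
    mu = a_c + b_c                       # combined offset entering Eqs. (22)-(23)
    K0 = a_d * b_d**2 * usinc(0.5 * a_d * D) * usinc(0.25 * b_d * D) ** 2   # K(0, x1d)
    brightfield = np.sum(np.cos(mu * D) * K0) * dD    # residual Ke weight at field centre
    gradient = np.sum(np.sin(mu * D) * D * K0) * dD   # linear phase-gradient response
    print(f"block edge {c:+.2f}: brightfield {brightfield:7.3f}, gradient response {gradient:7.3f}")
```

In this sketch the brightfield weight decreases toward zero as the block advances past the centre of the illumination aperture, while the magnitude of the gradient response first grows with the offset and then dies away once the offset becomes too large relative to the remaining aperture width, in line with the rule of thumb above.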

7. Conclusion

In summary, we have presented the formalism for a general graded-field microscopy technique involving the partial blocking of the illumination and detection apertures. The resultant aperture offsets lead to phase-gradient imaging as well as adjustable brightfield background suppression. The main advantage of graded-field microscopy is its extreme simplicity and versatility. Graded-field microscopy requires no specialized optics (as opposed to DIC or Zernike phase contrast), and can be implemented with any widefield microscope. Moreover, because it makes use of adjustable aperture blocks, graded-field microscopy can readily be tuned over different imaging configurations ranging from fully brightfield to fully darkfield. These advantages should make graded-field microscopy useful for general phase-contrast imaging applications.

Acknowledgments

The authors acknowledge the support of the Whitaker Foundation and of the NIH for this work.

References and links

1. F. Zernike, "Das Phasenkontrastverfahren bei der mikroskopischen Beobachtung [in German]," Z. Tech. Phys. 16, 454 (1935).

2. F. Zernike, "How I discovered phase contrast," Science 121, 345–349 (1955).

3. G. Nomarski, "Microinterféromètre différentiel à ondes polarisées [in French]," J. Phys. Radium 16, S9 (1955).

4. R. D. Allen, G. B. David, and G. Nomarski, "The Zeiss-Nomarski differential interference equipment for transmitted-light microscopy," Z. Wiss. Mikrosk. 69, 193–221 (1969).

5. B. Kachar, "Asymmetric illumination contrast: a method of image formation for video microscopy," Science 227, 766–768 (1985).

6. W. B. Piekos, "Diffracted-light contrast enhancement: A re-examination of oblique illumination," Microsc. Res. Tech. 46, 334–337 (1999).

7. S. Inoue, Video Microscopy (Plenum Press, New York, 1986).

8. H. U. Dodt, M. Eder, A. Frick, and W. Zieglgänsberger, "Precisely localized LTD in the neocortex revealed by infrared-guided laser stimulation," Science 286, 111–113 (1999).

9. C. F. Saylor, "Accuracy of microscopical methods for determining refractive index by immersion," J. Res. US Natl. Bur. Stds. 15, 277 (1935).

10. E. H. Linfoot, Recent Advances in Optics (Clarendon Press, Oxford, 1955).

11. J. Ojeda-Castaneda and L. R. Berriel-Valdos, "Classification scheme and properties of schlieren techniques," Appl. Opt. 18, 3338–3341 (1979).

12. J. G. Dodd, "Interferometry with Schlieren microscopy," Appl. Opt. 16, 470–472 (1977).

13. D. Axelrod, "Zero-cost modification of bright field microscopes for imaging phase gradient on cells: Schlieren optics," Cell Biophys. 3, 167–173 (1981).

14. S. Lowenthal and Y. Belvaux, "Observation of phase objects by optically processed Hilbert transform," Appl. Phys. Lett. 11, 49–51 (1967).

15. R. Hoffman and L. Gross, "Modulation contrast microscopy," Appl. Opt. 14, 1169–1176 (1975).

16. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, Cambridge, UK, 1999).
