Optica Publishing Group

Single-shot phase-stepped wide-field coherence-gated imaging

Open Access

Abstract

We present a single-shot wide-field CCD-based coherence-gated imaging technique that utilizes spatially separated phase-stepped images and requires only one CCD camera to achieve simultaneous acquisition of four phase-stepped images. This technique provides a relatively low-cost system for depth-resolved imaging of dynamic samples. We demonstrate real-time coherence-gated imaging of a moving watch cog, 3D reconstructions of a coin, phase measurements of the surface of a test chart and depth-resolved imaging in a weakly scattering sample of onion.

©2003 Optical Society of America

1. Introduction

There is currently much interest in optical techniques for imaging the three-dimensional topography of structures, particularly in the case of translucent or scattering samples. When imaging such samples it is necessary to employ some means of distinguishing the light originating from the plane of interest from the scattered or out-of-focus light. One well established technique for achieving this goal is that of Optical Coherence Tomography (OCT) [1]. OCT uses a coherence gate, provided by the short coherence length of the optical source, to enhance axial (Z) resolution and to discriminate against scattered light. OCT is usually implemented in a fibre-optic based Michelson interferometer. A variable optical delay between the reference and sample arms is used to produce a heterodyne signal and, simultaneously, an axial scan through the sample. The detected signal can then be electronically demodulated and is proportional to the square root of the reflectivity of the sample in the detection volume. An XZ cross-section image of the sample is then produced by laterally (X) scanning the detection point. However, the requirement for mechanical scanning limits the acquisition rate, although video-rate imaging has been achieved via the use of a high-speed optical delay line [2]. Also, the use of a fibre-optic interferometer requires a spatially coherent source such as a superluminescent diode or mode-locked laser.

There is considerable interest in developing instruments capable of performing Wide-field Coherence-gated Imaging (WCGI) that acquire an en face (XY depth-resolved) “slice” of the sample with parallel pixel acquisition. In this case only one-dimensional axial scanning is required to interrogate a three-dimensional sample volume. Several methods for performing WCGI have been proposed. Generally these techniques utilize a bulk-optic Michelson (or variation such as Linnik microscope) interferometer in combination with a two-dimensional detector to produce coherence-gated images. A smart-pixel CMOS detector array has been developed [3] that essentially implements an array of OCT detectors in parallel. Each pixel in the detector array has associated electronics to extract the heterodyne signal generated by the scanning reference mirror. Currently, an array of 58×58 pixels has been achieved and this technique has also demonstrated video-rate volumetric imaging [4]. Electronic holography [5], also described as off-axis digital holography [6], is another WCGI approach. The chief drawback of off-axis digital holography is that the CCD detector must have a sufficient resolution to adequately sample the resulting fringe pattern. The fringe period may be increased by changing the reference beam alignment but it must remain sufficiently small that the hologram spatial frequencies (±1 orders) and the 0th order do not overlap in frequency space [7]. These conditions require the CCD detector to have a large number of pixels and the time required to compute the reconstructed holographic image scales with the size of the image. Digital holography also permits the calculation of the optical phase at the sample [8]. 
Photorefractive holography is another WCGI technique, with the unique advantage that the background of scattered light is intrinsically removed by the photorefractive holographic recording and reconstruction process, allowing only the coherence-gated image to be detected at the CCD [9]. This is because the photorefractive response depends on the spatial derivative of the incident light intensity, rather than the integrated intensity. Photorefractive devices such as AlGaAs/GaAs multiple quantum well devices permit high-speed holographic recording and reconstruction [10] and can potentially image through thicker scattering media than direct CCD detection based techniques. Unfortunately, however, the required photorefractive devices are not yet commercially available.

Various issues associated with off-axis holography, including constraints on the detector resolution and the problem of walk-off [11], may be avoided if a collinear beam geometry is employed and an interferogram with fringe modulation along the Z axis is recorded. This approach may be described as phase-shifting digital holography or phase-stepping image-plane interferometry and is a wide-field analogue of OCT, offering the potential of higher imaging rates. In common with other WCGI techniques, it also offers the ability to use sources of low spatial coherence such as LEDs [12] or thermal sources [13, 14]. In the work by Beaurepaire et al [12], the reference mirror of the interferometer is modulated by a fraction of a wavelength in synchronization with stroboscopic illumination and a CCD camera records the resulting interferograms. By varying the phase delay between the reference mirror oscillation and the illumination pulse, a series of (typically four) images, corresponding to different phase shifts, can be recorded. This technique may be described as temporally multiplexed phase shifting. Digital processing of the phase-shifted images allows the coherence-gated image to be extracted, together with additional phase information on the profile of the surface being imaged. Temporal phase shifting techniques with signal averaging allow high-quality images to be obtained with high sensitivity and dynamic range [15] and the use of thermal light sources provides coherence lengths down to the micron level [13]. Images can be recorded in the presence of a background of scattered light and the processing necessary to extract the sectioned image can be performed at up to 50 frames per second. However, one issue with such temporal phase shifting techniques is that the sample must be essentially stationary during the time taken to acquire the four phase stepped images. This significantly compromises the ability to image dynamic samples.

To avoid the problem of sample movement during phase-shifting image acquisition, Smythe and Moore demonstrated simultaneous phase-stepped imaging [16]. Here, four separate phase-stepped images were produced by means of polarization elements and recorded using four synchronized CCD detectors to achieve instantaneous phase measuring interferometry. There have been several other attempts at performing single-shot phase-stepped imaging. In [17], two Savart elements (acting as both a beam displacer and analyzer in a polarization phase-stepping scheme) and two CCD cameras were used to demonstrate a real-time single-acquisition four-bucket interferometer. However, the use of calcite beam displacing prisms introduces significant astigmatism to the final images. In [18], a special Ronchi phase grating was applied to a phase-stepping interferometer to generate four phase-stepped diffracted first-order images. Although this method can be used for dynamic deformation measurement (narrow line-width illumination), it is not compatible with depth-resolved low coherence imaging because of the dispersion introduced by the phase grating. The optical efficiency of this technique (~ 8% per channel) is also an issue. Depth-resolved imaging using simultaneous phase stepping has been demonstrated [19] using spatially incoherent light to achieve depth resolution in a similar manner to that described by Sun and Leith [20]. In this technique, which is described as wide-field confocal microscopy, the depth resolution is due to the combination of the broad source and numerical aperture sectioning introduced by the microscope objectives. This technique is limited to small objective-sample separations if a fine depth resolution is required.

In this letter we report what we believe to be the first single-shot low (temporal) coherence phase-shifting interferometer. This exploits an achromatic, spatially multiplexed four-channel polarization phase-stepped imaging set-up to realize a relatively low-cost single-shot WCGI technique. The single-shot acquisition of four phase-stepped images by one CCD allows the reconstruction of the amplitude and the optical phase of a dynamic depth-resolved object profile, potentially at frame rates exceeding 1000 frames per second, even in the presence of scattered light. The achromatic nature of the optical system allows illumination with a broad spectral bandwidth, hence allowing a fine depth resolution that is independent of the numerical aperture of the system.

2. Experimental setup

The overall system is divided into two distinct parts (see Fig. 1): a low-coherence polarizing Michelson interferometer and a Four-Channel Polarization Phase Stepper (FCPPS). The polarizing Michelson-type interferometer produces object and reference beams with mutually orthogonal polarizations. Here, the use of the word interferometer is slightly inaccurate, as in this part of the set-up there is no analyzing polarizer; i.e., if a screen were placed at S2 (Fig. 1) then no interference fringes would be observed. The light exiting the interferometer is then imaged onto the input plane of the FCPPS, which produces four spatially separated polarization phase-stepped images at its output image plane, where a CCD detector is placed.


Fig. 1 Experimental setup; O object, R reference mirror, L1-5 lenses, S1-2 slits, PBS1-3 polarizing beam splitter cube, NPBS non-polarising beam splitter cube, P periscope, M1-6 mirrors, Q1-4 quarter wave plates. Inset shows CCD image acquired with a USAF test chart placed at O.


Light from a low spatial/temporal coherence LED (HE8404SG, Hitachi, central wavelength 800 nm, spectral bandwidth 44 nm (full width at half maximum, FWHM) and emitting diameter 360 µm) is collimated and linearly polarized by PBS1. Rotating PBS1 allows the relative intensity in the two arms of the interferometer to be adjusted so that the maximum interference modulation can be achieved. The light is split at PBS2 into two beams, which pass through quarter wave plates Q1 and Q2 and illuminate the object O and a reference mirror R respectively. Passing through Q1 and Q2 for a second time, the reflections from the object and the reference exchange their polarization and exit PBS2 with mutually orthogonal polarizations (Fig. 1). The imaging lens L3 (focal length 100 mm) is positioned so as to achieve approximately unit magnification of the object O onto an image plane at slit S2. Slit S2 forms a rectangular field stop for each of the resulting channels. The effective working distance (minimum object – apparatus separation) for this setup was ~4 cm, although this distance could be increased, depending on the magnification required.

The FCPPS is built within a unit magnification telescope geometry, formed by 180 mm focal length lenses L4 and L5. These lenses are used to relay the image from S2 to the CCD camera. Initially, the orthogonally polarized object and reference beams are equally divided into two channels (a and b) by a non-polarizing beam splitter (NPBS). Quarter wave plate Q3 is placed in beam a with its fast axis orientated parallel to the polarization direction of the object beam (PO). This generates a λ/4 delay between the object and reference beams in channel a, and hence a 90° phase-shift between the interferograms derived from channel a and channel b. Next, quarter wave plate Q4 (fast axis orientated at 45° with respect to both the reference and the object beams) converts each beam into circularly polarized light (the object and reference beams acquire opposite handedness). Periscope (P) is used to rotate the beams by 90°; this is not a necessary part of the setup and serves only to keep all of the beams parallel to the same geometrical plane, thus simplifying alignment. Polarizing beam splitter PBS3 then splits beams a and b into beams 1a, 1b, 2a and 2b, and creates a 180° phase shift between the pairs of images of channel 1 and those of channel 2. This is due to the 90° difference in the orientation of the polarizer, as seen by the two channels (also known as Pancharatnam’s phase [21]).

Alternatively, the polarization phase-stepping may be described using the Jones formalism [22]. The Jones matrices for a horizontal linear polarizer, P0, a vertical linear polarizer, P90, and a quarter wave plate with horizontal fast axis, Q0, and 45° fast axis, Q45, are:

$$P_0=\begin{bmatrix}1&0\\0&0\end{bmatrix},\qquad P_{90}=\begin{bmatrix}0&0\\0&1\end{bmatrix},\qquad Q_0=\begin{bmatrix}1&0\\0&i\end{bmatrix},\qquad Q_{45}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&i\\i&1\end{bmatrix}$$

Therefore, the matrices describing the change in polarization state for each of the four channels of our FCPPS may be described as follows (using the labels for each channel as drawn in Fig. 1):

$$M_{1a}=P_0\,Q_{45}\,Q_0,\qquad M_{1b}=P_0\,Q_{45},\qquad M_{2a}=P_{90}\,Q_{45}\,Q_0,\qquad M_{2b}=P_{90}\,Q_{45}$$

For clarity, we neglect the effect of the periscope on the polarization; it does not influence the generation of phase-stepped images. A complex amplitude vector can now be used to describe the light entering the FCPPS from each arm of the interferometer:

$$E_{\mathrm{in}}=\begin{bmatrix}O\exp(i\varphi_O)\\ R\exp(i\varphi_R)\end{bmatrix}$$

where O, R, φO, and φR describe, for a given image point, the amplitudes and phases of the waves in the object and reference arms respectively. The amplitude emerging from each channel of the FCPPS can then be found by multiplying Ein by the appropriate matrix, allowing the corresponding intensity to be obtained, which yields:

$$\begin{aligned}
I_{1a} &= \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 + OR\cos(\varphi_O-\varphi_R)\\
I_{1b} &= \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 + OR\sin(\varphi_O-\varphi_R)\\
I_{2a} &= \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 - OR\cos(\varphi_O-\varphi_R)\\
I_{2b} &= \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 - OR\sin(\varphi_O-\varphi_R)
\end{aligned}$$

It is clear from these expressions that the desired relative phase shift of 90° between successive interferograms has been produced.
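As a numerical cross-check (ours, not part of the paper), the Jones products above can be evaluated directly. Note that which channel carries the cosine fringe and which the sine depends on the sign convention adopted for the quarter-wave plates, but the 90° spacing between the four interferograms is convention-independent:

```python
import numpy as np

# Jones matrices from the text
P0  = np.array([[1, 0], [0, 0]], dtype=complex)        # horizontal polarizer
P90 = np.array([[0, 0], [0, 1]], dtype=complex)        # vertical polarizer
Q0  = np.array([[1, 0], [0, 1j]], dtype=complex)       # quarter-wave plate, horizontal fast axis
Q45 = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)  # fast axis at 45 degrees

channels = {"1a": P0 @ Q45 @ Q0, "1b": P0 @ Q45,
            "2a": P90 @ Q45 @ Q0, "2b": P90 @ Q45}

O, R = 1.0, 1.0
dphi = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)  # phi_O - phi_R

# Each channel intensity is I(dphi) = const + O*R*cos(dphi + theta_n);
# recover theta_n by complex demodulation against exp(-i*dphi).
offsets = {}
for name, M in channels.items():
    E_out = M @ np.vstack([O * np.exp(1j * dphi), R * np.ones_like(dphi)])
    I = np.abs(E_out[0]) ** 2 + np.abs(E_out[1]) ** 2
    offsets[name] = np.angle(np.mean((I - I.mean()) * np.exp(-1j * dphi)))

# Phase steps relative to channel 1b, sorted: 0, 90, 180 and 270 degrees.
steps = sorted((offsets[k] - offsets["1b"]) % (2 * np.pi) for k in channels)
print(np.degrees(steps).round(1))
```

The demodulation averages out the counter-rotating fringe term exactly because the phase samples cover a full period uniformly.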

Finally the four phase-stepped channels are projected onto the CCD camera by lens L5. The spatial separation between the four channels may be controlled by small adjustments to mirrors M1-6. In this experiment a 12-bit cooled progressive scan CCD camera (Hamamatsu ORCA-ER) was used as the detector. In 1×1 binning mode the camera has a resolution of 1344×1024 pixels and operates at a frame rate of 8.3 Hz; in 2×2 binning mode the frame rate increases to 16.5 Hz. The inset of Fig. 1 shows an example of a phase-stepped four-channel image.

3. Image reconstruction

The image reconstruction algorithm includes geometric correction, non-uniformity compensation and calculation of the modulation amplitude and phase. Unlike the single-channel temporal phase-stepping technique, four separate channels are used to acquire the phase-stepped images simultaneously. Any differences between channels caused by image projection, optical aberration and response non-uniformity will directly influence the reconstruction accuracy. Therefore, two corrections are introduced during image reconstruction. First, it is necessary to acquire an averaged background frame (typically an average of 16 frames) before the image acquisition, which is used to subtract the dark offset and fixed pattern noise from any acquired images. Once this has been performed, the frame is separated into its four constituent phase-shifted images. Secondly, it is necessary to correct for any small errors in image position and magnification introduced by imperfect alignment and image aberration; any error in the co-registration of the images will introduce a significant error in any further processing. This correction is realized using a table of correction parameters that is determined during system characterization. Currently, the correction parameters are determined manually; however, it would be possible to automate this procedure. The translation and magnification of the images is achieved using bilinear interpolation. The relative error in alignment and magnification of each sub-image is independent of the object being imaged, so this characterization is performed only once for a given experimental configuration.
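The pre-processing described above can be sketched as follows. This is a minimal illustration, not the authors' code: the quadrant layout (as in the inset of Fig. 1) and the `calib` table of magnification and shift values are placeholders for the manually determined calibration parameters.

```python
import numpy as np
from scipy.ndimage import affine_transform

def split_quadrants(frame):
    """Split a full CCD frame into the four phase-stepped sub-images,
    assuming they occupy the four quadrants of the frame."""
    h, w = frame.shape
    return {"1a": frame[:h//2, :w//2], "1b": frame[:h//2, w//2:],
            "2a": frame[h//2:, :w//2], "2b": frame[h//2:, w//2:]}

def preprocess(frame, background, calib):
    """Subtract the averaged background frame, split the image into its
    four sub-images and co-register them by bilinear interpolation.
    calib maps channel name -> (magnification, (dy, dx)) as determined
    during system characterization."""
    subs = split_quadrants(frame - background)
    out = {}
    for name, img in subs.items():
        mag, shift = calib[name]
        # affine_transform maps each output pixel back into the input via
        # matrix @ out_coords + offset; order=1 selects bilinear interpolation.
        out[name] = affine_transform(img.astype(float),
                                     np.eye(2) / mag, offset=shift, order=1)
    return out
```

With unit magnification and zero shift this reduces to a plain quadrant split, which makes the calibration step easy to verify in isolation.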

3.1 Image processing

At any particular pixel, the intensity of the interferogram, In, can be written as:

$$I_n = A_n O^2 + B_n R^2 + 2OR\,\lvert\gamma(\delta)\rvert\,M_n\sqrt{A_n B_n}\,\cos\!\big(\phi(\delta)+n\big)$$

where the subscript n refers to the particular phase-shifted image (n = 0, π/2, π and 3π/2). The variables O and R represent the amplitudes of the illumination coming from the object and reference beams respectively. The parameters An and Bn correspond to the transmission efficiencies of the object and reference beams for each phase-stepped image n. The optical phase difference, ϕ, is a function of the path mismatch, δ, between the object and reference beams. Similarly, the degree of interference between the object and reference beams is described by the modulus of the complex degree of coherence, |γ(δ)| (i.e., the envelope of the interference signal). In order to account for the small decrease in the fringe modulation caused by imperfect polarization optics, it is necessary to introduce the parameter Mn (N.B. Mn would equal unity for perfect polarizing optics; actual values are slightly below this ideal case). It is then possible to modify the standard four-phase algorithm [23] to obtain the amplitude of the interference signal, S, for a given pixel:

$$S \propto O\,\lvert\gamma(\delta)\rvert\left[\frac{1}{L_1^2}\left(\frac{I_{3\pi/2}}{A_{3\pi/2}} - \frac{I_{\pi/2}}{A_{\pi/2}} - K_1\right)^{\!2} + \frac{1}{L_2^2}\left(\frac{I_0}{A_0} - \frac{I_\pi}{A_\pi} - K_2\right)^{\!2}\right]^{1/2}$$

This formula is used to calculate the interference signal amplitude distribution in the sectioned image. Here, K1,2 and L1,2 are system constants that depend on An, Bn, Mn and R. All correction parameters are retrieved from the calibration images acquired during the system characterization; they are independent of the object placed at O (Fig. 1) and therefore are constant provided that the setup remains otherwise unchanged.

It is also possible to derive an expression for the wrapped phase ϕ at the sample:

$$\tan(\phi) = \frac{L_2\left(\dfrac{I_{3\pi/2}}{A_{3\pi/2}} - \dfrac{I_{\pi/2}}{A_{\pi/2}} - K_1\right)}{L_1\left(\dfrac{I_0}{A_0} - \dfrac{I_\pi}{A_\pi} - K_2\right)}$$

Examining the signs of the numerator and denominator in this expression allows the phase to be determined modulo 2π. If the absolute phase is required, then a suitable phase unwrapping algorithm must be employed.
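The amplitude and wrapped-phase calculations above can be sketched in a few lines (our illustrative implementation, not the authors' code; the calibration constants A_n, K_{1,2} and L_{1,2} would come from the system characterization):

```python
import numpy as np

def four_phase(I0, Ip2, Ip, I3p2, A, K1, K2, L1, L2):
    """Modified four-phase calculation: return (S, phi), the interference
    amplitude and wrapped phase, from the four co-registered sub-images.
    A is a dict of transmission efficiencies keyed by '0', 'p2', 'p', '3p2'."""
    d_sin = I3p2 / A['3p2'] - Ip2 / A['p2'] - K1
    d_cos = I0 / A['0'] - Ip / A['p'] - K2
    S = np.sqrt((d_sin / L1) ** 2 + (d_cos / L2) ** 2)
    # arctan2 inspects the signs of numerator and denominator, resolving
    # the phase modulo 2*pi as described in the text.
    phi = np.arctan2(L2 * d_sin, L1 * d_cos)
    return S, phi
```

In the ideal case (all efficiencies unity, K and L constants at their ideal values) this reduces to the standard four-phase algorithm, which is a convenient sanity check.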

One significant aspect of this technique is that it may be adapted to different interferometer geometries for different applications. For instance, the FCPPS can also be combined with a high numerical aperture Linnik interferometer to image the cellular structure of living tissue or with a Mach-Zehnder interferometer to image samples in transmission. The broad spectral/spatial bandwidth of the system also allows the utilization of low-cost incoherent optical sources, such as LEDs and white light sources, to achieve both high depth and high lateral resolutions, together with a reduction of inter-pixel cross talk [20].

3.2 Signal to noise ratio

If the extra correction terms in the expression for S are assumed to be small, which is reasonable, then it is possible to calculate the expected signal to noise ratio (SNR) in terms of the standard four-phase algorithm:

$$S = \sqrt{(I_\pi - I_0)^2 + (I_{3\pi/2} - I_{\pi/2})^2}$$

where S is the calculated signal at a particular pixel. If the CCD is operated near to its full well capacity, ξ, then the shot noise (obeying a Poisson distribution) can be assumed to dominate over any other noise processes present in the detection. The maximum possible signal, S_{m=1}, in units of electrons would be equal to ξ, assuming a fringe modulation depth of unity. The noise, S_{m=0}, can be found in the case of zero fringe modulation, i.e. when I_0, I_{π/2}, I_π and I_{3π/2} are all (apart from the noise) equal to ξ/2. Because of the noise, the root mean square value of S_{m=0} is non-zero and can be shown to be √(2ξ), hence giving an SNR of √(ξ/2). The quoted full-well capacity of the ORCA-ER is 1.8×10⁴ electrons, which gives a predicted SNR of 39.5 dB; this value increases by 6 dB for each step in binning, i.e. for 2×2 binning the predicted SNR is 45.5 dB. If 8×8 binning were used then the predicted SNR would be 57.5 dB, giving a final image with ~80×62 pixels. It is important to note that in order to increase the SNR it is necessary to use software rather than hardware binning. This is due to the finite size of the CCD horizontal shift register. Hardware binning may be used to increase the frame rate; whether there is a corresponding improvement in SNR depends on the design of the CCD chip used.
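This shot-noise estimate can be verified with a small Monte Carlo simulation (ours, under the stated Poisson assumptions): with zero fringe modulation each image holds ξ/2 electrons on average, the RMS of S approaches √(2ξ), and the SNR is ξ/√(2ξ) = √(ξ/2):

```python
import numpy as np

xi = 1.8e4                         # full-well capacity of the ORCA-ER (electrons)
rng = np.random.default_rng(0)

# Four phase-stepped images with zero fringe modulation: independent
# Poisson draws around xi/2 electrons at each of 200 000 trial pixels.
I = rng.poisson(xi / 2, size=(4, 200_000)).astype(float)

S0 = np.sqrt((I[2] - I[0]) ** 2 + (I[3] - I[1]) ** 2)  # zero-modulation signal
rms = np.sqrt(np.mean(S0 ** 2))                        # approaches sqrt(2*xi)

snr_db = 20 * np.log10(xi / rms)
print(f"RMS noise ~ {rms:.0f} e-, predicted SNR ~ {snr_db:.1f} dB")
```

Each difference of two Poisson(ξ/2) images has variance ξ, so the mean-square of S is 2ξ; the simulation reproduces the 39.5 dB figure quoted above for 1×1 binning.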

4. Experimental results

4.1 Axial resolution and dynamic range


Fig. 2. Sectioning curve obtained with the Hitachi LED and 2×2 software binning. (☐) measured points; (·····) sectioning curve calculated from the measured LED spectrum. Gaussian FWHM is 6.2 µm.


In order to measure the axial response of the system, a mirror was used as the object and mechanically scanned through the focal plane; the results are shown in Fig. 2. The measured LED spectrum is slightly skewed and has a Gaussian FWHM of 44 nm. The predicted sectioning curve, calculated from the measured spectrum, is also shown in Fig. 2.

The small discrepancy between theory and experiment in the wings of the sectioning curve (Fig. 2) is attributed to small errors in the alignment of quarter wave plates Q3 and Q4 (see Fig. 1), which give rise to small phase deviations. The background level was measured to be 6.3×10⁻³, giving a corresponding SNR of 44 dB. This compares well with the SNR of 45.5 dB calculated in Section 3.2.
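For comparison, the axial FWHM expected for a source of these parameters can be estimated from the standard coherence-gating formula for a Gaussian spectrum (a textbook estimate, not a calculation from the paper; the slightly skewed measured spectrum accounts for part of the difference):

```python
import math

# Coherence-gated axial FWHM for a Gaussian spectrum:
#   dz = (2 ln 2 / pi) * lambda0^2 / dlambda
lam0 = 800e-9    # LED centre wavelength (m)
dlam = 44e-9     # spectral FWHM (m)

dz = (2 * math.log(2) / math.pi) * lam0 ** 2 / dlam
print(f"{dz * 1e6:.1f} um")   # ~6.4 um, close to the measured 6.2 um
```

The ~6.4 µm estimate is independent of the numerical aperture, consistent with the claim in Section 1 that the depth resolution here is set by the source bandwidth alone.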

4.2 Real-time sectioned imaging


Fig. 3. (a) (0.3 MB) Movie of direct image of watch cog; (b)-(d) (all 0.2 MB) movies of processed sectioned images at depths of z = 0.55 mm, 0.97 mm and 2.54 mm respectively, relative to the front surface (see (a)). The exposure time was 1 ms and the frame rate 16.5 Hz, using 2×2 hardware binning.


Since no lateral scanning or mechanical phase-stepping is required, this technique may perform depth-resolved imaging of moving samples, where the maximum speed of movement is limited only by the camera integration time or the duration of the illumination pulse (which ultimately could be as short as tens of femtoseconds). Figure 3 shows an example of depth-resolved imaging of moving watch cogs acquired using the setup in Fig. 1. The camera frame rate for recording these movies was increased to 16.5 Hz by use of 2×2 pixel hardware binning. Figures 3(b)-(d) are movies, each demonstrating depth-resolved imaging at a different depth in the sample. The processed image size is 276×196 pixels, which corresponds to a 3.6×2.6 mm field of view with ~15 µm lateral resolution. The images in Fig. 3 are cropped to 140×130 pixels to highlight the moving components. During the movie acquisition 50 frames, each with an exposure time of 1 ms, were acquired. The optical power entering the Michelson interferometer for this experiment (i.e., the power exiting PBS1 in Fig. 1) was 0.4 mW.

4.3 3D image reconstruction

To demonstrate the ability of the system to acquire a stack of sectioned images at a distance, a five pence coin was placed in the object arm of the interferometer. The time required to scan the volume was limited by the time taken for the mechanical scanning of the object. A CCD shutter time of 120 ms for each frame acquisition was used and the optical power after PBS1 was 60 µW. The stack of acquired images was processed and then rendered using Spyglass Slicer software; the result is shown in Fig. 4.


Fig. 4. Computer 3D rendering of a set of 65 slices acquired of the numeral 5 on a 5 pence piece. The distance between successive acquisitions is 2 µm and the field of view is 2.9×3.9×0.13 mm. (a) and (b) are reconstructed depth-resolved images separated in height by 70 µm, (c) is a computer rendering of the acquired volume.


4.4 Measurement of optical phase

In order to image smaller features the Michelson interferometer in Fig. 1 was modified to form a Linnik type interferometer. This was performed by the introduction of two matched microscope objectives (focal lengths 18 mm, NA = 0.15), one into each arm of the interferometer. The positions of these lenses are indicated with dotted grey lines in Fig. 1. Lens L3 was replaced with one having a focal length of 200 mm and the distance L3 - S2 was set equal to this focal length. The microscope gave a magnification of ×11 and a raw image acquired with this setup is shown in the inset of Fig. 1. With this system it is possible to resolve the smallest USAF test chart bars, corresponding to a lateral resolution of ~3 µm. To demonstrate the ability of the system to measure the optical phase, this image was then processed to extract the wrapped phase, as described in Section 3.1, to give the images shown in Fig. 5 with sub-wavelength depth resolution.


Fig. 5. (a) calculated wrapped phase image of USAF test chart obtained using ×11 magnification, field of view 350×260 µm (b) unwrapped false-color image of (a) with linear tilt subtracted. Analysis of (b) gives the thickness of the metallic coating of the test chart to be 120 nm.


It was not possible to measure the root mean square (RMS) phase noise over successive acquisitions due to mechanical instabilities in the apparatus. An estimate of the noise suggests that the standard deviation in the calculated height is less than 5 nm. Measurement of the phase noise and comparison with theoretical calculation will be the subject of future work.

4.5 Depth-resolved imaging in a weakly scattering sample

Using 2×2 software binning, a sample of onion was imaged using the Linnik interferometer described in Section 4.4. Here the contrast is due to refractive index changes within the sample, which are typically greatest at the cell boundaries. The results are shown in Fig. 6; a total of 90 slices were acquired with the object being translated by a step of 3 µm (in air) between slices. The exposure time for each acquisition was 3.6 ms, and again the total time required to acquire the stack was limited by the mechanical translation stage used. The optical power entering the interferometer was 1.5 mW.


Fig. 6. (a) (1 MB) Movie of calculated sectioned image for a sample of onion (x-y plane), field of view 270×250 µm. (b) (0.2 MB) movie of data volume shown in (a) re-sampled into an x-z slice, field of view 270×210 µm (assuming a sample refractive index of n = 1.3).


The deepest resolvable feature was 150 µm beneath the surface of the onion (assuming a sample refractive index of n = 1.3). No post-processing of the images beyond the calculation of the sectioned image as described in Section 3.1 has been performed. The axial streaks appearing in Fig. 6(b) are due to small discrepancies introduced by differences in numerical aperture between the different imaging channels. These streaks could be removed by improving the design of the FCPPS to include a common aperture that defines the numerical aperture of each of the four channels to be the same. We note that this artifact could also be easily removed by employing a post-calculation high-pass filter (in the axial direction) on the image stack. The use of water immersion objectives would significantly reduce the strong surface reflection at the air-cell boundary and enable deeper penetration into the sample.
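The suggested axial high-pass filter could be implemented, for example, as a running-mean subtraction along z (our sketch; the window size is illustrative and would be tuned to the streak length):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def axial_highpass(stack, size=15):
    """Axial (z-direction) high-pass filter for a sectioned image stack.
    stack: 3-D array indexed (z, y, x). A running mean over `size` slices
    is subtracted, so slowly varying axial streaks are suppressed while
    features narrower than the window survive."""
    low = uniform_filter1d(stack.astype(float), size=size, axis=0)
    return stack - low
```

A structure confined to a few slices passes through nearly unattenuated, while a streak that is constant along z is removed entirely.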

5. Discussion and Conclusions

We have demonstrated wide-field single-shot depth-resolved imaging using low (temporal) coherence phase-stepping interferometry for what we believe to be the first time. To this end we have designed and constructed an achromatic FCPPS and demonstrated it operating with both Michelson and Linnik type interferometers to provide wide-field coherence-gated imaging. By performing an axial scan through the sample we have also demonstrated 3D imaging. The transverse resolution of the instrument was constrained by the number of CCD pixels and the magnification of the optical system used. The depth resolution was achieved using a broadband source (LED) with a spectral width of 44 nm, giving an axial response with a FWHM of 6.2 µm. We have also demonstrated that it is possible to measure the optical phase, giving sub-wavelength depth resolution. In the future we plan to implement the Linnik interferometer with higher numerical aperture microscope objectives in order to improve both lateral and axial resolutions.

Our experimental setup would be capable of imaging birefringent samples. This is because any birefringence induced changes in the polarization state of the object beam are removed as the object beam returns through PBS2 (see Fig. 1), i.e., the object beam polarization upon exiting PBS2 will always be horizontal. We suggest that it will be possible to obtain information about the birefringent nature of the sample by the careful introduction of additional wave plates in the object arm (next to Q2, Fig. 1) and then acquiring several images with varying wave plate orientations.

We note that the optical design used in this paper could be improved by the use of a better engineered optical positioning system, removing the need for the periscope (see P in Fig. 1). Also, a polarizing beam splitter cube could be used in place of mirror M6 (see Fig. 1) so as to relax alignment constraints and allow a more compact setup. Similarly an additional non-polarizing beam splitter cube could be introduced at mirror M3, although this would reduce the optical efficiency by half. Given a suitable alignment process we suggest that the FCPPS could be constructed by physically cementing together beam splitter cubes and other optical elements (such as wave plates and right-angle prisms) to form a rugged practical device.

This wide-field single-shot 3-D imaging technique allows the reconstruction of both the amplitude and optical phase of a depth-resolved object profile in a single acquisition and is therefore applicable to dynamic samples. We have shown that this technique can be used to provide sectioned images of fast moving objects, e.g. watch cogs. In principle the speed of depth-resolved image acquisition is limited only by the CCD camera and may exceed 1000 frames per second. Since this is a coherence gating technique, it is capable of discriminating against incoherent background light and therefore has the potential to image moving biological specimens in the presence of weakly scattering media. Ultimately, the ability of the system to image through scattering media is limited by the full-well capacity of the CCD, although further optimization and appropriate spatial filtering should permit imaging through more strongly scattering samples than those presented here.

Acknowledgements

This research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and Holoscan (U.K.) Ltd. Christopher Dunsby acknowledges funding from Holoscan UK Ltd. The authors would like to thank A. Dubra for helpful discussions on phase unwrapping methods.

References and Links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).

2. A. M. Rollins, M. D. Kulkarni, S. Yazdanfar, R. Ung-arunyawee, and J. A. Izatt, “In vivo video rate optical coherence tomography,” Opt. Express 3, 219–229 (1998), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-3-6-219.

3. M. Ducros, M. Laubscher, B. Karamata, S. Bourquin, T. Lasser, and R. P. Salathé, “Parallel optical coherence tomography in scattering samples using a two-dimensional smart-pixel detector array,” Opt. Commun. 202, 29–35 (2002).

4. M. Laubscher, M. Ducros, B. Karamata, T. Lasser, and R. Salathé, “Video-rate three-dimensional optical coherence tomography,” Opt. Express 10, 429–435 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-9-429.

5. H. Chen, Y. Chen, D. Dilworth, E. Leith, J. Lopez, and J. Valdmanis, “Two-dimensional imaging through diffusing media using 150-fs gated electronic holography techniques,” Opt. Lett. 16, 487–489 (1991).

6. E. Cuche, P. Poscio, and C. Depeursinge, “Optical tomography by means of a numerical low-coherence holographic technique,” J. Opt. 28, 260–264 (1997).

7. P. Hariharan, Optical Holography: Principles, Techniques and Applications (Cambridge University Press, 1996), Chap. 2.

8. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38, 6994–7001 (1999).

9. S. C. W. Hyde, N. P. Barry, R. Jones, J. C. Dainty, P. M. W. French, M. B. Klein, and B. A. Wechsler, “Depth-resolved holographic imaging through scattering media by photorefraction,” Opt. Lett. 20, 1331–1333 (1995).

10. Y. Gu, Z. Ansari, C. Dunsby, D. Parsons-Karavassilis, J. Siegel, M. Itoh, P. M. W. French, D. D. Nolte, W. Headley, and M. R. Melloch, “High-speed 3D imaging using photorefractive holography with novel low-coherence interferometers,” J. Mod. Opt. 49, 877–887 (2002).

11. S. C. W. Hyde, N. P. Barry, R. Jones, J. C. Dainty, and P. M. W. French, “High resolution depth resolved imaging through scattering media using time resolved holography,” Opt. Commun. 122, 111–116 (1996).

12. E. Beaurepaire, A. C. Boccara, M. Lebec, L. Blanchot, and H. Saint-Jalmes, “Full-field optical coherence microscopy,” Opt. Lett. 23, 244–246 (1998).

13. L. Vabre, A. Dubois, and A. C. Boccara, “Thermal-light full-field optical coherence tomography,” Opt. Lett. 27, 530–532 (2002).

14. B. Laude, A. De Martino, B. Drévillon, L. Benattar, and L. Schwartz, “Full-field optical coherence tomography with thermal light,” Appl. Opt. 41, 6637–6645 (2002).

15. A. Dubois, L. Vabre, A. C. Boccara, and E. Beaurepaire, “High-resolution full-field optical coherence tomography with a Linnik microscope,” Appl. Opt. 41, 805–812 (2002).

16. R. Smythe and R. Moore, “Instantaneous phase measuring interferometry,” Opt. Eng. 23, 361–364 (1984).

17. A. L. Weijers, H. van Brug, and H. J. Frankena, “Polarisation phase stepping with a Savart element,” Appl. Opt. 37, 5150–5155 (1998).

18. Q. Kemao, M. Hong, and W. Xiaoping, “Real-time polarization phase shifting technique for dynamic deformation measurement,” Opt. Lasers Eng. 31, 289–295 (1999).

19. N. B. E. Sawyer, S. P. Morgan, M. G. Somekh, C. W. See, X. F. Cao, B. Y. Shekunov, and E. Astrakharchik, “Wide field amplitude and phase confocal microscope with parallel phase stepping,” Rev. Sci. Instrum. 72, 3793–3801 (2001).

20. P. C. Sun and E. N. Leith, “Broad-source image plane holography as a confocal imaging process,” Appl. Opt. 33, 597–602 (1994).

21. J. A. Ferrari, E. M. Frins, and C. D. Perciante, “A new scheme for phase-shifting ESPI using polarized light,” Opt. Commun. 202, 233–237 (2002).

22. A. Gerrard and J. M. Burch, Introduction to Matrix Methods in Optics (Wiley, 1975), Chap. 4.

23. K. Creath, “Phase-measurement interferometry techniques,” in Progress in Optics XXVI, E. Wolf, ed. (Elsevier Science, 1988), Chap. 5.

Supplementary Material (6)

Media 1: MOV (309 KB)     
Media 2: MOV (180 KB)     
Media 3: MOV (134 KB)     
Media 4: MOV (150 KB)     
Media 5: MOV (947 KB)     
Media 6: MOV (194 KB)     



Figures (6)

Fig. 1. Experimental setup; O object, R reference mirror, L1-5 lenses, S1-2 slits, PBS1-3 polarizing beam splitter cubes, NPBS non-polarising beam splitter cube, P periscope, M1-6 mirrors, Q1-4 quarter-wave plates. Inset shows a CCD image acquired with a USAF test chart placed at O.

Fig. 2. Sectioning curve obtained with the Hitachi LED and 2×2 software binning. (☐) Measured points; (·····) sectioning curve calculated from the measured LED spectrum. The Gaussian FWHM is 6.2 µm.

Fig. 3. (a) (0.3 MB) Movie of the direct image of a watch cog; (b)-(d) (all 0.2 MB) movies of processed sectioned images at depths of z = 0.55 mm, 0.97 mm and 2.54 mm respectively, relative to the front surface (see (a)). The exposure time was 1 ms and the frame rate 16.5 Hz using 2×2 hardware binning. [Media 3] [Media 4]

Fig. 4. Computer 3D rendering of a set of 65 slices of the numeral 5 on a 5 pence piece. The distance between successive acquisitions is 2 µm and the field of view is 2.9×3.9×0.13 mm. (a) and (b) are reconstructed depth-resolved images separated in height by 70 µm; (c) is a computer rendering of the acquired volume.

Fig. 5. (a) Calculated wrapped phase image of the USAF test chart obtained using ×11 magnification, field of view 350×260 µm; (b) unwrapped false-color image of (a) with linear tilt subtracted. Analysis of (b) gives the thickness of the metallic coating of the test chart as 120 nm.

Fig. 6. (a) (1 MB) Movie of the calculated sectioned image for a sample of onion (x-y plane), field of view 270×250 µm. (b) (0.2 MB) Movie of the data volume shown in (a) re-sampled into an x-z slice, field of view 270×210 µm (assuming a sample refractive index of n = 1.3).

Equations (11)


$$P_0 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},\quad P_{90} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},\quad Q_0 = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix},\quad Q_{45} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & i \\ i & 1 \end{bmatrix}$$

$$\mathrm{1a} = P_0\,Q_{45}\,Q_0,\quad \mathrm{1b} = P_0\,Q_{45},\quad \mathrm{2a} = P_{90}\,Q_{45}\,Q_0,\quad \mathrm{2b} = P_{90}\,Q_{45}$$

$$E_{\mathrm{in}} = \begin{bmatrix} O\exp(i\varphi_O) \\ R\exp(i\varphi_R) \end{bmatrix}$$

$$I_{1a} = \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 + OR\cos(\varphi_O - \varphi_R)$$

$$I_{1b} = \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 + OR\sin(\varphi_O - \varphi_R)$$

$$I_{2a} = \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 - OR\cos(\varphi_O - \varphi_R)$$

$$I_{2b} = \tfrac{1}{2}O^2 + \tfrac{1}{2}R^2 - OR\sin(\varphi_O - \varphi_R)$$

$$I_n = A_n O^2 + B_n R^2 + 2OR\,\gamma(\delta)\,M_n\sqrt{A_n B_n}\cos\bigl(\phi(\delta) + n\bigr)$$

$$S \propto O\,\gamma(\delta)\left(\frac{1}{L_1^2}\left(\frac{I_{3\pi/2}}{A_{3\pi/2}} - \frac{I_{\pi/2}}{A_{\pi/2}} - K_1\right)^2 + \frac{1}{L_2^2}\left(\frac{I_0}{A_0} - \frac{I_\pi}{A_\pi} - K_2\right)^2\right)^{1/2}$$

$$\tan(\phi) = \frac{L_2\left(I_{3\pi/2}/A_{3\pi/2} - I_{\pi/2}/A_{\pi/2} - K_1\right)}{L_1\left(I_0/A_0 - I_\pi/A_\pi - K_2\right)}$$

$$S = \sqrt{(I_\pi - I_0)^2 + (I_{3\pi/2} - I_{\pi/2})^2}$$
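As a numerical sanity check of the equation listing above, the sketch below builds the four channel Jones matrices, propagates the input field through each, and verifies that the four detected intensities form two complementary pairs in quadrature, from which the interference amplitude 2OR and the phase difference are recovered (the four-step reconstruction of the final equations, taken with ideal calibration factors A_n = 1, K_n = 0, L_n = 1, γ = 1). The wave-plate sign convention is an assumption; a different convention permutes which channel carries which phase step but not the quadrature structure.

```python
import numpy as np

# Jones matrices of the equation listing: polarizers at 0/90 degrees and
# quarter-wave plates at 0/45 degrees (one common sign convention, assumed).
P0  = np.array([[1, 0], [0, 0]], dtype=complex)
P90 = np.array([[0, 0], [0, 1]], dtype=complex)
Q0  = np.array([[1, 0], [0, 1j]], dtype=complex)
Q45 = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)

# The four detection channels of the interferometer.
M = {"1a": P0 @ Q45 @ Q0, "1b": P0 @ Q45,
     "2a": P90 @ Q45 @ Q0, "2b": P90 @ Q45}

O_amp, R_amp, dphi = 0.7, 0.9, 0.6           # amplitudes, phase difference
E_in = np.array([O_amp * np.exp(1j * dphi),  # object field (phi_R taken as 0)
                 R_amp])

# Detected intensity in each channel: |E_out|^2 summed over components.
I = {k: float(np.sum(np.abs(m @ E_in) ** 2)) for k, m in M.items()}

# Paired channels are complementary (their interference terms cancel)...
assert np.isclose(I["1a"] + I["2a"], O_amp**2 + R_amp**2)
assert np.isclose(I["1b"] + I["2b"], O_amp**2 + R_amp**2)

# ...and the two difference signals are in quadrature, so the four-step
# reconstruction recovers 2*O*R and the phase difference. Which difference
# goes in which slot depends on the assumed wave-plate convention.
S = np.hypot(I["2a"] - I["1a"], I["1b"] - I["2b"])
phi = np.arctan2(I["1b"] - I["2b"], I["2a"] - I["1a"])
assert np.isclose(S, 2 * O_amp * R_amp)
assert np.isclose(phi, dphi)
```

The complementarity checks are convention-independent, which is why they make a robust test: regardless of how the quarter-wave plates are defined, the four channels must split into two pairs whose difference signals are 90° apart.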