
Dual-beam manually actuated distortion-corrected imaging (DMDI): two dimensional scanning with a single-axis galvanometer


Abstract

We recently demonstrated a new two-dimensional imaging paradigm called dual-beam manually actuated distortion-corrected imaging (DMDI). This technique uses a single mechanical scanner and two spatially separated beams to determine the relative sample velocity and simultaneously correct image distortions due to manual actuation. DMDI was first demonstrated using a rotating dual-beam micromotor catheter. Here, we present a new implementation of DMDI that uses a single-axis galvanometer to scan a pair of beams in approximately parallel lines onto a sample. Furthermore, we present a method for automated distortion correction based on frame co-registration between the images acquired by the two beams. Distortion correction is possible for manually actuated motion both perpendicular and parallel to the galvanometer-scanned lines. Using en face OCT as the imaging modality, we demonstrate DMDI and the automated distortion correction algorithm by imaging a printed paper phantom, a dragon fruit, and a fingerprint.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

High resolution optical imaging modalities such as optical coherence tomography (OCT), confocal microscopy, and multiphoton microscopy continue to show promise for diagnostic imaging. For two-dimensional imaging, two-axis scanning mechanisms such as rotary-pullback catheters and dual-axis galvanometer or MEMS scanners, amongst other methods, are commonly employed [1,2]. However, these scanners have fields-of-view (FOV) that are limited by the stroke of the linear motor in the case of rotary-pullback catheters, or by the size of the scan lens in the case of galvanometer and MEMS scanners, making it challenging to image large areas. Additionally, scanning modalities that require imaging in regular, pre-defined patterns can be susceptible to artifacts due to patient and/or clinician motion.

To extend the field of view, groups have developed manually actuated capsules, such as those for upper gastrointestinal tract imaging [3,4]; however, images acquired with these devices can be distorted along the manually actuated dimension due to non-uniform pullback speed. To compensate for patient and/or clinician motion artifacts, magnetic sensors [5,6], IR optical tracking [7], linear position encoders [8], and IR cameras [9] have been used to monitor the position of the imaging probe with respect to the sample. However, the required additional hardware adds complexity to the design, can in some cases be costly, and potentially introduces additional challenges (e.g., magnetic tracking requires that the procedure environment be free of magnetic-field-distorting materials to maintain tracking accuracy). Sensor-less techniques such as speckle decorrelation [10–13] have been developed for one-dimensional motion correction; however, extension to 2D motion correction is not straightforward.

To provide an extended FOV and motion artifact correction without a companion position-tracking modality, our group recently demonstrated a new imaging paradigm called dual-beam manually actuated distortion-corrected imaging (DMDI) [14]. DMDI utilizes two imaging beams that are spatially separated along one dimension (the slow axis) and scanned mechanically along the orthogonal dimension (the fast axis). As the DMDI imaging head or the sample is primarily actuated along the slow axis, the same sample features are imaged by both beams at different times. The time interval between one beam imaging a sample feature and the other beam imaging the same feature, together with knowledge of the spatial separation of the beams as a function of time, can be used to determine the average velocity of the sample or imaging head within that interval. If multiple features can be co-registered between both images, the instantaneous velocity profile can be estimated, and image distortions due to manual actuation can be corrected.

DMDI was first implemented using a dual-beam micromotor catheter (DBMC) [14] which could be useful for in vivo imaging of internal vessels, airways, or other luminal organs. In this work we demonstrate a new implementation of DMDI that utilizes a single-axis galvanometer to scan two approximately parallel lines on a sample. This implementation could be useful for imaging planar body sites such as skin or the oral cavity. As the scan pattern of the galvanometer DMDI implementation is relatively simple, we present a parameter-free method for automated distortion correction. We demonstrate DMDI imaging with successful automated distortion correction for the imaging of a printed paper phantom, a cut dragon fruit cross-section, and a fingerprint.

2. Methods

2.1 OCT imaging system

Due to its availability within our lab, en face OCT imaging is used to demonstrate this implementation of DMDI, although in principle any point-scanning imaging modality could be used. The dual-channel OCT imaging system is identical to that previously described [14]. Briefly, a 23 mW, flaser = 100 kHz, 1310 nm swept-source laser (AXP50125-6, Axsun Technologies, Billerica, MA) is split 50/50 and fed into two parallel Mach-Zehnder OCT interferometers. As the power is reduced by a factor of two, the detection sensitivity of each individual OCT channel, compared to the corresponding single-channel system, is reduced by at least 6 dB. The sample arms of each interferometer feed into the imaging head. OCT signals returning from the imaging head are combined with their respective reference arms and fed into a pair of balanced detectors (PDB480-AC, Thorlabs Inc., Newton, NJ) which are simultaneously digitized by a dual-channel, k-clocked acquisition board (ATS9350, Alazar Technologies, Inc., Pointe-Claire, QC). The path length difference between the two interferometers is approximately 2 m, sufficiently different such that there is no OCT cross-talk between them.

2.2 Dual-beam galvanometer imaging head

The galvanometer imaging head designed to scan a pair of parallel lines onto the sample plane is schematically illustrated in Fig. 1(a). The coordinate system of the imaging head is defined such that the x- and y-axes are perpendicular and parallel to the scan lines, respectively, with the z-axis oriented upwards. The origin is chosen as the point midway between the two scan lines in the x-dimension, approximately halfway along the scan line in the y-dimension as defined in the Calibration section below, and at the bottom of the glass slide for the z-dimension. A dual SMF-28 fiber pigtail (DFP, KFP-P-2-8D-250S-250-1310-N, Photop Technologies, Inc., Santa Clara, CA) and a graded index lens (GRIN, GRIN2313A, Thorlabs Inc.) are aligned in a glass ferrule and separated by the GRIN working distance (i.e., the DFP exit face is at the back focus of the GRIN). Both the DFP and the GRIN lens entrance face are polished at 8° and anti-reflection coated for 1310 nm. The centers of the exit faces of the two fibers within the DFP are located on the pivot axis of the 8° face, separated by 250 µm and each equidistant from the center of the face. If perfectly assembled, the A and B beams are collimated, and the chief rays of the two beams intersect at the exit face of the GRIN, emerging from the same location and diverging from each other with increasing distance.

Fig. 1 Dual channel OCT imaging system and galvanometer configuration. a) Schematic of the dual-beam galvanometer imaging head. DFP = dual fiber pigtail, GRIN = graded index lens. Approximate paths of beams A and B are shown in blue and red respectively. b) Detailed view of the scan pattern at the sample with x,y-axes, scan lengths lA and lB, yoffset, and scan pattern function S(y) defined.

Only the second (large mirror) axis of a standard two-axis galvanometer system (GVS002, Thorlabs Inc.) is mounted within a mounting adapter (GCM102, Thorlabs Inc.) that also contains a 7.5 mm working distance scan lens (LSM02-BB, Thorlabs Inc.). The DFP and GRIN assembly is positioned at the viewing port of the mounting adapter. The assembly is rotated around its axis and aligned to the mounting adapter such that both beams strike the rotational axis of the galvanometer mirror as closely as possible. The approximate light paths are illustrated in Fig. 1(a) as blue and red lines for beams A and B, respectively. The exit face of the GRIN is positioned close to the galvo mirror without risk of collision when the mirror rotates. In this way, the parfocal plane of the scan lens lies between the intersection point of the beams emerging from the GRIN and the galvo rotational axis, ensuring optimal performance of the scan lens and that the A and B beams enter the sample close to parallel with the z-axis. The separation of the scan lines at the sample plane is determined by the angle between the A and B beams at the GRIN exit face and the distance to the scan lens. The precise scanning pattern of the two beams is determined through the calibration procedure described below. The focused spot size of each beam at the sample plane is 80 × 80 µm FWHM, measured with a beam profiler (BeamMap2, DataRay Inc., Redding, CA).

The galvo mirror is driven in open-loop mode by a sawtooth waveform that is clocked by the laser sweep trigger. The waveform contains nfr = 1000 samples, yielding a galvo frame rate of ffr = 100 Hz. The drive waveform is implemented to maximize the positive y scanning duty cycle of the scan while not exceeding the maximum acceleration limit of the galvanometer. A small strip of reflective metalized mylar, placed near the positive y turning point of the galvo scan pattern such that it occludes the sample in both beams at the same time during the scan period, is used as an optical frame synchronization marker. A lens tube (SM2L1, Thorlabs Inc.) is used to stand off the sample from the scan lens. All samples are imaged through a standard 1 mm thick glass slide that can slide relative to the lens tube while keeping the sample at the working distance of the scan lens.
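To make the drive waveform concrete, the following is a minimal Python sketch of one way to generate such a frame-synchronous waveform. The raised-cosine retrace profile and the normalized amplitude are assumptions, since the text only states that the modified sawtooth maximizes the positive-y duty cycle without exceeding the galvanometer acceleration limit.

```python
import numpy as np

# Frame and scan parameters from Sections 2.2-2.3
n_fr = 1000        # samples per galvo frame (clocked by the 100 kHz sweep trigger)
n_active = 743     # linear (active) portion of the forward scan
n_inactive = n_fr - n_active  # 257 retrace samples

amp = 1.0          # normalized drive amplitude (assumption)

# Active portion: linear ramp (constant velocity along +y)
ramp = np.linspace(-amp, amp, n_active, endpoint=False)

# Retrace: raised-cosine return to the scan start; this smooth profile is an
# assumed choice that keeps the commanded acceleration finite
phase = np.linspace(0.0, np.pi, n_inactive, endpoint=False)
retrace = amp * np.cos(phase)

frame = np.concatenate([ramp, retrace])    # one 1000-sample frame
waveform = np.tile(frame, 100)             # e.g. 1 s of drive at f_fr = 100 Hz

duty_cycle = n_active / n_fr
print(f"imaging duty cycle: {duty_cycle:.1%}")   # ~74%, matching the Discussion
```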

2.3 Image preprocessing

Two imaging channels, A(N,z) and B(N,z), are acquired as continuous streams of equally time-spaced A-lines with time index N, where z is the OCT depth dimension. A mean intensity projection over 1 mm of depth from the bottom of the glass slide in the z-dimension is used to generate one-dimensional A(N) and B(N) images. The linear part of the forward scan, or the active part of the scan, consists of nactive = 743 A-lines and terminates with the optical frame synchronization marker. Correspondingly, the number of inactive A-lines is ninactive = nfr − nactive = 257 A-lines.

To present the raw images in a uniform manner, the A-lines before the first complete frame, consisting of Nref + ninactive A-lines, are cropped and A(N) and B(N) are reformatted to A(fr,n) and B(fr,n), where fr is the integer division of (N-(Nref + ninactive)) by nfr, and n is the A-line index within the frame. In this work, A(fr,n) and B(fr,n) are only analyzed and presented for the active part of the scan. For any point in the A(fr,n) and B(fr,n) images, the time elapsed from the start of the data acquisition is

$$ t(fr,n) = \left( fr \, n_{fr} + n + N_{ref} + n_{inactive} \right) \Delta t_{A\text{-}line} \qquad (1) $$
where ΔtA-line = 1/flaser. Due to residual reflection artifacts from the optical setup that are present in each frame, the cropped images A(fr,n) and B(fr,n) are pseudo flat-field corrected using a flat-field obtained from an image acquired without a sample present.
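The preprocessing steps above can be summarized with a minimal Python sketch. The function names, the assumption that the frame-sync marker index Nref has been detected beforehand, and the use of simple division for the pseudo flat-field correction are illustrative choices, not necessarily the exact implementation used in this work.

```python
import numpy as np

def preprocess(channel_en_face, N_ref, n_fr=1000, n_active=743, f_laser=100e3):
    """Reformat a 1-D en face A-line stream A(N) into frames A(fr, n).

    channel_en_face : 1-D array of mean-intensity-projected A-lines, indexed by N.
    N_ref           : A-line index of the first frame-sync marker (assumed to be
                      detected separately from the metalized-mylar reflection).
    """
    n_inactive = n_fr - n_active
    # Crop the A-lines acquired before the first complete frame
    cropped = channel_en_face[N_ref + n_inactive:]
    n_frames = len(cropped) // n_fr
    frames = cropped[:n_frames * n_fr].reshape(n_frames, n_fr)
    active = frames[:, :n_active]          # keep only the active part of the scan

    # Eq. (1): time elapsed from the start of acquisition for every (fr, n)
    fr = np.arange(n_frames)[:, None]
    n = np.arange(n_active)[None, :]
    t = (fr * n_fr + n + N_ref + n_inactive) / f_laser
    return active, t

def flat_field(image, reference):
    """Pseudo flat-field correction with a no-sample reference frame
    (simple division; the exact correction used is not specified)."""
    return image / np.maximum(reference, np.finfo(float).eps)
```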

2.4 Calibration

Calibration of this galvanometer implementation of DMDI uses a process similar to the DBMC implementation [14]. A calibration pattern of equally spaced black lines, with 0.678 mm separation between white to black transitions, printed on standard white printer paper was used. The paper phantom was affixed to the underside of the movable glass slide.

The first calibration step is to determine the y positions of the beams as a function of n. To accomplish this, the lines of the calibration pattern were aligned along the x-axis (perpendicular to the scan lines) and the imaging frames of the stationary phantom were averaged for about 3 seconds. Using the known spacing of the white-to-black transitions of the calibration pattern, Δycal, the positions of the transitions were fit versus n(A,B),i, their A-line positions in the A(fr,n) and B(fr,n) images. (Note: going forward, for notational simplicity we use the subscripted (A,B) to represent two distinct terms or functions corresponding to each of the A and B imaging channels.) The linear regression fit resulted in residuals less than the optical resolution of the system. Choosing the midpoint of the active scan region of the A beam as the y origin, the y positions of the A and B beams as functions of n, y(A,B)(n), defined in the reference frame of the imaging head, are:

$$ y_A(n) = l_A \left( \frac{n}{n_{active}-1} - \frac{1}{2} \right), \quad n \in [0, n_{active}-1] \qquad (2) $$
$$ y_B(n) = -\frac{l_A}{2} - y_{offset} + l_B \, \frac{n}{n_{active}-1}, \quad n \in [0, n_{active}-1] \qquad (3) $$
where lA = 10.21 mm and lB = 10.29 mm are the scan lengths of the A and B beams respectively, and yoffset = 0.29 mm is the difference between the maximum negative y excursions of the A and B beams. This parameterization of the beam pattern is shown schematically in Fig. 1(b). The small difference in the scan lengths and existence of the offset are due to slight misalignments of the optics within the imaging head. Equations (2) and (3) allow images collected as A(fr,n) and B(fr,n) to be interpolated to become images A(fr,y) and B(fr,y) that are equally sampled in y.
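A minimal sketch of the y-calibration resampling implied by Eqs. (2) and (3) is shown below; the output y sampling interval and the use of linear interpolation are assumptions.

```python
import numpy as np

def y_calibrate(frames, beam, l_A=10.21, l_B=10.29, y_offset=0.29,
                n_active=743, dy=0.02):
    """Resample A(fr, n) or B(fr, n) to A(fr, y) / B(fr, y), equally sampled in y.

    frames : 2-D array (n_frames, n_active) from the preprocessing step.
    beam   : 'A' or 'B'; selects Eq. (2) or Eq. (3) for y(n).
    dy     : output y sampling in mm (assumption; not specified in the text).
    """
    n = np.arange(n_active)
    if beam == 'A':
        y_n = l_A * (n / (n_active - 1) - 0.5)                 # Eq. (2)
    else:
        y_n = -l_A / 2 - y_offset + l_B * n / (n_active - 1)   # Eq. (3)

    y_grid = np.arange(y_n.min(), y_n.max() + dy, dy)
    # Linear interpolation of each frame onto the uniform y grid
    calibrated = np.vstack([np.interp(y_grid, y_n, row) for row in frames])
    return y_grid, calibrated
```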

The second calibration step determines the separation of the A and B beams as a function of y. To accomplish this, the calibration pattern lines were aligned along the y-axis (parallel to the scan lines) and the pattern was pushed at a constant x speed, vx,cal = 10 mm/s, using a linear stepper motor. The resulting y-calibrated images from each beam are a series of lines oriented along the y dimension and equally spaced along the fr dimension. As a function of y, the positions of negative-going edges in fr versus the edge number were fit using linear regression to yield slopes m(A,B)(y) and intercepts b(A,B)(y). The slopes are constant and are indicative of the speed used to push the calibration pattern. The intercepts describe the average shape of the lines as a function of y for each of the A and B beams. Similar to [14], we define the functions Sfr(y), the scan pattern, and Ffr(y), the fixed pattern, in units of frames as

$$ S_{fr}(y) = \tfrac{1}{2}\left[ b_A(y) - b_B(y) \right] \qquad (4) $$
$$ F_{fr}(y) = \tfrac{1}{2}\left[ b_A(y) + b_B(y) \right] \qquad (5) $$

Note that Sfr(y) and Ffr(y) are only defined at y positions where both the A and B beams image the target. Sfr(y) and Ffr(y), in units of frames, can be converted to S(y) and F(y), in units of distance, using Eqs. (1)-(3). In the case where lA = lB and yoffset = 0, S(y), the scan pattern, and F(y), the fixed pattern, in units of distance, are

$$ S(y) = S_{fr}(y) \, \frac{v_{x,cal}}{f_{galvo}} \qquad (6) $$
$$ F(y) = F_{fr}(y) \, \frac{v_{x,cal}}{f_{galvo}} \qquad (7) $$

In this setup, as lA and lB are approximately equal and yoffset is small, Eqs. (6) and (7) are approximate equalities. The scan pattern describes the separation between each beam and the beam-pair midpoint as a function of y, and is used for the DMDI vx speed calculation. The fixed pattern describes the deviation of the midpoint between the two beams from a straight line as a function of y, and is used for image distortion correction. In this implementation, both Eqs. (6) and (7) are found to be independent of y to within the resolution of the optical setup, with S = d0/2, where d0 = 2.32 mm is the beam separation. Any constant term in F is inconsequential for distortion correction, as will be shown below.
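The following sketch illustrates how Eqs. (4)-(7) could be evaluated from the calibration data, assuming the negative-going edge positions have already been extracted for each y row; the variable names are illustrative.

```python
import numpy as np

def scan_and_fixed_pattern(edges_A, edges_B, v_x_cal=10.0, f_galvo=100.0):
    """Compute the scan pattern S(y) and fixed pattern F(y) from calibration edges.

    edges_A, edges_B : 2-D arrays (n_y, n_edges) of negative-going edge positions,
                       in frame units, for each y row (assumes edge detection has
                       been done beforehand and edges are ordered by edge number).
    Returns S(y) and F(y) in mm via Eqs. (4)-(7).
    """
    def intercepts(edges):
        n_y, n_edges = edges.shape
        k = np.arange(n_edges)
        b = np.empty(n_y)
        for i in range(n_y):
            # Linear regression of edge frame index vs. edge number
            m_i, b_i = np.polyfit(k, edges[i], 1)   # slope m_i, intercept b_i
            b[i] = b_i
        return b

    b_A, b_B = intercepts(edges_A), intercepts(edges_B)
    S_fr = 0.5 * (b_A - b_B)           # Eq. (4), in frames
    F_fr = 0.5 * (b_A + b_B)           # Eq. (5), in frames
    S = S_fr * v_x_cal / f_galvo       # Eq. (6), in mm
    F = F_fr * v_x_cal / f_galvo       # Eq. (7), in mm
    return S, F
```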

In summary, the calibration of this galvanometer setup reveals that the A and B beams scan parallel lines of approximately equal length and y range, and are separated by a fixed distance d0.

2.5 Sample preparation and imaging

Imaging of three samples – a printed paper QR code, a dragon fruit, and a finger – is demonstrated in this work.

For the paper phantom, a QR code was oriented at an angle relative to the x,y-axes, as shown in Fig. 3(a), and fixed to the underside of a glass slide. The angled orientation enhances image frame uniqueness compared to images acquired along the pixelation axes of the QR code. The glass slide was mounted on a spring-loaded one-axis translational stage oriented along the y-axis, which itself was mounted on an optical rail oriented along the x-axis. The sample was manually pushed along the rail in the -x direction while also being actuated along the y-axis.

The dragon fruit was sliced into an approximately 1 cm thick cross-section to expose the whitish flesh and black seeds and was placed in a Petri dish. A glass slide was placed on top with a layer of water in between to prevent drying and eliminate air bubbles. The slide was firmly held in place with plastic wrap and tape while ensuring the scanning region was not occluded. The imaging location is roughly shown in Fig. 4(a). The Petri dish was mounted on the translation stage as for the QR code and pushed in the +x direction while also being actuated along the y-axis.

For finger imaging, a wetted index finger pressed a glass slide against the lens tube of the imaging head. The glass slide and finger were pulled roughly in the +x direction while maintaining consistent pressure and contact. The imaging location is roughly shown in Fig. 5(a). For all samples, the mean projection from the top 1 mm of the sample below the glass slide was used to generate the en face OCT images.

2.6 Velocity estimation

DMDI operates by estimating vx(t) and vy(t), the x and y velocities of the imaging head viewed as if the sample is stationary. If k point features common to both A(fr,n) and B(fr,n) images can be co-registered with coordinate pairs [(frA,nA), (frB,nB)]k, the average x and y velocities from time tA to tB, calculated using Eq. (1), for the kth coordinate pair, are

$$ \bar{v}_{x,AB,k} = \frac{S(y_{A,k}) + S(y_{B,k})}{t_{B,k} - t_{A,k}} \qquad (8) $$
$$ \bar{v}_{y,AB,k} = \frac{y_B(n_{B,k}) - y_A(n_{A,k})}{t_{B,k} - t_{A,k}} \qquad (9) $$

In this scanning implementation, the scan pattern function is a constant equal to half the beam separation (i.e. S(yA,k) = S(yB,k) = d0/2). For a set of coordinate pairs [(frA,nA), (frB,nB)]k and corresponding time intervals (tA,tB)k, there may be several average velocities defined for any particular time t. The instantaneous velocities, vx(t) and vy(t) at any time t can be estimated as a weighted average of all average velocities defined at that time point:

$$ v_{(x,y)}(t) = \frac{\sum_k \bar{v}_{(x,y),AB,k} \, w_k(t)}{\sum_k w_k(t)} \qquad (10) $$
where w is the discrete, normalized (Σt wk(t) = 1) weighting function defined as

$$ w_k(t) = \begin{cases} \dfrac{1}{|t_{B,k} - t_{A,k}| + 1}, & t \in [t_{A,k}, t_{B,k}] \\ 0, & \text{elsewhere} \end{cases} \qquad (11) $$

This weighting function ensures that each co-registered feature contributes equally to the final velocity functions. Here, the A and B scan lines are roughly parallel, and therefore individual frames, or segments of frames, from each image that pass over the same features are similar, even with varying velocity patterns. Therefore, instead of finding point features as co-registered coordinate pairs [(frA,nA), (frB,nB)]k, the A(fr,n) and B(fr,n) images can first be calibrated into A(fr,y) and B(fr,y) images using Eqs. (2) and (3), and then entire frames in each image can be compared to find co-registered frame-offset triplets (frA, frB, ΔyAB)k, consisting of (frA, frB)k frame pairs and corresponding offsets ΔyAB,k that represent the shift of the frame from frA,k to frB,k in the y-direction. In this case, Eqs. (8) and (9) become

$$ \bar{v}_{x,AB,k} = \frac{d_0 \, f_{fr}}{fr_{B,k} - fr_{A,k}} \qquad (12) $$
$$ \bar{v}_{y,AB,k} = \frac{\Delta y_{AB,k} \, f_{fr}}{fr_{B,k} - fr_{A,k}} \qquad (13) $$
where the time of each frame has been substituted as

$$ t_{(A,B),k,fr} = fr_{(A,B)} \, \Delta t_{fr} = \frac{fr_{(A,B)}}{f_{fr}} \qquad (14) $$

Similarly, the instantaneous velocities for each image frame can be estimated by substituting the frame times of Eq. (14) into Eqs. (10) and (11).
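A minimal Python sketch of the frame-based velocity estimation of Eqs. (10)-(13), with time expressed in frame units as in Eq. (14), is given below; filling the frames outside all triplet intervals is left to the nearest-neighbour extrapolation described in Section 2.7.

```python
import numpy as np

def estimate_velocities(triplets, n_frames, d0=2.32, f_fr=100.0):
    """Instantaneous vx(fr), vy(fr) from co-registered frame-offset triplets.

    triplets : list of (fr_A, fr_B, dy_AB) with dy_AB in mm.
    Each triplet defines average velocities via Eqs. (12)-(13); the per-frame
    estimate is the weighted mean of Eq. (10) over all triplets whose
    (fr_A, fr_B) interval contains that frame.
    """
    vx_num = np.zeros(n_frames)
    vy_num = np.zeros(n_frames)
    w_sum = np.zeros(n_frames)
    for fr_A, fr_B, dy_AB in triplets:
        dfr = fr_B - fr_A
        vx_k = d0 * f_fr / dfr                   # Eq. (12)
        vy_k = dy_AB * f_fr / dfr                # Eq. (13)
        lo, hi = sorted((fr_A, fr_B))
        w = 1.0 / (abs(dfr) + 1)                 # Eq. (11) in frame units
        vx_num[lo:hi + 1] += vx_k * w
        vy_num[lo:hi + 1] += vy_k * w
        w_sum[lo:hi + 1] += w
    valid = w_sum > 0
    vx = np.full(n_frames, np.nan)
    vy = np.full(n_frames, np.nan)
    vx[valid] = vx_num[valid] / w_sum[valid]     # Eq. (10)
    vy[valid] = vy_num[valid] / w_sum[valid]
    # Frames not covered by any triplet (scan start/end) are filled later by
    # nearest-neighbour extrapolation, as described in Section 2.7.
    return vx, vy
```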

2.7 Automated frame-offset correlation

This section describes an automated method for extracting a set of frame-offset triplets (frA, frB, ΔyAB)k that allows for high-accuracy imaging head velocity estimation as described in Section 2.6. First, the A(fr,y) and B(fr,y) images are cropped to have the same y-ranges. Then, a three-dimensional frame-offset cross-correlation (FOXC) matrix is constructed by iteratively circular cross-correlating each frame of A(fr,y) through the image B(fr,y) with vertical shift ΔyAB. The set of frame-offset triplets that we wish to extract from the FOXC matrix follows a path of high correlation values from low frA and frB to high frA and frB. We call this pathway the high correlation triplet pathway (HCTP). By assuming the scan motion is unidirectional, the HCTP will be continuous and always reside in either the frA > frB or frA < frB half of the FOXC matrix depending on the scanning direction. By observing where the highest correlation values reside, we can eliminate the half of the FOXC matrix that corresponds to scanning in the incorrect direction. If we make the further assumption that the scan velocity is non-zero along the x-direction, in general each frame in the A image will correlate strongly with at most one frame in the B image, and vice versa. Thus, each frame-offset triplet (frA, frB, ΔyAB)k in the HCTP will have unique values for frA and frB.
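One way to construct the FOXC matrix is sketched below using FFT-based circular cross-correlation of every A frame against every B frame. The zero-mean, unit-norm normalization and the sign convention for the ΔyAB shift axis are assumptions not specified in the text.

```python
import numpy as np

def foxc_matrix(A_cal, B_cal):
    """Build the frame-offset cross-correlation (FOXC) matrix.

    A_cal, B_cal : y-calibrated images (n_frames, n_y) cropped to the same y range.
    Returns FOXC[fr_A, fr_B, dy]: circular cross-correlation of frame fr_A of A
    with frame fr_B of B at integer y-shift dy (in samples); shift indices above
    n_y // 2 wrap around and correspond to negative shifts.
    Note: the full matrix can be large (n_frA * n_frB * n_y values).
    """
    def normalize(img):
        # Zero-mean, unit-norm frames so the correlation is contrast-insensitive
        img = img - img.mean(axis=1, keepdims=True)
        norms = np.linalg.norm(img, axis=1, keepdims=True)
        return img / np.maximum(norms, np.finfo(float).eps)

    A_n, B_n = normalize(A_cal), normalize(B_cal)
    n_y = A_n.shape[1]
    # Circular cross-correlation of every A frame with every B frame via the FFT
    FA = np.fft.rfft(A_n, axis=1)                      # (n_frA, n_freq)
    FB = np.fft.rfft(B_n, axis=1)                      # (n_frB, n_freq)
    spectrum = FA[:, None, :] * np.conj(FB)[None, :, :]
    foxc = np.fft.irfft(spectrum, n=n_y, axis=2)       # (n_frA, n_frB, n_y)
    return foxc
```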

We select candidate triplets for the HCTP from the FOXC matrix that are maximum projections in both the frA and frB directions. This set of candidate triplets may contain extraneous members due to noise, or due to accidental high frame-offset correlation values if features in the sample are repetitive or lacking (a homogeneous sample). To remove these, the (frA, frB, ΔyAB)k triplets are ordered by increasing frA. After ordering in frA, a valid set of frame-offset triplets must also be ordered in frB. We assume that the HCTP is the largest set of valid candidate triplets for a given data set. Thus, extraneous triplets can be identified if their removal from the candidate triplets leaves the largest set of valid triplets.

An example of the step-wise extraneous triplet removal process we used is shown in Fig. 2. The set of 13 candidate triplets in Fig. 2(a) only has a valid set length of 5 where both frA and frB are monotonically increasing. Removal of the triplet (7,8,*) (where the ΔyAB value of * is irrelevant) yields the candidate set shown in Fig. 2(b) that has a valid set length of 9. Finally, removal of the triplet (11,25,*) yields the maximum valid set length of 11 shown in Fig. 2(c), corresponding to the HCTP.

Fig. 2 Example of extraneous triplet removal from candidate triplets (ΔyAB values not shown) to yield the largest valid set of triplets with monotonically ordered frA and frB. a) 13 candidate triplets with a largest valid set length of 5. b) Removal of (7,8) triplet leaves 12 candidate triplets with largest valid set length of 9. c) Removal of (11,25) triplet leaves the maximum valid set length of 11, corresponding to the HCTP.

The HCTP is used in Eqs. (12) and (13) to generate the x and y scan velocities. The beginnings and ends of the scans, where velocity estimates are not necessarily defined, are filled by nearest-neighbour extrapolation if required.
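Because the HCTP is the largest candidate subset in which both frA and frB increase monotonically, the filtering step can be implemented as a longest-increasing-subsequence search; the sketch below is one such implementation and is not necessarily the procedure used in this work.

```python
from bisect import bisect_left

def extract_hctp(candidates):
    """Keep the largest subset of (fr_A, fr_B, dy_AB) candidate triplets in which
    fr_A and fr_B are both monotonically increasing (the HCTP).

    Sorting by fr_A (ties broken by decreasing fr_B) reduces the problem to a
    longest strictly increasing subsequence on fr_B, solved by patience sorting.
    """
    cands = sorted(candidates, key=lambda c: (c[0], -c[1]))
    tails = []       # smallest fr_B ending an increasing chain of each length
    tail_idx = []    # index into cands of that chain end
    prev = [-1] * len(cands)
    for i, (_, fr_B, _) in enumerate(cands):
        j = bisect_left(tails, fr_B)
        if j == len(tails):
            tails.append(fr_B)
            tail_idx.append(i)
        else:
            tails[j] = fr_B
            tail_idx[j] = i
        prev[i] = tail_idx[j - 1] if j > 0 else -1
    # Backtrack the longest valid chain from its final element
    hctp, i = [], (tail_idx[-1] if tail_idx else -1)
    while i != -1:
        hctp.append(cands[i])
        i = prev[i]
    return hctp[::-1]

# In the Fig. 2 example, 13 candidates reduce to an HCTP of length 11 after
# removal of the (7, 8) and (11, 25) triplets (dy_AB values omitted there).
```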

2.8 Distortion correction

The final step in DMDI is to transform the images A(fr,n) and B(fr,n), which are sampled uniformly in time, into images A(XA,YA) and B(XB,YB), which are sampled uniformly in Cartesian space, accounting for the motions of the scan beams and the relative motions of the imaging head and/or sample. For any point (fr,n), the (XA,YA) and (XB,YB) coordinates in their respective images are given by

$$ X_A(t) = d_x(t) + S - F, \qquad Y_A(n,t) = y_A(n) + d_y(t) \qquad (15) $$
$$ X_B(t) = d_x(t) - S - F, \qquad Y_B(n,t) = y_B(n) + d_y(t) \qquad (16) $$
where dx(t) and dy(t) are the x and y relative displacements, respectively, between the sample and imaging head. The displacements dx(t) and dy(t) are calculated by integrating over time the velocities calculated using Eqs. (12) and (13). Note that X(A,B) and Y(A,B) are again defined such that the corrected images are presented as viewed from a position behind the imaging head, as if the samples remain stationary and are imaged by a moving imaging head, even though this may not be the case experimentally. As F is subtracted from both x components, constant terms in F only result in a shift of both images by the same amount and can therefore be ignored.
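A minimal sketch of the remapping of Eqs. (15) and (16) is given below; it approximates the displacements as piecewise-constant per frame and regrids the scattered samples by linear interpolation, both of which are assumptions about implementation details not specified in the text.

```python
import numpy as np
from scipy.interpolate import griddata

def distortion_correct(frames, y_n, vx, vy, S, F=0.0, f_fr=100.0, dxy=0.05):
    """Remap A(fr, n) (or B) onto a uniform Cartesian grid via Eqs. (15)-(16).

    frames : preprocessed image (n_frames, n_active).
    y_n    : beam y position per A-line from Eq. (2) or (3), in mm.
    vx, vy : per-frame velocity estimates in mm/s (Section 2.6).
    S      : +d0/2 for beam A, -d0/2 for beam B; F is the fixed pattern (constant here).
    dxy    : output pixel size in mm (assumption).
    """
    # Integrate the per-frame velocities to get displacements d_x(t), d_y(t)
    dx = np.cumsum(vx) / f_fr
    dy = np.cumsum(vy) / f_fr

    n_frames, n_active = frames.shape
    X = dx[:, None] + S - F + np.zeros((1, n_active))   # x part of Eq. (15)/(16)
    Y = y_n[None, :] + dy[:, None]                       # y part of Eq. (15)/(16)

    # Resample the scattered (X, Y, intensity) samples onto a regular grid
    xi = np.arange(X.min(), X.max(), dxy)
    yi = np.arange(Y.min(), Y.max(), dxy)
    XI, YI = np.meshgrid(xi, yi)
    corrected = griddata((X.ravel(), Y.ravel()), frames.ravel(), (XI, YI),
                         method='linear')
    return xi, yi, corrected
```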

3. Results

DMDI analysis and results for the printer paper QR code are shown in Fig. 3. The angled orientation of the QR code is shown in Fig. 3(a), with the approximate start positions of the A and B beams towards the top left and the initial approximate scanning motion of the imaging head, assuming a stationary sample, indicated by the black arrow. The preprocessed, y-calibrated en face OCT images A(fr,y) and B(fr,y) are shown in Fig. 3(b). As scanning is effectively done in the +x direction, features appear earlier (lower frame index) in the A(fr,y) image than in the B(fr,y) image. The A(fr,y) and B(fr,y) images appear markedly distorted: lines along the pixelation axes of the QR code that should be straight appear quite wavy, and pixel shapes and sizes are very different from the original pattern due to the non-uniform scanning motion. It is easy to observe that features of the QR code are compressed near frA = frB = 140 compared to the rest of the image.

Fig. 3 DMDI of a) a printed paper QR code phantom mounted at an angle. The approximate start positions of the A and B beams are labeled and shown with an arrow indicating the actuation direction along the x axis viewed as if the sample was stationary. b) Preprocessed and y-calibrated en face OCT images A(fr,y) and B(fr,y). c) (frA,frB) (upper) and (frA,ΔyAB) (lower) maximum intensity projections (MIP) of the 3D FOXC (frame-offset cross-correlation) matrix. d) (frA,frB) (upper) and (frA,ΔyAB) (lower) projections of the HCTP (high correlation triplet pathway) as green circles (n = 116) and filtered triplets as red crosses (n = 3). e) Extracted x and y velocities and displacements as a function of time. f) Distortion corrected A(X,Y) and B(X,Y) images.

Maximum intensity projections (MIPs) of the FOXC matrix onto the (frA, frB) and (frA, ΔyAB) planes are shown in the upper and lower insets of Fig. 3(c), respectively. The FOXC MIP onto the (frB, ΔyAB) plane is not shown but displays a similar shape to that of the (frA, ΔyAB) plane, except that it is shifted to higher values of frB. The projection of the continuous path of the HCTP, running from low (frA, frB) values to high (frA, frB) values, is easily discerned in red in the (frA, frB) MIP. This path exists only in the upper half (frB > frA) of the (frA, frB) MIP because scanning was unidirectional in the +x direction. On the other hand, actuation along the y-axis was allowed in both directions, and a direction change can be observed in the (frA, ΔyAB) MIP path as it goes from positive to negative ΔyAB at high values of frA. Projections onto the (frA, frB) and (frA, ΔyAB) planes of the candidate frame-offset triplets, selected as those that are maximum projections along both the frA and frB axes of the FOXC matrix, are shown in Fig. 3(d). The frame-offset triplets marked with red crosses indicate the triplets (n = 3) that were removed according to the filtering method described in Section 2.7, whereas the green circles (n = 116) indicate the HCTP used for velocity estimation. Note that the filtering method removes triplets that have (frA, frB) and (frA, ΔyAB) projections nearby but still noticeably deviating from the HCTP. The x and y velocity estimates calculated from the validated set of frame-offset triplets are shown in Fig. 3(e) (left axis, blue). The corresponding x and y displacements are also shown in Fig. 3(e) (right axis, red). The distortion corrected images created using the effective time-dependent displacements of the imaging head are shown in Fig. 3(f). The distortion corrected images in Fig. 3(f) much more closely resemble the original QR code than the uncorrected images in Fig. 3(b). In the distortion corrected images, waviness along the QR code pixelation axes is still visible but to a much lesser extent, and stretching and compression of the image along the horizontal axis is much less noticeable compared to the images in Fig. 3(b).

DMDI imaging results for the dragon fruit cross-section are shown in Fig. 4. The imaging head starts on the right side of the photograph [Fig. 4(a)] and scans in the -x direction as indicated by the white arrow. In the A(fr,y) and B(fr,y) images [Fig. 4(b)], black seeds appear as round, bright features surrounded by a dark envelope, while the white flesh appears darker, marked with brighter fibrous structures. Dark holes in the flesh correspond to either displaced seeds or seeds located at depths that have been cropped out in the depth dimension such that only the dark envelope is visualized. The A(fr,y) and B(fr,y) images appear horizontally mirrored compared to Fig. 4(a) as scanning is done from right to left and the images are arranged by increasing frame index. Although all of the seeds are approximately the same size in the photograph [Fig. 4(a)], the seeds vary in size in the A(fr,y) and B(fr,y) images [Fig. 4(b)], with notable compression along the horizontal dimension around frame index 150.

Fig. 4 DMDI imaging of a) cut dragon fruit. The orange box approximately shows the imaged region. b-f) image panels are arranged and annotated as in Fig. 3.

The MIPs of the FOXC matrix [Fig. 4(c)] indicate an HCTP that resides in the lower half (frB < frA) of the (frA, frB) MIP and at higher frame values in the (frA, ΔyAB) MIP, as the B beam imaging precedes the A beam imaging in this scan direction. Frame-offset triplets extracted from the FOXC matrix are shown in Fig. 4(d), with the HCTP containing n = 130 triplets and n = 7 filtered triplets resulting from the procedure described in Section 2.7. Velocities and displacements in the x dimension, shown in Fig. 4(e), are all negative, reflecting the motion along the -x direction. In the corrected images, shown in Fig. 4(f), the seeds display more uniform size and shape compared to the uncorrected images.

DMDI imaging results from the manually-scanned finger are shown in Fig. 5. The imaging head starts on the right side of the photograph [Fig. 5(a)] and scans in the -x direction. The FOXC matrix projections, HCTP (n = 224) and filtered (n = 1) triplets, velocity estimates, and corrected images are shown in Figs. 5(b)-5(f), respectively.

Fig. 5 DMDI imaging of a) a finger. b-f) image panels are arranged and annotated as in Fig. 3.

4. Discussion

The key contributions of this work are 1) the demonstration of a new scanning geometry for DMDI, and 2) the development of a parameter-free automated frame co-registration algorithm. As a result of the simplified scanning geometry of two parallel lines, frame-to-frame cross-correlation is a simple and fast way of co-registering image features. Compared to the manual point co-registration that was performed for the original dual-beam micromotor catheter [14], which took hours for approximately 300 point pairs, the entire automated distortion correction method here takes only approximately 30 s for the largest image acquired in this work – the fingerprint – which has over 400 image frames. This is the processing time using MATLAB and includes the time taken to display intermediate and diagnostic images and information to screen. Thus, there is much room to reduce the processing time of the correction algorithm through optimization, reducing the display of unnecessary data, and porting to a compiled language. By implementing a maximum time interval within which matching frames can be co-registered, it is conceivable that image acquisition and distortion correction could be done on the fly with only minimal delay, particularly given that much of the processing would be amenable to a multithreaded parallel implementation.

The automated frame co-registration algorithm performs well on a range of feature types and contrast levels. In addition to the high-contrast, blocky QR code, the algorithm could identify common features in the dragon fruit (sporadic, round, high-contrast seeds interspersed amongst a fibrous background) and the fingerprint (repetitive, linear, low-contrast fingerprint ridges). Although the HCTP through the FOXC matrix MIPs for the fingerprint [Fig. 5(c)] is barely discernible, the automated frame-offset correlation procedure described here has little trouble finding it. Especially apparent in the dragon fruit imaging is that the filtering of candidate triplets based on the (frA, frB) ordering requirement [Fig. 4(d), upper panel] concurrently eliminates points that deviate in the (frA, ΔyAB) projection [Fig. 4(d), lower panel]. Thus, no constraints on y velocity or acceleration are needed to further clean up the HCTP.

The image distortion correction method described here does not perfectly remove all image distortions. This is most easily seen in the printed QR code phantom [Fig. 3(f)], where the straight lines can still appear slightly wavy. Residual image distortions are due to two factors. First, near the beginning and end of a scan there are fewer triplets that contribute to the velocity estimates than in the middle of a scan, resulting in reduced accuracy in these areas. Second, as each triplet defines an average velocity over a time interval, there is inherent averaging of the velocity that smoothes over high accelerations. This smoothing can be reduced somewhat by decreasing the distance between the imaging beams, at the expense of reducing the maximum measurable x velocity.

In this implementation of DMDI, the maximum measurable average vx is given by Eq. (12) with a frame difference of one, which, with a beam separation of 2.32 mm and a 100 Hz frame rate, gives v̄x,max = d0 ffr = 232 mm/s. However, in practice, velocity estimation accuracy near this maximum is poor because the next closest measurable average velocities are coarsely spaced, corresponding to integer frame differences n = 2, 3, 4, ... that give average velocities v̄x = (232/n) mm/s. The minimum measurable average vx is given by Eq. (12) with a frame difference equal to the number of frames in the image acquisition. Due to this inverse time relationship, DMDI is not practical for measuring zero or slow actuation velocities.
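As a quick check of this coarse quantization near the maximum, the first few measurable average x velocities follow directly from Eq. (12):

```python
d0, f_fr = 2.32, 100.0                      # beam separation (mm) and frame rate (Hz)
v_levels = [d0 * f_fr / n for n in range(1, 6)]
print([round(v, 1) for v in v_levels])      # [232.0, 116.0, 77.3, 58.0, 46.4] mm/s
```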

Since velocities are estimated from the time interval of a sample feature passing both beams, vy is inherently dependent on vx. The minimum and maximum measurable vy are given by

$$ \bar{v}_y = \pm \, \frac{y_{overlap}}{d_0} \, \bar{v}_x \qquad (17) $$
where yoverlap is the extent along the y-axis over which both beams capture the same sample information. In this setup, yoverlap = lB − yoffset = 10 mm.

In this work, all samples were actuated with non-zero velocity in either the +x or -x direction. Thus, the corresponding HCTP is continuous, with all members having either frA < frB or frA > frB, respectively. Although not demonstrated in this work, bidirectional motion correction will require a more sophisticated method for extraction of an HCTP that may not be continuous and may jump back and forth across the frA = frB diagonal in the FOXC matrix. However, bidirectional x dimension scanning should enable coverage of large FOVs in both the x and y dimensions by moving the imaging head in a zig-zag fashion (bidirectional scanning in the x dimension, and unidirectional scanning in the +y or -y dimension).

Although sufficient for proof-of-principle experiments, several improvements could be made to the dual-beam galvanometer imaging head. In particular, independent adjustment of the imaging beams could eliminate the differences in the scan line lengths and the skew between the beams, leading to simplified expressions for Eqs. (2) and (3) and exact equalities for Eqs. (6) and (7). Also, larger input beams filling the entrance pupil of the scan lens could improve the lateral resolution of the system. The galvanometer drive waveform could also be changed to maximize the imaging duty cycle, currently ~74% for the modified sawtooth waveform. For example, a triangular waveform using both forward and retrace scans could have a higher duty cycle with the added benefit of effectively doubling the imaging frame rate. While implemented here with a relatively large galvanometer scanning mirror, the imaging head could be miniaturized for endoscopic imaging using a single-axis MEMS mirror, or adapted to forward-scanning applications using a piezoelectric or other electromechanical fiber actuation method.

A standard 1 mm thick glass slide is used in this work to maintain focus and a constant working distance, and all of the DMDI analysis is performed in two dimensions. As the intersection point of the two beams at the GRIN exit face is located as close as possible to the galvanometer mirror axis, with the entrance pupil plane of the scan lens located between them, the beams are assumed to impinge on the sample at normal incidence. Thus, for OCT, feature registration or additional distortion correction in the z dimension was deemed unnecessary at this time. Another degree of freedom that was constrained in this work is rotational motion of the sample in the x-y plane. We plan to address both of these degrees of freedom in future work.

For the dual OCT interferometer imaging system, the 50/50 input power-splitting leads to a 6 dB loss in detection sensitivity for each individual channel. However, if the distortion correction algorithm performs well, the loss in sensitivity may be mitigated by averaging the two distortion-corrected volumetric images together. At the same time, speckle noise in the images may also be reduced.

Drawbacks of this implementation of DMDI are the cost and added complexity of two sets of imaging hardware (interferometers and detectors) and the need to acquire two imaging channels, which, if both channels are OCT, doubles the already high data bandwidth requirements. However, the two modalities do not necessarily have to be the same. For example, OCT-autofluorescence imaging (OCT-AFI) [15] in one dual-clad fiber could be combined with an additional AFI-only fiber channel used for DMDI co-registration and establishing the motion path, which would require only a marginal increase in data bandwidth.

5. Conclusion

We have built and tested a dual-beam galvanometer imaging head that can be used for dual-beam manually actuated distortion-corrected imaging and developed a method for automated co-registration of the two imaging beams. We demonstrate the versatility of the technique and distortion correction algorithm by imaging high-contrast (printed paper phantom and dragon fruit) and low-contrast (fingerprint) samples using two-dimensional manual scanning.

Funding

Canadian Institutes of Health Research (CIHR) and the Natural Sciences and Engineering Research Council of Canada (NSERC).

Acknowledgments

We acknowledge Adrian Tanskanen for writing the galvanometer drive code and Geoffrey Hohert for helpful discussions.

References and links

1. D. C. Adams, Y. Wang, L. P. Hariri, and M. J. Suter, "Advances in Endoscopic Optical Coherence Tomography Catheter Designs," IEEE J. Sel. Top. Quantum Electron. 22(3), 210–221 (2016).

2. C. Zhou, J. G. Fujimoto, T.-H. Tsai, and H. Mashimo, "Endoscopic Optical Coherence Tomography," in Optical Coherence Tomography: Technology and Applications, 2nd ed., W. Drexler and J. G. Fujimoto, eds. (Springer International Publishing, Switzerland, 2015), pp. 2077–2108.

3. M. J. Gora, J. S. Sauk, R. W. Carruth, K. A. Gallagher, M. J. Suter, N. S. Nishioka, L. E. Kava, M. Rosenberg, B. E. Bouma, and G. J. Tearney, "Tethered capsule endomicroscopy enables less invasive imaging of gastrointestinal tract microstructure," Nat. Med. 19(2), 238–240 (2013).

4. K. Liang, G. Traverso, H.-C. Lee, O. O. Ahsen, Z. Wang, B. Potsaid, M. Giacomelli, V. Jayaraman, R. Barman, A. Cable, H. Mashimo, R. Langer, and J. G. Fujimoto, "Ultrahigh speed en face OCT capsule for endoscopic imaging," Biomed. Opt. Express 6(4), 1146–1163 (2015).

5. B. Y. Yeo, R. A. McLaughlin, R. W. Kirk, and D. D. Sampson, "Enabling freehand lateral scanning of optical coherence tomography needle probes with a magnetic tracking system," Biomed. Opt. Express 3(7), 1565–1578 (2012).

6. B. Lau, R. A. McLaughlin, A. Curatolo, R. W. Kirk, D. K. Gerstmann, and D. D. Sampson, "Imaging true 3D endoscopic anatomy by incorporating magnetic tracking with optical coherence tomography: proof-of-principle for airways," Opt. Express 18(26), 27173–27180 (2010).

7. J. Ren, J. Wu, E. J. McDowell, and C. Yang, "Manual-scanning optical coherence tomography probe based on position tracking," Opt. Lett. 34(21), 3400–3402 (2009).

8. N. Iftimia, G. Maguluri, E. W. Chang, S. Chang, J. Magill, and W. Brugge, "Hand scanning optical coherence tomography imaging using encoder feedback," Opt. Lett. 39(24), 6807–6810 (2014).

9. P. Pande, G. L. Monroy, R. M. Nolan, R. L. Shelton, and S. A. Boppart, "Sensor-Based Technique for Manually Scanned Hand-Held Optical Coherence Tomography Imaging," J. Sens. 2016, 8154809 (2016).

10. A. Ahmad, S. G. Adie, E. J. Chaney, U. Sharma, and S. A. Boppart, "Cross-correlation-based image acquisition technique for manually-scanned optical coherence tomography," Opt. Express 17(10), 8125–8136 (2009).

11. X. Liu, Y. Huang, and J. U. Kang, "Distortion-free freehand-scanning OCT implemented with real-time scanning speed variance correction," Opt. Express 20(15), 16567–16583 (2012).

12. Y. Wang, Y. Wang, A. Akansu, K. D. Belfield, B. Hubbi, and X. Liu, "Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal," Biomed. Opt. Express 6(11), 4302–4316 (2015).

13. N. Uribe-Patarroyo and B. E. Bouma, "Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation," Opt. Lett. 40(23), 5518–5521 (2015).

14. A. M. D. Lee, G. Hohert, P. T. Angkiriwang, C. MacAulay, and P. Lane, "Dual-beam manually-actuated distortion-corrected imaging (DMDI) with micromotor catheters," Opt. Express 25(18), 22164–22177 (2017).

15. H. Pahlevaninezhad, A. M. D. Lee, T. Shaipanich, R. Raizada, L. Cahill, G. Hohert, V. X. D. Yang, S. Lam, C. MacAulay, and P. Lane, "A high-efficiency fiber-based imaging system for co-registered autofluorescence and optical coherence tomography," Biomed. Opt. Express 5(9), 2978–2987 (2014).
