
Non-iterative aberration correction of a multiple transmitter system


Abstract

Multi-transmitter aperture synthesis provides aperture gain and improves the effective aperture fill factor by shifting the received speckle field through the use of multiple transmitter locations. It is proposed that, by utilizing methods based on shearing interferometry, some low-order aberrations, such as defocus, can be found directly rather than through iterative algorithms. The current work describes the theory behind multi-transmitter aberration correction and presents experiments used to validate the method. Experimental results are shown which demonstrate the ability of such a sensor to solve directly for defocus and toric curvature in the captured field values.

©2011 Optical Society of America

1. Introduction

Coherent aperture synthesis employs techniques such as digital holography and allows for the creation of high-resolution imagery using scalable hardware architectures. Aperture synthesis has been demonstrated using separated, coherent optical receivers and numeric algorithms which can synthesize a single high-resolution image [1,2]. Multi-transmitter aperture synthesis has been shown to be a useful technique for creating high-resolution coherent imagery with fewer speckle realizations due to the resulting effectively filled aperture [3]; however, that method still corrects aberrations by iterating on a sharpness metric. Here, it is proposed to utilize multi-transmitter synthesis to capture overlapping, redundant optical field information and to solve directly for low-order phase aberrations which are static across the optical aperture. It will be shown experimentally that this method can be used to synthesize high-resolution imagery.

Sparse aperture synthesis algorithms generally apply aberration corrections to individual apertures while evaluating image quality with sharpness metrics such as those described by Fienup and Miller [4]. Sharpness metrics, while useful, often require some foreknowledge of the target's relative bright and dark image content. Because of their iterative nature, they also impose greater processing burdens and imaging latency, hindering real-time imaging. Furthermore, when applied to sparse aperture synthesis, these algorithms perform best with a large number of independent speckle realizations to ensure that speckle noise does not swamp the image synthesis process.

The techniques presented by Rabb et al. make use of sharpness metrics as a means of evaluating the effectiveness of phase aberration correction [3]. It will be shown that the method proposed here utilizes redundantly captured information to estimate low-order phase aberrations. These aberrations, such as defocus, can be found directly without the need for image sharpening algorithms. The proposed method is similar to sheared coherent interferometric photography (SCIP), which uses small shears to estimate local phase gradients and constructs an estimate of the backscattered field [5]. Here, large shear lengths are used in order to estimate low-order phase aberrations.

2. Theory

An example of the proposed multi-transmitter system is shown in Fig. 1. Note that the field incident on the coherent sensor entrance pupil is shifted by coordinates (xT, yT), which are a function of the transmitter location.

Fig. 1 Illustration of a concept multi-transmitter system.

The detected pupil-plane field Ud(x,y) can be written as

U_d(x,y) = P(x,y)\exp[j\varphi_e(x,y)]\,U_b(x - x_T,\, y - y_T),    (1)
where P(x,y) is the pupil function, Ub(x − xT, y − yT) is the backscattered field, and φe(x,y) is the phase error across the pupil. The shift due to the transmitter location is given by xT and yT. Note that for a given aperture the phase error is static and does not shift with the backscattered field. The phase front detected by the holographic receiver can be described by taking the argument of Ud(x,y) in Eq. (1), such that
\varphi_d(x,y) = 2\pi W_e(x,y) + 2\pi W_b(x - x_T,\, y - y_T),    (2)
where We(x,y) is the wavefront error and Wb(x,y) is the wavefront of the backscattered field. The detected optical wavefront is therefore the sum of the hardware wavefront error and the transmitter-location-translated backscattered wavefront. From this point on, the amplitude terms will be dropped for convenience. The captured fields must be registered in a common digital pupil plane by shifting the captured data by the known transmitter location, so that the individually captured, and now registered, wavefronts are described by
W_d(x + x_T,\, y + y_T) = W_e(x + x_T,\, y + y_T) + W_b(x,y),    (3)
where it can be observed that the wavefront error term now shifts with the registered field. An estimate of the "shear" between two overlapped wavefront estimates can be found by taking the difference of the captured, and registered, wavefronts, ΔW(x,y), given by
\Delta W(x,y) = \left[ W_e(x + x_{T0},\, y + y_{T0}) + W_b(x,y) \right] - \left[ W_e(x + x_{T1},\, y + y_{T1}) + W_b(x,y) \right],    (4)
where (xT0, yT0) and (xT1, yT1) represent the effective transmitter shift of each of the captured fields. Note that the captured fields contain a common backscattered field over any overlap area, so the difference represents only the difference in the error terms such that

\Delta W(x,y) = W_e(x + x_{T0},\, y + y_{T0}) - W_e(x + x_{T1},\, y + y_{T1}).    (5)
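As a quick numerical illustration of Eqs. (3)-(5), the sketch below (not from the original work; the grid size, shift, and wavefront values are arbitrary) builds two captured wavefronts that share a common backscattered term, registers the second by the known transmitter shift, and confirms that the difference over the overlap region depends only on the static error term.

import numpy as np

rng = np.random.default_rng(0)
N, shift = 256, 32                          # grid size and transmitter shift, in pixels
y, x = np.mgrid[0:N, 0:N] - N // 2

W_b = rng.standard_normal((N, N))           # arbitrary backscattered wavefront
W_e = 1e-5 * (x**2 + 0.5 * y**2)            # static quadratic error across the pupil

# Captured pupil-plane wavefronts for two transmitter positions (Eq. (2), with
# the 2*pi factor divided out): the backscatter shifts, the error does not.
W_d0 = W_e + W_b                            # transmitter at the origin
W_d1 = W_e + np.roll(W_b, shift, axis=1)    # backscatter shifted by x_T1

# Register the second capture by shifting it back by the known x_T1, Eq. (3).
W_d1_reg = np.roll(W_d1, -shift, axis=1)

# Difference over the valid overlap region, Eqs. (4)-(5).
overlap = (slice(None), slice(0, N - shift))
delta_W = W_d0[overlap] - W_d1_reg[overlap]
expected = W_e[overlap] - np.roll(W_e, -shift, axis=1)[overlap]
print(np.allclose(delta_W, expected))       # True: the common W_b has cancelled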

The wavefront error We(x,y) will be modeled using the bivariate expansion given by

W_e(x - x_T,\, y - y_T) = \sum_{i,j} a_{ij} (x - x_T)^i (y - y_T)^j.    (6)

The bivariate expansion is convenient because it allows wavefronts to be fit across a variety of support regions. Equation (6) can be substituted into Eq. (5) and it can be shown that

\Delta W(x,y) = x\,(a_{11}\Delta y + 2a_{20}\Delta x) + y\,(a_{11}\Delta x + 2a_{02}\Delta y) + a_{10}\Delta x + a_{01}\Delta y - a_{11}x_{T1}y_{T1} + a_{11}x_{T0}y_{T0} + a_{20}x_{T0}^2 - a_{20}x_{T1}^2 + a_{02}y_{T0}^2 - a_{02}y_{T1}^2,    (7)
where Δx = xT1 − xT0 and Δy = yT1 − yT0 are the relative transmitter shifts between positions (xT0, yT0) and (xT1, yT1).
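To illustrate how Eq. (7) arises from substituting the expansion into the wavefront difference, consider the a20 term alone, with the error expanded about the shifted coordinates as in Eq. (6):

a_{20}(x - x_{T0})^2 - a_{20}(x - x_{T1})^2 = 2a_{20}(x_{T1} - x_{T0})\,x + a_{20}x_{T0}^2 - a_{20}x_{T1}^2 = 2a_{20}\,\Delta x\,x + a_{20}x_{T0}^2 - a_{20}x_{T1}^2,

which reproduces the a20 contributions in Eq. (7); the a02, a11, and first-order terms follow from the same expansion.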

The current work is concerned with low-order polynomial fits through second order, which consist of wavefront terms commonly associated with toric curvature and defocus. While not immediately obvious, the parameters a11, a02, and a20 from Eq. (7) can be used to calculate these wavefront terms of interest. Solutions for them can be found by isolating the first-order terms of the difference equation, which can be rewritten as ΔW'(x,y) given by

\Delta W'(x,y) \approx x\,(a_{11}\Delta y + 2a_{20}\Delta x) + y\,(a_{11}\Delta x + 2a_{02}\Delta y).    (8)

By measuring the shear's tilt (which is equivalent to a shift of the focal-plane information), the tilt in x can be equated to the first term of Eq. (8) and the tilt in y to the second term. In order to solve for a11, a02, and a20, one more shear measurement is required, leading to an overdetermined system of equations from which the parameter values are calculated.
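A minimal sketch of this overdetermined solve is given below (the function and variable names are illustrative, not from the original work): each measured shear contributes two rows following Eq. (8), one for its x tilt and one for its y tilt, and the three coefficients are recovered by linear least squares.

import numpy as np

def solve_low_order(shears):
    """Solve for (a11, a02, a20) from measured shear tilts, per Eq. (8).

    shears: list of tuples (dx, dy, gamma_x, gamma_y), where (dx, dy) is the
    relative transmitter shift of the shear and (gamma_x, gamma_y) are the
    measured x and y tilts of the corresponding wavefront difference.
    """
    rows, rhs = [], []
    for dx, dy, gx, gy in shears:
        rows.append([dy, 0.0, 2.0 * dx])    # gamma_x = a11*dy + 2*a20*dx
        rhs.append(gx)
        rows.append([dx, 2.0 * dy, 0.0])    # gamma_y = a11*dx + 2*a02*dy
        rhs.append(gy)
    (a11, a02, a20), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a11, a02, a20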

An example of a system with two shears from three transmitters is depicted in Fig. 2. The two shears, in the x and y directions respectively, are shown with the pupil planes registered using the known shifts corresponding to the transmitter locations.

Fig. 2 x and y shears for a three transmitter system.

In the overlapping segments, the wavefront difference is found according to Eq. (5), where xT0, yT0, yT1, and xT2 are all zero. The resulting equations for the shear arising from the transmitters at (0,0) and (xT1,0), ΔW01(x,y), and the shear arising from the transmitters at (0,0) and (0, yT2), ΔW02(x,y), are

\Delta W_{01}(x,y) = 2a_{20}x_{T1}\,x + a_{11}x_{T1}\,y,    (9)
\Delta W_{02}(x,y) = a_{11}y_{T2}\,x + 2a_{02}y_{T2}\,y.    (10)

The x tilt, γ01x, and y tilt, γ01y, of ΔW01(x,y) and the x tilt, γ02x, and y tilt, γ02y, of ΔW02(x,y) can then be used to calculate a11, a02, and a20:

a_{11} = \frac{\gamma_{01y}}{2x_{T1}} + \frac{\gamma_{02x}}{2y_{T2}},    (11)
a_{02} = \frac{\gamma_{02y}}{2y_{T2}},    (12)
a_{20} = \frac{\gamma_{01x}}{2x_{T1}}.    (13)

Due to the overdetermined nature of the equations, the two solutions for a11 are simply averaged in Eq. (11). Using the known transmitter locations along with the measured tilts to calculate the coefficients yields the wavefront error, which is used to "flatten" the pupil fields for synthesis in a common pupil plane. Note that the measurement is independent of the target being imaged (as long as there is sufficient backscatter to close the link budget).
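A minimal sketch of the flattening step is shown below (names and grid conventions are assumptions, not taken from the original work): the estimated second-order wavefront error is evaluated on the pupil grid and removed from a registered pupil field before coherent synthesis, using the relation φe = 2πWe implied by Eq. (2).

import numpy as np

def flatten_pupil(U_d, x, y, a11, a02, a20):
    """Remove the estimated second-order wavefront error from a registered
    pupil field. U_d is the complex pupil field; x and y are coordinate grids
    (e.g., from np.meshgrid) in the same units used for the tilt calibration;
    the a coefficients are in waves per unit length squared.
    """
    W_e = a20 * x**2 + a02 * y**2 + a11 * x * y   # estimated wavefront error, in waves
    return U_d * np.exp(-2j * np.pi * W_e)        # remove phi_e = 2*pi*W_e, per Eq. (2)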

3. Experiment

An experiment was designed to validate the multi-transmitter aperture synthesis theory developed above. The experiment demonstrates that multiple transmit locations can be used to determine pupil-plane aberrations while aiding in multi-transmitter image synthesis.

3.1 Hardware

The proposed theory is valid for any form of digital holography, but the system hardware presented uses a spatial heterodyne variant of digital image plane holography [6]. In this case the backscattered field from a target is imaged onto a camera array, where it is mixed with a tilted local oscillator. A diagram of the multi-transmitter experimental setup is shown in Fig. 3. Not shown in Fig. 3 is the 1.545 μm laser source used as the master oscillator. The local oscillator is introduced in the pupil plane of the imaging system, which uses a 1" diameter, 1000 mm focal length lens to image target space. A 320 by 256 InGaAs camera array with 30 μm pixels, windowed to 256 by 256, is placed a distance of 1.2 m from the lens, and the target is located at some distance Z in front of the imaging lens.
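For reference, one common way to recover the complex field from such a spatial-heterodyne (off-axis) hologram is sketched below; the exact processing chain used by the authors is not described here, and the function and its parameters are illustrative assumptions.

import numpy as np

def demodulate_hologram(frame, sideband_center, half_width):
    """Recover a complex field estimate from an off-axis hologram frame.

    frame: 2-D real-valued camera frame (object field mixed with a tilted LO).
    sideband_center: (row, col) of the +1-order sideband in the centered 2-D FFT.
    half_width: half-size of the square region cropped around that sideband.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    r, c = sideband_center
    crop = spectrum[r - half_width:r + half_width, c - half_width:c + half_width]
    # Inverse transforming the isolated sideband yields the complex field,
    # demodulated to baseband because the crop is re-centered before ifft2.
    return np.fft.ifft2(np.fft.ifftshift(crop))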

Fig. 3 The multi-transmitter experiment utilizes focal plane holography with a calibrated aberration source.

The target is a USAF symbol, approximately 28 mm on each side, consisting of black paint on brushed aluminum, as shown in Fig. 4. A calibrated optical aberration, discussed in greater detail below, is then inserted in front of the imaging lens. The transmitter array is shown toward the bottom of the Fig. 3 schematic.

Fig. 4 USAF symbol target composed of absorptive tape on brushed aluminum.

A front view of the receiver pupil plane is shown in Fig. 5 along with the geometry of the transmitter array. Note that the transmitters are physically separated by approximately 16 mm in both directions transverse to the optic axis of the system, allowing both an increased synthetic aperture diameter and sufficient overlap for aberration estimation. Based on the transmitter locations shown in Fig. 5, the final synthesized imagery should have a resolution approximately 63% better than the sub-aperture-limited imagery.
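For reference, the 63% figure is consistent with a simple estimate (an assumption on our part, not a calculation given in the paper) in which the backscattered speckle field translates across the receiver by the full 16 mm transmitter offset, extending the 25.4 mm (1 in.) receive pupil to a synthesized pupil of roughly

\frac{D_{\mathrm{syn}}}{D_{\mathrm{sub}}} \approx \frac{25.4\ \mathrm{mm} + 16\ \mathrm{mm}}{25.4\ \mathrm{mm}} \approx 1.63,

i.e., roughly a 63% improvement in resolution over a single sub-aperture.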

Fig. 5 An illustration of the front of the multi-transmitter imaging system. The target is sequentially imaged using the Tx locations to capture target and aberration field values. The optical aberration is inserted in front of the 1" lens.

If Tx0 is chosen to be at the origin (xT0 and yT0 are both zero) in describing the transmitter positions, then xT1 = 16 mm, yT1 = 0, xT2 = 0, yT2 = 16 mm, xT3 = 16 mm, and yT3 = 16 mm. The additional transmitter location relative to Fig. 2 results in additional x and y shears, providing calculations redundant to those shown in Eqs. (11)-(13), which are averaged to increase the accuracy of the estimates. Equations (14)-(16) show the implementation of Eqs. (11)-(13) for this particular setup, accounting for the averaging of the redundant measurements and the specific transmitter locations (a short implementation sketch follows the equations).

a_{11} = \frac{\gamma_{01y} + \gamma_{02x} + \gamma_{23y} + \gamma_{13x}}{4\,(16\ \mathrm{mm})},    (14)
a_{02} = \frac{\gamma_{02y} + \gamma_{13y}}{4\,(16\ \mathrm{mm})},    (15)
a_{20} = \frac{\gamma_{01x} + \gamma_{23x}}{4\,(16\ \mathrm{mm})}.    (16)
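The sketch below implements Eqs. (14)-(16) for this four-transmitter layout (the variable names are illustrative): each argument holds the measured x and y tilts of one sheared wavefront difference, identified by its transmitter indices.

BASELINE = 16e-3  # transmitter spacing, in meters

def coefficients_from_tilts(g01, g02, g13, g23, baseline=BASELINE):
    """Average the redundant tilt measurements per Eqs. (14)-(16).

    Each argument is a (gamma_x, gamma_y) pair for the shear named by its
    transmitter indices, with tilts expressed in waves per meter.
    """
    a11 = (g01[1] + g02[0] + g23[1] + g13[0]) / (4.0 * baseline)
    a02 = (g02[1] + g13[1]) / (4.0 * baseline)
    a20 = (g01[0] + g23[0]) / (4.0 * baseline)
    return a11, a02, a20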

Defocus aberration is created by moving the target along the optic axis and measuring its distance from the imaging lens. This distance is fed into an OSLO ray-trace simulation, and the simulated defocus is compared with the value captured using the multi-transmitter estimate. Toric curvature is created by inserting a matched pair of cylindrical lenses in front of the imaging lens and rotating one of the lenses with respect to the other. Again, an OSLO simulation is used to determine the amount of toric curvature added for a known relative rotation between the lenses.
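For a rough, order-of-magnitude cross-check of the defocus levels (the values reported in this work come from OSLO raytraces, not from this expression), a thin-lens estimate of the peak-to-valley defocus across the receive aperture can be sketched as follows; the function and its arguments are illustrative and neglect the aberrator optics and any thick-lens effects.

def defocus_pv_waves(z_obj, f, d_img, aperture_diam, wavelength):
    """Thin-lens estimate of peak-to-valley defocus, in waves, for an object at
    distance z_obj imaged by a lens of focal length f onto a detector at image
    distance d_img: W(r) = (r**2 / 2) * (1/z_obj + 1/d_img - 1/f).
    """
    curvature = 1.0 / z_obj + 1.0 / d_img - 1.0 / f   # units of 1/m; zero at best focus
    r_max = aperture_diam / 2.0
    return abs(curvature) * r_max**2 / (2.0 * wavelength)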

3.2 Processing and Results

Each transmitter sequentially illuminates the target, and the four backscattered fields are captured through digital holography as shown in Fig. 3. Next, the four captured pupils are registered to a common coordinate system and the wavefront differences are found. The orthogonal tilt terms are extracted through the use of an image registration algorithm [7] and the associated MATLAB code [8]. The image registration algorithm gives the shift in pixels between two images, which corresponds to the tilt of the shear between the two pupils. The transmitter shifts are known values found through system calibration; therefore, the coefficients a11, a02, and a20 can be found. The aberrations are corrected by subtracting the estimated error wavefront from the phase of each of the detected fields Ud(x,y).
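A simplified sketch of the tilt-extraction step is given below (integer-pixel precision only; the authors use the subpixel registration method of [7,8], and all names here are illustrative): the focal-plane images formed from two overlapping pupil segments are cross-correlated, and the resulting image shift is proportional to the tilt of the sheared wavefront difference.

import numpy as np

def integer_pixel_shift(img_a, img_b):
    """Estimate the (row, col) shift between two intensity images using
    FFT-based cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Map shifts larger than half the array size back to negative offsets.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, xcorr.shape))

def shear_tilt_pixels(pupil_a, pupil_b):
    """Shift, in focal-plane pixels, between the images formed by two
    overlapping (registered) pupil segments; converting pixels to a tilt in
    waves per meter requires the pupil-plane sampling calibration."""
    img_a = np.abs(np.fft.fftshift(np.fft.fft2(pupil_a)))**2
    img_b = np.abs(np.fft.fftshift(np.fft.fft2(pupil_b)))**2
    return integer_pixel_shift(img_a, img_b)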

The ability to correct target defocus is demonstrated by moving the target through a variety of Z values between 3.5 and 7.5 meters. Single-aperture data taken at best focus (Z = 6.97 m) is shown in Fig. 6(a), the residual single-aperture wavefront error (0.19 waves peak to valley) captured through the current method is shown in Fig. 6(b), and the synthesized, higher-resolution data is shown in Fig. 6(c). Figure 6(d) shows a single-aperture image taken at a distance of Z = 3.87 m, Fig. 6(e) shows the solved wavefront error (5.02 waves peak to valley), and Fig. 6(f) shows the final, synthesized image. The final, synthesized image is larger than the others of the set because of the shorter range. Note that the target is originally severely blurred, to the point of being indiscernible, and that the image has been synthesized without knowledge of the target range or of any other target information.

Fig. 6 Initial incoherent averages of four transmit realizations are in the left column, calculated aberrations for the transmit realizations are in the center column, and the coherently combined transmit realizations with aberrations corrected are in the right column. The first row is at the best focus of the system, the second row corresponds to the object being moved closer, and the third and fourth rows are again at best focus with astigmatism added by rotating a pair of cylindrical lenses by varying amounts.

Figure 7(a) shows the relative defocus curvature found through both OSLO simulations and experiment vs. the target distance Z. Toric curvature is added to the system by rotating a 1000 mm focal length cylindrical lens relative to another −1000 mm focal length cylindrical lens. The two lenses are closely spaced with their curved surfaces facing one another, so as to minimize aberrations when the two are aligned. Figure 6(g) shows an image (target at 6.97 m) taken through a single aperture with a relative rotation of 1.5 degrees. The image is barely perceptible due to the wavefront error (2.22 waves peak to valley) found by the system and shown in Fig. 6(h); however, it is easily corrected, and the resulting synthesized image is shown in Fig. 6(i). A relative rotation of 6 degrees is shown in Figs. 6(j)-6(l) (target at 6.97 m, 8.79 waves peak to valley of wavefront error); however, here the matched pair has been rotated 45 degrees with respect to the optical system as compared with the results shown in Figs. 6(g)-6(i). Figure 7(b) shows the relative toric curvature found through both OSLO simulations and experiment as a function of the rotation between the cylindrical lenses, where the target was placed at 6.97 m.

Fig. 7 Peak-to-valley aberrations found through the experiment and OSLO raytraces for (a) defocus as a function of target distance and (b) toric curvature as a function of rotation within a matched pair of cylindrical lenses.

While the data collection for Fig. 7 was designed to isolate defocus (a) and astigmatism (b), some combination of both was always present, and the algorithm always solves for both defocus and astigmatism. This is evident from the best focus of the system shown in Figs. 6(a)-6(c), where the minimum system astigmatism from the co-aligned cylindrical lenses is visible.

4. Conclusion

Solving directly for the defocus curvature in the captured pupil-plane field is an important step in performing aperture synthesis. It allows targets to be imaged at any range without direct knowledge of the distance to the target. Higher-order system aberrations may also be present, and resolving them would require calculation of higher-order terms of the sheared wavefronts. This could be done by unwrapping the sheared wavefronts, or by using techniques similar to those described in [5]. To avoid incorrect extrapolation of higher-order terms, smaller transmitter offsets may be required.

A method for utilizing the redundant information captured by multiple transmitters to improve the efficiency of multi-transmitter aperture synthesis has been derived. This method allows aberrations, such as defocus and toric curvature, to be found and corrected without any knowledge of the true distance to the target. Furthermore, directly solving for the aberrations improves the efficiency of multi-transmitter image synthesis by lowering the dependency on image sharpening algorithms. The approach detailed would likely be significantly faster than conventional image sharpening algorithms, as those can require numerous iterations, each comparable in computational cost to the single direct solution given here. An experiment has been designed and performed which demonstrates the aberration correction and the resulting aperture synthesis image products for toric and defocus aberrations. The results show that the multi-transmitter aberration correction routine can accurately solve for multiple waves of defocus and toric curvature.

References and links

1. J. C. Marron and R. L. Kendrick, "Distributed aperture active imaging," Proc. SPIE 6550, 65500A (2007).

2. D. J. Rabb, D. F. Jameson, A. J. Stokes, and J. W. Stafford, "Distributed aperture synthesis," Opt. Express 18(10), 10334-10342 (2010).

3. D. J. Rabb, D. F. Jameson, J. W. Stafford, and A. J. Stokes, "Multi-transmitter aperture synthesis," Opt. Express 18(24), 24937-24945 (2010).

4. J. R. Fienup and J. J. Miller, "Aberration correction by maximizing generalized sharpness metrics," J. Opt. Soc. Am. A 20(4), 609-620 (2003).

5. R. A. Hutchin, "Sheared coherent interferometric photography, a technique for lensless imaging," in Digital Image Recovery and Synthesis, P. S. Idell, ed., Proc. SPIE 2029, 161-168 (1993).

6. T. Kreis, Handbook of Holographic Interferometry: Optical and Digital Methods (Wiley, 2005).

7. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, "Efficient subpixel image registration algorithms," Opt. Lett. 33(2), 156-158 (2008).

8. M. Guizar, "Efficient subpixel image registration by cross-correlation," http://www.mathworks.com/matlabcentral/fileexchange/18401-efficient-subpixel-image-registration-by-cross-correlation

