
Optical imaging with phase-coded aperture

Open Access

Abstract

Experimental results are shown for an integrated computational imaging system with a phase-coded aperture. A spatial light modulator works as a phase screen that diffracts light from a point object into a uniformly redundant array (URA). Excellent imaging results are achieved after correlation processing. The system has the same depth of field as a diffraction-limited lens. Potential applications are discussed.

© 2011 Optical Society of America

1. Introduction

Advances in computer speed and image processing make possible optical imaging without conventional lenses. In this paper we report experimental studies of a previously described incoherent imaging system without a focusing lens [1]. In our approach the optical-coded-aperture imager consists of a phase plate followed by a detector array, as shown in Fig. 1. It represents a novel extension of the X-ray coded aperture system [2–4] to optical wavelengths, where we have applied phase retrieval concepts [5, 6] to design a phase plate whose diffraction pattern for a point object is a bandlimited uniformly redundant array (bl-URA). Correlation processing is applied to the intermediate image on the detector array to recover the object. In Sec. 2, a general theory of linear integrated imaging is presented. Section 3 contains a description of the experimental setup; Section 4 has results for the coded-aperture system; and Section 5 includes conclusions and potential applications of this new imager.

Fig. 1 The experimental setup for the coded aperture imaging system: O, Object; BS, Beam Splitter; A, Aperture; SLM, Spatial light modulator; D, Detector array; BP, Blackened metal plate.

2. Linear system theory for integrated imaging

Consider a shift-invariant linear optical imaging system. The image i(x,y) can be expressed as a function of object o(x,y) and point spread function (PSF) h(x,y) in the following convolution form,

$$ i(x,y) = \iint o(\xi,\eta)\, h(x-\xi,\, y-\eta)\, d\xi\, d\eta. \tag{1} $$

Herein, we would like to assert that Eq. (1) can be considered as a general form for optical imaging where h(x,y) can be any realizable function. More specifically, h(x,y) need not be a delta-like function for sharp imaging, as is seen below.
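As a concrete illustration of Eq. (1), the short numerical sketch below convolves an object array with a sampled PSF to produce the intermediate image. The Gaussian stand-in for h(x,y) and the array sizes are illustrative assumptions; in our system h(x,y) is the bl-URA of Eq. (4).

```python
# Minimal numerical sketch of the forward model i = o * h of Eq. (1).
# The Gaussian PSF below is a stand-in for illustration only; in the
# phase-coded-aperture system h is a band-limited URA (Eq. (4)).
import numpy as np
from scipy.signal import fftconvolve

obj = np.zeros((256, 256))
obj[100:140, 110:120] = 1.0                # a simple bar object

y, x = np.mgrid[-32:33, -32:33]            # odd-sized, centered grid
h = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
h /= h.sum()                               # normalize the PSF

image = fftconvolve(obj, h, mode="same")   # intermediate image i(x, y)
```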

Assume there exists a linear shift-invariant operator L{•} which, when applied to h(x,y), yields the following result,

$$ \mathcal{L}\{h(x,y)\} = f_\delta(x,y) + g(x,y), \tag{2} $$
where f_δ(x,y) is a Dirac delta-like function, such as an Airy disk, that is spatially separated from the function g(x,y). We can append a linear digital processing stage by applying the linear operator L{•} to the optical image. The result is
$$ \mathcal{L}\{i(x,y)\} = \iint o(\xi,\eta)\, f_\delta(x-\xi,\, y-\eta)\, d\xi\, d\eta + \iint o(\xi,\eta)\, g(x-\xi,\, y-\eta)\, d\xi\, d\eta. \tag{3} $$

The first term in Eq. (3) is a recovered sharp image, and f_δ(x,y) is the PSF of the overall system, including both optics and image processing.

In the design of an integrated imaging system, an important step is to find a function h(x,y) that (i) is realizable with an optical system and (ii) can be transformed into a delta-like function by some linear operator. In practice, the optical system is subject to further constraints on materials, size, number of elements, etc., so h(x,y) must also be realizable under these constraints.

The linear operator L{•} can take many forms. For example, it can be an identity operator, a differential operator, or a correlation operator.

The simplest example of integrated imaging is a diffraction limited lens with a full circular aperture, where h(x,y) is a delta-like Airy disk, and L{•} is an identity operator. Another example is an integrated system where the image is blurred by lens aberrations and a subsequent linear deconvolution algorithm is applied to the intermediate image to yield a sharp picture.
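For the second example, a minimal sketch of such a linear deconvolution is given below, using a Wiener-style inverse filter. The regularization constant eps and the FFT-based implementation are illustrative choices, not a prescription from this paper.

```python
# Hedged sketch of linear deconvolution as the operator L{.}: a Wiener-style
# inverse filter applied in the Fourier domain. The constant eps regularizes
# frequencies where the aberrated transfer function H is small.
import numpy as np

def wiener_deconvolve(blurred, psf, eps=1e-3):
    """Recover a sharp image from a blurred one, given an odd-sized, centered PSF."""
    # Embed the PSF in a full-size array, then roll its center to sample (0, 0)
    # so that the FFT sees the kernel with the correct circular origin.
    kernel = np.zeros_like(blurred)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    I = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(I * np.conj(H) / (np.abs(H) ** 2 + eps)))
```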

In an earlier paper [1] we presented one more example to expand the linear imaging system design possibilities, in which h(x,y) is a bl-URA, and linear operator L{•} is a correlation operator. The main purpose of this article is to present experimental results of such a system which we call a phase-coded-aperture imaging system.

Briefly, the phase-coded-aperture optical imager consists of a phase plate followed by a detector array. For a point source, the diffraction pattern caused by the phase plate is the PSF of the optical system. It is a bl-URA calculated in the following manner:

$$ h(x,y) = t(x,y) * b(x,y), \tag{4} $$
where * denotes convolution, t(x,y) is a URA, and b(x,y) is a bandlimited function. A phase retrieval method is used to calculate the phase function that yields such a bl-URA (a sketch of one such iteration is given after Eq. (6)). The linear operator is defined as
$$ \mathcal{L}\{h(x,y)\} = h(x,y) \otimes t_R(x,y), \tag{5} $$
where ⊗ is a correlation operator and t_R(x,y) is a repeated URA with its mean removed, i.e.,
$$ t_R(x,y) = \iint \left[ t(x-\xi,\, y-\eta) - \bar{t}\, \right] \mathrm{comb}(\xi/D_x,\, \eta/D_y)\, d\xi\, d\eta, \tag{6} $$
in which D_x and D_y are the sizes of the URA in the x and y directions, respectively, and t̄ is the mean value of the URA t(x,y).
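The phase retrieval step mentioned after Eq. (4) can be sketched as a Gerchberg–Saxton-style iteration between the phase-screen plane and the detector plane. The angular-spectrum propagator, random initial phase, and iteration count below are illustrative assumptions; the paper's Fresnel-domain algorithm is not reproduced in detail here. The wavelength, propagation distance, and sampling pitch echo the parameters of Sec. 3.

```python
# Sketch of Fresnel-domain phase retrieval: iterate between the phase screen
# (unit amplitude, unknown phase) and the detector plane (target bl-URA
# amplitude), using an angular-spectrum propagator. Illustrative only; not
# the authors' exact algorithm.
import numpy as np

def propagate(field, wavelength, z, dx):
    """Angular-spectrum propagation of a square sampled field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    kz = 2 * np.pi * np.sqrt(arg)                       # evanescent waves clamped to 0
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def retrieve_phase(target_amp, wavelength=633e-9, z=0.2044, dx=8e-6, iters=200):
    """Find a pure-phase screen whose diffraction pattern approximates target_amp."""
    rng = np.random.default_rng(0)
    phase = 2 * np.pi * rng.random(target_amp.shape)
    for _ in range(iters):
        det = propagate(np.exp(1j * phase), wavelength, z, dx)
        det = target_amp * np.exp(1j * np.angle(det))   # impose bl-URA amplitude
        back = propagate(det, wavelength, -z, dx)
        phase = np.angle(back)                          # keep unit amplitude at the screen
    return phase
```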

Combining Eqs. (4)–(6) yields the following result,

$$ \mathcal{L}\{h(x,y)\} = C\, \mathrm{comb}(x/D_x,\, y/D_y) * \Lambda(x/\Delta x,\, y/\Delta y) * b(x,y), \tag{7} $$
where Λ(•) is the triangle function, Λ(x) = max{1 − |x|, 0}; C is a constant whose exact value is determined by the URA; and Δx, Δy are the pixel sizes of the URA in the x and y directions, respectively. Equation (7) describes an array of delta-like functions of the form
$$ f_\delta(x,y) = \Lambda(x/\Delta x,\, y/\Delta y) * b(x,y), \tag{8} $$
in which a normalization constant is omitted.

So by correlating i(x,y) with t_R(x,y), we recover a sharp image whose overall PSF is the f_δ(x,y) of Eq. (8).
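A compact sketch of this recovery step follows: build t_R by tiling the mean-removed URA (Eq. (6)), cross-correlate it with the intermediate image, and crop the central period. The 3×3 tiling and the FFT-based correlation are implementation choices, not requirements of the method.

```python
# Sketch of the recovery operator of Eq. (5): correlate the intermediate image
# with the tiled, mean-removed URA and keep the central period (cf. Fig. 2).
import numpy as np
from scipy.signal import fftconvolve

def recover(intermediate, ura, reps=3):
    """`ura` is the binary URA resampled to the scale of the bl-URA on the detector."""
    t_r = np.tile(ura - ura.mean(), (reps, reps))    # t_R of Eq. (6)
    # Cross-correlation = convolution with the kernel flipped in both axes.
    corr = fftconvolve(intermediate, t_r[::-1, ::-1], mode="same")
    dy, dx = ura.shape
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    return corr[cy - dy // 2 : cy + dy // 2, cx - dx // 2 : cx + dx // 2]
```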

3. Experimental setup

The experimental setup is shown in Fig. 1. The object is illuminated by spatially incoherent light with a wavelength of 633nm. (This is realized by a He-Ne beam passing through a rotating diffuser.) The incoming beam passes through a beam splitter and is then reflected by a spatial light modulator phase screen (Holoeye HEO1080P). The phase profile of the reflected beam is modified by an extra phase delay from 0 to 2π, as shown in Fig. 1, where black means 0 and white means 2π. This phase-modulated wave is then reflected by the beam splitter and received by a detector array. The phase profile is calculated using a Fresnel-domain phase retrieval method. The parameters of the setup are as follows: the spatial light modulator has a pixel size of 8μm and a fill factor of 85%; the detector array has a pixel size of 13μm and a pixel count of 1024×1024; the distance between the object plane and the phase screen is 1275mm; the equivalent free-space distance between the phase screen and the detector array is 204.4mm; and the square aperture in front of the phase screen has dimensions of 5.5mm × 5.5mm. In this paper, as a proof of principle of our theoretical concept [1], a reflective spatial light modulator and beam-splitter configuration has been employed. In an actual application, a transmissive phase plate could be fabricated to improve the optical system. This is planned for our next-generation system.

4. Experimental results

The linear operator processing steps are illustrated in Fig. 2. A point source located at a distance of 1275mm is imaged by the coded aperture system shown in Fig. 1. The intermediate image is cross correlated with a repeated URA and the central portion of the correlation picture is the recovery of the point source image.

Fig. 2 Illustration of the correlation image processing using a point object located at a distance of 1275mm. (a) the intermediate image or bl-URA at the detector, D in Fig. 1, (also refer to Fig. 6a for a better view); (b) the repeated URA pattern; (c) the result of image cross-correlation between (a) and (b); (d) the center section of (c) or a point object recovery (also refer to Fig. 3a).

For a point object, the cross correlation between the intermediate image and the repeated URA yields a periodically distributed point-like pattern, shown in Fig. 2c. The period of the pattern equals the size of the URA. Similarly, for a general object the correlation result is a repeated object pattern with the same period. If the object (in image space) is larger than the square cell in Fig. 2c, the periodic copies overlap; this constrains the maximum object size (the field of view) that can be faithfully imaged. Generally the linear size of the URA is chosen to be about half the detector size D, giving an angular field of view θ = D/(2L), where L is the distance between the phase plate and the detector. For our setup, D ≈ 1024 × 13μm ≈ 13.3mm and L = 204.4mm, so θ ≈ 0.033 rad, or about 1.9°.

Figure 2d shows the overall PSF of the coded aperture imaging system, including optics and digital processing. To achieve a good image recovery, the URA used in correlation processing must have the same orientation and size as the bl-URA at the detector plane; the recovered image quality deteriorates quickly for even a small mismatch. In this experiment, when the object is located at a distance of 1275mm, the bl-URA has a size of 6.3mm. Figure 3 shows the overall PSFs obtained when the URA size is mismatched to the bl-URA at the detector. When the URA size used for correlation processing is changed to 6.2mm, significant artifacts appear in the combined PSF after digital processing, which degrades the recovery, especially for extended objects. Sometimes resampling and interpolation of the URA are required before correlation processing; a study of the effect of interpolation on the artifacts is beyond the scope of this paper.

Fig. 3 Image recovery result with a point source located at a distance of 1275mm using URAs of different sizes in correlation processing. URA size is (a) 6.3mm; (b) 6.2mm and (c) 6mm.

Figure 4 shows the imaging results for a letter object located at a distance of 1275mm. The intermediate image in Fig. 4a is processed linearly using the same correlation method, and the recovery is shown in Fig. 4b. Figure 4a illustrates a general feature of the intermediate image for an extended object: it has an overall envelope that is bright in the center and falls slowly to zero at the edge, with small-scale intensity variations that reflect the details of the object. One can also see regions of the CCD where the detector response is low, producing a large intensity drop relative to the envelope (some of these specks are indicated by the arrows). Despite this, a good recovery is still achieved, as shown in Fig. 4b. In the recovery, one can simply raise these low values so that the envelope of the intermediate image is smooth, since the exact values in these small regions have little effect on the result; a sketch of such a fix is given below. By comparison, in a conventional lens system a small region of dead pixels would cause a complete loss of the image in that section.
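The sketch below illustrates this fix: low-response pixels are detected against a locally smoothed version of the intermediate image and replaced by the smoothed values, so the envelope stays smooth. The median filter, its window size, and the threshold are illustrative choices, not the exact procedure used for Fig. 4.

```python
# Hedged sketch of the low-response-pixel fix described above. Pixels that
# fall far below a local envelope estimate are replaced by that estimate;
# their exact values have little effect on the correlation recovery.
import numpy as np
from scipy.ndimage import median_filter

def patch_low_response(img, rel_threshold=0.3, size=9):
    envelope = median_filter(img, size=size)   # local estimate of the smooth envelope
    bad = img < rel_threshold * envelope       # specks well below the envelope
    out = img.copy()
    out[bad] = envelope[bad]
    return out
```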

Fig. 4 Experimental result with a letter object located at a distance of 1275mm. (a) intermediate image (the arrows indicate regions where the CCD pixel response is low; not all of them are labeled); (b) image recovery result by correlation processing. The linear size of the recovery is about half the size of the intermediate image.

Figure 5 shows the recovery when the object distance is varied over a range from 1275mm to 1000mm, corresponding to defocus amounts of up to 1.6λ. The same repeated URA pattern is used for correlation processing of all the intermediate images at the different distances. The depth of field of the coded aperture system is similar to that of a diffraction-limited lens, i.e., images within ±λ/4 of defocus are of good quality. Interestingly, the defocused image degradation takes a different form than with a conventional diffraction-limited lens: for the coded aperture system, large defocus produces repetitions of the object over the whole scene, and the intensity of these repetitions grows as the defocus increases.

Fig. 5 Image recovery result for the object located at different distances. The same URA pattern as shown in Fig. 2 is used for correlation processing. The object is located at a distance of (a) 1275mm; (b) 1225mm; (c) 1175mm; (d) 1100mm; (e) 1050mm; (f) 1000mm. The corresponding defocus amounts are 0, λ/4, λ/2, λ, 1.3λ and 1.6λ, respectively.

In order to understand the cause of the artifacts arising at different object distances, we show in Fig. 6 the PSFs of the optical system for two object distances: 1275mm and 1100mm; the distance of 1100mm corresponds to a defocus of λ. Two differences are noticeable: (i) the fine details of the PSFs differ; (ii) the full sizes of the PSFs differ. The size of the PSF at the focused distance is 6.3mm, while the size of the defocused PSF is 6.42mm. (The sizes of the PSFs are found by correlating them with repeated URAs of different scales; the size of the URA that yields the best recovered point object is taken as the size of the PSF of the optical system.)
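The size-finding procedure in the parenthetical remark can be sketched as a sweep over trial URA scales, as below. The resampling via scipy.ndimage.zoom and the peak-to-mean sharpness metric are illustrative assumptions; the paper does not specify the exact sharpness criterion.

```python
# Sketch of the PSF-size estimate: correlate the measured PSF with repeated
# URAs of different scales and keep the scale giving the sharpest point
# recovery. The sharpness metric here is an illustrative choice.
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def estimate_psf_size(psf_img, ura, base_size_mm, scales=np.linspace(0.95, 1.05, 21)):
    best_scale, best_sharpness = 1.0, -np.inf
    for s in scales:
        t = zoom(ura.astype(float), s, order=1)          # URA resampled to trial scale
        t_r = np.tile(t - t.mean(), (3, 3))
        corr = fftconvolve(psf_img, t_r[::-1, ::-1], mode="same")
        sharpness = corr.max() / np.abs(corr).mean()     # peak-to-mean ratio
        if sharpness > best_sharpness:
            best_scale, best_sharpness = s, sharpness
    return base_size_mm * best_scale
```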

Fig. 6 The intermediate images for a point object located at (a) 1275mm and (b) 1100mm.

A much better recovery can be obtained for a defocused object if a correctly sized URA is used in the recovery. This is shown in Fig. 7, where a URA with a size of 6.42mm is used to recover the object located at the defocused distance of 1100mm. We see a significant improvement in image quality.

Fig. 7 Image recovery result for an object located at 1100mm using repeated URA patterns of different scales. The size of the URA is (a) 6.3mm, (b) 6.42mm.

5. Concluding remarks

In the literature there are many efforts to extend the X-ray coded aperture system to optical imaging [7, 8]. In our approach, based on linear system theory, we have described an integrated computational imaging system in two parts: the optical system followed by a linear operator. The linear operator transforms the PSF of the optical system into a delta-like function, Eq. (2). For the coded-aperture example of Eq. (5), the linear operator is a correlation. In correlation processing it is important to set the size of the URA equal to that of the PSF of the optical imager. Experimental results are presented to demonstrate the validity of the system concept from our earlier publication [1]. In our second-generation experiments, we plan to demonstrate considerably improved imaging capability using a transmissive phase plate fabricated with photolithographic techniques. Good imaging is obtained even with a CCD with non-uniform pixel response. The depth of field of the coded aperture system is the same as that of a diffraction-limited lens, i.e., ±λ/4 of defocus is acceptable.

For objects closer than the focused plane, the PSF array of the coded aperture system becomes larger; while not shown in this paper, the PSF is smaller if the object is farther away than the plane of focus. So one can distinguish the sign of defocus in a coded aperture system. Furthermore, the defocus amount can be found accurately by correlating the intermediate image (of a point or an extended object) with repeated URAs of different scales. So besides imaging, the coded aperture system can be used for ranging.
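As a sketch of the ranging idea, the best-matching URA scale from a sweep such as estimate_psf_size above can be mapped to object distance through a calibration curve. The two calibration points below come from Sec. 4 (a 6.3mm bl-URA at 1275mm and 6.42mm at 1100mm); linear interpolation between them is an illustrative assumption, not a calibration performed in the paper.

```python
# Ranging sketch: map a measured bl-URA size to object distance using the two
# calibration points reported in Sec. 4. Linear interpolation is assumed here
# for illustration; a denser calibration would be used in practice.
import numpy as np

def distance_from_psf_size(psf_size_mm):
    sizes = np.array([6.30, 6.42])       # measured bl-URA sizes (mm)
    dists = np.array([1275.0, 1100.0])   # corresponding object distances (mm)
    return float(np.interp(psf_size_mm, sizes, dists))
```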

Since there is no focusing element in the system, a bright laser beam in the object space will not form a point at the detector plane; instead, the light is diffracted over a large area of the detector. This provides damage protection for the detector. Unlike infrared imaging with lenses, there is no specular light reflected from the detector back into the object field with this coded aperture system, which makes it useful in certain applications.

Acknowledgments

This research is supported in part by the Army Research Office.

References and links

1. W. Chi and N. George, “Phase-coded aperture for optical imaging,” Opt. Commun. 282, 2110–2117 (2009). [CrossRef]  

2. R. H. Dicke, “Scatter-hole cameras for X-rays and Gamma rays,” Astrophys. J. 153, L101 (1968). [CrossRef]  

3. E. E. Fenimore and T. M. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17, 337–347 (1978). [CrossRef] [PubMed]

4. R. G. Simpson and H. H. Barrett, “Coded aperture imaging,” in Imaging in Diagnostic Medicine, S. Nudelman, ed. (Plenum, 1980).

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef] [PubMed]

6. J. C. Dainty and J. R. Fienup, “Phase retrieval and image reconstruction for astronomy,” in Image Recovery: Theory and Application, H. Stark, ed. (Academic, 1987).

7. D. P. Casasent and T. Clark, eds., “Adaptive Coded Aperture Imaging and Non-imaging Sensors,” Proc. SPIE 6714 (2007).

8. D. P. Casasent and S. Rogers, eds., “Adaptive Coded Aperture Imaging and Non-imaging Sensors II,” Proc. SPIE 7096 (2008).
