
Design architectures for optically multiplexed imaging

Open Access

Abstract

Optically multiplexed imaging is the process by which multiple images are overlaid on a single image surface. Uniquely encoding the discrete images allows scene reconstruction from multiplexed images via post-processing. We describe a class of optical systems that can achieve high density image multiplexing through a novel division of aperture technique. Fundamental design considerations and performance attributes for this sensor architecture are discussed. A number of spatial and temporal encoding methods are presented, including point spread function engineering, amplitude modulation, and image shifting. Results from a prototype five-channel sensor are presented using three different encoding methods in a sparse-scene star tracking demonstration. A six-channel optically multiplexed prototype sensor is used to reconstruct imagery from information-rich dense scenes through dynamic image shifting.

© 2015 Optical Society of America

1. Introduction

In many imaging systems, the detector undersamples the optical point spread function, producing an image resolution that is limited by the number of pixels on the focal plane array (FPA). This introduces a fundamental tradeoff between the field of view (FOV) of a sensor and its sampling resolution in object space. Methods have been developed to create wide FOV systems with enhanced pixel resolution by stitching together images from multiple sensors, by scanning a single sensor across the scene, and through super-resolution techniques that combine a series of images with sub-pixel shifts. Like the vast majority of sensors used today, these systems are based on the principle that at any instant in time each pixel views only a single point in object space. Therefore, increasing the spatial resolution requires either more pixels or more time.

Optically multiplexed imaging is based on the principle that a single pixel can observe multiple object points simultaneously. This has been investigated in design architectures that use multiple lenses to form images on a single FPA [1], a cascade of beam-splitting elements to divert multiple fields of view into a single lens [2–5], and an interleaved array of sub-aperture micro-prisms placed in front of a single lens [6,7]. In this paper we present results using a new optical design architecture based on a division of aperture technique to divide the pupil area of a single lens into a number of independent imaging channels. This method, first introduced in [12], offers advantages over prior approaches through its flexibility to individually direct and encode the optical channels, and it yields a significant volume advantage in systems with a high degree of multiplexing.

The image formed by an optically multiplexed system is the superposition of multiple images formed by discrete imaging channels. A scene reconstruction process is required to separate the individual channel images and reconstruct an estimate of the scene. One broad category of disambiguation strategies is to use temporal encoding to construct a fully determined system of equations from a number of uniquely encoded samples of the scene. This strategy is useful for dense information-rich scenes and has been previously achieved by using shutters to attenuate individual imaging channels [4] or by using a single moving element to shift one of two channels between samples [2]. Temporal encoding methods explored through our research include an active shutter-based method, an active method of rapid and precise image shifting [12], and a passive method that uses image rotation to observe differential image shifts in a moving scene. Another broad category of disambiguation strategies has applications in sparse scenes. These spatial encoding methods use point spread function (PSF) engineering to uniquely encode each channel image. Spatial image encoding provides the advantage of disambiguating a multi-layered image from a single measurement of the scene, which addresses sparse-scene applications such as point source detection or star field mapping.

In the following sections of this paper, we examine the class of optically multiplexed imaging systems that combine multiple imaging channels into the entrance pupil of a single imaging sub-system. In Section 2, we describe the two fundamental methods of pupil division multiplexing, division of amplitude and division of aperture, and we discuss the relative performance implications of each method. Section 3 establishes the fundamental optical design considerations for the division of aperture method, including illumination uniformity and its relationship to pupil sampling, as well as reflective and refractive multiplexing assemblies. In Section 4, methods of image encoding and decoding are described. In Section 5, we describe our prototype sensors: a pair of 5-channel designs that employ three different encoding mechanisms (spatial image encoding, passive temporal encoding via image rotation and scene motion, and active temporal encoding via channel attenuation), and a six-channel prototype that uses piezoelectric actuators for rapid and precise image shifting. Section 6 presents experimental results for the prototypes in sparse-scene and dense-scene applications.

2. System architecture and performance attributes

2.1 Multiplexing architectures

Consider an optically multiplexed imaging system composed of a single image forming element, which we will refer to as the parent lens, and a multiplexing element that directs multiple discrete imaging channels into the aperture of this lens, which we will refer to as the multiplexing assembly. Through the design of the multiplexing assembly, these channels can be directed to observe overlapping, adjacent, or separated fields of view. Every optical system has a finite limit on the amount of power it can collect. This power (Φ) may be expressed as the product of the transmission between the object and image planes (τ), the average radiance of the scene (L), the area of the collection aperture (A), and the solid angle of the field of view (Ω).

$$\Phi = \tau L A \Omega \tag{1}$$

The product AΩ is known as the throughput or étendue of the system, and is a conserved quantity that may be expressed as various combinations of the system’s focal length, aperture size, F/#, field of view, or detector area. The basic principle of an optically multiplexed imaging system is to increase the field of view or pixel count of an image by overlapping multiple images on the detector plane, wherein the total power is the sum of the power from each multiplexed channel. Under the condition of uniform scene radiance, the maximum power collected by the optical system becomes a conserved quantity, which implies that multiplexing may only be achieved through a tradeoff between the terms in Eq. (1). Thus, there are fundamentally two ways to achieve image multiplexing: division of amplitude and division of aperture, which are illustrated in Fig. 1.

Fig. 1 Pupil division strategies. (a) division of amplitude using a cascade of beam splitters, (b) division of aperture using an array of prisms.

We define division of amplitude as the multiplexing technique that increases the solid angle of the FOV by directing different portions of the pupil’s transmission function to view different regions of object space. By Eq. (2) we see that the number of additional FOV channels (N) used to increase the solid angle of the system is directly related to the transmission loss of each channel, assuming uniform division. Dividing the pupil transmission may be achieved by splitting the polychromatic signal intensity between channels, splitting it spectrally so that each channel observes a different wavelength band, or by splitting it polarimetrically such that a different polarization state is observed in each channel.

$$\Phi_{\mathrm{total}} = \left(\frac{\tau}{N}\right) L\,A\,(N\Omega) \tag{2}$$

A multiplexed imager based on division of amplitude might be designed using a series of cascaded beam splitters and mirrors or with birefringent prisms. This method holds the advantage that properly sized and coated elements can produce multiplexing with a high degree of irradiance uniformity between the channels and no vignetting. Additionally, each channel may retain its full aperture area and will therefore have the same F/# and optical resolution as the parent lens. However, a drawback of this architecture is that the multiplexing elements become very large. Not only must each element accommodate the full pupil of the parent lens, but the elements must also grow progressively larger as they move further away from the entrance pupil to avoid vignetting as ray bundles diverge across the FOV.

The multiplexing architecture we define as division of aperture is the primary focus of this paper. In this technique the field of view solid angle is extended by a factor of N by dividing the parent lens’ pupil area and redirecting each channel to different regions of object space as described in Eq. (3).

$$\Phi_{\mathrm{total}} = \tau\,L\left(\frac{A}{N}\right)(N\Omega) \tag{3}$$

Division of aperture holds a distinct volume advantage over division of pupil transmission because the entire multiplexing assembly may be as small as the pupil of the parent lens, and any additional splitting, folding, or encoding elements need only accommodate a sub-pupil region. A potential compromise is that the smaller aperture area (A_N) increases the channel F-number (F/#_N), causing a loss of optical resolution. Some degree of resolution loss may be mitigated with an interleaved multiplexing element that directs a spatially distributed sampling of the pupil to each channel [7]. However, this implementation presents the challenge of properly phasing the elements within each channel, and if secondary folding or encoding elements are required much of the volume advantage would be lost.

2.2 Multiplexed system performance attributes

An early decision in the design process is the selection of the number of channels to be multiplexed. Increasing the number of channels might increase the field of view and the number of pixels recovered in the final image, but there are also consequences related to image quality and design complexity. Here we describe a few of the basic effects pertaining to division of aperture architectures.

2.2.1 Aperture and field of view

Beginning with first-order optical design parameters, we start with the assumption that each multiplexed channel shares the same parent lens. Thus, the effective focal length of each channel (fl_N) will be equivalent to the parent lens focal length (fl_o):

$$fl_N = fl_o. \tag{4}$$

In a division of aperture system the aperture area of the parent lens (A_o) is divided equally among the N channels,

$$A_N = \frac{A_o}{N}, \tag{5}$$

which can be used to approximate the effective channel aperture diameter (d_N) and effective channel F-number (F/#_N) as

$$d_N \approx \frac{d_o}{\sqrt{N}}, \tag{6}$$

and

$$F/\#_N \approx \sqrt{N}\,F/\#_o. \tag{7}$$

The fields of view of the individual imaging channels may be arranged in any configuration in object space to view either continuous or discontinuous regions of the scene. Each channel will have a FOV equal to the FOV of the parent lens,

$$\theta_{xN,yN} = \theta_{xo,yo} \approx 2\tan^{-1}\!\left(\frac{n_{x,y}\,d_p}{2\,fl_o}\right), \tag{8}$$

where θ_xo,yo are the full horizontal and vertical FOV of the parent lens, d_p is the width of each pixel in the focal plane, and n_x and n_y are the numbers of horizontal and vertical pixels in the camera, respectively. The solid angle of the total field of view is directly related to the number of multiplexed channels.
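As a concrete check of Eqs. (4)–(8), the Python sketch below computes these first-order channel parameters from parent-lens and camera values. It is a minimal illustration of ours, not design software: the function and argument names are invented, and the example assumes the visible prototype of Section 5 with a 5.5 μm pixel pitch and a 2048 × 2048 format for the camera.

```python
import math

def channel_first_order(fl_o_mm, d_o_mm, n_channels, pixel_pitch_um, nx, ny):
    """First-order parameters of one multiplexed channel, Eqs. (4)-(8)."""
    fl_N = fl_o_mm                                         # Eq. (4): focal length is shared
    A_N = math.pi * (d_o_mm / 2.0) ** 2 / n_channels       # Eq. (5): equal pupil-area split
    d_N = d_o_mm / math.sqrt(n_channels)                   # Eq. (6): effective sub-aperture diameter
    f_num_N = math.sqrt(n_channels) * (fl_o_mm / d_o_mm)   # Eq. (7): F/# grows as sqrt(N)
    # Eq. (8): each channel retains the full FOV of the parent lens
    fov = tuple(2.0 * math.degrees(math.atan(n * pixel_pitch_um * 1e-3 / (2.0 * fl_o_mm)))
                for n in (nx, ny))
    return {"fl_N_mm": fl_N, "A_N_mm2": A_N, "d_N_mm": d_N,
            "f_num_N": f_num_N, "fov_deg": fov}

# Example: a 200 mm parent lens at F/4 with N = 6 channels gives F/9.8 and a
# 3.2 deg x 3.2 deg channel FOV, consistent with the prototype in Section 5.
print(channel_first_order(200.0, 50.0, 6, 5.5, 2048, 2048))
```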

2.2.2 Resolution

Whether or not multiplexing increases the resolution of the disambiguated image depends on whether the resolution is limited by the detector sampling or by the optical point spread function. In the under-sampled detector-limited regime the resolution of the multiplexed system is dictated by the number of pixels sampling the object space, Mpix, which increases directly with the number of multiplexed channels as shown in Eq. (9).

$$M_{\mathrm{pix},N} = N\,n_x n_y. \tag{9}$$

With optically-limited sampling the resolution can be described by the sensor’s space-bandwidth product, M_opt, which is defined as the solid angle of the field of view divided by the solid angle of the diffraction-limited Airy disk:

$$M_{\mathrm{opt},N} = \frac{N\,\theta_{xo}\theta_{yo}}{\pi\left(\frac{1.22\lambda}{d_N}\right)^{2}} \approx M_{\mathrm{opt},o}. \tag{10}$$

Thus, the resolution of the multiplexed system ranges from the resolution of the parent system (when optically limited) up to N times the pixel count of the camera (when detector limited). In the optically limited case multiplexing does not increase resolution; however, depending on the application, the multiplexed system may benefit from a redistribution of the resolution elements via a discontinuous FOV or an extreme FOV aspect ratio. In the detector-limited case, the resolution gain of the multiplexed system may even exceed a factor of N if the multiplexing assembly provides precise dithering that enables super-resolved sampling as demonstrated in [12].

Examination of the detector and optical cutoff frequencies of the modulation transfer function (MTF) may be used to determine if the system is detector limited or optically limited. For square pixels, the detector MTF curve is a sinc function for which we define a cut-off frequency at its first zero

$$MTF_{\mathrm{cut,det}} = \frac{1}{d_p}, \tag{11}$$
which is equal to twice the Nyquist frequency. The optical cutoff frequency for incoherent illumination of a circular pupil occurs at

$$MTF_{\mathrm{cut,opt}} = \frac{1}{\lambda\,F/\#_N}. \tag{12}$$

To first order, taking the ratio of Eqs. (11) and (12) provides an expression for the optical blur size relative to the detector size and can be used to assess whether the system is detector or optically limited [8]. A value greater than 2 implies that the optics MTF drops to zero at or below the Nyquist frequency, thereby defining an optically limited system. A multiplexed system with a ratio less than 2 can obtain the full factor of N increase in system resolution relative to the number of pixels in the camera. Note that the N^(1/2) term in Eq. (13) applies specifically to division of aperture systems and is absent in division of amplitude systems, where each channel retains the full pupil area.

$$\frac{\lambda\,N^{1/2}\,F/\#_o}{d_p} \ge 2, \quad \text{optically limited} \tag{13}$$
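The regime test of Eq. (13) reduces to a one-line calculation. The helper below is a hypothetical sketch of ours using the quantities defined above; the example values correspond to the visible prototype of Section 5 at a nominal 0.55 μm wavelength (our assumption).

```python
import math

def sampling_regime(wavelength_um, f_num_parent, n_channels, pixel_pitch_um):
    """Evaluate Eq. (13): q = lambda * sqrt(N) * F/#_o / d_p.

    q >= 2 -> optically limited (no resolution gain from multiplexing);
    q <  2 -> detector limited (up to an N-fold pixel-count gain).
    """
    q = wavelength_um * math.sqrt(n_channels) * f_num_parent / pixel_pitch_um
    return q, "optically limited" if q >= 2.0 else "detector limited"

# 0.55 um light, F/4 parent lens, N = 6, 5.5 um pixels -> q ~ 0.98, detector limited
print(sampling_regime(0.55, 4.0, 6, 5.5))
```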

2.2.3 SNR and dynamic range

The power in each channel image is inversely related to the number of multiplexed channels, as described in Eq. (3). Translating this effect into a channel signal-to-noise ratio requires knowledge of the encoding method and the scene conditions. In some cases the background noise in each channel will be the sum of the background in every channel. However, this depends on whether the scene is signal or background limited, whether it is densely populated or sparse, and whether the encoding method involves periodic attenuation of the channels. Further, the signal level is affected by the relationship between the pixel size and the point spread function, which may or may not be encoded as described in Section 4. When selecting the number of multiplexed channels these effects should be evaluated in terms of the specific sensing application.

In the case where each channel simultaneously views multiple object points (i.e. encoding by means other than channel attenuation) the well-depth of the pixel is shared between N channels. In sparse signal-limited applications such as astronomical observation the power on each pixel may result predominantly from a single object source in which case every channel may retain the full dynamic range and bit-depth of the camera. Conversely, when observing rich scenes or in background limited situations the dynamic range and bit-depth is shared between the channels. In the limiting case where the flux on each pixel is uniformly distributed between the channels the resulting dynamic range (DRN) and bit-depth (BN) of the disambiguated image may be computed by Eqs. (14) and (15):

$$DR_N = \frac{DR_o}{N}, \tag{14}$$
and
$$B_N = B_o - \log_2 N, \tag{15}$$
where DR_o and B_o are the dynamic range and bit depth of the camera used for image collection. This provides a simple approximation: each doubling of the number of multiplexed channels reduces the camera output by 1 bit.
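The worst-case budget of Eqs. (14) and (15) is easily tabulated. The sketch below (names ours) assumes the limiting case described above, with flux uniformly distributed between the channels.

```python
import math

def channel_dynamic_range(dr_o, bits_o, n_channels):
    """Worst-case per-channel dynamic range and bit depth, Eqs. (14)-(15)."""
    return dr_o / n_channels, bits_o - math.log2(n_channels)

# Each doubling of the channel count costs one bit: a 12-bit camera with
# N = 4 uniformly loaded channels retains 10 bits per channel.
print(channel_dynamic_range(4096, 12, 4))   # (1024.0, 10.0)
```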

3. Optical design considerations

We have identified a number of optical design considerations specific to optical systems that use division of aperture as the multiplexing architecture. In this section we describe issues related to the illumination uniformity of the multiplexed image that are influenced by the design of the multiplexing assembly. These include its location relative to the entrance pupil, pupil aberrations, and vignetting. Methods of dividing the pupil by refractive and reflective multiplexing assemblies are also described. A number of spatial and temporal encoding schemes are discussed along with methods to incorporate them into the optical design.

3.1 Multiplexed image illumination uniformity

Division of aperture presents the challenge of achieving a high degree of illumination uniformity across the FOV of each channel. In addition to the well-known effects of image distortion and cos⁴ roll-off, the uniformity of each channel’s image is degraded by non-uniform sampling by the multiplexing assembly. Figure 2 illustrates three fundamental configurations for the optical system with a notional 9-channel multiplexing assembly. In panel (a) the multiplexing assembly serves as the aperture stop for the parent lens. In principle, this configuration can achieve the best image channel illumination uniformity provided that the sampling elements are sized with an equal area and the multiplexing assembly is sufficiently thin to fully divide the pupil near the plane of the aperture stop. In panel (b) of Fig. 2 the multiplexing assembly is positioned at a remote location with respect to the aperture stop. The illumination uniformity of each channel is degraded because the projection of ray bundles onto the multiplexing assembly varies across the FOV. The shift of the center of each ray bundle with respect to the center of the multiplexing assembly (δ) may be expressed as

$$\delta = z\tan\theta, \tag{16}$$
where z is the distance from the entrance pupil to the multiplexing assembly and θ is the field angle. In this configuration it is important to keep δ small with respect to the size of the sub-pupil multiplexing elements. As δ increases, differential vignetting is observed across the fields of view of the individual channels.
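Equation (16) makes this placement tolerance easy to budget. In the sketch below the distance, field angle, and facet size are illustrative numbers of ours, not values from any prototype.

```python
import math

def beam_walk_mm(z_mm, field_angle_deg):
    """Eq. (16): lateral walk of a field ray bundle on a remote MA."""
    return z_mm * math.tan(math.radians(field_angle_deg))

# An MA 30 mm in front of the entrance pupil sees ~2.6 mm of beam walk at a
# 5 deg field angle -- already significant against, say, a 10 mm facet.
print(beam_walk_mm(30.0, 5.0))
```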

Fig. 2 Configurations for division of aperture systems and their beam projections on the multiplexing assembly (MA). (a) the MA serves as the aperture stop, (b) a remotely located MA, and (c) a design that projects the entrance pupil to the MA with aberration of the pupil.

A convenient layout uses an optical design that projects the entrance pupil onto the multiplexing assembly. This is shown notionally with a reimaging design in panel (c) of Fig. 2. Pupil aberrations become a consideration in this layout because they can distort the shape of the entrance pupil as the field angle increases [9]. The result of excessive pupil aberration is multiplexed-image non-uniformity similar to channel-dependent vignetting. If necessary, it is possible to reduce pupil aberration in an optical design by limiting higher-order aberrations induced by the elements [10]. However, this limits the ability to balance low order aberrations with higher orders, which may require additional optical elements to achieve satisfactory image quality.

3.2 Multiplexing assembly design

The multiplexing assembly (MA) may consist of an array of prisms or mirrors to direct multiple fields of view into the parent lens. The selection of the MA architecture has implications for a number of system properties including the packaging volume and encoding uniformity. A refractive MA has the benefit of compactness, but introduces dispersion and anamorphic distortion into the image. A reflective assembly will typically occupy a larger volume, but will not degrade or distort the image formed by the parent lens. Further considerations include the effects of image rotation caused by the MA layout.

Placement of the MA should be as close as possible to the aperture stop or a pupil image to achieve good illumination uniformity. Further, if planar surfaces are used the MA is ideally located in a collimated space where an angular deviation of rays will not induce decentered aberrations into the channels. For an infinite-conjugate lens an obvious location for the MA is in front of the parent lens, which can expand its FOV beyond the lens’ aberration limit. Finite-conjugate lenses can use an internal MA placed in a collimated space within the lens assembly.

3.2.1 Refractive multiplexing assemblies

A prism-based MA allows for compact in-line optical designs. The prisms can be easily placed at the aperture stop or entrance pupil of the optical design and encoding can be performed through a number of simple methods described in the following section. Figure 3 shows notional prism-based MA designs that divide the aperture of the entrance pupil in front of the lens.

Fig. 3 (a) a notional 2-element single-prism MA for a LWIR camera. Encoding is performed by a simple motor assembly that rotates the MA about the optical axis of the lens. (b) optical layout for a 4-channel multiplexing assembly based on achromatic prisms.

The chief limitation of refractive MAs is dispersion caused by the prisms, which can significantly limit the extent of the multiplexed field of view. By expressing a system’s tolerance for chromatic image blur in terms of its angular resolution, a practical upper limit for MA deviation can be derived as follows. We begin by defining the dispersion of a prism in terms of its Abbe number (or ‘V-number’). Equation (17) defines the V-number for a material in terms of its refractive index (n) at the lowest, middle, and highest wavelengths of the band. Table 1 shows a selection of low dispersion materials in common visible and infrared wavebands. Using the thin prism approximation (i.e. small apex angle, α), the angular deviation at the center of the waveband (δ) and the dispersion (Δ) of a prism may be approximated by Eqs. (18) and (19). These quantities are shown in Fig. 4.

Table 1. A selection of low-dispersion materials.

Fig. 4 Beam deviation and dispersion. (a) a thin prism. (b) secondary dispersion in an achromatic prism pair.

$$V = \frac{n_{\lambda\,\mathrm{mid}} - 1}{n_{\lambda\,\mathrm{low}} - n_{\lambda\,\mathrm{high}}} \tag{17}$$
$$\delta \approx \left(n_{\lambda\,\mathrm{mid}} - 1\right)\alpha \tag{18}$$
$$\Delta \approx \frac{\delta}{V} \tag{19}$$

The field of view of an individual pixel (φ) is commonly referred to as the instantaneous field of view (IFOV) and defines the angular resolution of the system. As the dispersion approaches or exceeds the IFOV there is a noticeable loss of resolution. The IFOV may be approximated as the ratio of the pixel pitch to the effective focal length,

$$\varphi \approx \frac{d_p}{fl_o}. \tag{20}$$
We can then derive a maximum deviation angle for the MA in terms of a factor, k, multiplied by the IFOV, where k is the allowance for chromatic image blur relative to the size of a pixel. For example, a k-factor of 0.25 permits a maximum chromatic image blur of ¼ pixel to be caused by the MA:

$$\Delta_{\max} = k\,\varphi. \tag{21}$$

Equations (19) through (21) can then be combined to describe a maximum deviation angle (δmax) for any channel of the MA with respect to the optical axis of the parent lens:

$$\delta_{\max} = k\,V\varphi. \tag{22}$$

The value kV places an upper limit on the size of the multiplexed image. For example, if k = 0.25 and V = 100 the MA can only divert a channel by 25 IFOVs with respect to the optical axis. It becomes evident that a single prism MA is insufficient for most broadband imaging applications because deviations of hundreds-to-thousands of IFOVs are required to fully separate the channels in object space.

An achromatic prism pair can greatly extend the useful range of a refractive MA. Analogous to an achromatic doublet, an achromatic prism pair deviates two wavelengths to the same angle and results in a smaller secondary dispersion (ε) as shown in Fig. 4. Secondary dispersion can be expressed in terms of the difference between the V-numbers and partial dispersions (P) of the two elements as

$$\varepsilon \approx \left(\frac{P_2 - P_1}{V_2 - V_1}\right)\delta, \tag{23}$$

where

$$P = \frac{n_{\lambda\,\mathrm{mid}} - n_{\lambda\,\mathrm{high}}}{n_{\lambda\,\mathrm{low}} - n_{\lambda\,\mathrm{high}}}. \tag{24}$$

Then for a secondary dispersion of ε = kφ we derive a maximum deviation angle of

$$\delta_{\max} = \left|\frac{V_2 - V_1}{P_2 - P_1}\right| k\,\varphi. \tag{25}$$

The leading factor of Eq. (25) is shown in Table 2 for a selection of achromatic prism pairs with low secondary dispersion. The data show a vast improvement in the maximum deviation angle, which indicates that a refractive MA can multiplex widely spaced FOVs without an appreciable resolution loss.

Table 2. Examples of achromatic prism pairs with small amounts of secondary dispersion.
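Equations (17) through (25) chain into a quick feasibility check for a refractive MA. In the sketch below the indices (roughly N-BK7 at the F, d, and C lines) and the lens and pixel values are illustrative assumptions of ours; they do not reproduce Tables 1 and 2.

```python
def abbe_number(n_low, n_mid, n_high):
    """Eq. (17): V-number from indices at the low/mid/high wavelengths."""
    return (n_mid - 1.0) / (n_low - n_high)

def single_prism_max_dev(k, v_number, ifov):
    """Eq. (22): maximum deviation before chromatic blur exceeds k pixels."""
    return k * v_number * ifov

def achromat_max_dev(k, v1, p1, v2, p2, ifov):
    """Eq. (25): maximum deviation for an achromatic prism pair, with
    partial dispersions p1, p2 from Eq. (24); see Table 2 for real pairs."""
    return abs((v2 - v1) / (p2 - p1)) * k * ifov

# Roughly N-BK7 in the visible: V ~ 64. A 5 um pixel on a 50 mm lens gives
# IFOV = 1e-4 rad by Eq. (20); with k = 0.25 a single prism may only deviate
# ~1.6 mrad, i.e. ~16 IFOVs -- far short of separating channels in object space.
v = abbe_number(1.5224, 1.5168, 1.5143)
print(single_prism_max_dev(0.25, v, 5e-6 / 50e-3))
```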

Another consideration for refractive multiplexing assemblies is anamorphic distortion. The deviation and dispersion equations in this section were simplified using the small angle approximation for ‘thin prisms’. As the prism’s apex angle increases for larger deviations, the deviation angle becomes more dependent on the angle of incidence into the prism. Across an extended field of view, ray angles are distorted, producing a warped image. Anamorphic distortion can be minimized through the optical design of the MA and corrected in the final image through calibration and post-processing; however, significant levels of distortion may be unavoidable for large deviation angles and may alter the angular resolution, projected pupil area, and image illumination as a function of field angle.

3.2.2 Reflective multiplexing assemblies

A multiplexing assembly consisting of an array of mirrors has the advantage of unlimited deviation angles without degrading the image quality through dispersion or distortion. It can also utilize the pupil area more efficiently because the mirror elements can be mounted from behind or incorporated into a single monolithic element with knife edge transitions between the facets. The disadvantages of this architecture are that it typically occupies a larger volume than a refractive MA, and the MA must be tilted with respect to the optical axis of the lens. If the MA is tilted with respect to the aperture stop or pupil, illumination non-uniformities will also increase.

Figure 5 shows reflective MA layouts that the authors have found useful. Panel (a) shows a single element MA placed in front of a narrow field of view lens. The MA is tilted with respect to the optical axis and can be modeled as either a remote aperture stop or a vignetting surface. Panel (b) shows a multi-element MA. This architecture uses an additional fold mirror that allows for beam steering and can be used to increase the physical separation between channels for encoding or depth imaging applications.

Fig. 5 Notional reflective multiplexing assemblies. (a) a narrow FOV lens used with a single element MA acting as a remote tilted aperture stop, and (b) a multi-element MA shown with a reimaging lens that projects the pupil to the MA.

3.3 Image rotation

The multiplexing assembly may introduce rotation or parity changes in the image. Rotation occurs when multiplexing elements divert light at angles between the horizontal and vertical planes of the detector array; parity changes occur with an odd number of reflections. The optical design of the MA can be used to control the image rotation to some degree, and additional elements may be introduced to deterministically control the image rotation to a prescribed value. The consequences of image rotation introducing gaps or overlap between adjacent FOV channels should be considered. In some cases this rotation can be used as a method of passive image encoding as described in the following section.

4. Image encoding and decoding methods

Image reconstruction from multiplexed measurements requires per-channel image encoding and a subsequent decoding procedure. Notable exceptions to this requirement are when imaging a scene from a constrained domain in such applications as pattern recognition or star tracking, or when multiple multiplexed systems observe the same scene [11]. Encoding and decoding methods can be split into two fundamental categories: spatial and temporal.

4.1 Spatial image encoding

Spatial encoding is most effective when observing a sparse scene of point-like sources such as the night sky or in unresolved detection and tracking applications. If the point spread function of each channel is uniquely spatially encoded and the scene is sparse (potentially after some operation such as background subtraction is applied) a sparse approximation algorithm can robustly recover the sparse image. In this approach, we assert that the signal is sparse in an over-complete dictionary formed from multiple matrices, where each matrix is a convolution with a single channel’s point spread function. A Matching Pursuit [13] algorithm can find a sparse approximation for the measured signal in this dictionary. There are a number of alternative sparse approximation approaches that can be used, based on L1 regularization as discussed in [14, 15], message passing in graphical models [16], and extensions of matching pursuit [17].
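A minimal sketch of such a reconstruction is given below: Matching Pursuit over a dictionary whose atoms are the per-channel PSFs translated across the image. It is our illustration under stated assumptions (unit-L2-norm, odd-sized PSF kernels; a background-subtracted, genuinely sparse measurement), not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiplexed_matching_pursuit(y, psfs, n_iters=50, min_peak=1e-6):
    """Matching Pursuit [13] over a multi-channel PSF dictionary (sketch).

    y    : 2-D multiplexed, background-subtracted measurement
    psfs : per-channel PSF kernels, odd-sized, normalized to unit L2 norm
    Returns (channel, row, col, amplitude) detections.
    """
    residual = y.astype(float).copy()
    detections = []
    for _ in range(n_iters):
        best_amp, best = -np.inf, None
        for c, h in enumerate(psfs):
            # Cross-correlation of the residual with PSF c (flipped kernel)
            corr = fftconvolve(residual, h[::-1, ::-1], mode="same")
            r, col = np.unravel_index(np.argmax(corr), corr.shape)
            if corr[r, col] > best_amp:
                best_amp, best = corr[r, col], (c, r, col)
        if best_amp <= min_peak:          # no significant atom remains
            break
        c, r, col = best
        delta = np.zeros_like(residual)
        delta[r, col] = 1.0
        atom = fftconvolve(delta, psfs[c], mode="same")  # PSF c placed at (r, col)
        residual -= best_amp * atom       # greedy subtraction of the chosen atom
        detections.append((c, r, col, best_amp))
    return detections
```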

Various methods of PSF engineering can be applied to encode the image including using diffractive or holographic elements or by applying a low order aberration term to elements in the multiplexing assembly. Aberrations in the parent lens itself may be sufficiently unique to encode the image in a division of aperture system, but variation across the field of view may require a spatially varying matched filter. Another spatial encoding method involves dividing the transmission of each channel into a pattern of two or more image points. Splitting the PSF into two image points is the most signal-efficient method of spatial image encoding and can be accomplished by such methods as using a weak birefringent prism or some other weakly deviating dichroic or intensity splitting element.

4.2 Temporal image encoding

In conventional photographic and video imaging applications that are densely populated with extended objects temporal image encoding methods may be used to disambiguate a multiplexed image. Temporal encoding may be accomplished by applying a series of image shifts, intensity modulations, or time-varying point spread functions to the channels. Active methods of implementation include tilting or rotating elements in the multiplexing assembly, applying a variable attenuation or shuttering to a sub-set of the channels, or using a spatial light modulator to actively encode the point spread function. Passive temporal encoding can be achieved if the multiplexing assembly imparts a unique rotation to each channel and a known relative motion exists between the sensor and the scene. In this case, observing the trajectory of features in the multiplexed image as the sensor or scene moves in a known way provides the method of disambiguation.

One method of image decoding is to construct a fully determined set of equations from a sequence of uniquely shifted measurements. This process is described below for an encoding method that involves known 2-dimensional per-channel image shifts in each measurement. Results obtained using this approach are shown in Section 6. We begin by expressing an imaging process as a linear transformation from object to image space with additive noise as

$$z = Ax + \varepsilon, \tag{26}$$
where z ∈ ℝ^l is the measured image observed on the focal plane, A ∈ ℝ^{l×m} is the imaging transformation matrix, x ∈ ℝ^m is a discretized m-pixel representation of the scene, and ε ∈ ℝ^l represents the noise corrupting each pixel measurement. A multiplexed imaging process has a transformation matrix comprised of an encoding, a selection, a downsampling, and a multiplexing operation. Thus, the multiplexed imaging process can be written as
$$z = A_{\mathrm{multiplex}}\,A_{\mathrm{downsample}}\,A_{\mathrm{selection}}\,A_{\mathrm{encoding}}\,x + \varepsilon. \tag{27}$$
The encoding operation, A_encoding ∈ ℝ^{Nm×m}, produces an encoded version of the underlying scene for each of the N channels. In this example the encoding matrix produces per-channel image shifts of an integer number of pixels in the reconstructed image space. The selection matrix, A_selection ∈ ℝ^{flN×Nm}, represents the mapping from the shifted scene coordinates to focal plane coordinates. The downsampling factor, f, is the ratio of the area of a focal plane pixel to that of a reconstructed pixel, and fl ≤ m. The downsampling operation, A_downsample ∈ ℝ^{lN×flN}, re-samples the higher resolution reconstructed image to the lower resolution of the focal plane. When f = 1, the resolutions are matched and A_downsample is simply an identity matrix. Finally, the multiplexing operation, A_multiplex ∈ ℝ^{l×lN}, sums over the N channels, which are physically superimposed on the l-pixel focal plane. In general, solving for x given z is an ill-posed problem because l < m due to the multiplexing and downsampling operations. However, taking p images of a static scene with different image shifts can be written as
$$\tilde{z} = \tilde{A}x + \tilde{\varepsilon}, \tag{28}$$
where z̃ = [z_1; z_2; …; z_p] ∈ ℝ^{pl}, Ã = [A_1; A_2; …; A_p] ∈ ℝ^{pl×m}, and ε̃ = [ε_1; ε_2; …; ε_p] ∈ ℝ^{pl}. If pl ≥ m then Ã can be full rank for appropriately chosen shift matrices, which allows for solving for the scene by inverting Ã,
$$\hat{x} = \tilde{A}^{-1}\tilde{z}. \tag{29}$$
If pl < m then Ã is underdetermined, but with properly chosen shifts it contains sufficient structure to recover a restricted class of signals if the proper regularization is used. For example, if the signal is sparse in some basis, then the image can be recovered using standard nonlinear estimators used in compressed sensing [14, 15].

For large images (e.g. a multi-megapixel image) the Ã matrix can become so large that a direct inverse is impractical, since computing an inverse scales cubically with the number of elements in the image. However, Ã is inherently sparse since a shift corresponds to a sparse matrix, enabling a reduction in computational cost. By modeling the Ã and Ãᵀ operations (without having to explicitly compute the matrices) we can use an iterative solver such as LSQR [18] to approximate rather than directly compute the inverse.
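The sketch below assembles a toy version of Eq. (28) as a matrix-free LinearOperator and solves it with LSQR [18]. The sizes, the circular-shift simplification, and f = 1 are our assumptions; note that purely circular shifts can never change a channel's mean, so each channel image is recovered only up to a constant offset, a degeneracy that real systems break through channel overlap or attenuation encoding.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

H, W, N, P = 32, 32, 4, 8                      # toy sizes (ours): N channels, p = P frames
rng = np.random.default_rng(1)
shifts = rng.integers(-3, 4, size=(P, N, 2))   # known per-frame, per-channel (row, col) shifts

def forward(x_flat):
    """A~ x: shift each channel image and superimpose, for every frame."""
    x = x_flat.reshape(N, H, W)
    z = [sum(np.roll(x[c], shifts[t, c], axis=(0, 1)) for c in range(N))
         for t in range(P)]
    return np.stack(z).ravel()

def adjoint(z_flat):
    """A~^T z: the adjoint of a circular shift is the opposite shift."""
    z = z_flat.reshape(P, H, W)
    x = np.zeros((N, H, W))
    for t in range(P):
        for c in range(N):
            x[c] += np.roll(z[t], -shifts[t, c], axis=(0, 1))
    return x.ravel()

A = LinearOperator((P * H * W, N * H * W), matvec=forward, rmatvec=adjoint)
x_true = rng.random(N * H * W)
z_meas = A.matvec(x_true)                      # p stacked multiplexed frames
x_hat = lsqr(A, z_meas, atol=1e-10, btol=1e-10, iter_lim=2000)[0]

# Compare after removing each channel's mean (the circular-shift ambiguity).
d = (x_hat - x_true).reshape(N, -1)
print(np.linalg.norm(d - d.mean(axis=1, keepdims=True)) / np.linalg.norm(x_true))
```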

5. Experimental prototypes

In this section, we present test results for prototype multiplexed sensors in a star tracking application. Two experimental sensors were constructed that used identical shortwave infrared (SWIR, 0.9-1.7 μm waveband) cameras (Goodrich SU640KTS) and 50 mm SWIR lenses. Similar 5-channel multiplexing assemblies were used for both sensors. Figure 6 shows the design of the multiplexing assembly and the resulting rotated field of view arrangement. The pupil dividing element of the multiplexing assembly was a custom diamond-turned 5-surface monolithic faceted reflector with knife edge transitions between the facets to maximize pupil sampling efficiency. This element was made smaller than the pupil diameter of the lens and placed as close as possible to the lens to mitigate image irradiance non-uniformities caused by vignetting as illustrated in Fig. 2(b). A 3D-printed element provided the mounting structure for the multiplexing assembly. The two prototype configurations differed in the method of encoding and the design of the fold mirror element of the MA.

Fig. 6 A five-channel optically multiplexed sensor and field of view projection.

In SWIR prototype configuration 1, a simple metalized fold mirror was used and a 6-position filter wheel was placed between the pupil dividing element and the lens. The filter wheel consisted of an open aperture and 5 unique aperture patterns, each of which blocked one channel. The open configuration allowed for experiments using single frames of the 5-channel multiplexed image, and for passive temporal encoding experiments via rotated fields of view with relative scene motion. The remaining 5 filter positions allowed for temporal encoding experiments in which a sequence of images could be collected with a single channel attenuated.

SWIR prototype configuration 2 used a wedged dichroic fold mirror element for spatial encoding experiments. A commercially available dichroic window (Thorlabs DMSP1500L) was placed in front of a metalized reflector with a small tilt between the elements. Each channel created a two-point PSF by reflecting wavelengths above ~1.5 μm from the first surface and wavelengths below ~1.5 μm from the second surface at a slightly different angle. Rotation of this fold mirror assembly allowed the two-point PSF orientation to be uniquely rotated for each channel. The exact wavelength of the transition varied from channel to channel because the band edge shifted with angle of incidence. Figure 7 shows the prototype systems along with a visible witness camera.

Fig. 7 SWIR five-channel prototypes. Left: SWIR prototype 2 with a wedged dichroic fold mirror for spatial encoding. Center: a visible witness camera. Right: SWIR prototype 1 with a metalized fold mirror and filter wheel.

Active temporal encoding experiments were performed using a six-channel visible wavelength prototype described in [12]. The multiplexing assembly was constructed from an array of six mirrors mounted on independent piezoelectric tip/tilt platforms. A 200 mm parent lens (Nikon AF-S NIKKOR 200 mm F/2G ED VR II) was used with a 4.2 megapixel camera (Point Grey GS3-U3-41C6M-C) to produce six 3.2° × 3.2° channels. Mirror sections were sized to equally divide the pupil area at an F/4 aperture setting. This produced channels with an effective F/# of F/9.8. This architecture represents an embodiment of a reflective multiplexing assembly as shown in Fig. 5, with the multiplexing assembly serving as a remote aperture stop as shown in Fig. 2(a). The visible prototype allowed for temporal encoding and super-resolution experiments using rapid and precise image shifting. Spatial encoding could also be performed with this architecture by rapidly sweeping the mirrors during the camera’s integration period to encode the PSF with motion blur. Figure 8 shows the visible prototype.

Fig. 8 Six-channel visible prototype.

A summary of system parameters for the prototypes and their conventional parent lenses is provided in Table 3.

Table 3. Prototype system parameters.

6. Experimental demonstration

Sparse-scene experiments were performed using the SWIR prototypes to explore different methods of encoding and scene reconstruction. Observations of the night sky were performed to demonstrate that multiplexed images could be used for point source detection, which can enable astronomical navigation and attitude determination by star tracking. The motivation of this experiment was to demonstrate and validate our aperture division and encoding concepts.

Multiplexed images from the SWIR prototypes are shown in Fig. 9. Panel (a) shows results from prototype 2. Dichroic splitting of the incident light produces a double image of each star, and each channel introduces a unique rotational orientation between the two points, which is used for disambiguation. A reconstruction of the sky from prototype 2 measurements is shown in panel (d), and was obtained by matched filtering of the multiplexed scene with the known point spread function of each channel. Passive and active temporal encoding were demonstrated with prototype 1. Panel (b) shows a long exposure image captured through the open aperture of the filter wheel. As the stars swept across the sky, different streak orientations were observed in the image, indicating the channel position of each star. This demonstrated a method of passive temporal encoding through known scene motion. A method of active temporal encoding is shown in panel (c) of Fig. 9. In this experiment, the filter wheel was cycled through 5 positions during a long exposure image. Each position of the filter wheel blocked a single channel. The streaked image of each star showed a discontinuity indicating the channel from which it originated.

Fig. 9 Multiplexed images of the night sky and a disambiguated image. (a) multiplexed image from SWIR prototype 2, (b) a long integration multiplexed image from SWIR prototype 1 demonstrating passive temporal encoding via image rotation, and (c) a multiplexed image from SWIR prototype 1 demonstrating active temporal encoding. (d) a reconstruction of the night sky from prototype 2 measurements.

Dynamic image shifting with the visible prototype allowed for reconstruction of an arbitrary information-rich scene. Figure 10 shows data collected with the visible prototype. The mirrors in the multiplexing assembly were arranged to cover a six-channel 19° × 3.2° field of view with a small overlap between the channels to aid the image reconstruction. A multiplexed image is shown in Fig. 10(a). Panel (b) shows a 24 megapixel image obtained by inverting the matrix described in Section 4.2.

Fig. 10 Images collected with the visible waveband prototype. (a) a single four megapixel multiplexed image, (b) a 24 megapixel image reconstructed from a sequence of uniquely encoded multiplexed frames.

7. Conclusion

This paper has described new design architectures for optically multiplexed imaging systems with emphasis on the division of aperture implementation. Considerations relating to the initial selection of channels in the multiplexed image and the design of the multiplexing assembly have been provided to inform the design of future systems. Experimental results were presented using two optically multiplexed SWIR waveband prototypes each consisting of 5 multiplexed channels. A third visible waveband prototype used 6 imaging channels, which to the authors’ knowledge is the highest degree of multiplexing performed to date. Efficient techniques for spatial and temporal encoding have been described. It has been demonstrated that optically multiplexed systems can be used to reconstruct sparse night sky scenes, and to accurately reconstruct images of dense scenes via temporal encoding.

Practical division of aperture systems can be developed with a high number of multiplexed channels. The trade-off of aperture area for extended field of view provides a new dimension in optical design by allowing narrow FOV lenses to perform wide FOV tasks. Further, by decoupling the shape of the observed FOV from the shape of the detector multiplexing enables new high aspect ratio or discontinuous fields of view. It is the authors’ belief that optically multiplexed imaging systems have a high potential for many applications including navigation, machine vision, optical communication, photography, and surveillance.

Acknowledgments

This work is sponsored by the U. S. Department of the Air Force under Air Force Contract #FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the United States Government.

References and links

1. M. D. Stenner, P. Shankar, and M. A. Neifeld, “Wide-Field Feature-Specific Imaging,” in Frontiers in Optics 2007, OSA Technical Digest (Optical Society of America, 2007), paper FMJ2.

2. R. F. Marcia, C. Kim, C. Eldeniz, J. Kim, D. J. Brady, and R. M. Willett, “Superimposed video disambiguation for increased field of view,” Opt. Express 16(21), 16352–16363 (2008).

3. S. Uttam, N. A. Goodman, M. A. Neifeld, C. Kim, R. John, J. Kim, and D. Brady, “Optically multiplexed imaging with superposition space tracking,” Opt. Express 17(3), 1691–1713 (2009).

4. V. Treeaporn, A. Ashok, and M. A. Neifeld, “Increased field of view through optical multiplexing,” Opt. Express 18(21), 22432–22445 (2010).

5. R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010).

6. C. Y. Chen, T. T. Yang, and W. S. Sun, “Optics system design applying a micro-prism array of a single lens stereo image pair,” Opt. Express 16(20), 15495–15505 (2008).

7. A. Mahalanobis, M. Neifeld, V. K. Bhagavatula, T. Haberfelde, and D. Brady, “Off-axis sparse aperture imaging using phase optimization techniques for application in wide-area imaging systems,” Appl. Opt. 48(28), 5212–5224 (2009).

8. A. Daniels, “Infrared systems – technology & design,” SPIE Short Course SC835, 279 (2015).

9. J. Sasian, “Interpretation of pupil aberrations in imaging systems,” Proc. SPIE 6342, 634208 (2006).

10. D. Shafer, “Aberration Theory and the Meaning of Life,” Proc. SPIE 554, 25 (1986).

11. R. Gupta, P. Indyk, E. Price, and Y. Rachlin, “Compressive sensing with local geometric features,” in Proc. 27th Annual ACM Symposium on Computational Geometry (ACM, 2011), pp. 87–98.

12. Y. Rachlin, V. Shah, R. H. Shepard, and T. Shih, “Dynamic optically multiplexed imaging,” Proc. SPIE 9600, 96003 (2015).

13. S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process. 41(12), 3397–3415 (1993).

14. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006).

15. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

16. D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proc. Natl. Acad. Sci. U.S.A. 106(45), 18914–18919 (2009).

17. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proc. 27th Asilomar Conference on Signals, Systems and Computers (IEEE, 1993), p. 40.

18. C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Trans. Math. Softw. 8(1), 43–71 (1982).
