
Increased field of view through optical multiplexing

Open Access

Abstract

Traditional approaches to wide field of view (FoV) imager design usually lead to overly complex optics with high optical mass and/or pan-tilt mechanisms that incur significant mechanical/weight penalties, which limit their applications, especially on mobile platforms such as unmanned aerial vehicles (UAVs). We describe a compact wide FoV imager design based on superposition imaging that employs thin film shutters and multiple beamsplitters to reduce system weight and eliminate mechanical pointing. The performance of the superposition wide FoV imager is quantified using a simulation study and is experimentally demonstrated. Here, a threefold increase in the FoV relative to that of the narrow FoV imaging optics employed is realized. The performance of a superposition wide FoV imager is analyzed relative to a traditional wide FoV imager, and we find that it can offer comparable performance.

© 2010 Optical Society of America

1. Introduction

Current reconnaissance and surveillance cameras include so-called “soda straw” narrow field of view (FoV) imagers that require mechanical pointing to achieve adequate coverage at high resolution. At each measurement instant a sub-FoV corresponding to a different part of the desired scene is sequentially captured. Once all of the sub-images are acquired in this pan and tilt system, they are appropriately tiled to form a high resolution wide field of view image of the scene. While conceptually straightforward, this approach acquires only a partial section of the desired scene at each measurement instant. Moreover, the mechanical complexity of such a system leads to increased size and weight.

Note that a system that captures each sub-FoV sequentially in time is inherently photon inefficient. For example, assume the narrow FoV system contributes on average Q photons per pixel, following a Poisson distribution, to the imaging sensor per unit time. After an observation time τ a total of I′ sub-images are collected. Further, assume that the imaging sensor has a collection efficiency η and the measurement is corrupted by both Poisson shot noise (i.e., with variance equal to the signal) and signal-independent Gaussian detector noise with zero mean and variance I′σn². Note that the detection noise scales with I′ because the noise bandwidth is linear in I′ for a fixed total observation time. In this case, the electrical signal-to-noise ratio (SNR) [1, 2] of each pixel measurement is given by

$$ \mathrm{SNR} = \frac{(\eta Q \tau / I')^2}{(\eta Q \tau / I') + I' \sigma_n^2}. \tag{1} $$
When the system is shot noise limited, i.e., when σn → 0, the SNR = ηQτ/I′, and when the system is detector (read) noise limited the SNR = (ηQτ)²/(I′³σn²). Also from Eq. (1) we observe that increasing the number of sub-images I′ collected decreases the SNR for a fixed observation time τ.
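As an illustration of the trend in Eq. (1), the following minimal Python sketch evaluates the per-pixel SNR as I′ grows; the values chosen for η, Q, τ, and σn are hypothetical and do not come from the paper.

    # Hypothetical illustrative values (not from the paper).
    eta, Q, tau, sigma_n = 0.5, 1000.0, 1.0, 5.0

    def sequential_snr(I):
        """Eq. (1): per-pixel SNR when I sub-images share the observation time tau."""
        s = eta * Q * tau / I          # mean detected photons per sub-image pixel
        return s**2 / (s + I * sigma_n**2)

    for I in (1, 3, 9):
        print(f"I' = {I}: SNR = {sequential_snr(I):.1f}")

The monotonic decrease with I′ reflects both the shorter dwell time per sub-image and the growing read noise contribution.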

Another conventional wide field imaging approach would employ the same imaging sensor to directly image the desired scene at a lower resolution. This approach can be augmented with a more sophisticated optical system and a larger imaging sensor with a higher pixel count to produce a high resolution image. However, these augmentations again lead to increased system size and weight. Indeed, it is well known that the cost in optical mass grows faster than linearly with increasing FoV [3]. As a result of the various costs that accompany conventional wide FoV imaging techniques, these systems have limited applications, especially for mobile platforms like unmanned aerial vehicles (UAVs). To reduce some of these physical requirements while retaining a high resolution image, here we describe a compact thin-film shuttered multi-beamsplitter superposition imaging solution and demonstrate its functionality through an experimental prototype.

2. Optical architecture

An alternative approach to wide FoV imaging without incurring significant size and weight penalties can be realized with spatial-multiplexing or superposition space imaging. This is accomplished by superimposing multiple sub-fields of view onto a common imaging sensor to form a composite image. In superposition space imaging, as the name implies, each pixel measurement in the composite image is a superposition of corresponding pixels from the individual sub-fields of view. A schematic depiction of superposition imaging is shown in Fig. 1 where each pixel in the jth sub-image Xj is denoted by xjp, and the ith resulting composite image is denoted by Mi with each pixel denoted by mip.

Fig. 1. Diagram of superposition imaging, Σjxjp = mip.

A number of different techniques have been described in the literature to accomplish superposition imaging. One architecture that employs multiple lenses and mechanical shutters to achieve this goal is described in [4]. In that system, each lens images a portion of the scene onto a common imaging sensor, and a conventional pan and tilt mode of operation can be emulated by opening a single shutter at a time. An alternative architecture based on a beamsplitter and a linearly translating mirror is described in [5]. By capturing a sequence of superimposed images while the mirror position is shifted, the overall field of view can be reconstructed. A similar approach described in [6] uses a beamsplitter and a rotating mirror to accomplish superposition imaging. Yet another approach employs a binary combiner arrangement that incorporates shutters to perform superposition imaging. The architecture described in [7] combines superposition imaging with point spread function (PSF) engineering using sparse apertures, micropistons and microprisms. In that work, the sparse aperture is implemented using an “eyelid array” where each eyelid is a voltage controlled electrostatic flap that can open and close a small aperture. All of these implementations, however, involve some mechanical aspect.

To achieve a wide FoV using superposition space imaging, multiple non-redundant composite image measurements must be acquired. These multiple measurements are subsequently used to recover the original sub-images. Multiplexing and encoding a source is a traditional technique in multiplex spectroscopy [8, 9, 10]. Rather than use a mechanical shutter, the optical properties of a number of materials can be exploited to implement an optical modulator. Two common types are electrochromic materials, usually found on switchable window glass, and liquid crystal (LC) materials. In this work, an LC based multi-beamsplitter system that can superimpose three sub-fields of view will be considered. A diagram of such a 3-FoV system, which will serve as the basis of our experimental implementation, is shown in Fig. 2(a). The selection of LC for the shutter material also imposes an additional polarizer requirement at the entrance aperture of the system. Using a polarizer decreases the overall light collection ability of the system by a factor of two; however, it is required to enable operation of the LC shutters. While only three sub-fields of view are considered here, the described architecture can be extended to accommodate additional sub-images in the horizontal and/or vertical directions. Another compact architecture that multiplexes multiple sub-fields of view using a polarization based encoding scheme is described in [11].

Fig. 2. Conceptual 3-FoV system top view: (a) diagram, (b) system parameters.

In our system, plate beamsplitters are used to tilt and optically superimpose the sub-fields of view presented to the imager. Following each beamsplitter is an electronically controlled shutter; the thin-film material is deposited directly on each beamsplitter to form a compound, dual-purpose element. When a shutter is in the closed state, the corresponding sub-fields of view are suppressed from the composite image.

For comparison to the conventional imager, the same imaging sensor must be used. In this superposition imaging case, J sub-images are collected simultaneously as a composite image measurement. If I equal-exposure measurements are taken within a total observation time τ, and each sub-FoV again contributes an average of Q photons/pixel/unit time to the imaging sensor, then the composite pixel electrical measurement SNR is given by

$$ \mathrm{SNR} = \frac{(\eta Q \tau J / I)^2}{(\eta Q \tau J / I) + I \sigma_n^2} = \frac{J (\eta Q \tau / I)^2}{(\eta Q \tau / I) + (I \sigma_n^2 / J)}. \tag{2} $$
Under low-light conditions when the system is read noise limited, the SNR = (ηQτ)²J²/(I³σn²), and a multiplex advantage [12] of J² becomes apparent when compared with Eq. (1). When the system is shot noise limited, the multiplex advantage is J. Note that the use of spatial multiplexing in this architecture also increases the number of photons collected by a factor of J. In other words, a superposition image may occupy the full dynamic range of a sensor whereas a conventional image is constrained to only 1/J of the dynamic range for the same observation time. Filling only a fraction of the dynamic range is clearly undesirable; however, it is conceivable that operational constraints may dictate this. For instance, the frame rate may be such that the available observation time allows only partial utilization of the sensor dynamic range.
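To make the multiplex advantage concrete, the sketch below (reusing the same hypothetical values as the earlier snippet) compares Eq. (1) with I′ = J sequential exposures against Eq. (2) with I = J composite exposures:

    # Hypothetical illustrative values (not from the paper).
    eta, Q, tau, sigma_n = 0.5, 1000.0, 1.0, 5.0

    def conventional_snr(I):
        s = eta * Q * tau / I                     # Eq. (1)
        return s**2 / (s + I * sigma_n**2)

    def superposition_snr(I, J):
        s = eta * Q * tau * J / I                 # Eq. (2): J sub-FoVs add per pixel
        return s**2 / (s + I * sigma_n**2)

    J = 3
    print(conventional_snr(J), superposition_snr(J, J))
    # In the read-noise-limited regime the ratio approaches the J^2 multiplex advantage.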

3. Imaging system model

For convenience, the object space is partitioned and indexed from left to right into J = 3 sub-fields of view. Thus, as shown in Fig. 2(b), the front beamsplitter is defined as the 3rd reflective surface providing the image X3 of FoV 3. The middle (j = 2) beamsplitter provides X2, and the back (j = 1) mirror provides X1. The system parameters for this 3-FoV system include the distance from the camera lens to the jth beamsplitter (mirror) dj, the angular field of view of the camera lens ϕ, and the rotation angle of the respective element θj relative to the optical axis of the camera. The remaining parameters shown include the jth beamsplitter transmission and reflection coefficients given by αjT and αjR, respectively, and the associated shutter open state transmission coefficients given by kj.

In order to successfully disambiguate the individual sub-images, multiple non-redundant composite image measurements of the wide FoV scene are needed. In the optical architecture described above, each sub-image is superimposed onto an imaging sensor with the ith composite pixel measurement given by

$$ m_i^p = \sum_{j=1}^{J} h_{ij}\, x_j^p + n \tag{3} $$
where n is measurement noise that includes Poisson and thermal components, xjp corresponds to the pth pixel in the jth sub-image Xj, and the hij coefficients are directly related to the physical parameters of the optical multiplexer. These physical parameters include the beamsplitter transmission and reflection coefficients and the shutter states. Therefore, by changing the shutter states, different linear combinations of the sub-fields of view can be measured. Since the ith composite image measurement consists of a P pixel image from the imaging sensor and the pixelwise measurement in Eq. (3) is repeated (in parallel) for each pixel, the p subscript will be dropped to simplify notation.

The dynamic range associated with each sub-image can be uniformly mapped into equally allocated portions within the full dynamic range of the composite image. By considering a measurement with all shutters in the open state and setting h11 = h12 = h13, the required beamsplitter coefficients αj to equally subdivide the full sensor dynamic range for this 3-FoV system can be computed from the following system of equations

$$ h_{11} = \alpha_{3T}^2\, \alpha_{2T}^2\, \alpha_{1R}\, k_2^2\, k_3^2, \qquad h_{12} = \alpha_{3T}^2\, \alpha_{2R}\, k_3^2, \qquad h_{13} = \alpha_{3R} \tag{4} $$
where α1R = 1 since the back surface is always a mirror. It is assumed that the beamsplitters are spatially uniform and have no absorptive loss, so that αjR + αjT = 1. The squared terms result from the folding in which the light travels through the same optical elements twice before reaching the imaging sensor. Solving Eq. (4) using kj = k = 0.62 for the liquid crystal material used in our particular implementation yields α2T ≈ 0.77 and α3T ≈ 0.92. It should be noted that the LC shutters are also assumed to be spatially uniform. Based on these values, a standard beamsplitter ratio of 1:3 (α2T = 0.75) was chosen for the middle beamsplitter, and plain glass with an approximate ratio of 1:11.5, which closely matches the computed transmission coefficient, was used for the front beamsplitter. Thus, for a given illumination level, a conventional imager measurement mconv,j = h1jxj for the jth FoV is allowed to completely utilize its 1/J share of the full sensor dynamic range.
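The quoted transmission coefficients can be reproduced numerically. With α1R = 1, αjR = 1 − αjT, and k2 = k3 = k, the equal-dynamic-range conditions h11 = h12 and h12 = h13 of Eq. (4) reduce to two quadratics; a minimal sketch:

    import numpy as np

    k = 0.62                          # LC shutter open-state transmission (from the text)
    k2 = k**2

    # h11 = h12  =>  k^2 * a2T^2 + a2T - 1 = 0
    a2T = np.roots([k2, 1.0, -1.0]).max()           # .max() picks the positive root
    # h12 = h13  =>  k^2 * (1 - a2T) * a3T^2 + a3T - 1 = 0
    a3T = np.roots([k2 * (1.0 - a2T), 1.0, -1.0]).max()

    print(a2T, a3T)                   # ~0.77 and ~0.92, as quoted in the text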

The disambiguation problem to recover each Xj for j = 1 ... J can be formulated as an inverse problem. As there are three sub-images and two shutters, at least three non-redundant measurements from the four possible shutter combinations are required to ensure a well-conditioned inverse problem. The shutter combinations can be represented by a binary-valued vector 〈s3s2〉 where sj = 0 denotes an open shutter, sj = 1 denotes a closed shutter and the subscript identifies the shutter position. In particular, the three measurements m1, m2, and m3 corresponding to the 〈00〉 all open, 〈01〉 front open and middle closed, and 〈11〉 all closed shutter states, respectively, will be used. These measurements can be written as

$$ \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \\ n_3 \end{bmatrix} \tag{5} $$
or more compactly as
$$ \mathbf{m} = \mathbf{H}\mathbf{x} + \mathbf{n} \tag{6} $$
for each pixel position. As the beamsplitters and thin-film shutters are assumed to be spatially uniform, H is spatially invariant; thus the same H will be used for all pixel positions. It should be noted that, depending on the shutter material used, the light attenuating performance of the 〈11〉 shutter state may not necessarily be equivalent to that of the 〈10〉 shutter state.

For simplicity, H is normalized such that unity is the maximum value of any element. Further, let a perfect shutter be a device with lossless transmission in the open state and no transmission in the closed state. With perfect shutters, the 〈01〉 shutter state associated with the second measurement results in only two sub-images, m2 = x2 + x3, being superimposed. At each beamsplitter, as the thin-film material is placed following the reflective surface, the following conditions must hold: h13 = h23 = h33 and h12 = h22. The former condition is due to the front surface always reflecting the third FoV X3, and the latter condition is due to the front shutter remaining in the same state during these two measurements. Thus, with perfect shutters the normalized ideal H is given by

$$ \mathbf{H} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}. \tag{7} $$
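A minimal sketch of the per-pixel forward model of Eqs. (5)–(7), using the ideal H and hypothetical pixel values:

    import numpy as np

    H = np.array([[1.0, 1.0, 1.0],    # <00> : all shutters open
                  [0.0, 1.0, 1.0],    # <01> : middle shutter closed
                  [0.0, 0.0, 1.0]])   # <11> : both shutters closed
    x = np.array([0.2, 0.5, 0.3])     # hypothetical sub-image values at one pixel
    sigma_n = 0.01

    rng = np.random.default_rng(0)
    m = H @ x + rng.normal(0.0, sigma_n, size=3)   # Eq. (6): m = Hx + n
    print(m, np.linalg.cond(H))       # the triangular H is well conditioned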
When imperfect shutters are in the closed state there is a finite amount of transmission through the shutter. This results in non-negligible contributions to the composite image from the sub-fields of view that would otherwise be blocked. These contributions are represented by the leakage coefficients h21, h31, and h32.

Each measurement mi = Σj(hijxj) + n with i = 1 ... I and j = 1 ... J is a function of the shutter state, leading to a measurement SNR given by

$$ \mathrm{mSNR}_i = \frac{\left(\sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I\right)^2}{\left(\sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I\right) + I \sigma_n^2}. \tag{8} $$
When this system is shot noise limited, mSNRi = Σj hijηQτ/I, and when read noise limited, mSNRi = (Σj hijηQτ)²/(I³σn²).

For comparison with the conventional SNR given in Eq. (1), the SNR of the reconstructed sub-images must be used. The composite image measurements m are used to estimate the individual sub-images pixel-by-pixel by applying a linear reconstruction. Here we use the linear minimum mean square error (LMMSE) estimator [13] (Wiener reconstruction) defined as

$$ \hat{\mathbf{x}} = \mathbf{C}_x \mathbf{H}^T \left(\mathbf{H} \mathbf{C}_x \mathbf{H}^T + \mathbf{C}_n\right)^{-1} \mathbf{m} = \mathbf{W}\mathbf{m} \tag{9} $$
where Cx = E[xxᵀ] is the (pixelwise) object autocorrelation matrix and Cn = E[nnᵀ] is the detector noise autocorrelation matrix. Because no correlation is expected between non-overlapping sub-fields of view, the object autocorrelation matrix is diagonal. Moreover, assuming the same average intensity σx² for each sub-FoV yields Cx = σx²I where I is the identity matrix. Also, as the detector noise is i.i.d., the noise autocorrelation Cn = σn²I.
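A sketch of the resulting operator under these diagonal autocorrelation assumptions (the values of σx and σn below are hypothetical):

    import numpy as np

    def lmmse_operator(H, sigma_x, sigma_n):
        """Eq. (9) with Cx = sigma_x^2 * I and Cn = sigma_n^2 * I."""
        Cx = sigma_x**2 * np.eye(H.shape[1])
        Cn = sigma_n**2 * np.eye(H.shape[0])
        return Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)

    H = np.triu(np.ones((3, 3)))      # ideal 3-FoV forward model of Eq. (7)
    W = lmmse_operator(H, sigma_x=0.3, sigma_n=0.01)
    # x_hat = W @ m, applied independently at every pixel position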

In general, for any linear reconstruction operator W with elements wji and the measurement SNR of Eq. (8), the reconstruction SNR is given by

$$ \mathrm{rSNR}_j = \frac{\left(\sum_{i=1}^{I} w_{ji} \sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I\right)^2}{\sum_{i=1}^{I} w_{ji}^2 \left(\sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I + I \sigma_n^2\right)}. \tag{10} $$
When the measurement is shot noise limited, the reconstruction SNR is given by
$$ \mathrm{rSNR}_j = \frac{\left(\sum_{i=1}^{I} w_{ji} \sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I\right)^2}{\sum_{i=1}^{I} w_{ji}^2 \sum_{j=1}^{J} h_{ij}\, \eta Q \tau / I}. $$
Similarly, when the measurement is read noise limited, the reconstruction SNR is given by
$$ \mathrm{rSNR}_j = \frac{\left(\sum_{i=1}^{I} w_{ji} \sum_{j=1}^{J} h_{ij}\, \eta Q \tau\right)^2}{I^3 \sum_{i=1}^{I} w_{ji}^2\, \sigma_n^2}. $$
Again, as the number of measurements I increases, both the measurement SNR and the reconstruction SNR decrease. Since the reconstruction SNR also depends on the reconstruction operator, which in turn depends on H, some of the multiplex advantage may not be preserved through reconstruction if H is not well conditioned.
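Eq. (10) can be evaluated directly for a given W and H; a minimal sketch, with all numerical values hypothetical and expressed in photon-count units:

    import numpy as np

    def recon_snr(W, H, etaQtau, I, sigma_n):
        """Eq. (10): reconstruction SNR for each sub-FoV j."""
        r = H.sum(axis=1) * etaQtau / I           # mean signal of each measurement
        num = (W @ r)**2                          # (sum_i w_ji * r_i)^2
        den = (W**2) @ (r + I * sigma_n**2)       # sum_i w_ji^2 * (r_i + I*sigma_n^2)
        return num / den

    sigma_x, sigma_n = 300.0, 5.0                 # hypothetical photon-count levels
    H = np.triu(np.ones((3, 3)))                  # ideal 3-FoV H of Eq. (7)
    Cx = sigma_x**2 * np.eye(3)
    Cn = sigma_n**2 * np.eye(3)
    W = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)
    print(recon_snr(W, H, etaQtau=500.0, I=3, sigma_n=sigma_n))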

3.1. Forward model estimation

The coefficients of H are directly related to the optical properties of the elements comprising the optical multiplexer. Using the normalized forward model, the leakage coefficients can be directly estimated by taking control measurements. Individual sub-fields of view can be measured by physically blocking the other sub-fields of view. Allowing only a single sub-FoV with all of the shutters in the open state results in a measurement $X_j^{\mathrm{exp}}$, which is a scaled version of the jth sub-image. Subsequently, a sub-FoV measurement with a scaling corresponding to a particular shutter state can be obtained. Thus, the leakage coefficients can be estimated using these control measurements.

For the 3-FoV system, the h21 coefficient can be determined by taking two measurements in which X2 and X3 are physically blocked. The first measurement $X_1^{\mathrm{exp}} = \hat{X}_1^{\langle 00\rangle}$ represents the maximum possible contribution of X1 to the imager for the 〈00〉 shutter state. The second measurement $\hat{X}_1^{\langle 01\rangle}$ corresponds to the (leakage) contribution of X1 through a thin-film shutter in the closed state. Thus, the normalized h21 leakage coefficient is given by $h_{21} = \hat{x}_{1p}^{\langle 01\rangle}/\hat{x}_{1p}^{\langle 00\rangle}$. If the image is not uniform, the pixel position p can be chosen as the location corresponding to $\max(\hat{X}_1^{\langle 00\rangle})$, i.e., the brightest pixel. Similarly, the remaining leakage coefficients are given by $h_{31} = \hat{x}_{1p}^{\langle 11\rangle}/\hat{x}_{1p}^{\langle 00\rangle}$ and $h_{32} = \hat{x}_{2p}^{\langle 11\rangle}/\hat{x}_{2p}^{\langle 00\rangle}$.
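A sketch of this ratio-based estimate; the control-image array names are hypothetical placeholders:

    import numpy as np

    def leakage(x_open, x_closed):
        """Ratio of the closed-state leakage image to the all-open control image,
        evaluated at the brightest pixel of the open-state image."""
        p = np.argmax(x_open)                 # flat index of the brightest pixel
        return x_closed.flat[p] / x_open.flat[p]

    # With control images X1_00, X1_01, X1_11, X2_00, X2_11 (hypothetical names):
    # h21 = leakage(X1_00, X1_01)
    # h31 = leakage(X1_00, X1_11)
    # h32 = leakage(X2_00, X2_11)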

Replacing the zero values in the ideal normalized H of Eq. (7) with these leakage coefficients leads to an initial estimate of the normalized forward model. Using this initial estimate, the coefficients can be further refined using the $X_j^{\mathrm{exp}}$ measurements. Since $X_j^{\mathrm{exp}}$ is treated as the truth image, it can be used to compute and minimize an estimated mean squared error (MSE) of the LMMSE reconstructed sub-images while optimizing over the coefficients of H, or

$$ \mathbf{H}_{\mathrm{MSE}}^{*} = \arg\min_{\mathbf{H}} \sum_{j=1}^{J} E\left[\left\lVert X_j^{\mathrm{exp}} - \hat{X}_j \right\rVert_2^2\right] \quad \text{s.t.} \quad h_{13} = h_{23} = h_{33},\; h_{12} = h_{22} \tag{11} $$
where E[·] is the expectation over all pixels in the image, $X_j^{\mathrm{exp}}$ and $\hat{X}_j$ are lexicographically ordered into vectors, and the hij equality conditions are given for a 3-FoV system.
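One way to carry out this constrained minimization is sketched below. The parameterization enforces the equality constraints by construction; the choice of scipy's Nelder-Mead optimizer is our assumption, as the paper does not specify a solver, and M, X_true, and v0 are hypothetical placeholders for the measured data and the initial estimate.

    import numpy as np
    from scipy.optimize import minimize

    def build_H(v):
        """Six free coefficients with h13 = h23 = h33 and h12 = h22 enforced."""
        h11, h12, h13, h21, h31, h32 = v
        return np.array([[h11, h12, h13],
                         [h21, h12, h13],
                         [h31, h32, h13]])

    def recon_mse(v, M, X_true, sigma_x, sigma_n):
        """Eq. (11) objective: squared error of the LMMSE reconstructions against
        the control ('truth') images. M and X_true are 3 x P pixel matrices."""
        H = build_H(v)
        Cx, Cn = sigma_x**2 * np.eye(3), sigma_n**2 * np.eye(3)
        W = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)
        return np.sum((X_true - W @ M)**2)

    # v0: ideal H values seeded with the measured leakage coefficients.
    # res = minimize(recon_mse, v0, args=(M, X_true, 0.3, 0.002), method="Nelder-Mead")
    # H_mse = build_H(res.x)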

4. Results

4.1. Simulation study

In a conventional imaging system, each measurement is a direct representation of a sub-field of view. In contrast, a set of measurements in a superposition imaging system must be used to recover the individual sub-fields of view. A numerical simulation was performed for this 3-FoV system architecture using images of various buildings as non-overlapping parts of the object space. It should be noted that only a read noise limited system is considered. The simulated composite image measurements, with additive zero mean Gaussian noise with σn = 0.01 for a pixel dynamic range of [0, 1] and each leakage coefficient set to 30%, are shown in Fig. 3. Using the simulated measurements, the optimization problem given by Eq. (11) was solved to find an estimate of the forward model $\mathbf{H}_{\mathrm{MSE}}^{*,\mathrm{sim}}$. Together with the LMMSE operator, this forward model estimate was applied to the simulated composite image measurements to reconstruct the individual sub-images shown in Fig. 4. Here we employ an image SNR metric to quantify the quality of the reconstructed images, defined as $\widehat{\mathrm{SNR}} = 10\log_{10}\!\left(E[\lVert\hat{X}_j\rVert_2^2]\,/\,E[\lVert X_j^{\mathrm{exp}} - \hat{X}_j\rVert_2^2]\right)$. This results in a $\widehat{\mathrm{SNR}}$ of 18.7, 18.1, and 21.1 dB for X̂1, X̂2, and X̂3, respectively. For comparison, images corrupted by a relative noise strength of σn = 0.01 from a conventional imager are shown in Fig. 5. For this noise level, the reconstructed images from the superposition imager are comparable to the images from the conventional imager.
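The image SNR metric used above is straightforward to compute; a minimal sketch:

    import numpy as np

    def image_snr_db(x_hat, x_true):
        """10*log10( E[||x_hat||^2] / E[||x_true - x_hat||^2] ), per the text.
        The pixelwise expectations reduce to a ratio of sums over the image."""
        return 10.0 * np.log10(np.sum(x_hat**2) / np.sum((x_true - x_hat)**2))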

Fig. 3. Simulated composite image measurements with σn = 0.01: (a) 〈00〉: X1 + X2 + X3, (b) 〈01〉: X2 + X3 + leakage, (c) 〈11〉: X3 + leakage.

Fig. 4. Simulated LMMSE reconstruction: (a) X̂1: “ECE”, (b) X̂2: “Old Main”, (c) X̂3: “OSC”.

Fig. 5. Simulated conventional imager with σn = 0.01: (a) “ECE”, (b) “Old Main”, (c) “OSC”.

This is consistent with a reconstruction SNR comparison between the conventional and superposition imagers. Note that for the conventional imager the measurement SNR is the same as the reconstruction SNR. Shown in Fig. 6 is the average reconstruction SNR for both the conventional, Eq. (1), and superposition, Eq. (10), cases as the number of sub-fields of view changes, assuming a full well capacity of ηQτ = 100 ke⁻, for three different read noise levels. Here it can be seen that the SNRs of the two approaches are comparable, and it should be emphasized that the superposition imaging system has lower size and weight than a conventional wide FoV imaging system. In a conventional imaging system, as the number of fields of view (J) increases, the signal power decreases while the noise power increases, resulting in a downward trend. In the superposition imaging system, as J increases, the signal power on average remains the same while the noise power increases, again resulting in a downward trend, although at a slower rate than for the conventional imager. In the case of the superposition imager, the multiplex advantage becomes significant at low light levels or with high read noise levels. An interesting result from Fig. 6 is that at J = 14 and σn = 0.02 there is a crossover between the conventional imager SNR and the superposition imager SNR. As the read noise increases, the crossover occurs at a lower value of J; for σn = 0.03 the superposition imager SNR already exceeds the conventional imager SNR at J = 9. Selected numerical values from Fig. 6 for σn = 0.03 have been extracted into Table 1 to quantify this crossover.

Fig. 6. Average reconstruction SNR vs. number of fields of view.

Table 1. Conventional and superposition SNR comparison for σn = 0.03.

4.2. Experimental results

A prototype of the thin-film shuttered multi-beamsplitter architecture described above was constructed; the device is shown in Fig. 7. The system parameters for this device are d1 = 60 mm, d2 = 51 mm, d3 = 43 mm, ϕhoriz = 7.3°, ϕvert = 5.5°, θ1 = 65°, θ2 = 55°, and θ3 = 45°. The beamsplitter assembly has dimensions of about 40 mm × 40 mm × 30 mm. Also, the 50 mm C-mount lens was operated with an aperture setting of f/2.8. Parts from readily available commercial LC 3-D shutter eyeglasses were used as our thin-film shutters. In this design, the front beamsplitter requires a high transmission coefficient to ensure sufficient light throughput. As a result, an LC shutter panel extracted from the eyeglass assembly was used directly as the front (j = 3) beamsplitter in the optical multiplexer device. For this prototype, an LC shutter panel was used in series with a standard beamsplitter for the second (j = 2) stage. As the second stage is relatively thick, we expect multiple reflections of the sub-field of view to appear in the composite image.

Fig. 7. Prototype optical multiplexer.

A low resolution depiction of the object space (scene) placed 3.3 meters from the optical multiplexer is shown in Fig. 8. This object space has a horizontal angular extent of about 45° and a vertical angular extent of about 17°. In this figure, red boxes have been added to demarcate the sub-fields of view seen by the optical multiplexer. The horizontal separation of the sub-fields of view is based on the rotation angles chosen for the mirror and beamsplitters in the device. In contrast, high resolution tiles of the object space from a “soda straw” imager that requires pointing are shown in Fig. 9. For the camera used, the observation time per measurement is τ/3 = 25 frames × (1/30) s/frame ≈ 0.83 s, and the composite images after time-averaging 25 frames are shown in Fig. 10. Each composite image measurement has an equivalent horizontal angular extent ϕhoriz = 7.3° and a vertical angular extent ϕvert = 5.5°. The estimated equivalent noise standard deviation for these time-averaged images is σn ≈ 0.002. Solving Eq. (11) using the measured data results in the following estimate for the forward model

$$ \mathbf{H}_{\mathrm{MSE}}^{*} = \begin{bmatrix} 0.8577 & 1.0000 & 0.9963 \\ 0.3164 & 1.0000 & 0.9963 \\ 0.3015 & 0.3109 & 0.9963 \end{bmatrix}. \tag{12} $$
Using the time-averaged measurements and applying the LMMSE operator in Eq. (9) with the forward model estimate in Eq. (12) yields the individual reconstructed sub-images shown in Fig. 11. In this figure the numerical markers from each sub-field of view can be readily identified, and the effective horizontal angular extent is now 21.9°. Comparing the LMMSE reconstructions with (the measured) $X_j^{\mathrm{exp}}$ results in $\widehat{\mathrm{SNR}}$ values of 4, 3.9, and 3.3 dB for X̂1, X̂2, and X̂3, respectively. Repeating the numerical simulation using the control measurements instead of buildings and Eq. (12) together with σn = 0.002 yields $\widehat{\mathrm{SNR}}$ values of 17, 16.6, and 2.2 dB for X̂1, X̂2, and X̂3, respectively. One reason for this discrepancy is that the control measurements are acquired with only a single sub-image present and the remaining sub-images physically blocked. This results in low-illumination conditions coupled with the non-linear effects of the imaging sensor and associated electronics. Numerically summing the control measurements yields an estimated composite image $\hat{M} = \sum_j \hat{X}_j^{\langle 00\rangle}$ that has lower pixel intensity values than the corresponding optically summed measurement. Moreover, the simulation uses an estimate of the forward channel matrix, which may differ from the actual experimental forward channel response, further contributing to the discrepancy between the image fidelity predicted by the simulation and the observed experimental performance.

Fig. 8. Low resolution wide field view of the object space. The “1”, “2”, and “3” marker dimensions are 14 cm × 21.6 cm.

Fig. 9. High resolution “soda straw” view of the object space: (a) FoV 1, (b) FoV 2, (c) FoV 3.

Fig. 10. Prototype optical multiplexer composite image measurements: (a) 〈00〉: X1 + X2 + X3, (b) 〈01〉: X2 + X3 + leakage, (c) 〈11〉: X3 + leakage.

Fig. 11. LMMSE reconstruction: (a) X̂1, (b) X̂2, (c) X̂3.

4.3. Discussion

There are practical limitations to the number of sub-fields of view that can be collected with this architecture. For this LC based system, a primary consideration is the light throughput of the optical multiplexer. The input polarizer causes a factor of two decrease in the light collection ability of the system, while the shutter open state transmission losses are squared. To mitigate these shutter transmission losses, the liquid crystal orientation should be aligned with the look direction of the respective sub-field of view so as to provide maximum transmission in the open state and minimum transmission in the closed state. Misalignment is a primary reason the leakage coefficients are relatively large for this experimental device: the LC shutters used are optimized to block normally incident light; in this prototype, however, the LC shutters are rotated. That is, the shutter for the jth beamsplitter (mirror) is collocated at the j + 1 position. As the number of multiplexed sub-images contained in a composite image measurement increases, the dynamic range and quantization resolution of the imaging sensor and associated electronics become increasingly important. Since the dynamic range of each sub-image occupies a proportionately smaller region of the sensor dynamic range, quantization error further limits measurement fidelity and, consequently, reconstructed image fidelity.

Although some non-linear effects are unavoidable, it should be emphasized that the camera must operate in a linear regime for the valid shutter state combinations. Certain camera features such as automatic gain control and automatic exposure should be disabled as they can have an adverse effect on the post-processing procedure. Another feature that should be disabled is gamma-correction as this may introduce additional non-linearity to the measurement. In this experiment, another potential source of non-linearity is the analog frame grabber used to capture and digitize the camera analog video output.

While H depends on the physical properties of the optical multiplexer, many of these parameters can be difficult to obtain. Thus, there is some uncertainty in identifying the actual system H. In the optimization based approach used to estimate H, it should be noted that solving Eq. (11) with three measurements and six free variables is an underdetermined problem. The mismatch between the actual H and $\mathbf{H}_{\mathrm{MSE}}^{*}$ is evident in the linearly reconstructed sub-images shown in Fig. 11. A faint outline of the numerical marker of X2 and line patterns from X3 are visible in Fig. 11(a). In Fig. 11(b), the line patterns from X3 are visible. Lastly, in Fig. 11(c), artifacts from X1 and X2 can be noticed. Despite this mismatch, the sub-image reconstructions are readily identifiable. Further, it was assumed that H is independent of pixel position; in an actual system implementation, however, H can be spatially dependent. Careful characterization of the optical multiplexer may improve the estimation of H. Another practical consideration is that the physical properties of the optical multiplexer may vary over time, e.g., due to temperature changes, thus affecting H and therefore reconstruction performance. A smaller mismatch between the actual and estimated forward models will result in fewer artifacts in the reconstructed sub-images.

The prototype system described here reconstructs a wide field of view of a static scene. When there is motion, the measurements should be “locally static,” i.e., the measurements should be acquired faster than the motion present in the scene. If this is not the case, ghosting (blurring) will appear in the reconstructed sub-images. For a dynamic scene, the acquisition time will ultimately be limited by the switching speed of the thin-film shutters and the scene illumination level.

5. Conclusion

In this paper, a computational imaging system for providing a high resolution wide field of view image of a static scene using narrow field of view imaging optics has been presented and demonstrated. This compact thin-film shuttered optical multiplexer adds sub-image diversity to the measurements and extends the field of view, with a well-known linear image reconstruction technique applied to disambiguate the composite images. The architecture is mechanically robust as the optical elements are fixed and pointing is not required, leading to reduced system size and weight. Practical issues such as transmission loss, however, would eventually limit the total number of sub-images that can be combined. While the number of composite image measurements here equals the number of sub-fields of view, there is ongoing interest in compressive sensing approaches that exploit object priors such as sparsity to reduce the number of required measurements.

6. Acknowledgments

The authors gratefully acknowledge the financial support of the Lockheed Martin Corporation and the Defense Advanced Research Projects Agency (DARPA) under the Large Area Coverage Optical Search-while-Track and Engage (LACOSTE) program.

7. Disclaimer

Approved for public release. Distribution unlimited. Distribution statement A.

“The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.”

References and links

1. H. H. Barrett and K. J. Myers, Foundations of Image Science (Wiley, 2004).

2. R. D. Fiete and T. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).

3. A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28, 4996–4998 (1989).

4. M. D. Stenner, P. Shankar, and M. A. Neifeld, “Wide-field feature-specific imaging,” in Frontiers in Optics (Optical Society of America, 2007), paper FMJ2.

5. R. F. Marcia, C. Kim, C. Eldeniz, J. Kim, D. J. Brady, and R. M. Willett, “Superimposed video disambiguation for increased field of view,” Opt. Express 16, 16352–16363 (2008).

6. S. Uttam, N. A. Goodman, M. A. Neifeld, C. Kim, R. John, J. Kim, and D. Brady, “Optically multiplexed imaging with superposition space tracking,” Opt. Express 17, 1691–1713 (2009).

7. A. Mahalanobis, M. Neifeld, V. K. Bhagavatula, T. Haberfelde, and D. Brady, “Off-axis sparse aperture imaging using phase optimization techniques for application in wide-area imaging systems,” Appl. Opt. 48, 5212–5224 (2009).

8. J. A. Decker, Jr. and M. Harwit, “Sequential encoding with multislit spectrometers,” Appl. Opt. 7, 2205–2209 (1968).

9. J. A. Decker, Jr., “Experimental realization of the multiplex advantage with a Hadamard-transform spectrometer,” Appl. Opt. 10, 510–514 (1971).

10. M. E. Gehm, S. T. McCain, N. P. Pitsianis, D. J. Brady, P. Potuluri, and M. E. Sullivan, “Static two-dimensional aperture coding for multimodal, multiplex spectroscopy,” Appl. Opt. 45, 2965–2974 (2006).

11. K. M. Douglass, T. Kohlgraf-Owens, J. Ellis, C. Toma, A. Mahalanobis, and A. Dogariu, “Expanded field of view using polarization multiplexing,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CWA5.

12. D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. 27, 16–18 (2002).

13. S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, 1993).
