Quasi light fields: extending the light field to coherent radiation

Abstract

Imaging technologies such as dynamic viewpoint generation are engineered for incoherent radiation using the traditional light field, and for coherent radiation using electromagnetic field theory. We present a model of coherent image formation that strikes a balance between the utility of the light field and the comprehensive predictive power of Maxwell’s equations. We synthesize research in optics and signal processing to formulate, capture, and form images from quasi light fields, which extend the light field from incoherent to coherent radiation. Our coherent cameras generalize the classic beamforming algorithm in sensor array processing and invite further research on alternative notions of image formation.

© 2009 Optical Society of America

1. INTRODUCTION

The light field represents radiance as a function of position and direction, thereby decomposing optical power flow along rays. The light field is an important tool used in many imaging applications in different disciplines, but is traditionally limited to incoherent light. In computer graphics, a rendering pipeline can compute new views at arbitrary camera positions from the light field [1]. In computational photography, a camera can measure the light field and later generate images focused at different depths, after the picture is taken [2]. In electronic displays, an array of projectors can present multiple viewpoints encoded in the light field, enabling 3D television [3]. Many recent incoherent imaging innovations have been made possible by expressing image pixel values as appropriate integrals over light field rays.

For coherent imaging applications, the value of decomposing power by position and direction has long been recognized without the aid of a light field, since the complex-valued scalar field encodes direction in its phase. A hologram encodes multiple viewpoints, but in a different way than the light field [4]. An ultrasound machine generates images focused at different depths, but from air pressure instead of light field measurements [5]. A Wigner distribution function models the operation of optical systems in simple ways, by conveniently inferring direction from the scalar field instead of computing nonnegative light field values [6]. Comparing these applications, coherent imaging uses the scalar field to achieve results similar to those that incoherent imaging obtains with the light field.

Our goal is to provide a model of coherent image formation that combines the utility of the light field with the comprehensive predictive power of the scalar field. The similarities between coherent and incoherent imaging motivate exploring how the scalar field and light field are related, which we address by synthesizing research across three different communities. Each community is concerned with a particular Fourier transform pair and has its own name for the light field. In optics, the pair is position and direction, and Walther discovered the first generalized radiance function by matching power predictions made with radiometry and scalar field theory [7]. In quantum physics, the pair is position and momentum, and Wigner discovered the first quasi-probability distribution, or phase-space distribution, as an aid to computing the expectation value of a quantum operator [8]. In signal processing, the pair is time and frequency, and while instantaneous spectra were used as early as 1890 by Sommerfeld, Ville is generally credited with discovering the first nontrivial quadratic time–frequency distribution by considering how to distribute the energy of a signal over time and frequency [9]. Walther, Wigner, and Ville independently arrived at essentially the same function, which is one of the ways to express a light field for coherent radiation in terms of the scalar field.

The light field has its roots in radiometry, a phenomenological theory of radiative power transport that began with Herschel’s observations of the sun [10], developed through the work of astrophysicists such as Chandrasekhar [11], and culminated with its grounding in electromagnetic field theory by Friberg et al. [12]. The light field represents radiance, which is the fundamental quantity in radiometry, defined as power per unit projected area per unit solid angle. Illuminating engineers would integrate radiance to compute power quantities, although no one could validate these calculations with the electromagnetic field theory formulated by Maxwell. Gershun was one of many physicists who attempted to physically justify radiometry, and who introduced the phrase light field to represent a three-dimensional vector field analogous to the electric and magnetic fields [13]. Gershun’s light field is a degenerate version of the one we discuss, and more closely resembles the time-averaged Poynting vector that appears in a rigorous derivation of geometric optics [14]. Subsequently, Walther generalized radiometry to coherent radiation in two different ways [7, 15], and Wolf connected Walther’s work to quantum physics [16], ultimately leading to the discovery of many more generalized radiance functions [17] and a firm foundation for radiometry [12].

Meanwhile, machine vision researchers desired a representation for all the possible pictures a pinhole camera might take in space–time, which led to the current formulation of the light field. Inspired by Leonardo da Vinci, Adelson and Bergen defined a plenoptic function to describe “everything that can be seen” as the intensity recorded by a pinhole camera parametrized by position, direction, time, and wavelength [18]. Levoy and Hanrahan tied the plenoptic function more firmly to radiometry by redefining Gershun’s phrase light field to mean radiance parametrized by position and direction [1]. Gortler et al. introduced the same construct, but instead called it the lumigraph [19]. Light field is now the dominant terminology used in incoherent imaging contexts.

Our contribution is to describe and characterize all the ways to extend the light field to coherent radiation, and to interpret coherent image formation using the resulting extended light fields. We call our extended light fields quasi light fields, which are analogous to the generalized radiance functions of optics, the quasi-probability and phase-space distributions of quantum physics, and the quadratic class of time–frequency distributions of signal processing. Agarwal et al. have already extended the light field to coherent radiation [17], and the signal processing community has already classified all of the ways to distribute power over time and frequency [20]. Both have traced their roots to quantum physics. But to our knowledge, no one has connected the research to show (i) that the quasi light fields represent all the ways to extend the light field to coherent radiation, and (ii) that the signal processing classification informs which quasi light field to use for a specific application. We further contextualize the references, making any unfamiliar literature more accessible to specialists in other areas.

Our paper is organized as follows. We describe the traditional light field in Section 2. We formulate quasi light fields in Section 3 by reviewing and relating the relevant research in optics, quantum physics, and signal processing. In Section 4, we describe how to capture quasi light fields, discuss practical sampling issues, and illustrate the impact of light field choice on energy localization. In Section 5, we describe how to form images with quasi light fields. We derive a light-field camera, demonstrate and compensate for diffraction limitations in the near zone, and generalize the classic beamforming algorithm in sensor array processing. We conclude the paper in Section 6, where we remark on the utility of quasi light fields and future perspectives on image formation.

2. TRADITIONAL LIGHT FIELD

The light field is a useful tool for incoherent imaging because it acts as an intermediary between the camera and the picture, decoupling information capture and image production: the camera measures the light field, from which many different traditional pictures can be computed. We define a pixel in the image of a scene by a surface patch σ and a virtual aperture (Fig. 1 ). Specifically, we define the pixel value as the power P radiated by σ toward the aperture, just as an ideal single-lens camera would measure. According to radiometry, P is an integral over a bundle of light field rays [21]:

$$P=\int_{\sigma}\int_{\Omega_{\mathbf{r}}}L(\mathbf{r},\mathbf{s})\cos\psi\,d^{2}s\,d^{2}r,\qquad(1)$$
where $L(\mathbf{r},\mathbf{s})$ is the radiance at position $\mathbf{r}$ and in unit direction $\mathbf{s}$, $\psi$ is the angle that $\mathbf{s}$ makes with the surface normal at $\mathbf{r}$, and $\Omega_{\mathbf{r}}$ is the solid angle subtended by the virtual aperture at $\mathbf{r}$. The images produced by many different conventional cameras can be computed from the light field using Eq. (1) [22].
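
To make the role of Eq. (1) concrete, the following sketch (ours, not the paper's code) approximates the pixel integral by a double sum over sampled surface-patch points and virtual-aperture points; the radiance function, patch size, and aperture geometry are hypothetical placeholders.

```python
import numpy as np

# Illustrative discrete approximation of Eq. (1): the pixel value P is a sum of
# radiance samples L(r, s) over points r on the surface patch and over the
# directions s toward points on the virtual aperture, weighted by cos(psi) and
# by the area and solid-angle elements. All geometry here is hypothetical.

def pixel_value(L, patch_points, aperture_points, patch_normal, dA, dA_aperture):
    """Approximate P = int_sigma int_Omega_r L(r, s) cos(psi) d^2 s d^2 r."""
    P = 0.0
    for r in patch_points:
        for q in aperture_points:
            v = q - r
            dist = np.linalg.norm(v)
            s = v / dist                                  # unit direction of the ray
            cos_psi = np.dot(s, patch_normal)             # angle with the surface normal
            d_omega = dA_aperture * abs(s[2]) / dist**2   # solid angle of one aperture element
            P += L(r, s) * cos_psi * d_omega * dA
    return P

# Example: constant radiance, a 1 cm patch viewed through a 2 cm aperture 1 m away.
L = lambda r, s: 1.0
patch = [np.array([x, y, 0.0]) for x in np.linspace(-0.005, 0.005, 5)
                               for y in np.linspace(-0.005, 0.005, 5)]
aperture = [np.array([x, y, 1.0]) for x in np.linspace(-0.01, 0.01, 5)
                                  for y in np.linspace(-0.01, 0.01, 5)]
print(pixel_value(L, patch, aperture, np.array([0.0, 0.0, 1.0]),
                  dA=(0.01 / 5) ** 2, dA_aperture=(0.02 / 5) ** 2))
```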

The light field has an important property that allows us to measure it remotely: the light field is constant along rays in a lossless medium [21]. To measure the light field on the surface of a scene, we follow the rays for the images we are interested in, and intercept those rays with our camera hardware (Fig. 1). However, our hardware must be capable of measuring the radiance at a point and in a specific direction; a conventional camera that simply measures the irradiance at a point is insufficient. We can discern directional power flow using a lens array, as is done in a plenoptic camera [2].

In order to generate coherent images using the same framework described above, we must overcome three challenges. First, we must determine how to measure power flow by position and direction to formulate a coherent light field. Second, we must capture the coherent light field remotely and be able to infer behavior at the scene surface. Third, we must be able to use integral (1) to produce correct power values, so that we can form images by integrating over the coherent light field. We address each challenge in a subsequent section.

3. FORMULATING QUASI LIGHT FIELDS

We motivate, systematically generate, and characterize quasi light fields by relating existing research. We begin in Subsection 3A with research in optics that frames the challenge of extending the light field to coherent radiation in terms of satisfying a power constraint required for radiometry to make power predictions consistent with scalar field theory. While useful in developing an intuition for quasi light fields, the power constraint does not allow us to easily determine the quasi light fields. We therefore proceed in Subsection 3B to describe research in quantum physics that systematically generates quasi light fields satisfying the power constraint and that shows how the quasi light fields are true extensions that reduce to the traditional light field under certain conditions. While useful for generating quasi light fields, the quantum physics approach does not allow us to easily characterize them. Therefore, in Subsection 3C we map the generated quasi light fields to the quadratic class of time–frequency distributions, which has been extensively characterized and classified by the signal processing community. By relating research in optics, quantum physics, and signal processing, we express all the ways to extend the light field to coherent radiation, and provide insight on how to select an appropriate quasi light field for a particular application.

We assume a perfectly coherent complex scalar field U(r) at a fixed frequency ν for simplicity, although we comment in Section 6 on how to extend the results to broadband, partially coherent radiation. The radiometric theory we discuss assumes a planar source at z=0. Consequently, although the light field is defined in three-dimensional space, much of our analysis is confined to planes z=z0 parallel to the source. Therefore, for convenience, we use r=(x,y,z) and s=(sx,sy,sz) to indicate three-dimensional vectors and r=(x,y) and s=(sx,sy) to indicate two-dimensional projected versions.

3A. Intuition from Optics

An extended light field must produce accurate power transport predictions consistent with rigorous theory; thus the power computed from the scalar field using wave optics determines the allowable light fields via the laws of radiometry. One way to find extended light fields is to guess a light field equation that satisfies this power constraint, which is how Walther identified the first extended light field [7]. The scenario involves a planar source at z=0 described by U(r), and a sphere of large radius ρ centered at the origin. We use scalar field theory to compute the flux through part of the sphere, and then use the definition of radiance to determine the light field from the flux.

According to scalar field theory, the differential flux dΦ through a portion of the sphere subtending differential solid angle dΩ is given by integrating the radial component of the energy flux density vector F. From diffraction theory, the scalar field in the far zone is

$$U(\rho\mathbf{s})=-\frac{2\pi i}{k}\,s_{z}\,\frac{\exp(ik\rho)}{\rho}\,a(\mathbf{s}),\qquad(2)$$
where $k=2\pi/\lambda$ is the wave number, $\lambda$ is the wavelength, and
$$a(\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}\int U(\mathbf{r})\exp(-ik\mathbf{s}\cdot\mathbf{r})\,d^{2}r\qquad(3)$$
is the plane wave component in direction $\mathbf{s}$ [23]. Now
$$\mathbf{F}(\rho\mathbf{s})=\left(\frac{2\pi}{k}\right)^{2}a(\mathbf{s})\,a^{*}(\mathbf{s})\,\frac{s_{z}^{2}}{\rho^{2}}\,\mathbf{s},\qquad(4)$$
so that
$$d\Phi=\left(\frac{2\pi}{k}\right)^{2}s_{z}^{2}\,a(\mathbf{s})\,a^{*}(\mathbf{s})\,d\Omega.\qquad(5)$$

According to radiometry, radiant intensity is flux per unit solid angle

$$I(\mathbf{s})=\frac{d\Phi}{d\Omega}=\left(\frac{2\pi}{k}\right)^{2}s_{z}^{2}\,a(\mathbf{s})\,a^{*}(\mathbf{s}).\qquad(6)$$
Radiance is $I(\mathbf{s})$ per unit projected area [21], and this is where the guessing happens: there are many ways to distribute Eq. (6) over projected area by factoring out $s_z$ and an outer integral over the source plane, but none yield light fields that satisfy all the traditional properties of radiance [24]. One way to factor Eq. (6) is to substitute the expression for $a(\mathbf{s})$ from Eq. (3) into Eq. (6) and change variables:
$$I(\mathbf{s})=s_{z}\int\left[\left(\frac{k}{2\pi}\right)^{2}s_{z}\int U\!\left(\mathbf{r}+\tfrac{1}{2}\mathbf{r}'\right)U^{*}\!\left(\mathbf{r}-\tfrac{1}{2}\mathbf{r}'\right)\exp(-ik\mathbf{s}\cdot\mathbf{r}')\,d^{2}r'\right]d^{2}r.\qquad(7)$$
The bracketed expression is Walther's first extended light field
$$L^{W}(\mathbf{r},\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\,W\!\left(\mathbf{r},\frac{\mathbf{s}}{\lambda}\right),\qquad(8)$$
where
$$W(\mathbf{r},\mathbf{s})=\int U\!\left(\mathbf{r}+\tfrac{1}{2}\mathbf{r}'\right)U^{*}\!\left(\mathbf{r}-\tfrac{1}{2}\mathbf{r}'\right)\exp(-i2\pi\mathbf{s}\cdot\mathbf{r}')\,d^{2}r'\qquad(9)$$
is the Wigner distribution [25]. We may manually factor Eq. (6) differently to obtain other extended light fields in an ad hoc manner, but it is hard to find and verify the properties of all extended light fields this way, and we would have to individually analyze each light field that we do manage to find. So instead, we pursue a systematic approach to exhaustively identify and characterize the extended light fields that guarantee the correct radiant intensity in Eq. (6).
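
As a numerical illustration of Eqs. (8, 9), the sketch below (ours, with illustrative parameters) evaluates the Wigner distribution of a sampled one-dimensional field consisting of a tilted Gaussian beam; the peak over direction recovers the beam's tilt. Scaling the result by $(k/2\pi)^2 s_z$ as in Eq. (8) would give Walther's extended light field.

```python
import numpy as np

# Minimal sketch (ours) of the Wigner distribution in Eq. (9) for a sampled 1-D
# scalar field: a Gaussian beam tilted so that its direction component is
# s_y ~ 0.2. Wavelength, sample spacing, and beam width are illustrative.

lam = 3e-3                        # wavelength [m]
k = 2 * np.pi / lam
d = lam / 8                       # sample spacing [m]
y = np.arange(-256, 256) * d
U = np.exp(-(y / 0.02) ** 2) * np.exp(1j * k * 0.2 * y)

def wigner(U, y, s_values, n_max=128):
    """W(y_i, s_j) ~ sum_n U(y_i + n d/2) U*(y_i - n d/2) exp(-i k n d s_j)."""
    Uc = lambda q: np.interp(q, y, U.real) + 1j * np.interp(q, y, U.imag)
    n = np.arange(-n_max, n_max + 1)
    phases = np.exp(-1j * k * np.outer(n * d, s_values))
    W = np.array([(Uc(yi + n * d / 2) * np.conj(Uc(yi - n * d / 2))) @ phases
                  for yi in y])
    return W.real                  # the Wigner distribution is real-valued

s_vals = np.linspace(-0.5, 0.5, 101)
W = wigner(U, y, s_vals)
print("peak direction:", s_vals[np.argmax(W.sum(axis=0))])   # expect ~0.2
```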

3B. Explicit Extensions from Quantum Physics

The mathematics of quantum physics provides us with a systematic extended light field generator that factors the radiant intensity in Eq. (6) in a structured way. Walther’s extended light field in Eq. (8) provides the hint for this connection between radiometry and quantum physics. Specifically, Wolf recognized the similarity between Walther’s light field and the Wigner phase-space distribution [8] from quantum physics [16]. Subsequently, Agarwal et al. repurposed the mathematics behind phase-space representation theory to generate new light fields instead of distributions [17]. We summarize their approach, define the class of quasi light fields, describe how quasi light fields extend traditional radiometry, and show how quasi light fields can be conveniently expressed as filtered Wigner distributions.

The key insight of Agarwal et al. was to introduce a position operator r̂ and a direction operator ŝ that obey the commutation relations [26]

$$[\hat{x},\hat{s}_{x}]=\frac{i\lambda}{2\pi},\qquad[\hat{y},\hat{s}_{y}]=\frac{i\lambda}{2\pi},\qquad(10)$$
and to map the different ways of ordering the operators to different extended light fields. This formulation is valuable for two reasons. First, relations (10) are analogous to the quantum-mechanical relations for position and momentum, allowing us to exploit the phase-space distribution generator from quantum physics for our own purposes, thereby providing an explicit formula for extended light fields. Second, in the geometric optics limit as $\lambda\rightarrow 0$, the operators commute per relations (10), so that all of the extended light fields collapse to the same function that can be related to the traditional light field. Therefore, the formulation of Agarwal et al. not only provides us with different ways of expressing the light field for coherent radiation, but also explains how these differences arise as the wavelength becomes nonnegligible.

We now summarize the phase-space representation calculus that Agarwal and Wolf invented [27] to map operator orderings to functions, which Agarwal et al. later applied to radiometry [17], culminating in a formula for extended light fields. The phase-space representation theory generates a function L̃Ω from any operator L̂ for each distinct way Ω of ordering collections of r̂ and ŝ. So by choosing a specific L̂ defined by its matrix elements using the Dirac notation [26],

$$\langle\mathbf{r}_{R}|\hat{L}|\mathbf{r}_{C}\rangle=U(\mathbf{r}_{R})\,U^{*}(\mathbf{r}_{C}),\qquad(11)$$
and supplying L̂ as input, we obtain the extended light fields
$$L_{\Omega}(\mathbf{r},\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\,\tilde{L}_{\Omega}(\mathbf{r},\mathbf{s})\qquad(12)$$
as outputs. The power constraint from Subsection 3A translates to a minor constraint on the allowed orderings $\Omega$, so that $L_\Omega$ can be factored from Eq. (6). Finally, there is an explicit formula for $L_\Omega$ [27], which in the form of Friberg et al. [12] reads
$$L_{\Omega}(\mathbf{r},\mathbf{s})=\frac{k^{2}}{(2\pi)^{4}}\,s_{z}\iiint\tilde{\Omega}(\mathbf{u},k\mathbf{r}')\exp[i\mathbf{u}\cdot(\mathbf{r}''-\mathbf{r})]\exp(-ik\mathbf{s}\cdot\mathbf{r}')\,U\!\left(\mathbf{r}''+\tfrac{1}{2}\mathbf{r}'\right)U^{*}\!\left(\mathbf{r}''-\tfrac{1}{2}\mathbf{r}'\right)d^{2}u\,d^{2}r'\,d^{2}r'',\qquad(13)$$
where Ω̃ is a functional representation of the ordering Ω.

Previous research has related the extended light fields $L_\Omega$ to the traditional light field by examining how the $L_\Omega$ behave for globally incoherent light of a small wavelength, an environment technically modeled by a quasi-homogeneous source in the geometric optics limit where $\lambda\rightarrow 0$. As $\lambda\rightarrow 0$, $\hat{\mathbf{r}}$ and $\hat{\mathbf{s}}$ commute per relations (10), so that all orderings $\Omega$ are equivalent and all of the extended light fields $L_\Omega$ collapse to the same function. Since, in the source plane, Foley and Wolf showed that one of those light fields behaves like traditional radiance [28] for globally incoherent light of a small wavelength, all of the $L_\Omega$ behave like traditional radiance for globally incoherent light of a small wavelength. Furthermore, Friberg et al. showed that many of the $L_\Omega$ are constant along rays for globally incoherent light of a small wavelength [12]. The $L_\Omega$ thereby subsume the traditional light field, and globally incoherent light of a small wavelength is the environment in which traditional radiometry holds.

To more easily relate LΩ to the signal processing literature, we conveniently express LΩ as a filtered Wigner distribution. We introduce a function Π and substitute

$$\tilde{\Omega}(\mathbf{u},\mathbf{v})=\iint\Pi(\mathbf{a},\mathbf{b})\exp[i(\mathbf{a}\cdot\mathbf{u}+\mathbf{b}\cdot\mathbf{v})]\,d^{2}a\,d^{2}b\qquad(14)$$
into Eq. (13), integrate first over $\mathbf{u}$, then over $\mathbf{a}$, and finally substitute $\mathbf{b}=\mathbf{s}-\mathbf{s}'$:
$$L_{\Omega}(\mathbf{r},\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\iint\Pi(\mathbf{r}-\mathbf{r}',\mathbf{s}-\mathbf{s}')\,W\!\left(\mathbf{r}',\frac{\mathbf{s}'}{\lambda}\right)d^{2}r'\,d^{2}s'=\left(\frac{k}{2\pi}\right)^{2}s_{z}\,\Pi(\mathbf{r},\mathbf{s})\otimes W\!\left(\mathbf{r},\frac{\mathbf{s}}{\lambda}\right).\qquad(15)$$
The symbol ⊗ in Eq. (15) denotes convolution in both r and s. Each filter kernel Π yields a different light field. There are only minor restrictions on Π, or equivalently on Ω̃. Specifically, Agarwal and Wolf’s calculus requires that [27]
$$1/\tilde{\Omega}\ \text{be an entire analytic function with no zeros on the real component axes}.\qquad(16)$$
The derivation additionally requires that
$$\tilde{\Omega}(\mathbf{0},\mathbf{v})=1\quad\text{for all}\ \mathbf{v},\qquad(17)$$
so that LΩ satisfies the laws of radiometry and is consistent with Eq. (6) [17].

We call the functions LΩ, the restricted class of extended light fields that we have systematically generated, quasi light fields, in recognition of their connection with quasi-probability distributions in quantum physics.
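
As a numerical illustration of Eq. (15), the sketch below (ours; the Gaussian kernel and all parameters are illustrative choices, not prescriptions from the references) smooths the Wigner distribution of a two-component field with a separable Gaussian $\Pi$, suppressing the cross term that appears midway between the two plane-wave directions at the cost of some localization.

```python
import numpy as np

# Minimal sketch (ours) of a quasi light field as a filtered Wigner distribution,
# Eq. (15): smooth the Wigner distribution of a two-component field with an
# illustrative Gaussian kernel Pi and compare the cross-term level before and
# after smoothing.

lam = 3e-3; k = 2 * np.pi / lam
d = lam / 8
y = np.arange(-128, 128) * d
s = np.linspace(-0.5, 0.5, 128)
U = np.exp(1j * k * 0.2 * y) + np.exp(-1j * k * 0.3 * y)   # components at s_y = 0.2 and -0.3

# Wigner distribution on the (y, s) grid, as in Eq. (9).
n = np.arange(-64, 65)
Uc = lambda q: np.interp(q, y, U.real) + 1j * np.interp(q, y, U.imag)
phases = np.exp(-1j * k * np.outer(n * d, s))
W = np.array([(Uc(yi + n * d / 2) * np.conj(Uc(yi - n * d / 2))) @ phases for yi in y]).real

# Gaussian smoothing kernel Pi(y, s), applied as a circular convolution via the FFT.
sig_y, sig_s = 10 * d, 0.05
Y, S = np.meshgrid(y, s, indexing="ij")
Pi = np.exp(-0.5 * (Y / sig_y) ** 2 - 0.5 * (S / sig_s) ** 2)
Pi /= Pi.sum()
L_quasi = np.real(np.fft.ifft2(np.fft.fft2(W) * np.fft.fft2(np.fft.ifftshift(Pi))))

i_cross = np.argmin(np.abs(s - (-0.05)))    # the cross term sits near s = -0.05
print("cross-term level, Wigner vs smoothed:",
      np.abs(W[:, i_cross]).mean(), np.abs(L_quasi[:, i_cross]).mean())
```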

3C. Characterization from Signal Processing

Although we have identified quasi light fields and justified how they extend the traditional light field, we must still show that we have found all possible ways to extend the light field to coherent radiation, and we must indicate how to select a quasi light field for a specific application. We address both concerns by relating quasi light fields to bilinear forms of U and U* that are parameterized by position and direction. First, such bilinear forms reflect all the different ways to represent the energy distribution of a complex signal in signal processing, and therefore contain all possible extended light fields, allowing us to identify any unaccounted for by quasi light fields. Second, we may use the signal processing classification of bilinear forms to characterize quasi light fields and guide the selection of one for an application.

To relate quasi light fields to bilinear forms, we must express the filtered Wigner distribution in Eq. (15) as a bilinear form. To this end, we first express the filter kernel $\Pi$ in terms of another function K:

$$\Pi(\mathbf{a},\mathbf{b})=\int K\!\left(\mathbf{a}+\tfrac{\lambda}{2}\mathbf{v},\,\mathbf{a}-\tfrac{\lambda}{2}\mathbf{v}\right)\exp(-i2\pi\mathbf{b}\cdot\mathbf{v})\,d^{2}v.\qquad(18)$$
We substitute Eq. (18) into Eq. (15), integrate first over $\mathbf{s}'$, then over $\mathbf{v}$, and finally substitute
$$\mathbf{r}_{R}=\mathbf{r}'+\tfrac{1}{2}\mathbf{r}'',\qquad\mathbf{r}_{C}=\mathbf{r}'-\tfrac{1}{2}\mathbf{r}''\qquad(19)$$
to express the quasi light field as
$$L(\mathbf{r},\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\iint U(\mathbf{r}_{R})\left\{K(\mathbf{r}_{R}-\mathbf{r},\mathbf{r}_{C}-\mathbf{r})\exp[-ik\mathbf{s}\cdot(\mathbf{r}_{R}-\mathbf{r}_{C})]\right\}U^{*}(\mathbf{r}_{C})\,d^{2}r_{R}\,d^{2}r_{C}.\qquad(20)$$
We recognize that Eq. (20) is a bilinear form of U and U*, with kernel indicated by the braces.

The structure of the kernel of the bilinear form in Eq. (20) limits L to a shift-invariant energy distribution. Specifically, translating the scalar field in Eq. (20) in position and direction orthogonal to the z-axis according to

$$U(\mathbf{r})\rightarrow U(\mathbf{r}-\mathbf{r}_{0})\exp(ik\mathbf{s}_{0}\cdot\mathbf{r})\qquad(21)$$
results in a corresponding translation in position and direction in the light field, which after rearranging terms, becomes
$$L(\mathbf{r},\mathbf{s})\rightarrow L(\mathbf{r}-\mathbf{r}_{0},\mathbf{s}-\mathbf{s}_{0}).\qquad(22)$$
Such shift-invariant bilinear forms compose the quadratic class of time–frequency distributions, which is sometimes misleadingly referred to as Cohen’s class [20].

The quasi light fields represent all possible ways of extending the light field to coherent radiation. This is because any reasonably defined extended light field must be shift-invariant in position and direction, as translating and rotating coordinates should modify the scalar field and light field representations in corresponding ways. Thus, on the one hand, an extended light field must be a quadratic time–frequency distribution. On the other hand, Eq. (20) implies that quasi light fields span the entire class of quadratic time–frequency distributions, apart from the constraints on $\Pi$ described at the end of Subsection 3B. Constraint (17) is necessary to satisfy the power constraint implied by Eq. (6), which any extended light field must satisfy. In contrast, constraint (16) is a technical detail concerning analyticity and the location of zeros; extended light fields strictly need not satisfy this mild constraint, but the light fields that are ruled out are well-approximated by light fields that satisfy it.

We obtain a concrete sensor array processing interpretation of quasi light fields by grouping the exponentials in Eq. (20) with U instead of K:

$$L(\mathbf{r},\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\iint\left\{U(\mathbf{r}_{R})\exp[ik\mathbf{s}\cdot(\mathbf{r}-\mathbf{r}_{R})]\right\}K(\mathbf{r}_{R}-\mathbf{r},\mathbf{r}_{C}-\mathbf{r})\left\{U(\mathbf{r}_{C})\exp[ik\mathbf{s}\cdot(\mathbf{r}-\mathbf{r}_{C})]\right\}^{*}d^{2}r_{R}\,d^{2}r_{C}.\qquad(23)$$
The integral in Eq. (23) is the expected value of the energy of the output of a spatial filter with impulse response $\exp(ik\mathbf{s}\cdot\mathbf{r})$ applied to the scalar field, when using K to estimate the correlation $E[U(\mathbf{r}_{R})U^{*}(\mathbf{r}_{C})]$ by
$$U(\mathbf{r}_{R})\,K(\mathbf{r}_{R}-\mathbf{r},\mathbf{r}_{C}-\mathbf{r})\,U^{*}(\mathbf{r}_{C}).\qquad(24)$$
That is, the choice of quasi light field corresponds to a choice of how to infer coherence structure from scalar field measurements. In adaptive beamforming, the spatial filter $\exp(ik\mathbf{s}\cdot\mathbf{r})$ focuses a sensor array on a particular plane wave component, and K serves a similar role as the covariance matrix taper that gives rise to design features such as diagonal loading [29]. But for our purposes, the sensor array processing interpretation in Eq. (23) allows us to cleanly separate the choice of quasi light field in K from the plane wave focusing in the exponentials.
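
As a concrete instance of this reading of Eq. (23), the sketch below (ours; the array, field, and both choices of K are illustrative) evaluates a quasi light field value as a quadratic form of phase-shifted sensor samples. An outer product of a window for K reproduces a spectrogram, while a short-range taper is loosely analogous to the covariance matrix tapers of adaptive beamforming.

```python
import numpy as np

# Minimal sketch (ours) of the sensor-array reading of Eq. (23): a quasi light
# field value at (r, s_y) is a quadratic form conj(x) K x, where x holds the
# scalar-field samples phase-shifted to focus on the plane-wave component s_y
# and K encodes how the coherence structure E[U(r_R) U*(r_C)] is estimated.

lam = 3e-3; k = 2 * np.pi / lam
d = lam / 4
y = np.arange(-64, 64) * d                        # sensor positions
U = np.exp(1j * k * 0.3 * y)                      # one plane-wave component, s_y = 0.3

def quasi_light_field(U, y, r, s_y, K):
    x = U * np.exp(1j * k * s_y * (r - y))        # spatial filter focusing on s_y
    return np.real(np.conj(x) @ K @ x)

# Choice 1: K = outer product of a window centered on r -> a spectrogram, Eq. (25).
w = np.exp(-0.5 * (y / (8 * d)) ** 2)
K_spec = np.outer(w, w)

# Choice 2: keep only short-range correlations, loosely analogous to a covariance
# matrix taper in adaptive beamforming (an illustrative choice, not from [29]).
K_taper = np.exp(-np.abs(y[:, None] - y[None, :]) / (4 * d))

s_grid = np.linspace(-0.5, 0.5, 201)
for name, K in [("spectrogram-like", K_spec), ("tapered", K_taper)]:
    vals = [quasi_light_field(U, y, 0.0, s, K) for s in s_grid]
    print(name, "peaks at s_y ~", s_grid[int(np.argmax(vals))])
```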

Several signal processing texts meticulously classify the quadratic class of time–frequency distributions by their properties and discuss distribution design and use for various applications [20, 25]. We can use these resources to design quasi light fields for specific applications. For example, if we desire a light field with fine directional localization, we may first try the Wigner quasi light field in Eq. (8), which is a popular starting choice. We may then discover that we have too many artifacts from interfering spatial frequencies, called cross terms, and therefore wish to consider a reduced interference quasi light field. We might try the modified B-distribution, which is a particular reduced interference quasi light field that has a tunable parameter to suppress interference. Or, we may decide to design our own quasi light field in a transformed domain using ambiguity functions. The resulting tradeoffs can be tailored to specific application requirements.

4. CAPTURING QUASI LIGHT FIELDS

To capture an arbitrary quasi light field, we sample and process the scalar field. In incoherent imaging, the traditional light field is typically captured by instead making intensity measurements at a discrete set of positions and directions, as is done in the plenoptic camera [2]. While it is possible to apply the same technique to coherent imaging, only a small subset of quasi light fields that exhibit poor localization properties can be captured this way. In comparison, all quasi light fields can be computed from the scalar field, as in Eq. (15). We therefore sample the scalar field with a discrete set of sensors placed at different positions in space and subsequently process the scalar field measurements to compute the desired quasi light field. We describe the capture process for three specific quasi light fields in Subsection 4A and demonstrate the different localization properties of these quasi light fields via simulation in Subsection 4B.

4A. Sampling the Scalar Field

To make the capture process concrete, we capture three different quasi light fields. For simplicity, we consider a two-dimensional scene and sample the scalar field with a linear array of sensors regularly spaced along the y-axis (Fig. 2). With this geometry, the scalar field U is parameterized by a single position variable y, and the discrete light field l is parameterized by y and the direction component $s_y$. The sensor spacing is $d/2$, which we assume is fine enough to ignore aliasing effects. This assumption is practical for long-wavelength applications such as millimeter-wave radar. For other applications, aliasing can be avoided by applying an appropriate prefilter. From the sensor measurements, we compute three different quasi light fields, including the spectrogram and the Wigner.

Although the spectrogram quasi light field is attractive because it can be captured like the traditional light field by making intensity measurements, it exhibits poor localization properties. Zhang and Levoy explain [30] how to capture the spectrogram by placing an aperture stop specified by a transmission function T over the desired position y before computing a Fourier transform to extract the plane wave component in the desired direction sy. Previously Ziegler et al. used the spectrogram as a coherent light field to represent a hologram [4]. The spectrogram is an important quasi light field because it is the building block for the quasi light fields that can be directly captured by making intensity measurements, since all nonnegative quadratic time–frequency distributions, and therefore all nonnegative quasi light fields, are sums of spectrograms [20]. Ignoring constants and sz, we compute the discrete spectrogram from the scalar field samples by

$$l^{S}(y,s_{y})=\left|\sum_{n}T(nd)\,U(y+nd)\exp(-iknd\,s_{y})\right|^{2}.\qquad(25)$$

The Wigner quasi light field is a popular choice that exhibits good energy localization in position and direction [20]. We already identified the Wigner quasi light field in Eq. (8); the discrete version is

$$l^{W}(y,s_{y})=\sum_{n}U\!\left(y+\frac{nd}{2}\right)U^{*}\!\left(y-\frac{nd}{2}\right)\exp(-iknd\,s_{y}).\qquad(26)$$
Evidently, the spectrogram and Wigner distribute energy over position and direction in very different ways. Per Eq. (25), the spectrogram first uses a Fourier transform to extract directional information and then computes a quadratic energy quantity, while the Wigner does the reverse, per Eq. (26). On the one hand, this reversal allows the Wigner to better localize energy in position and direction, since the Wigner is not bound by the classical Fourier uncertainty principle as the spectrogram is. On the other hand, the Wigner’s nonlinearities introduce cross-term artifacts by coupling energy in different directions, thereby replacing the simple uncertainty principle with a more complicated set of tradeoffs [20].

We now introduce a third quasi light field for capture, in order to help us understand the implications of requiring quasi light fields to exhibit traditional light field properties. Specifically, the traditional light field has real nonnegative values that are zero where the scalar field is zero, whereas no quasi light field behaves this way [24]. Although the spectrogram has nonnegative values, the support of both the spectrogram and Wigner spills over into regions where the scalar field is zero. In contrast, the conjugate Rihaczek quasi light field, which can be obtained by substituting Eq. (3) for a*(s) in Eq. (6) and factoring, is identically zero at all positions where the scalar field is zero and for all directions in which the plane wave component is zero:

$$L^{R}(\mathbf{r},\mathbf{s})=s_{z}\,U^{*}(\mathbf{r})\exp(ik\mathbf{s}\cdot\mathbf{r})\,a(\mathbf{s}).\qquad(27)$$
However, unlike the nonnegative spectrogram and the real Wigner, the Rihaczek is complex-valued, as each of its discoverers independently observed: Walther in optics [15], Kirkwood in quantum physics [31], and Rihaczek in signal processing [32]. The discrete conjugate Rihaczek quasi light field is
$$l^{R}(y,s_{y})=U^{*}(y)\exp(iky\,s_{y})\sum_{n}U(nd)\exp(-iknd\,s_{y}).\qquad(28)$$
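
The sketch below (ours; sample counts, spacing, and window width are illustrative) evaluates the three discrete quasi light fields of Eqs. (25), (26), and (28) from samples of a field containing a single plane-wave component, confirming that each representation peaks at that component's direction.

```python
import numpy as np

# Minimal sketch (ours) of the discrete spectrogram, Wigner, and conjugate
# Rihaczek quasi light fields of Eqs. (25), (26), and (28), computed from
# samples of a 1-D scalar field with one plane-wave component at s_y = 0.2.

lam = 3e-3; k = 2 * np.pi / lam
d = lam / 10                                      # light-field sample spacing
m = np.arange(-400, 401)
U_s = np.exp(1j * k * 0.2 * (m * d))              # scalar field samples U(m d)
Ufun = lambda q: np.interp(q, m * d, U_s.real) + 1j * np.interp(q, m * d, U_s.imag)

n = np.arange(-200, 201)
T = (np.abs(n * d) < 0.015).astype(float)         # rectangular aperture-stop window

def l_spectrogram(y, s_y):                        # Eq. (25)
    return np.abs(np.sum(T * Ufun(y + n * d) * np.exp(-1j * k * n * d * s_y))) ** 2

def l_wigner(y, s_y):                             # Eq. (26)
    return np.real(np.sum(Ufun(y + n * d / 2) * np.conj(Ufun(y - n * d / 2))
                          * np.exp(-1j * k * n * d * s_y)))

def l_rihaczek(y, s_y):                           # Eq. (28)
    return (np.conj(Ufun(y)) * np.exp(1j * k * y * s_y)
            * np.sum(Ufun(n * d) * np.exp(-1j * k * n * d * s_y)))

s_grid = np.linspace(-0.5, 0.5, 101)
for name, l in [("spectrogram", l_spectrogram), ("Wigner", l_wigner), ("Rihaczek", l_rihaczek)]:
    vals = [abs(l(0.0, s)) for s in s_grid]
    print(name, "peaks at s_y ~", s_grid[int(np.argmax(vals))])
```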

4B. Localization Tradeoffs

Different quasi light fields localize energy in position and direction in different ways, so that the choice of quasi light field affects the potential resolution achieved in an imaging application. We illustrate the diversity of behavior by simulating a plane wave propagating past a screen edge and computing the spectrogram, Wigner, and Rihaczek quasi light fields from scalar field samples (Fig. 3 ). This simple scenario stresses the main tension between localization in position and direction: each quasi light field must encode the position of the screen edge as well as the downward direction of the plane wave. The quasi light fields serve as intermediate representations used to jointly estimate the position of the screen edge and the orientation of the plane wave.

Our simulation accurately models diffraction using our implementation of the angular spectrum propagation method, which is the same technique used in commercial optics software to accurately simulate wave propagation [33]. We propagate a plane wave with wavelength $\lambda = 3$ mm a distance $R = 50$ m past the screen edge, where we measure the scalar field and compute the three discrete light fields using Eqs. (25, 26, 28). To compute the light fields, we set $d = \lambda/10$, run the summations over $|n| \leq 10/\lambda$, and use a rectangular window function of width 10 cm for T. We plot $l^S$, $|l^W|$, and $|l^R|$ in terms of the two-plane parameterization of the light field [1], so that each ray is directed from a point u in the plane of the screen toward a point y in the measurement plane, and so that $s_y=(y-u)\left[R^2+(y-u)^2\right]^{-1/2}$.
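
For readers who wish to reproduce a scenario of this kind, the sketch below (ours; the grid size, sampling, and treatment of evanescent components are illustrative choices, not the settings of the published simulation) propagates a unit-amplitude plane wave past a screen edge using the angular spectrum method with $\lambda = 3$ mm and $R = 50$ m.

```python
import numpy as np

# Minimal sketch (ours) of angular spectrum propagation for the screen-edge
# scenario: the field just past an opaque edge at y = 0 is decomposed into
# plane waves, each is advanced by its propagation phase over the distance R,
# and the result is the field on the measurement plane.

lam = 3e-3; k = 2 * np.pi / lam
R = 50.0                                   # propagation distance [m]
N = 8192
dy = lam / 4                               # transverse sample spacing [m]
y = (np.arange(N) - N // 2) * dy
U0 = (y > 0).astype(complex)               # unit plane wave truncated by the screen edge

fy = np.fft.fftfreq(N, dy)                 # transverse spatial frequencies [cycles/m]
kz = np.sqrt((k ** 2 - (2 * np.pi * fy) ** 2).astype(complex))   # evanescent -> imaginary
U_R = np.fft.ifft(np.fft.fft(U0) * np.exp(1j * kz * R))

# Field magnitude near the geometric shadow boundary shows the diffraction fringes.
print(np.abs(U_R[N // 2 - 5: N // 2 + 5]))
```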

We compare each light field’s ability to estimate the position of the screen edge and the orientation of the plane wave (Fig. 3). Geometric optics provides an ideal estimate: we should ideally see only rays pointing straight down (u=y) past the screen edge, corresponding to a diagonal line in the upper-right quadrant of the light field plots. Instead, we see blurred lines with ringing. The ringing is physically accurate and indicates the diffraction fringes formed on the measurement plane. The blurring indicates localization limitations. While the spectrogram’s window T can be chosen to narrowly localize energy in either position or direction, the Wigner narrowly localizes energy in both, depicting instantaneous frequency without being limited by the classical Fourier uncertainty principle [20].

It may seem that the Wigner light field is preferable to the others and the clear choice for all applications. While the Wigner light field possesses excellent localization properties, it exhibits cross-term artifacts due to interference from different plane wave components. An alternative quasi light field such as the Rihaczek can strike a balance between localization and cross-term artifacts, and therefore may be a more appropriate choice, as discussed at the end of Subsection 3C. If our goal were only to estimate the position of the screen edge, we might prefer the spectrogram; to jointly estimate both position and plane wave orientation, we prefer the Wigner, and if there were two plane waves instead of one, we might prefer the Rihaczek. One thing is certain, however: we must abandon nonnegative quasi light fields to achieve better localization tradeoffs, as all nonnegative quadratic time–frequency distributions are sums of spectrograms and hence exhibit poor localization tradeoffs [20].

5. IMAGE FORMATION

We wish to form images from quasi light fields for coherent applications similarly to how we form images from the traditional light field for incoherent applications, by using Eq. (1) to integrate bundles of light field rays to compute pixel values (Fig. 1). However, simply selecting a particular captured quasi light field L and evaluating Eq. (1) raises three questions about the validity of the resulting image. First, is it meaningful to distribute coherent energy over surface area by factoring radiant intensity in Eq. (6)? Second, does the far-zone assumption implicit in radiometry and formalized in Eq. (2) limit the applicability of quasi light fields? And third, how do we capture quasi light field rays remotely if, unlike the traditional light field, quasi light fields need not be constant along rays?

The first question is a semantic one. For incoherent light of small wavelength, we define an image in terms of the power radiating from a scene surface toward an aperture, and physics tells us that this uniquely specifies the image (Section 3), which may be expressed in terms of the traditional light field. If we attempt to generalize the same definition of an image to partially coherent, broadband light, and specifically to coherent light at a nonzero wavelength, we must ask how to isolate the power from a surface patch toward the aperture, according to classical wave optics. But there is no unique answer; different isolation techniques correspond to different quasi light fields. Therefore, to be well-defined, we must extend the definition of an image for coherent light to include a particular choice of quasi light field that corresponds to a particular factorization of radiant intensity.

The second and third questions speak of assumptions in the formulation of quasi light fields and in the image formation from quasi light fields that can lead to coherent imaging inaccuracies when these assumptions are not valid. Specifically, unless the scene surface and aperture are far apart, the far-zone assumption in Eq. (2) does not hold, so that quasi light fields are incapable of modeling near-zone behavior. Also, unless we choose a quasi light field that is constant along rays, such as an angle-impact Wigner function [34], remote measurements might not accurately reflect the light field at the scene surface [35], resulting in imaging inaccuracies. Therefore, in general, integrating bundles of remotely captured quasi light field rays produces an approximation of the image we have defined. We assess this approximation by building an accurate near-zone model in Subsection 5A, simulating imaging performance of several coherent cameras in Subsection 5B, and showing how our image formation procedure generalizes the classic beamforming algorithm in Subsection 5C.

5A. Near-Zone Radiometry

We take a new approach to formulating light fields for coherent radiation that avoids making the assumptions that (i) the measurement plane is far from the scene surface and (ii) light fields are constant along rays. The resulting light fields are accurate in the near zone, and may be compared with quasi light fields to understand the latter’s limitations. The key idea is to express a near-zone light field L(r,s) on the measurement plane in terms of the infinitesimal flux at the point where the line containing the ray (r,s) intersects the scene surface (Fig. 4 ). First we compute the scalar field at the scene surface, next we compute the infinitesimal flux, and then we identify a light field that predicts the same flux using the laws of radiometry. In contrast to Walther’s approach (Subsection 3A), (i) we do not make the far-zone approximation as in Eq. (2), and (ii) we formulate the light field in the measurement plane instead of in the source plane at the scene surface. Therefore, in forming an image from a near-zone light field, we are not limited to the far zone and we need not relate the light field at the measurement plane to the light field at the scene surface.

The first step in deriving a near-zone light field L for the ray $(\mathbf{r},\mathbf{s})$ is to use the scalar field on the measurement plane to compute the scalar field at the point $\mathbf{r}_P$ where the line containing the ray intersects the scene surface. We choose coordinates so that the measurement plane is the xy-plane, the scene lies many wavelengths away in the half-space $z<0$, and $\mathbf{r}$ is at the origin. We denote the distance between the source $\mathbf{r}_P$ on the scene surface and the point of observation $\mathbf{r}$ by $\rho$. Under a reasonable bandwidth assumption, the inverse diffraction formula expresses the scalar field at $\mathbf{r}_P$ in terms of the scalar field on the measurement plane [36]:

$$U(\mathbf{r}_{P})=\frac{ik}{2\pi}\int U(\mathbf{r}_{M})\,\frac{z_{P}}{|\mathbf{r}_{P}-\mathbf{r}_{M}|}\,\frac{\exp(-ik|\mathbf{r}_{P}-\mathbf{r}_{M}|)}{|\mathbf{r}_{P}-\mathbf{r}_{M}|}\,d^{2}r_{M}.\qquad(29)$$

Next, we compute the differential flux dΦ through a portion of a sphere at rP subtending differential solid angle dΩ. We obtain dΦ by integrating the radial component of the energy flux density vector

$$\mathbf{F}(\mathbf{r}_{P})=-\frac{1}{4\pi k\nu}\left[\frac{\partial U^{*}}{\partial t}\nabla U+\frac{\partial U}{\partial t}\nabla U^{*}\right].\qquad(30)$$
To keep the calculation simple, we ignore amplitude decay across the measurement plane, approximating
$$|\mathbf{r}_{P}-\mathbf{r}_{M}|\approx|\mathbf{r}_{P}|\qquad(31)$$
outside the exponential in Eq. (29), and
$$\frac{|\mathbf{r}_{P}|}{|\mathbf{r}_{P}-\mathbf{r}_{M}|}\approx 1,\qquad(32)$$
when evaluating Eq. (30), resulting in
$$\mathbf{F}(\rho\mathbf{s})=\left(\frac{2\pi}{k}\right)^{2}\tilde{a}(\rho\mathbf{s})\,\tilde{a}^{*}(\rho\mathbf{s})\,\frac{s_{z}^{2}}{\rho^{2}}\,\mathbf{s},\qquad(33)$$
where
$$\tilde{a}(\rho\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}\int U(\mathbf{r}_{M})\exp(-ik|\rho\mathbf{s}-\mathbf{r}_{M}|)\,d^{2}r_{M}.\qquad(34)$$
Thus,
$$d\Phi=\left(\frac{2\pi}{k}\right)^{2}s_{z}^{2}\,\tilde{a}(\rho\mathbf{s})\,\tilde{a}^{*}(\rho\mathbf{s})\,d\Omega.\qquad(35)$$

Finally, we factor out $s_z$ and an outer integral over surface area from $d\Phi/d\Omega$ to determine a near-zone light field. Unlike in Subsection 3A, the nonlinear exponential argument in $\tilde{a}$ complicates the factoring. Nonetheless, we obtain a near-zone light field that generalizes the Rihaczek by substituting Eq. (34) for $\tilde{a}^*$ in Eq. (35). After factoring and freeing $\mathbf{r}$ from the origin by substituting $\mathbf{r}-\rho\mathbf{s}$ for $\rho\mathbf{s}$, we obtain

$$L_{\rho}^{R}(\mathbf{r},\mathbf{s})=s_{z}\,U^{*}(\mathbf{r})\exp(ik\rho)\,\tilde{a}(\mathbf{r}-\rho\mathbf{s})=\left(\frac{k}{2\pi}\right)^{2}s_{z}\,U^{*}(\mathbf{r})\exp(ik\rho)\int U(\mathbf{r}_{M})\exp(-ik|\mathbf{r}-\rho\mathbf{s}-\mathbf{r}_{M}|)\,d^{2}r_{M},\qquad(36)$$
where the subscript ρ reminds us of this near-zone light field’s dependence on distance.

LρR is evidently neither the traditional light field nor a quasi light field, as it depends directly on the scene geometry through an additional distance parameter. This distance parameter ρ is a function of r, s, and the geometry of the scene; it is the distance along s between the scene surface and r. We may integrate LρR over a bundle of rays to compute the image pixel values just like any other light field, as long as we supply the right value of ρ for each ray. In contrast, quasi light fields are incapable of modeling optical propagation in the near zone, as it is insufficient to specify power flow along rays: we must also know the distance between the source and point of measurement along each ray.

We can obtain near-zone generalizations of all quasi light fields through the sensor array processing interpretation in Subsection 3C. Recall that each quasi light field corresponds to a particular choice of the function K in Eq. (23). For example, setting $K(\mathbf{a},\mathbf{b})=\delta(\mathbf{b})$, where $\delta$ is the Dirac delta function, yields the Rihaczek quasi light field $L^R$ in Eq. (27). To generalize quasi light fields to the near zone, we focus at a point instead of a plane wave component by using a spatial filter with impulse response $\exp(ik|\mathbf{r}-\rho\mathbf{s}|)$ instead of $\exp(ik\mathbf{s}\cdot\mathbf{r})$ in Eq. (23). Then, choosing $K(\mathbf{a},\mathbf{b})=\delta(\mathbf{b})$ yields $L_\rho^R$, the near-zone generalization of the Rihaczek in Eq. (36), and choosing other functions K yields near-zone generalizations of the other quasi light fields.

5B. Near-Zone Diffraction Limitations

We compute and compare image pixel values using the Rihaczek quasi light field LR and its near-zone generalization LρR, demonstrating how all quasi light fields implicitly make the Fraunhofer diffraction approximation that limits accurate imaging to the far zone. First, we construct coherent cameras from LR and LρR. For simplicity, we consider a two-dimensional scene and sample the light fields, approximating the integral over a bundle of rays (Fig. 1) by the summation of discrete rays directed from the center rP of the scene surface patch to each sensor on a virtual aperture of diameter A, equally spaced every distance d in the measurement plane [Fig. 5a ]. Ignoring constants and sz, we compute the pixel values for a far-zone camera from the Rihaczek quasi light field in Eq. (27),

$$P^{R}=\sum_{|nd|<A/2}\left\{\left[U(nd)\exp(-iknd\,s_{y_{n}})\right]^{*}\sum_{m}U(md)\exp(-ikmd\,s_{y_{n}})\right\},\qquad(37)$$
and for a near-zone camera from the near-zone generalization of the Rihaczek in Eq. (36),
$$P_{\rho}^{R}=\left[\sum_{|nd|<A/2}U(nd)\exp(-ik\Delta_{n})\right]^{*}\sum_{m}U(md)\exp(-ik\Delta_{m}).\qquad(38)$$
In Eq. (37), $\mathbf{s}_n$ denotes the unit direction from $\mathbf{r}_P$ to the $n$th sensor and $s_{y_n}$ its y component, and in Eq. (38), $\Delta_n$ denotes the distance between $\mathbf{r}_P$ and the $n$th sensor.
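
To make the comparison concrete, the sketch below (ours; the geometry and sample spacing are illustrative and only loosely follow the scenario of Fig. 6) evaluates the far-zone pixel value of Eq. (37) and the near-zone pixel value of Eq. (38) for an idealized point source that is first on the focused patch and then 1 cm away, illustrating the resolution difference discussed next.

```python
import numpy as np

# Minimal sketch (ours) of the far-zone and near-zone cameras of Eqs. (37) and
# (38): pixel values computed from scalar-field samples across a linear array
# for an idealized point source (a backlit pinhole) at range R, focused on a
# patch straight ahead. Geometry and spacing are illustrative.

lam = 3e-3; k = 2 * np.pi / lam
D, A, R = 2.0, 0.10, 1.0                     # array width, virtual aperture, range [m]
d = lam / 2
ym = np.arange(-D / 2, D / 2, d)             # sensor positions along the array
Delta = np.sqrt(ym ** 2 + R ** 2)            # distance from the focus point r_P = (0, -R)
s_yn = ym / Delta                            # y components of the directions from r_P
ap = np.abs(ym) < A / 2                      # sensors on the virtual aperture

for offset in (0.0, 0.01):                   # pinhole on the focused patch, then 1 cm away
    dist = np.sqrt((ym - offset) ** 2 + R ** 2)
    U = np.exp(1j * k * dist) / dist         # spherical wave reaching the sensors

    # Far-zone camera, Eq. (37): plane-wavefront (Fraunhofer) alignment.
    inner = np.array([np.sum(U * np.exp(-1j * k * ym * s)) for s in s_yn[ap]])
    P_far = np.sum(np.conj(U[ap] * np.exp(-1j * k * ym[ap] * s_yn[ap])) * inner)

    # Near-zone camera, Eq. (38): spherical-wavefront alignment.
    g1 = np.sum(U[ap] * np.exp(-1j * k * Delta[ap]))
    g2 = np.sum(U * np.exp(-1j * k * Delta))
    P_near = np.conj(g1) * g2

    print(f"pinhole offset {offset * 100:.0f} cm: "
          f"|P_far| = {abs(P_far):.3g}, |P_near| = {abs(P_near):.3g}")
```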

By comparing the exponentials in Eq. (37) with those in Eq. (38), we see that the near-zone camera aligns the sensor measurements along spherical wavefronts diverging from the point of focus rP, while the far-zone camera aligns measurements along plane wavefront approximations [Fig. 5b]. Spherical wavefront alignment makes physical sense in accordance with the Huygens–Fresnel principle of diffraction, while approximating spherical wavefronts with plane wavefronts is reminiscent of Fraunhofer diffraction. In fact, the far-zone approximation in Eq. (2) used to derive quasi light fields follows directly from the Rayleigh–Sommerfeld diffraction integral by linearizing the exponentials, which is precisely Fraunhofer diffraction. Therefore, all quasi light fields are valid only for small Fresnel numbers, when the source and point of measurement are sufficiently far away from each other.

We expect the near-zone camera to outperform the far-zone camera in near-zone imaging applications, which we demonstrate by comparing their ability to resolve small targets moving past their field of view. As a baseline, we introduce a third camera with nonnegative pixel values $P_\rho^B$ by restricting the summation over m in Eq. (38) to $|md|<A/2$, which results in the beamformer camera used in sensor array processing [5, 37]. Alternatively, we could extend the summation over n in Eq. (38) to the entire array, but this would average anisotropic responses over a wider aperture diameter, resulting in a different image. We simulate an opaque screen containing a pinhole that is backlit with a coherent plane wave (Fig. 6). The sensor array is $D = 2$ m wide and just $R = 1$ m away from the screen. The virtual aperture is $A = 10$ cm wide and the camera is focused on a fixed 1 mm pixel straight ahead on the screen. The pinhole has width 1 mm, which is smaller than the wavelength $\lambda = 3$ mm, so the plane wavefronts bend into slightly spherical shapes via diffraction. We move the pinhole to the right, recording pixel values $|P^R|$, $|P_\rho^R|$, and $P_\rho^B$ for each camera at each pinhole position. Because of the nature of the coherent combination of the sensor measurements that produces the pixel values, each camera records a multilobed response. The width of the main lobe indicates the near-zone resolution of the camera.

The near-zone camera is able to resolve the pinhole down to its actual size of 1 mm, greatly outperforming the far-zone camera, which records a blur 66 cm wide, and even outperforming the beamformer camera. Neither comparison is surprising. First, with a Fresnel number of $D^2/R\lambda \approx 1333$, the Fraunhofer approximation implicitly made by quasi light fields does not hold for this scenario, so we expect the far-zone camera to exhibit poor resolution. Second, the near-zone camera uses the entire $D = 2$ m array instead of just the sensors on the virtual aperture that the beamformer camera is restricted to, and the extra sensors lead to improved resolution.

5C. Generalized Beamforming

We compare image formation from light fields with traditional perspectives on coherent image formation by relating quasi light fields and our coherent cameras to the classic beamforming algorithm used in many coherent imaging applications, including ultrasound [5] and radar [37]. The beamforming algorithm estimates a spherical wave diverging from a point of focus rP by delaying and averaging sensor measurements. When the radiation is narrowband, the delays are approximated by phase shifts. With the sensor array geometry from Subsection 5B, the beamformer output is

$$g=\sum_{m}T(md)\,U(md)\exp(-ik\Delta_{m}),\qquad(39)$$
where the $T(md)$ are amplitude weights used to adjust the beamformer's performance. As $\mathbf{r}_P$ moves into the far zone,
$$\Delta_{m}-\Delta_{0}\rightarrow md\,s_{y_{m}}\rightarrow md\,s_{y_{0}},\qquad(40)$$
so that apart from a constant phase offset, Eq. (39) becomes a short-time Fourier transform
$$g=\sum_{m}T(md)\,U(md)\exp(-ikmd\,s_{y_{0}}).\qquad(41)$$
Evidently, $|g|^2$ is a spectrogram quasi light field, and we may select T to be a narrow window about a point $\mathbf{r}$ to capture $L^S(\mathbf{r},\mathbf{s}_0)$. We have already seen how quasi light fields generalize the spectrogram.
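
The sketch below (ours, with an illustrative array and point of focus) checks this limit numerically: for a focus point far from the array, the delay-and-sum output of Eq. (39) and the short-time Fourier transform of Eq. (41) agree in magnitude.

```python
import numpy as np

# Minimal sketch (ours) of the far-zone limit of the beamformer, Eqs. (39)-(41):
# when the point of focus r_P is far from the array, aligning along spherical
# wavefronts (delays k*Delta_m) is equivalent, up to a constant phase, to a
# windowed Fourier transform at the spatial frequency k*s_y0.

lam = 3e-3; k = 2 * np.pi / lam
d = lam / 2
ym = np.arange(-200, 201) * d                    # sensor positions
T = np.exp(-0.5 * (ym / 0.05) ** 2)              # amplitude weights (window)
rP = np.array([5.0, -200.0])                     # far-away point of focus (y, z) [m]

Delta = np.sqrt((ym - rP[0]) ** 2 + rP[1] ** 2)  # distance from r_P to each sensor
s_y0 = (0.0 - rP[0]) / np.linalg.norm(rP)        # direction component from r_P to array center
U = np.exp(1j * k * Delta) / Delta               # field of a point source at r_P

g_beam = np.sum(T * U * np.exp(-1j * k * Delta))         # Eq. (39)
g_stft = np.sum(T * U * np.exp(-1j * k * ym * s_y0))     # Eq. (41)
print(abs(g_beam), abs(g_stft))                  # the two magnitudes nearly coincide
```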

Beamformer applications instead typically select T to be a wide window to match the desired virtual aperture and assign the corresponding pixel value to the output power |g|2. We can decompose the three cameras in Subsection 5B into such beamformers. First, we write PρR in Eq. (38) in terms of two different beamformers,

$$P_{\rho}^{R}=g_{1}^{*}\,g_{2},\qquad(42)$$
where
$$g_{1}=\sum_{|nd|<A/2}U(nd)\exp(-ik\Delta_{n})\qquad(43)$$
and
$$g_{2}=\sum_{m}U(md)\exp(-ik\Delta_{m}),\qquad(44)$$
so that the windows for $g_1$ and $g_2$ are rectangular with widths matching the aperture A and sensor array D, respectively. Next, by construction
$$P_{\rho}^{B}=|g_{1}|^{2}.\qquad(45)$$
Finally, in the far zone, $\mathbf{s}_n\rightarrow\mathbf{s}_0$ in Eq. (37) so that
$$P^{R}\approx(g_{1})^{*}\,g_{2},\qquad(46)$$
where g1 and g2 are given by Eq. (41) with the windows T used in Eqs. (43, 44). In other words, the near-zone camera is the Hermitian product of two different beamformers and is equivalent to the far-zone camera in the far zone.

We interpret the role of each component beamformer from the derivation of Eq. (38). Beamformer g1* aggregates power contributions across the aperture using measurements of the conjugate field U* on the aperture, while beamformer g2 isolates power from the point of focus using all available measurements of the field U. In this manner, the tasks of aggregating and isolating power contributions are cleanly divided between the two beamformers, and each beamformer uses the measurements from those sensors appropriate to its task. In contrast, the beamformer camera uses the same set of sensors for both the power aggregation and isolation tasks, thereby limiting its ability to optimize over both tasks.

The near-zone camera achieves a new tradeoff between resolution and anisotropic sensitivity. We noted that the near-zone camera exhibits better resolution than the beamformer for the same virtual aperture (Fig. 6). This is not an entirely fair comparison because the near-zone camera is using sensor measurements outside the aperture, and indeed, a beamformer using the entire array would achieve comparable resolution. However, extending the aperture to the entire array results in a different image, as anisotropic responses are averaged over a wider aperture diameter. We interpret the near-zone camera’s behavior by computing the magnitude

$$|P_{\rho}^{R}|=\sqrt{|g_{1}|^{2}\,|g_{2}|^{2}}.\qquad(47)$$
Evidently, the pixel magnitude of the near-zone camera is the geometric mean of the two traditional beamformer output powers. $|P_\rho^R|$ has better resolution than $|g_1|^2$ and better anisotropic sensitivity than $|g_2|^2$.

Image formation with alternative light fields uses the conjugate field and field measurements to aggregate and isolate power in different ways. In general, image pixel values do not neatly factor into the product of beamformers, as they do with the Rihaczek.

6. CONCLUDING REMARKS

We enable the use of existing incoherent imaging tools for coherent imaging applications by extending the light field to coherent radiation. We explain how to formulate, capture, and form images from quasi light fields. By synthesizing existing research in optics, quantum physics, and signal processing, we motivate quasi light fields, show how quasi light fields extend the traditional light field, and characterize the properties of different quasi light fields. We explain why capturing quasi light fields directly with intensity measurements is inherently limiting, and demonstrate via simulation how processing scalar field measurements in different ways leads to a rich set of energy localization tradeoffs. We show how coherent image formation using quasi light fields is complicated by an implicit far-zone (Fraunhofer) assumption and the fact that not all quasi light fields are constant along rays. We demonstrate via simulation that a pure light field representation is incapable of modeling near-zone diffraction effects, but that quasi light fields can be augmented with a distance parameter for greater near-zone imaging accuracy. We show how image formation using light fields generalizes the classic beamforming algorithm, allowing for new tradeoffs between resolution and anisotropic sensitivity.

Although we have assumed perfectly coherent radiation, tools from partial coherence theory (i) allow us to generalize our results and (ii) provide an alternative perspective on image formation. First, our results extend to broadband radiation of any state of partial coherence by replacing U(rR)U*(rC) with the cross-spectral density W(rR,rC,ν). W provides a statistical description of the radiation, indicating how light at two different positions rR and rC is correlated at each frequency ν [38]. Second, W itself may be propagated along rays in an approximate asymptotic sense [39, 40], which forms the basis of an entirely different framework for using rays for image formation, using the cross-spectral density instead of the light field as the core representation.

We present a model of coherent image formation that strikes a balance between utility and comprehensive predictive power. On the one hand, quasi light fields offer more options and tradeoffs than their traditional, incoherent counterpart. In this manner, the connection between quasi light fields and quasi-probability distributions in quantum physics reminds us of the potential benefits of forgoing a single familiar tool in favor of a multitude of useful yet less familiar ones. On the other hand, compared with Maxwell’s equations, quasi light fields are less versatile. Therefore, quasi light fields are attractive to researchers who desire more versatility than traditional energy-based methods, yet a more specialized model of image formation than Maxwell’s equations.

Quasi light fields illustrate the limitations of the simple definition of image formation that is ubiquitous in incoherent imaging. An image is the visualization of some underlying physical reality, and the energy emitted from a portion of a scene surface toward a virtual aperture is not a physically precise quantity when the radiation is coherent, according to classical electromagnetic wave theory. Perhaps a different image definition may prove more fundamental for coherent imaging, or perhaps a quantum optics viewpoint is required for precision. Although we have borrowed the mathematics from quantum physics, our entire discussion has been classical. Yet if we introduce quantum optics and the particle nature of light, we may unambiguously speak of the probability that a photon emitted from a portion of a scene surface is intercepted by a virtual aperture.

ACKNOWLEDGMENTS

This work was supported, in part, by Microsoft Research, MIT Lincoln Laboratory, and the Focus Center Research Program (FCRP) Center for Circuit & System Solutions (C2S2) of Semiconductor Research Corporation.

Fig. 1 We can compute the value of each pixel in an image produced by an arbitrary virtual camera, defined as the power emitted from a scene surface patch toward a virtual aperture, by integrating an appropriate bundle of light field rays that have been previously captured with remote hardware.

Fig. 2 We capture a discrete quasi light field l by sampling the scalar field at regularly spaced sensors and processing the resulting measurements. We may optionally apply an aperture stop T to mimic traditional light field capture, but this restricts us to capturing quasi light fields with poor localization properties.

Fig. 3 The spectrogram does not resolve a plane wave propagating past the edge of an opaque screen as well as other quasi light fields, such as the Wigner and Rihaczek. We capture all three quasi light fields by sampling the scalar field with sensors and processing the measurements according to Eqs. (25, 26, 28). The ringing and blurring in the light field plots indicate the diffraction fringes and energy localization limitations.

Fig. 4 To ensure that integrating bundles of remote light field rays in the near zone results in an accurate image, we derive a light field LρR(r,s) in the measurement plane from the infinitesimal flux dΦ at the point rP where the ray originates from the scene surface patch. We thereby avoid making the assumptions that the measurement plane is far from the scene and that the light field is constant along rays.

Fig. 5 Near-zone light fields result in cameras that align spherical wavefronts diverging from the point of focus rP, in accordance with the Huygens–Fresnel principle of diffraction, while quasi light fields result in cameras that align plane wavefront approximations in accordance with Fraunhofer diffraction. Quasi light fields are therefore accurate only in the far zone. (a) We derive both cameras by approximating the integral over a bundle of rays by the summation of discrete light field rays, and (b) we interpret the operation of each camera by how they align sensor measurements along wavefronts from rP.

Fig. 6 Images of nearby objects formed from pure quasi light fields are blurry. In the scene, a small backlit pinhole moves across the field of view of a sensor array that implements three cameras, each computing one pixel value for each pinhole position, corresponding to a fixed surface patch. As the pinhole crosses the fixed scene surface patch, the near-zone camera resolves the pinhole down to its actual size of 1mm, while the far-zone camera records a blur 66cm wide.


1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH 96 (ACM, 1996), pp. 31–42.

2. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR 2005-02, Stanford University, Calif. (2005).

3. W. Chun and O. S. Cossairt, “Data processing for three-dimensional displays,” United States Patent 7,525,541 (April 28, 2009).

4. R. Ziegler, S. Bucheli, L. Ahrenberg, M. Magnor, and M. Gross, “A bidirectional light field-hologram transform,” Comput. Graph. Forum 26, 435–446 (2007).

5. T. L. Szabo, Diagnostic Ultrasound Imaging: Inside Out (Elsevier, 2004).

6. M. J. Bastiaans, “Application of the Wigner distribution function in optics,” in The Wigner Distribution—Theory and Applications in Signal Processing, W. Mecklenbräuker and F. Hlawatsch, eds. (Elsevier Science B.V., 1997), pp. 375–426.

7. A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. 58, 1256–1259 (1968).

8. E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749–759 (1932).

9. J. Ville, “Théorie et applications de la notion de signal analytique,” Cables Transm. 2A, 61–74 (1948).

10. K. D. Stephan, “Radiometry before World War II: Measuring infrared and millimeter-wave radiation 1800–1925,” IEEE Antennas Propag. Mag. 47, 28–37 (2005).

11. S. Chandrasekhar, Radiative Transfer (Dover, 1960).

12. A. T. Friberg, G. S. Agarwal, J. T. Foley, and E. Wolf, “Statistical wave-theoretical derivation of the free-space transport equation of radiometry,” J. Opt. Soc. Am. B 9, 1386–1393 (1992).

13. P. Moon and G. Timoshenko, “The light field,” J. Math. Phys. 18, 51–151 (1939). [Translation of A. Gershun, The Light Field (Moscow, 1936)].

14. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge Univ. Press, 1999).

15. A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. 63, 1622–1623 (1973).

16. E. Wolf, “Coherence and radiometry,” J. Opt. Soc. Am. 68, 6–17 (1978).

17. G. S. Agarwal, J. T. Foley, and E. Wolf, “The radiance and phase-space representations of the cross-spectral density operator,” Opt. Commun. 62, 67–72 (1987).

18. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.

19. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH 96 (ACM, 1996), pp. 43–54.

20. B. Boashash, ed., Time Frequency Signal Analysis and Processing (Elsevier, 2003).

21. R. W. Boyd, Radiometry and the Detection of Optical Radiation (Wiley, 1983).

22. A. Adams and M. Levoy, “General linear cameras with finite aperture,” in Proc. Eurographics Symposium on Rendering (Eurographics, 2007).

23. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

24. A. T. Friberg, “On the existence of a radiance function for finite planar sources of arbitrary states of coherence,” J. Opt. Soc. Am. 69, 192–198 (1979).

25. P. Flandrin, Time-Frequency/Time-Scale Analysis (Academic, 1999).

26. D. J. Griffiths, Introduction to Quantum Mechanics (Pearson Education, 2005).

27. G. S. Agarwal and E. Wolf, “Calculus for functions of noncommuting operators and general phase-space methods in quantum mechanics. I. Mapping theorems and ordering of functions of noncommuting operators,” Phys. Rev. D 2, 2161–2186 (1970).

28. J. T. Foley and E. Wolf, “Radiometry as a short-wavelength limit of statistical wave theory with globally incoherent sources,” Opt. Commun. 55, 236–241 (1985).

29. J. R. Guerci, “Theory and application of covariance matrix tapers for robust adaptive beamforming,” IEEE Trans. Signal Process. 47, 977–985 (1999).

30. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in Proceedings of ICCP 09 (IEEE, 2009).

31. J. G. Kirkwood, “Quantum statistics of almost classical assemblies,” Phys. Rev. 44, 31–37 (1933).

32. A. Rihaczek, “Signal energy distribution in time and frequency,” IEEE Trans. Inf. Theory 14, 369–374 (1968).

33. ZEMAX Development Corporation, Bellevue, Wash., Optical Design Program User’s Guide (2006).

34. M. A. Alonso, “Radiometry and wide-angle wave fields. I. Coherent fields in two dimensions,” J. Opt. Soc. Am. A 18, 902–909 (2001).

35. R. G. Littlejohn and R. Winston, “Corrections to classical radiometry,” J. Opt. Soc. Am. A 10, 2024–2037 (1993).

36. J. R. Shewell and E. Wolf, “Inverse diffraction and a new reciprocity theorem,” J. Opt. Soc. Am. 58, 1596–1603 (1968).

37. H. L. Van Trees, Optimum Array Processing (Wiley, 2002).

38. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge Univ. Press, 1995).

39. A. M. Zysk, P. S. Carney, and J. C. Schotland, “Eikonal method for calculation of coherence functions,” Phys. Rev. Lett. 95, 043904 (2005).

40. R. W. Schoonover, A. M. Zysk, P. S. Carney, J. C. Schotland, and E. Wolf, “Geometrical optics limit of stochastic electromagnetic fields,” Phys. Rev. A 77, 043831 (2008).

Equations (47)

$P = \int_\sigma \int_{\Omega_r} L(\mathbf{r},\mathbf{s}) \cos\psi \, d^2s \, d^2r$ (1)
$U(\rho\mathbf{s}) = \frac{2\pi}{ik} \, s_z \, \frac{\exp(ik\rho)}{\rho} \, a(\mathbf{s})$ (2)
$a(\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 \int U(\mathbf{r}) \exp(-ik\mathbf{s}\cdot\mathbf{r}) \, d^2r$ (3)
$\mathbf{F}(\rho\mathbf{s}) = \left(\frac{2\pi}{k}\right)^2 a(\mathbf{s}) \, a^*(\mathbf{s}) \, \frac{s_z^2}{\rho^2} \, \mathbf{s}$ (4)
$d\Phi = \left(\frac{2\pi}{k}\right)^2 s_z^2 \, a(\mathbf{s}) \, a^*(\mathbf{s}) \, d\Omega$ (5)
$I(\mathbf{s}) = \frac{d\Phi}{d\Omega} = \left(\frac{2\pi}{k}\right)^2 s_z^2 \, a(\mathbf{s}) \, a^*(\mathbf{s})$ (6)
$I(\mathbf{s}) = s_z \int \left[\left(\frac{k}{2\pi}\right)^2 s_z \int U\!\left(\mathbf{r}+\tfrac{1}{2}\mathbf{r}'\right) U^*\!\left(\mathbf{r}-\tfrac{1}{2}\mathbf{r}'\right) \exp(-ik\mathbf{s}\cdot\mathbf{r}') \, d^2r'\right] d^2r$ (7)
$L_W(\mathbf{r},\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \, W\!\left(\mathbf{r}, \frac{\mathbf{s}}{\lambda}\right)$ (8)
$W(\mathbf{r},\mathbf{s}) = \int U\!\left(\mathbf{r}+\tfrac{1}{2}\mathbf{r}'\right) U^*\!\left(\mathbf{r}-\tfrac{1}{2}\mathbf{r}'\right) \exp(-i2\pi\mathbf{s}\cdot\mathbf{r}') \, d^2r'$ (9)
$[\hat{x}, \hat{s}_x] = \frac{i\lambda}{2\pi}, \qquad [\hat{y}, \hat{s}_y] = \frac{i\lambda}{2\pi}$ (10)
$\langle\mathbf{r}_R|\hat{L}|\mathbf{r}_C\rangle = U(\mathbf{r}_R) \, U^*(\mathbf{r}_C)$ (11)
$L_\Omega(\mathbf{r},\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \, \tilde{L}_\Omega(\mathbf{r},\mathbf{s})$ (12)
$L_\Omega(\mathbf{r},\mathbf{s}) = \frac{k^2}{(2\pi)^4} s_z \iiint \tilde{\Omega}(\mathbf{u}, k\mathbf{r}'') \exp[i\mathbf{u}\cdot(\mathbf{r}'-\mathbf{r})] \exp(-ik\mathbf{s}\cdot\mathbf{r}'') \, U\!\left(\mathbf{r}'+\tfrac{1}{2}\mathbf{r}''\right) U^*\!\left(\mathbf{r}'-\tfrac{1}{2}\mathbf{r}''\right) d^2u \, d^2r' \, d^2r''$ (13)
$\tilde{\Omega}(\mathbf{u},\mathbf{v}) = \iint \Pi(\mathbf{a},\mathbf{b}) \exp[i(\mathbf{a}\cdot\mathbf{u} + \mathbf{b}\cdot\mathbf{v})] \, d^2a \, d^2b$ (14)
$L_\Omega(\mathbf{r},\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \iint \Pi(\mathbf{r}-\mathbf{r}', \mathbf{s}-\mathbf{s}') \, W\!\left(\mathbf{r}', \frac{\mathbf{s}'}{\lambda}\right) d^2r' \, d^2s' = \left(\frac{k}{2\pi}\right)^2 s_z \, \Pi(\mathbf{r},\mathbf{s}) * W\!\left(\mathbf{r}, \frac{\mathbf{s}}{\lambda}\right)$ (15)
$1/\tilde{\Omega}$ be an entire analytic function with no zeros on the real component axes. (16)
$\tilde{\Omega}(\mathbf{0},\mathbf{v}) = 1$ for all $\mathbf{v}$, (17)
$\Pi(\mathbf{a},\mathbf{b}) = \int K\!\left(\mathbf{a}+\frac{\lambda}{2}\mathbf{v}, \, \mathbf{a}-\frac{\lambda}{2}\mathbf{v}\right) \exp(-i2\pi\mathbf{b}\cdot\mathbf{v}) \, d^2v$ (18)
$\mathbf{r}_R = \mathbf{r}' + \tfrac{1}{2}\mathbf{r}'', \qquad \mathbf{r}_C = \mathbf{r}' - \tfrac{1}{2}\mathbf{r}''$ (19)
$L(\mathbf{r},\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \iint U(\mathbf{r}_R) \left\{K(\mathbf{r}_R-\mathbf{r}, \mathbf{r}_C-\mathbf{r}) \exp[-ik\mathbf{s}\cdot(\mathbf{r}_R-\mathbf{r}_C)]\right\} U^*(\mathbf{r}_C) \, d^2r_R \, d^2r_C$ (20)
$U(\mathbf{r}) \rightarrow U(\mathbf{r}-\mathbf{r}_0) \exp(ik\mathbf{s}_0\cdot\mathbf{r})$ (21)
$L(\mathbf{r},\mathbf{s}) \rightarrow L(\mathbf{r}-\mathbf{r}_0, \mathbf{s}-\mathbf{s}_0)$ (22)
$L(\mathbf{r},\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \iint \left\{U(\mathbf{r}_R) \exp[ik\mathbf{s}\cdot(\mathbf{r}-\mathbf{r}_R)]\right\} K(\mathbf{r}_R-\mathbf{r}, \mathbf{r}_C-\mathbf{r}) \left\{U(\mathbf{r}_C) \exp[ik\mathbf{s}\cdot(\mathbf{r}-\mathbf{r}_C)]\right\}^* d^2r_R \, d^2r_C$ (23)
$U(\mathbf{r}_R) \, K(\mathbf{r}_R-\mathbf{r}, \mathbf{r}_C-\mathbf{r}) \, U^*(\mathbf{r}_C)$ (24)
$l_S(y, s_y) = \left|\sum_n T(nd) \, U(y+nd) \exp(-iknd\,s_y)\right|^2$ (25)
$l_W(y, s_y) = \sum_n U\!\left(y+\tfrac{nd}{2}\right) U^*\!\left(y-\tfrac{nd}{2}\right) \exp(-iknd\,s_y)$ (26)
$L_R(\mathbf{r},\mathbf{s}) = s_z \, U^*(\mathbf{r}) \exp(ik\mathbf{s}\cdot\mathbf{r}) \, a(\mathbf{s})$ (27)
$l_R(y, s_y) = U^*(y) \exp(iky\,s_y) \sum_n U(nd) \exp(-iknd\,s_y)$ (28)
$U(\mathbf{r}_P) = \frac{ik}{2\pi} \int U(\mathbf{r}_M) \, \frac{z_P}{|\mathbf{r}_P-\mathbf{r}_M|} \, \frac{\exp(-ik|\mathbf{r}_P-\mathbf{r}_M|)}{|\mathbf{r}_P-\mathbf{r}_M|} \, d^2r_M$ (29)
$\mathbf{F}(\mathbf{r}_P) = -\frac{1}{4\pi k\nu} \left[\frac{\partial U^*}{\partial t}\nabla U + \frac{\partial U}{\partial t}\nabla U^*\right]$ (30)
$|\mathbf{r}_P-\mathbf{r}_M| \approx |\mathbf{r}_P|$ (31)
$\frac{|\mathbf{r}_P|}{|\mathbf{r}_P-\mathbf{r}_M|} \approx 1$ (32)
$\mathbf{F}(\rho\mathbf{s}) = \left(\frac{2\pi}{k}\right)^2 \tilde{a}(\rho\mathbf{s}) \, \tilde{a}^*(\rho\mathbf{s}) \, \frac{s_z^2}{\rho^2} \, \mathbf{s}$ (33)
$\tilde{a}(\rho\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 \int U(\mathbf{r}_M) \exp(-ik|\rho\mathbf{s}-\mathbf{r}_M|) \, d^2r_M$ (34)
$d\Phi = \left(\frac{2\pi}{k}\right)^2 s_z^2 \, \tilde{a}(\rho\mathbf{s}) \, \tilde{a}^*(\rho\mathbf{s}) \, d\Omega$ (35)
$L_\rho^R(\mathbf{r},\mathbf{s}) = s_z \, U^*(\mathbf{r}) \exp(ik\rho) \, \tilde{a}(\mathbf{r}-\rho\mathbf{s}) = \left(\frac{k}{2\pi}\right)^2 s_z \, U^*(\mathbf{r}) \exp(ik\rho) \int U(\mathbf{r}_M) \exp(-ik|\mathbf{r}-\rho\mathbf{s}-\mathbf{r}_M|) \, d^2r_M$ (36)
$P_R = \sum_{|nd|<A/2} \left\{\left[U(nd) \exp(-iknd\,s_y^n)\right]^* \sum_m U(md) \exp(-ikmd\,s_y^n)\right\}$ (37)
$P_\rho^R = \left[\sum_{|nd|<A/2} U(nd) \exp(-ik\Delta_n)\right]^* \left[\sum_m U(md) \exp(-ik\Delta_m)\right]$ (38)
$g = \sum_m T(md) \, U(md) \exp(-ik\Delta_m)$ (39)
$\Delta_m - \Delta_0 \approx md\,s_y^m \approx md\,s_y^0$ (40)
$g = \sum_m T(md) \, U(md) \exp(-ikmd\,s_y^0)$ (41)
$P_\rho^R = g_1^* \, g_2$ (42)
$g_1 = \sum_{|nd|<A/2} U(nd) \exp(-ik\Delta_n)$ (43)
$g_2 = \sum_m U(md) \exp(-ik\Delta_m)$ (44)
$P_\rho^B = |g_1|^2$ (45)
$P_R \approx (g_1)^* \, g_2$ (46)
$|P_\rho^R|^2 = |g_1|^2 \, |g_2|^2$ (47)
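
To make the discrete formulas above concrete, here is a small numerical sketch under assumed parameters and a synthetic test field (none of this is the paper's code): the spectrogram, Wigner, and Rihaczek quasi light fields of Eqs. (25), (26), and (28), followed by the factored near-zone pixel of Eqs. (42)–(44) and the beamformed power of Eq. (45).

```python
import numpy as np

# Numerical sketch of the discrete formulas above; the array geometry, the
# synthetic test field, the focus point, and the aperture are illustrative
# assumptions, not the paper's setup.
wavelength = 3e-3
k = 2 * np.pi / wavelength
d = wavelength / 2
n = np.arange(-64, 65)                        # sensor indices (positions n*d)

# Synthetic test field: a Gaussian-tapered plane wave arriving from direction
# s_y = s0, evaluated wherever the formulas below request a sample.
s0 = 0.3
U = lambda y: np.exp(1j * k * s0 * y) * np.exp(-(y / (40 * d)) ** 2)
T = lambda y: np.exp(-(y / (16 * d)) ** 2)    # aperture taper for the spectrogram

def l_S(y, sy):   # spectrogram quasi light field, Eq. (25)
    return abs(np.sum(T(n * d) * U(y + n * d) * np.exp(-1j * k * n * d * sy))) ** 2

def l_W(y, sy):   # Wigner quasi light field, Eq. (26)
    return np.sum(U(y + n * d / 2) * np.conj(U(y - n * d / 2))
                  * np.exp(-1j * k * n * d * sy))

def l_R(y, sy):   # Rihaczek quasi light field, Eq. (28)
    return (np.conj(U(y)) * np.exp(1j * k * y * sy)
            * np.sum(U(n * d) * np.exp(-1j * k * n * d * sy)))

# All three concentrate energy near s_y = s0 for this test field; the Wigner
# and Rihaczek values are complex or signed pointwise and only behave like ray
# power once integrated over pixel-sized bundles of rays.
print(abs(l_W(0.0, s0)) > abs(l_W(0.0, -s0)))   # True
print(abs(l_R(0.0, s0)) > abs(l_R(0.0, -s0)))   # True

# Near-zone pixel: with Delta_m the distance from an assumed focus point to
# sensor m, the Rihaczek pixel factors into an aperture-limited sum g1 and a
# full-array sum g2 (Eqs. (42)-(44)), while the conventional beamformed power
# uses g1 alone (Eq. (45)).
focus = np.array([0.01, 0.2])                   # assumed focus point (m)
delta = np.hypot(focus[0] - n * d, focus[1])    # Delta_m for every sensor
in_ap = np.abs(n * d) < 0.05 / 2                # assumed aperture A = 5 cm
g1 = np.sum(U(n[in_ap] * d) * np.exp(-1j * k * delta[in_ap]))
g2 = np.sum(U(n * d) * np.exp(-1j * k * delta))
P_rho_R = np.conj(g1) * g2                      # Eq. (42)
P_rho_B = abs(g1) ** 2                          # Eq. (45)
```

The factored form makes the relationship to classic beamforming explicit: P_rho_B keeps only the aperture-limited factor, while P_rho_R retains the full-array factor g2 as well, so that the magnitudes satisfy Eq. (47).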