
Spatial and axial resolution limits for mask-based lensless cameras

Open Access

Abstract

One of the open challenges in lensless imaging is understanding how well such cameras resolve scenes in three dimensions. The measurement model underlying prior lensless imagers lacks the special structure that would facilitate deeper analysis; thus, a theoretical study of the achievable spatio-axial resolution has been lacking. This paper provides such a theoretical framework by analyzing a generalization of a mask-based lensless camera, where the sensor captures z-stacked measurements acquired by moving the sensor relative to an attenuating mask. We show that the z-stacked measurements are related to the scene’s volumetric albedo function via a three-dimensional convolutional operator. The specifics of this convolution, and its Fourier transform, allow us to fully characterize the spatial and axial resolving power of the camera, including its dependence on the mask. Since z-stacked measurements are a superset of those made by previously studied lensless systems, these results provide an upper bound on their performance. We numerically evaluate the theory and its implications using simulations.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Resolving a scene’s texture and depth is one of the classical problems in imaging. Perhaps its most challenging incarnation occurs in the context of a lensless camera, where the traditional lens in front of a sensor is replaced with a modulating element—an amplitude [1,2], phase [3,4] or diffractive [5] mask. A scene point casts a “shadow” or point spread function (PSF) on the sensor that encodes its angular location as well as its depth, and the sensor observes a linear sum of such PSFs arising from all scene points. Numerous prior works in this space [2,3,6–9] have successfully demonstrated 3D reconstructions by solving an inverse problem.

From a theoretical perspective, 3D imaging with lensless cameras is poorly understood. While prior works have produced ample empirical observations for specific prototypes, they are difficult to generalize and interpret. In particular, the measurement operators associated with lensless imagers are hard to analyze for scenes with extended depth. As a consequence, there has been little work in analyzing the fundamental limits on achievable spatial and axial resolutions with a lensless camera; nor is there a clear understanding of how the various parameters of the imager—including the mask pattern and the scene-to-mask and sensor-to-mask distances—affect its ability to recover texture and depth.

One of our primary observations is that the analysis of the performance of lensless cameras is complicated by the dimensionality gap between the scene, which is three-dimensional (3D), and the lensless measurements, which are invariably two-dimensional (2D) or, at best, a collection of such 2D images. To alleviate this gap, we draw inspiration from z-stacking or focus stacking and consider, as a thought experiment, the 3D space of measurements formed by moving the image sensor axially, i.e., by changing the mask-to-sensor distance. We show that, under assumptions that are commonplace in prior work, the z-stacked measurements for an amplitude mask-based lensless camera are the result of a 3D convolution of the scene, represented as a volumetric albedo function, with a 3D kernel that is dependent on the mask. This convolutional structure of the measurement operator is immensely consequential, since it provides the foundation for characterizing the lensless camera’s spatial and axial resolving power by simply computing the modulation transfer function (MTF) of the associated PSF. This result further enables us to compare and evaluate various masks and parameters of the lensless camera, thereby answering previously elusive questions. Finally, since z-stacked measurements encompass those made by a traditional system with a fixed sensor-to-mask distance, our results provide an upper bound on their performance.

Contributions. We make the following contributions.

  • Analysis via lens-free z-stacks. Our main technical result shows that the measurements obtained with z-stacking are related to the 3D scene, described as a volumetric albedo function, via a convolutional measurement operator.
  • Derivation of the MTF. We show that the 3D MTF of the convolutional operator has a closed-form expression in terms of the attenuating mask pattern of the lensless camera.
  • Dependence of achievable spatio-axial resolution on the mask. As a consequence of the MTF derivation, we connect the parameters of the mask to the achievable spatio-axial resolution. In particular, we derive an upper bound on the axial (depth) resolution given the spatial resolution of the camera and the spatial extent of the mask.

The analysis in this paper is meaningful for typical lensless cameras with static sensors. Since the static-sensor measurement is a subset of the z-stack measurement, the 3D resolution analysis derived in this paper serves as an upper bound on the performance of typical static-sensor lensless cameras.

Limitations. From a theoretical perspective, the derivation of the main results requires a number of assumptions on the scene and the image formation that, in principle, reduce their applicability. These include the use of a volumetric model that ignores occlusion, shading, and specularities, and the use of a ray-tracing approach that ignores diffraction caused by the small features in the mask. Volumetric modeling is a commonly made assumption in this literature (for example, see FlatCam [1], FlatScope [2] or SweepCam [7]); hence, these results characterize their performance with the caveat that there is an additional mismatch between the actual measurements and the assumed measurement model. Lastly, the MTF analysis only characterizes the effectiveness of the measurement operator (or the imaging system), but does not consider the use of sophisticated computational techniques for solving the inverse problem; here, the use of scene priors can potentially offer reconstructions that are better than what our analysis predicts.

2. Related work

The theory and analysis proposed here builds upon two largely independent topics: lensless imaging, and focus stack photography.

2.1 Lensless measurement operators

For a lensless camera with an amplitude mask [1,10–12], the measurement operator is convolutional when all scene points are restricted to lie on a single fronto-parallel plane. Hence, for a scene not restricted to a single depth, the measurement operator can be described as a sum of 2D convolutions; see Fig. 1(a). To stabilize the reconstruction process, previous works use priors in the form of sparsity [2] and data-driven models [13–15].


Fig. 1. Structure of 3D lensless imaging operators for a scene with two depth planes with albedo $t_{z_1}, t_{z_2}$ and lensless measurements $i$. (a) Single measurement; (b) multiple measurements; (c) Hua et al. [7] computationally focus measurements on specific depths to obtain systems that approximate a single 2D convolution with a low-frequency residual. (d) Zheng et al. [8] represent the Fourier transform of the multiple-measurement operator as a block-diagonal matrix. (e) Z-stack measurements, after re-parameterization, can be obtained by a 3D convolution between the 3D scene volume and a 3D kernel defined by the mask pattern.


One approach to improve the conditioning of the operator is to capture multiple measurements with different mask patterns. The measurements corresponding to the different masks can simply be concatenated, as in Fig. 1(b), to obtain a joint system with improved conditioning and invertibility. However, the computational burden in implementing this operator can be quite formidable, especially for high-resolution sensors. Hua et al. [7] capture multiple measurements with a translating mask to facilitate computational refocusing to different depths in the measurement space; the resulting operation approximately images only the points at the focused depth, with the rest in severe defocus, leading to the measurement operator shown in Fig. 1(c). Ignoring boundary effects of the convolution, Zheng et al. [8] formulate the measurement operator in the frequency domain of the measurements; here, the sum-of-convolutions operator reduces to a block-diagonal structure, as seen in Fig. 1(d), which can be implemented very efficiently.

In this paper, we show that the z-stacked measurements, under an appropriate re-parameterization, are convolutional in the scene’s volumetric albedo. Antipa et al. [3] make a similar observation, implementing the sum of 2D convolutions as a 3D convolution. However, there are notable differences, including their use of a phase mask, which has a different image formation from ours. Further, our use of z-stacked measurements introduces important re-parameterizations of the scene and measurements that are critical to the derivation of the convolutional model. Finally, we detail a number of important consequences of the convolution model, which go significantly beyond prior work.

2.2 Focus stack photography

Focus stack photography acquires multiple images of a scene by sweeping the focus plane of the imaging system [16,17]; typically, this is achieved via axial movement of the sensor with respect to the imaging lens or by using focus-tunable optics [18]. Focus stacks have been studied for 3D scene estimation, using focus [19] and defocus [20] cues, as well as obtaining extended depth-of-field images [21]—the latter being useful in microscopy [22] where the extremely shallow depth of field is often a challenge.

Sundaram and Nayar [23] study the recoverability of depth of a textureless scene from a focus stack, and present a theoretical analysis of MTFs that is similar to this work. Specifically, they show that the volumetric measurement formed by a scene point in a telecentric system is shift-invariant. They derive the optical transfer function associated with this system and establish conditions under which the depth of a textureless scene is resolvable. The theory presented in this paper can be interpreted as the lensless counterpart to their work; in spite of the conceptual similarity between the two results, the differences in image formation between a lens-based and a lensless imager lead to distinct results and consequences.

3. Z-stacking and the convolutional model

We begin with a recap of the image formation model for imaging a 3D scene with a lensless camera, followed by a derivation showing that z-stacked measurements obey a 3D convolutional model, and finally a discussion of its implications for static-sensor prototypes.

3.1 Measurement model for a 3D scene

Consider an image sensor placed behind an amplitude mask defined with an attenuation function $m({\bf x})$ with ${\bf x} = (x, y)$; we assume that the mask is aligned with the $z=0$ plane (see Fig. 2). Following prior work [7], we model the scene as a volumetric albedo function $t({\bf x}, z)$. When the sensor is placed on the plane $z=d < 0$ (i.e., a distance $|d|$ behind the mask), the intensity observed at a point ${\bf p} = (p, q)$ on the sensor is given as

$$i({\bf p}, d) = \int\limits_{z=z_{\min}}^{z_{\max}} \iint_{{\bf x}={-}\infty}^{\infty} t\left({\bf x}, z\right) m\left({\bf p} + \frac{-d}{z-d}({\bf x} - {\bf p}) \right) d{\bf x}\ dz$$
where the scene occupies the depth range $[z_{\min }, z_{\max }]$, with $0 < z_{\min } < z_{\max } < \infty$. Equation (1) suggests that a scene point produces a sensor measurement that is a scaled and translated copy of the mask; in particular, the scaling parameter is depth dependent. This image formation ignores the effects of light fall-off, occlusion between scene points, shading, specular reflections, and the sensor’s angular response; it also ignores the effects of diffraction. In practice, diffraction causes the PSF to no longer be a scaled copy of the mask; this changes Eq. (1) by replacing the mask function $m(\cdot )$ with its diffracted counterpart. Supplement 1 provides validation for this, including captured PSFs from a real hardware prototype; we show that the correlation between scaled PSFs across a reasonable depth range is high. This validates that shift-invariance in space and scaling in depth hold for the diffracted mask as well, an assumption commonly made in most lensless camera prototypes [3,4,7–9], to which this work provides an analysis.
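To make the image formation in Eq. (1) concrete, the following sketch (in Python with NumPy; the function and parameter names are ours and purely illustrative, not taken from the released code) renders the shadow cast by a single scene point as a scaled and shifted copy of a discretized mask, under the same ray-tracing assumptions stated above.

```python
import numpy as np

def point_psf(mask, mask_pitch, x_s, y_s, z, d, sensor_shape, sensor_pitch):
    """Shadow (PSF) of a single scene point (x_s, y_s, z), z > 0, on a sensor at
    z = d < 0 behind the mask, following Eq. (1): sensor point p samples the
    mask at p + (-d)/(z - d) * (x - p)."""
    H, W = sensor_shape
    py, px = np.meshgrid((np.arange(H) - H / 2) * sensor_pitch,
                         (np.arange(W) - W / 2) * sensor_pitch, indexing="ij")
    s = -d / (z - d)                  # depth-dependent magnification, in (0, 1)
    mx = px + s * (x_s - px)          # mask-plane coordinates seen by each pixel
    my = py + s * (y_s - py)
    Mh, Mw = mask.shape               # nearest-neighbor lookup into the mask array
    ix = np.round(mx / mask_pitch + Mw / 2).astype(int)
    iy = np.round(my / mask_pitch + Mh / 2).astype(int)
    valid = (ix >= 0) & (ix < Mw) & (iy >= 0) & (iy < Mh)
    psf = np.zeros(sensor_shape)
    psf[valid] = mask[iy[valid], ix[valid]]
    return psf
```

A full measurement of a volumetric scene is then the weighted sum of such PSFs over all scene points, as in Eq. (1).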


Fig. 2. Z-stacked lensless measurements. The sensor is translated axially, along $z$, to different sensor-to-mask distances $d$ to obtain a z-stack. Figure adapted from [7].


3.2 3D convolution model for a z-stack

We now consider a lensless camera where the sensor captures multiple measurements while moving axially (i.e., with varying sensor-to-mask distances). An important distinction is that these z-stacked measurements are volumetric; specifically, we can extend Eq. (1) to write the measurements $i({\bf p}, d)$ as explicitly dependent on the spatial location ${\bf p}$ as well as the axial location $d$.

Change of variables. We now show that the z-stack $i({\bf p}, d)$ is related to a re-parameterized volumetric scene albedo $t({\bf x}, z)$ via a convolution operator. Starting from Eq. (1), we can rewrite $i({\bf p}, d)$ by rearranging the terms in $m(\cdot )$ as follows:

$$i({\bf p}, d) = \int\limits_{z=z_{\min}}^{z_{\max}} \iint_{{\bf x}={-}\infty}^{\infty} t({\bf x}, z) m\left( \frac{\frac{{\bf p}}{d} - \frac{{\bf x}}{z}}{\frac{1}{d} - \frac{1}{z}} \right) d{\bf x} dz.$$

The equation above is significantly simplified if we change the variables from Cartesian coordinates to the following choice:

$${\widetilde{\bf x}} = \frac{{\bf x}}{z}, \,\,\, {\widetilde{z}} = \frac{1}{z}, \,\,\, {\widetilde{\bf p}} = \frac{{\bf p}}{d}, \,\,\, {\widetilde{d}} = \frac{1}{d}.$$

This new parameterization changes $(x, y)$ to the tangent of the angle at the origin, and depth $z$ to its reciprocal. Supplement 1 contains an illustration of this parameterization. Finally, we use ${\widetilde {i}}({\widetilde {\bf p}}, {\widetilde {d}})$ and ${\widetilde {t}}({\widetilde {\bf x}}, {\widetilde {z}})$ to denote the z-stack measurements and the scene albedo, respectively, in their new variables. With this, we can rewrite Eq. (2) as

$${\widetilde{i}}({\widetilde{\bf p}}, {\widetilde{d}}) = \int\limits_{{\widetilde{z}}=z_{\max}^{{-}1}}^{z_{\min}^{{-}1}} \iint_{{\widetilde{\bf x}}={-}\infty}^{\infty} \frac{1}{{\widetilde{z}}^4} {\widetilde{t}}({\widetilde{\bf x}}, {\widetilde{z}}) m\left(\frac{{\widetilde{\bf p}}-{\widetilde{\bf x}}}{{\widetilde{d}}-{\widetilde{z}}}\right) d{\widetilde{\bf x}} d{\widetilde{z}}.$$

Here, the $1/{\widetilde {z}}^4$ term is the modulus of the determinant of the Jacobian underlying the change of variables. Defining the 3D kernel $k({\widetilde {\bf x}}, {\widetilde {z}})$ and the depth-normalized texture ${\widetilde {t}}'({\widetilde {\bf x}}, {\widetilde {z}})$ as

$$k({\widetilde{\bf x}}, {\widetilde{z}}) = m\left( \frac{{\widetilde{\bf x}}}{{\widetilde{z}}} \right), \quad {\widetilde{t}}'({\widetilde{\bf x}}, {\widetilde{z}}) = \frac{1}{{\widetilde{z}}^4} {\widetilde{t}}({\widetilde{\bf x}}, {\widetilde{z}}),$$
we can express Eq. (4) as
$${\widetilde{i}}({\widetilde{\bf p}}, {\widetilde{d}}) = \int\limits_{{\widetilde{z}}=z_{\max}^{{-}1}}^{z_{\min}^{{-}1}} \iint_{{\widetilde{\bf x}}} {\widetilde{t}}'({\widetilde{\bf x}}, {\widetilde{z}})\ k({\widetilde{\bf p}}-{\widetilde{\bf x}}, {\widetilde{d}}-{\widetilde{z}}) d{\widetilde{\bf x}} d{\widetilde{z}}.$$

To proceed further and obtain the desired convolutional model, we need to make an additional assumption pertaining to the limits of integration. Specifically, the integral in Eq. (6) is physically meaningful—i.e., consistent with what a sensor would measure—only when the sensor and the scene are on opposite sides of the mask. After all, the mask plays no role when the sensor and the scene are on the same side of it. We can implicitly enforce the sensor-mask-scene configuration with the following two conditions: first, the scene albedo ${\widetilde {t}}'({\widetilde {\bf x}}, {\widetilde {z}}) = 0$ when ${\widetilde {z}} \le 0$; and second, the output ${\widetilde {i}}({\widetilde {\bf p}}, {\widetilde {d}})$ is evaluated only for ${\widetilde {d}} < 0$. We can now change the limits of integration to get

$${\widetilde{i}}({\widetilde{\bf p}}, {\widetilde{d}}) = \iiint_{\{{\widetilde{z}}, {\widetilde{\bf x}}\} ={-}\infty}^\infty {\widetilde{t}}'({\widetilde{\bf x}}, {\widetilde{z}})\ k({\widetilde{\bf p}}-{\widetilde{\bf x}}, {\widetilde{d}}-{\widetilde{z}}) d{\widetilde{\bf x}} d{\widetilde{z}} = ( {\widetilde{t}}' {\ \ast_{3D}\ } k)({\widetilde{\bf p}}, {\widetilde{d}}).$$

Hence, the z-stack of measurements is related to the scene’s volumetric albedo, both under their respective re-parameterizations, via a 3D convolution operator whose kernel is dependent only on the mask pattern.
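The following sketch (Python with NumPy/SciPy; the names and the grid-sampling approach are our own illustrative assumptions) implements this forward model on discretized grids: it samples the kernel of Eq. (5) from a mask function and applies the 3D convolution of Eq. (7).

```python
import numpy as np
from scipy.signal import fftconvolve

def build_kernel(mask_fn, xt, yt, zt):
    """Sample the 3D kernel k(x~, y~, z~) = m(x~/z~, y~/z~) of Eq. (5) on a uniform
    grid in the re-parameterized coordinates (zt should exclude 0); mask_fn(x, y)
    is a hypothetical function returning the mask attenuation at mask-plane
    coordinates."""
    Xt, Yt, Zt = np.meshgrid(xt, yt, zt, indexing="ij")
    return mask_fn(Xt / Zt, Yt / Zt)

def zstack_forward(albedo_tilde_prime, kernel):
    """Forward model of Eq. (7): the z-stack is a 3D convolution of the
    depth-normalized, re-parameterized albedo with the mask-defined kernel."""
    return fftconvolve(albedo_tilde_prime, kernel, mode="same")
```

For the convolution to be valid, both the kernel and the albedo must be sampled uniformly in the re-parameterized coordinates (tangent of the subtended angle and reciprocal of depth).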

The convolutional model presented in Eq. (7) is the centerpiece of our contributions. As we show in the next section, it can be leveraged to characterize basic properties of the imager, including the modulation transfer function (MTF) that is defined as the modulus of the Fourier transform of the convolutional kernel. Before we delve into this analysis, a few observations are in order.

Connection to non-line-of-sight imaging. The interested reader is referred to a recent work in non-line-of-sight imaging that formed the inspiration for the derivation above. The measurement models in non-line-of-sight imaging are similar to lensless imaging in their lack of structure that facilitates fast implementation or analysis. O’Toole et al. [24] show that an appropriate re-parameterization of the variables describing the scene and measurements leads to a convolution operator. A notable difference is that the non-line-of-sight operator is 5D, and [24] select a 3D subset to match the dimensionality of the scene and measurements; in contrast, our work matches the dimensionality by expanding the space of measurements via z-stacking.

4. Analysis

We now derive an expression for the Fourier transform of the PSF, and use it to characterize the lateral and axial resolution of the device in terms of its mask.

4.1 Derivation of the MTF

A key feature of an imaging system whose measurement operator is convolutional is that we can easily characterize the fundamental limits of achievable resolution. This is often done by computing the MTF of the system, which is the magnitude of the Fourier transform of the PSF. Note that the resolution is derived for the scene under the parameterization described in Section 3.2, but the resolution limits can be mapped back to the original space; we show an example in Section 4.2.

In our case, the PSF is the 3D function $k({\widetilde {\bf x}}, {\widetilde {z}})$ defined in Eq. (5). Let $K(f_x, f_y, f_z)$ be the 3D Fourier transform of the 3D kernel $k({\widetilde {x}}, {\widetilde {y}}, {\widetilde {z}})$:

$$K(f_x, f_y, f_z) = \iiint k({\widetilde{x}}, {\widetilde{y}}, {\widetilde{z}}) e^{- j 2\pi \left({\widetilde{x}} f_x + {\widetilde{y}} f_y + {\widetilde{z}} f_z \right)} d{\widetilde{x}}\ d{\widetilde{y}}\ d{\widetilde{z}}$$

Plugging in the definition of $k(\cdot )$ from Eq. (5), we can simplify $K(\cdot )$ as

$$\small K(f_x, f_y, f_z) = \frac{-1}{4\pi^2 (f_x^2+f_y^2)^{\frac{3}{2}}} r_\ell\left( -\frac{f_z}{\sqrt{f_x^2+f_y^2}}, \tan^{{-}1}\left(\frac{f_y}{f_x}\right) \right).$$

Here, $r_\ell (\cdot, \cdot )$ is the Radon transform of $\ell (x, y)$, the Laplacian of the mask, defined as

$$ \ell(x, y) = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) m(x, y). $$

A detailed derivation is provided in Supplement 1.

Subsequent analysis is simplified if we define $\rho = \sqrt {f_x^2+ f_y^2}$ as the magnitude of angular frequency and $\psi = \tan ^{-1}(f_y/f_x)$ as its directionality. The Fourier transform of the 3D PSF in polar coordinates for $(f_x, f_y)$, denoted as $K^P$, can now be written as

$$K^P(\rho, \psi, f_z) = \frac{-1}{4\pi^2 \rho^3} r_\ell\left( -\frac{f_z}{\rho}, \psi \right).$$

Numerical verification. We observe from Eq. (10) that if $\rho$ is fixed to some value, say $\rho = \rho _0$, then

$$K^P(\rho = \rho_0, \psi, f_z) \propto r_\ell({-}f_z/\rho_0, \psi).$$

Hence, the 2D slice of $K^P$ for a fixed $\rho$ is a scaled copy of $r_\ell$, the Radon transform of the Laplacian of the mask pattern. Figure 3 provides a numerical verification of this; for a collection of masks commonly used in lensless imaging, we compute the Laplacian of the mask, its Radon transform, and a $\psi$-$f_z$ slice of the magnitude spectrum $|K^P(\rho, \psi, f_z)|$, for a fixed value of $\rho$. We observe that the Radon transform of the Laplacian matches the 2D slice of the magnitude spectrum well, with two notable sets of artifacts: aliasing of the DFT, which results in repeating copies, and a sinc-like decay, especially along the $f_z$ axis, which we attribute to the windowing of the kernel; we elaborate on this in the next section.
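A minimal numerical check of this relationship can be set up as follows (Python with SciPy and scikit-image; this is a sketch under the assumption that the mask and kernel are available as discrete arrays, and is not the script used to produce Fig. 3).

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.transform import radon

def radon_of_laplacian(mask, angles_deg=np.arange(180)):
    """Discrete counterpart of r_l in Eq. (9): the Radon transform (sinogram) of
    the Laplacian of the mask pattern."""
    return radon(laplace(mask.astype(float)), theta=angles_deg, circle=True)

def kernel_magnitude_spectrum(kernel):
    """|K(f_x, f_y, f_z)|, the magnitude of the 3D DFT of the sampled kernel.
    Resampling it onto (rho, psi, f_z) and fixing rho = rho_0 should reproduce,
    up to scale, aliasing, and windowing artifacts, the sinogram above."""
    return np.abs(np.fft.fftshift(np.fft.fftn(kernel)))
```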


Fig. 3. Comparison of mask patterns and their MTFs. Multiple mask patterns (a, inset) are shown with their Laplacian (a) and the Radon transform of the Laplacian (b). (c) shows $|K^P(\rho =0.4, \psi, f_z)|$, a $\psi$-$f_z$ slice, obtained numerically; the similarity between rows (b) and (c) verifies our analytical expression for $K$. (d) shows $\log \int _{\psi }|K^P(\rho, \psi, f_z)|d\psi$ obtained numerically. The red lines marked in (d) correspond to the butterfly structure defined in Eq. (12); this is a consequence of the compact spread of the mask, indicated by the red dotted circles in (a). The structures of the MTF in (d) are constrained within the butterfly structure, except for the leakage due to the windowing of the PSF in the spatial domain.


4.2 Dependence of the MTF on the mask

The expressions for the Fourier transform of the PSF also provide a direct way to understand how the mask affects the MTF of the resulting system. We study this next.

Spatial extent of the mask. Many of the masks that we use have a finite spatial extent. The MTFs of such masks exhibit an important symmetry. Specifically, suppose that the mask $m(x, y)$ is zero outside of a disc of radius $R_m$, i.e., $m(x, y) = 0, \ \forall (x, y) \textrm { s.t. } x^2 + y^2 \ge R_m^2$; then the Radon transform of the Laplacian satisfies

$$ r_\ell(\alpha, \psi) = 0, \ \forall |\alpha| > R_m. $$

This implies that the Fourier transform can be non-zero only when $|f_z/\rho | \le R_m$, or

$$-R_m \rho \le f_z \le R_m \rho.$$

Hence, if we visualize a different 2D cross-section of $K^P$, one corresponding to a fixed $\psi$, then we expect to see non-zero values only within the “butterfly” shape defined by Eq. (12). Figure 3(d) visualizes this for a number of masks.

The butterfly structure places an important constraint on the achievable spatio-axial resolution. Similar structures also arise in the analysis of spatio-temporal resolutions in videos [25]; as in our analysis, such structures serve to strongly couple the achievable resolutions across the two domains. Given measurements that resolve the tangent of the subtended angle with a resolution of $\delta _p$ (the ratio of sensor pixel pitch to sensor-to-mask distance), we can only resolve frequencies corresponding to $\rho \le \frac {1}{2\delta _p}$. Hence, for a mask with support restricted within a disc of radius $R_m$, the maximum resolvable axial frequency is bounded as

$$|f_z| \le \frac{R_m}{2\delta_p}.$$

This naturally explains the lack of depth resolution in a pinhole mask and the improvement in performance with multiple pinholes, as well as larger-sized masks based on M-sequences or random constructions, since they have a larger $R_m$ than pinholes. It is worth noting that the bound discussed above is expected to be loose, since it only considers the diameter of the mask and ignores the specific pattern within.

Example. This analysis allows us to compute an upper bound on the 3D resolution of lensless camera prototypes. For example, in the FlatScope prototype [2], the spatial extent of the mask is contained within a disc of radius $R_m={1.84}\;\textrm{mm}$. The spatial resolution is bounded by $\delta _p = \frac {\Delta p}{d} = 1.12\times 10^{-2}$, where $\Delta p = {2.24}\;\mathrm{\mu}\textrm{m}$ is the effective pixel pitch and $d={0.2}\;\textrm{mm}$ is the mask-to-sensor distance. FlatScope reports a lateral resolution of less than ${2}\;\mathrm{\mu}\textrm{m}$. After converting the sensor angular resolution ($\delta _p$) to the scene spatial resolution ($\delta _p \times z$), our analysis predicts that the ${2}\;\mathrm{\mu}\textrm{m}$ spatial resolution holds for scenes that are farther than $z = {178}\;\mathrm{\mu}\textrm{m}$ from the mask. This is the depth range in which FlatScope has shown experimental results ($z > {170}\;\mathrm{\mu}\textrm{m}$). The axial resolution of the prototype has an upper bound from Eq. (13): $f_{zmax} = \frac {R_m}{2 \delta _p} = {82.1}\;\textrm{mm}$, a frequency in the reciprocal-depth (diopter) domain. FlatScope shows an axial resolution of less than ${15}\;\mathrm{\mu}\textrm{m}$ at depth $z= {270}\;\mathrm{\mu}\textrm{m}$ from experiments. Our analysis predicts that the diopter resolution is bounded by $\Delta \tilde z = \frac {1}{2 f_{zmax}} = \frac {\delta _p}{R_m}$. The resulting depth resolution at $z={270}\;\mathrm{\mu}\textrm{m}$ can be computed as $\Delta z = \vert \frac {\partial z}{ \partial {\widetilde {z}}} \Delta {\widetilde {z}} \vert = \frac {\Delta {\widetilde {z}}}{{\widetilde {z}}^2} = z^2 \frac {\delta _p}{R_m} = {0.44}\;\mathrm{\mu}\textrm{m}$. The predicted theoretical upper bound on the axial resolution is better than the reported empirical axial resolution, as it does not account for the limitations further imposed by the subsampling operator discussed in Section 4.4 or by sensor non-idealities. This suggests that further improvements to the axial resolution may be possible by acquiring a z-stack and by better handling measurement noise and quantization.
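The arithmetic in this example is straightforward to reproduce; the short Python snippet below restates it using the values quoted above.

```python
# FlatScope numbers from the example above (all lengths in meters).
R_m     = 1.84e-3              # mask support radius
pitch   = 2.24e-6              # effective pixel pitch
d       = 0.2e-3               # mask-to-sensor distance
delta_p = pitch / d            # angular resolution, ~1.12e-2 (dimensionless)
z_min   = 2e-6 / delta_p       # depth beyond which 2 um lateral resolution holds, ~178 um
f_zmax  = R_m / (2 * delta_p)  # axial frequency bound of Eq. (13), ~82.1 mm
z       = 270e-6               # depth at which FlatScope reports axial resolution
dz      = z**2 * delta_p / R_m # predicted depth resolution bound at z, ~0.44 um
print(delta_p, z_min, f_zmax, dz)
```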

A counter-intuitive consequence of the butterfly structure is that the depth resolving power of a pinhole, modeled as a disc, seems to change when we increase its radius. We discuss this next.

Pitch of the mask. Given a mask pattern—for example, a pinhole—changing its pitch scales the mask pattern and, consequently, scales the Radon transform of its Laplacian along its shift axis. This suggests that we can get better depth resolution simply by scaling up a mask pattern. Intuitively, this makes sense: the defocus blur is enlarged and hence, for a given sensor pitch, we can better distinguish smaller changes in depth.

Sparsity of the Laplacian. Another significant factor that detrimentally affects reconstruction performance is the presence of nulls in the MTF. We observe such nulls in the simple masks consisting of one or a few pinholes; the Laplacian of such masks has positive and negative intensities that cancel out during the line integrals, and so their Radon transform has a sparse structure as well (as seen in Fig. 3(c)). As a consequence, while increasing the size of the pinhole enlarges the butterfly structure, the sparsity of the Radon transform results in the same number of frequency measurements, albeit ones that reach higher axial resolutions.

DC term and the effect of windowing. The expressions in Eq. (9) and Eq. (10) also indicate that the Fourier transform tends to infinity as we approach the origin (i.e., the DC term). This is not surprising since the DC term is equal to the integral, i.e., the area under the curve (AUC), of the PSF. Since the PSF $k({\widetilde {\bf x}}, {\widetilde {z}})$ is simply a projection of the mask $m$ along the depth dimension, its AUC is infinite. Practical considerations, however, force us to work with a windowed kernel in the simulations of Fig. 3. In particular, when we seek to recover a scene and sensor measurements that both have finite extents in the tangent of the angle and the reciprocal of the depth, the effective kernel that we see is a windowed version of the theoretical kernel.

Windowing has two distinct effects on $K$. First, the term at the origin is now finite; $K$ approaches the AUC of the windowed PSF as we approach the origin. Second, windowing in the spatial domain results in convolution with a sinc function in the frequency domain; this produces the vertical and horizontal spread of the MTF, especially in regions outside the butterfly structure in Fig. 3(d).

Effect of axial sampling density and range. When the z-stack is uniformly sampled within some range in the diopter space, the effect of sampling density follows the traditional trade-offs characterized by the Nyquist–Shannon sampling theorem [26]; specifically, to avoid aliasing, this model would only be able to handle scenes which are band-limited along depth as determined by the sampling rate. The effect of the sampling range is to be interpreted as windowing of the ideal sampling of the z-stack, which has been discussed in the previous paragraph.
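As a concrete illustration of this trade-off, the snippet below computes the Nyquist limit on the axial (diopter-domain) frequency for a uniformly sampled z-stack; the numbers plugged in are the z-stack parameters used later in Section 5 and serve only as an example.

```python
# Nyquist limit along the axial (reciprocal-depth) axis for a uniform z-stack.
d_near, d_far, n_steps = 5e-3, 10e-3, 64                    # sensor-to-mask range [m], sample count
delta_diopter = (1 / d_near - 1 / d_far) / (n_steps - 1)    # spacing of the samples in 1/d [1/m]
f_z_nyquist = 1.0 / (2.0 * delta_diopter)                   # highest alias-free axial frequency [m]
```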

4.3 Lateral and axial resolution

We are interested in expressions for the MTF as functions of $\rho$ and $f_z$, respectively, corresponding to characterizing the lateral and axial resolution. To get such expressions, we start with the modulus of $K^P(\rho, \psi, f_z)$ and integrate/marginalize the variables that we want to exclude. Such a marginalization assumes that all frequency components are equally important, and can be interpreted as an average over an ensemble of 2D signals, without any priors.

Lateral resolution. For lateral resolution, we are interested in characterizing the MTF purely as a function of $\rho$, which we can obtain by integrating the modulus of $K$ over $f_z$ and $\psi$:

$$\begin{aligned} \textrm{MTF}(\rho) &= \frac{1}{4 \pi^2 \rho^3} \int_{f_z} \int_\psi \left|r_\ell \left( -\frac{f_z}{\rho}, \psi \right) \right| df_z d\psi &= \frac{1}{\rho^2 } \underbrace{\left[\frac{1}{4\pi^2} \int_{f_z} \int_\psi | r_\ell(f_z, \psi) | df_z d\psi \right]}_{\textrm{constant that is mask dependent}}. \end{aligned}$$

This suggests that the lateral resolution for different masks is similar, except for a constant. Further, it also suggests that the tail of $\textrm {MTF}(\rho )$ decays as $1/\rho ^2$; therefore, visualizing $\textrm {MTF}(\rho )$ in a log-log plot should produce linear profiles with a slope of $-2$. We verify this in Fig. 4 for the masks shown earlier in Fig. 3.

Axial resolution. To obtain MTF as a function of $f_z$, we can marginalize the modulus of $K(f_x, f_y, f_z)$ over $f_x$ and $f_y$, or equivalently, $\rho$ and $\psi$.

$$\textrm{MTF}(f_z) = \frac{1}{4\pi^2} \int_\rho \int_\psi \frac{1}{\rho^3} \left| r_\ell\left( -\frac{f_z}{\rho}, \psi \right) \right| \rho d\rho d\psi.$$

Suppose we define $h(f_z)$ as

$$ h(f_z) = \frac{1}{4 \pi^2} \int_\psi |r_\ell(f_z, \psi) |d\psi, $$
then
$$\textrm{MTF}(f_z) = \int_\rho \frac{1}{\rho^2} h\left(\frac{f_z}{\rho}\right) d\rho = \frac{1}{f_z} \hspace{-8mm} \underbrace{ \left[ \int_\tau h\left({\tau} \right) d\tau \right]}_{\textrm{constant that is mask dependent}}.$$

This suggests that the axial MTF has a tail decay of $1/f_z$, or linear with a slope of $-1$ in a log-log plot. We confirm this using simulations in Fig. 4.
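These decay rates can be checked numerically. The sketch below (Python with NumPy; the binning scheme is our own choice and only one of several reasonable discretizations) marginalizes a sampled $|K|$ over the excluded frequency variables; plotting the two outputs on log-log axes should show tails with slopes of roughly $-2$ and $-1$, as in Fig. 4.

```python
import numpy as np

def marginal_mtfs(K_abs, fx, fy, fz, n_bins=64):
    """Numerically marginalize |K| to obtain MTF(rho) and MTF(f_z), mirroring
    Eq. (14) and Eq. (16); K_abs is |K| sampled on the (fx, fy, fz) grid."""
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    rho = np.sqrt(FX**2 + FY**2)
    edges = np.linspace(0.0, rho.max(), n_bins + 1)
    K_sum_fz = K_abs.sum(axis=2)                       # integrate out f_z
    idx = np.digitize(rho.ravel(), edges) - 1          # annulus index for each (f_x, f_y)
    sums = np.bincount(idx, weights=K_sum_fz.ravel(), minlength=n_bins + 1)
    counts = np.bincount(idx, minlength=n_bins + 1)
    mtf_rho = sums[:n_bins] / np.maximum(counts[:n_bins], 1)
    mtf_fz = K_abs.sum(axis=(0, 1))                    # integrate out f_x and f_y
    rho_centers = 0.5 * (edges[:-1] + edges[1:])
    return rho_centers, mtf_rho, fz, mtf_fz
```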


Fig. 4. Lateral and axial MTF of different masks. The lateral and axial MTFs of different masks are numerically obtained via the FFT of the 3D kernel. The slopes of the lines agree with those of $\rho ^{-2}$ and $f_z^{-1}$, respectively, as shown by the black dotted lines. This validates the analytical expressions in Eq. (14) and Eq. (16).


Remarks. The expressions for the slices of the MTF in Eq. (14) and Eq. (16) are obtained by integrating out the variables that are not of immediate interest. Implicitly, this assumes that all frequencies are equally important; in reality, natural scenes have their energy concentrated at low frequencies and hence, the plots have to be interpreted with this distinction in mind. This also explains why the open aperture appears at a higher value in the $\textrm {MTF}(\rho )$ plot; Fig. 3(d) shows that the open aperture samples higher depth frequencies compared to the pinhole, so de-emphasizing the high frequencies, to match their distribution in natural scenes, will likely result in an aperture MTF curve that is worse than that of the pinhole.

4.4 Reduction to the static sensor scenario

The analysis of the MTF of the z-stacked measurements has immediate consequences for traditional lensless cameras, such as FlatCam [1], FlatScope [2], and SweepCam [7], that rely on a single (or multiple) image measurements without sensor movement; after all, we can simply choose to retain a single image in the z-stack, which would correspond to a static-sensor scenario. The measurement operator associated with a static sensor can be written as a sub-sampling operator applied to that of the z-stacked measurements; specifically, if we denote the measurement operators associated with the z-stacked sensor and the static sensor as $\mathcal {A}_{\textrm {zcam}}$ and $\mathcal {A}_{\textrm {static}}$, respectively, then they are related as follows:

$$ \mathcal{A}_{\textrm{static}} = \mathcal{S}_z \circ \mathcal{A}_{\textrm{zcam}}, $$
where $\mathcal {S}_z$ is the subsampling operator along the $z$-axis that selects the measurements corresponding to the sensor-to-mask distance associated with the static case. As a consequence, the operator in the static scenario $\mathcal {A}_{\textrm {static}}$ inherits all the disadvantages of the z-stacked operator $\mathcal {A}_{\textrm {zcam}}$; for example, the null space of the latter is necessarily a subset of the null space of the former. In the case of SweepCam, the mask is translated laterally (i.e., along $x$-$y$); each of those masks is associated with a different z-stacked system, so the nulls in the MTF of one mask can potentially be alleviated by its translations.
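In code, this relationship is just a composition of the z-stack forward model with a slicing operation; the sketch below (illustrative names, assuming the last axis of the z-stack indexes the sensor-to-mask distance) makes the inheritance explicit.

```python
import numpy as np
from scipy.signal import fftconvolve

def A_zcam(albedo_tilde_prime, kernel):
    """Z-stack forward operator of Eq. (7): 3D convolution with the mask kernel."""
    return fftconvolve(albedo_tilde_prime, kernel, mode="same")

def A_static(albedo_tilde_prime, kernel, d_index):
    """Static-sensor operator, A_static = S_z o A_zcam: keep only the slice of the
    z-stack at the single sensor-to-mask distance indexed by d_index."""
    return A_zcam(albedo_tilde_prime, kernel)[:, :, d_index]
```

Any scene component in the null space of A_zcam is mapped to zero by A_static as well, which is the inheritance stated above.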

Remarks. The predictions we make are in the re-parameterized space, where the spatial coordinates are represented as tangents of the angles subtended and the axial coordinate is the reciprocal of depth. These transformed coordinates have a natural interpretability in the context of imaging. In a traditional lens-based camera, spatial coordinates on the sensor are linear in the tangent of the angle subtended by scene points. Similarly, the size of the defocus blur, with or without a lens, is linear in the reciprocal of depth. Thus, an analysis in the transformed space is meaningful once evaluated from this perspective.

5. Simulation results

Setup. Since the discussion above was heavily based on a volumetric representation of the scene, we aim to test the proposed method in realistic situations that involve light fall-off, occlusion, foreshortening, and sensor angular response; such effects can be significant for scenes very close to the mask. Specifically, we render the simulated measurements by ray-tracing a mesh-based scene so that it captures the above-mentioned effects. However, the renderer does not model diffraction and other wave effects. Details of the renderer and its comparison to the volumetric modeling can be found in Supplement 1. Additionally, we add Gaussian noise based on the dynamic range of a typical machine vision sensor (71.95 dB). For z-stacking, we translate a 13.13 mm $\times$ 8.75 mm sensor from 5 mm to 10 mm from the mask, in 64 steps linear in $\frac {1}{d}$, and render a measurement at each location. The dimensions of our measurements are $114\times 171\times 64$; the reconstructed volume has the same dimensions. Each of the reconstructions was completed within 5 minutes. For single-measurement results, we use the measurement captured at the furthest location in the z-stack.

Performance of different masks. We numerically validate our resolution analysis of mask patterns in Fig. 5. We image a scene with different mask patterns under the proposed z-stacking. The scene contains five points of diameter ${200}\;\mathrm{\mu}\textrm{m}$ evenly spaced on a line between the points (-2 mm, 2 mm, 5 mm) and (4 mm, -4 mm, 10 mm). Figure 5 visualizes the 3D kernels for a set of masks, and reconstructions using regularized least squares ($\ell _2$) and a sparse $\ell _1$ prior (with FISTA [27]). The results reflect the observations made in Sec. 4.3 about the spatial and axial resolution of the masks. As expected, the pinhole has almost no depth resolution, and the result reflects this by producing long streaks instead of points. Stereo masks have better depth resolution and result in shorter streaks in the $\ell _1$ reconstructions. Note that the stereo mask $\ell _2$ regularization results show long streaks; this suggests that typical stereo systems recover depth with a strong dependence on the sparsity of the scene. The longer M-sequence mask has the best depth resolution and results in sparse points. It is also instructive to note that using a sparse prior always produces isolated points in the volume; however, for the masks with poor depth resolution, the locations of those points are incorrectly reconstructed. This highlights the importance of theoretical analysis, as the use of strong priors makes it difficult to analyze the performance of lensless cameras.
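For reference, a minimal sparse solver of the kind used for the $\ell _1$ reconstructions can be written as below (Python with NumPy). This is an illustrative FISTA sketch assuming a circular-convolution forward model with a centered kernel the same size as the measurements; it is not the authors' released implementation.

```python
import numpy as np
from numpy.fft import fftn, ifftn, ifftshift

def fista_l1(y, kernel, lam, n_iter=200):
    """FISTA [27] for min_x 0.5*||k * x - y||^2 + lam*||x||_1, with '*' a circular
    3D convolution implemented in the Fourier domain."""
    K  = fftn(ifftshift(kernel))                           # transfer function of the kernel
    A  = lambda x: np.real(ifftn(K * fftn(x)))             # forward model
    At = lambda r: np.real(ifftn(np.conj(K) * fftn(r)))    # adjoint of the forward model
    step = 1.0 / np.max(np.abs(K)) ** 2                    # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros_like(y)
    v, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = v - step * At(A(v) - y)                        # gradient step on the smooth term
        x_new = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-thresholding (prox of l1)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = x_new + ((t - 1.0) / t_new) * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x
```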


Fig. 5. Different masks and their reconstructions. The 3D kernel and reconstructed volumes are all plotted in the re-parameterized space given in Section 3.2. The scene consists of five points aligned on a diagonal line; the ground-truth locations of the points are overlaid with green asterisks in the reconstructions. Mask patterns with a larger spread have higher depth resolution – mseq-2,8 reconstructs points as points instead of the line streaks produced by the other masks under $\ell _2$ regularization. While a sparse prior ($\ell _1$) results in sparse points in the reconstruction, they can be located at the wrong depth for masks with poor depth resolution (pinhole); thus, it is essential to characterize the cameras’ resolution theoretically in addition to empirical observations.


6. Discussions

This paper provides a theoretical characterization of the performance of lensless imagers, in terms of spatio-axial resolution, and of the central role played by the mask. Our primary result relies on the construction of a measurement operator that is convolutional; this involves two steps: using z-stacked measurements to obtain a 3D space of measurements, and a re-parameterization of the space. This construction connects the MTF of the system to a simple transformation of the mask. More importantly, it makes a concrete set of predictions on the achievable spatial and axial resolutions: a butterfly structure that limits the depth resolution based on the sensor pitch and the mask extent, and tail decays on the marginal MTFs over spatial and axial frequencies. Finally, we verify these predictions on a set of commonly used masks. We envision the impact of this work and its relevance to the imaging community to be two-fold. First, it analyzes spatial and axial resolutions for prior art in lensless imaging [2,7–9,28]. Second, it provides a pathway for the design of lensless cameras, in mask design and in acquiring z-stacked measurements, which we detail in the following paragraphs.

Design of masks. The theory developed in this work, especially Eq. (9) and Eq. (10), provides a crisp connection between the mask used and the MTF of the resulting system; specifically, the MTF is a resampling of the Radon transform of the Laplacian of the mask. Radon transforms are invertible, and basic Fourier analysis suggests that the Laplacian of the mask specifies the mask completely, except for a DC offset and slope. This suggests that mask design can be formulated as an optimization problem in $r_\ell$ under some desired cost function.

Acquiring z-stacked measurements. The analysis in this paper also raises the intriguing possibility of building lensless cameras with axial sensor motion, so as to provide a richer space of measurements. Several practical challenges remain, however; perhaps the most significant of these stems from acquiring the z-stack itself. Axial motion of a sensor invariably results in lateral motion as well, which needs to be accounted for via careful calibration. Axial motion can also potentially be in conflict with the primary motivation of lensless cameras, namely the need for imaging with a compact footprint. Yet, such translation mechanisms are routinely used in cellphone lenses for autofocusing, so there is the possibility that such a feature can be implemented for the sensor as well. Finally, the resampling of sensor measurements required to enable the convolutional model is non-uniform and, hence, results in a loss in sensor resolution.

Funding

National Science Foundation (1652569, 1730147, 2046293, 2133084).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data and code used for our numerical experiments are available at Code 1 (Ref. [29]).

Supplemental document

See Supplement 1 for supporting content.

References

1. M. S. Asif, A. Ayremlou, A. C. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “Flatcam: Thin, lensless cameras using coded aperture and computation,” IEEE Transactions on Computational Imaging 3(3), 384–397 (2017). [CrossRef]  

2. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3d fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017). [CrossRef]  

3. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “Diffusercam: Lensless single-exposure 3d imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

4. V. Boominathan, J. Adams, J. Robinson, and A. Veeraraghavan, “Phlatcam: Designed phase-mask based thin lensless camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence 42(7), 1618–1629 (2020). [CrossRef]  

5. P. R. Gill and D. G. Stork, “Lensless ultra-miniature imagers using odd-symmetry spiral phase gratings,” in Imaging and Applied Optics (2013).

6. J. K. Adams, V. Boominathan, S. Gao, A. V. Rodriguez, D. Yan, C. Kemere, A. Veeraraghavan, and J. T. Robinson, “In vivo fluorescence imaging with a flat, lensless microscope,” Nat. Biomed. Eng. 6(5), 617–628 (2022). [CrossRef]  

7. Y. Hua, S. Nakamura, S. Asif, and A. C. Sankaranarayanan, “Sweepcam – depth-aware lensless imaging using programmable masks,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).

8. Y. Zheng, Y. Hua, A. C. Sankaranarayanan, and M. S. Asif, “A simple framework for 3d lensless imaging with programmable masks,” in ICCV (2021).

9. Y. Zheng and M. S. Asif, “Joint image and depth estimation with mask-based lensless cameras,” IEEE Transactions on Computational Imaging 6, 1167–1178 (2020). [CrossRef]  

10. R. Dicke, “Scatter-hole cameras for x-rays and gamma rays,” The Astrophysical Journal 153, L101 (1968). [CrossRef]  

11. E. Fenimore and T. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17(3), 337–347 (1978). [CrossRef]  

12. A. Busboom, H. Elders-Boll, and H. Schotten, “Uniformly redundant arrays,” Experimental Astronomy 8(2), 97–123 (1998). [CrossRef]  

13. S. S. Khan, V. Sundar, V. Boominathan, A. Veeraraghavan, and K. Mitra, “Flatnet: Towards photorealistic scene reconstruction from lensless measurements,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).

14. K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, and L. Waller, “Learned reconstructions for practical mask-based lensless imaging,” Opt. Express 27(20), 28075–28090 (2019). [CrossRef]  

15. J. D. Rego, K. Kulkarni, and S. Jayasuriya, “Robust lensless image reconstruction via psf estimation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (2021), pp. 403–412.

16. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern analysis and machine intelligence 16(8), 824–831 (1994). [CrossRef]  

17. K. Kutulakos and S. W. Hasinoff, “Focal stack photography: High-performance photography with a conventional camera,” in MVA (2009).

18. D. Miau, O. Cossairt, and S. K. Nayar, “Focal Sweep Videography with Deformable Optics,” in IEEE International Conference on Computational Photography (2013).

19. S. K. Nayar and Y. Nakagawa, “Shape from focus: An effective approach for rough surfaces,” in IEEE Intl. Conf. Robotics and Automation (1990).

20. P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Transactions on Pattern Analysis and Machine Intelligence 27(3), 406–417 (2005). [CrossRef]  

21. R. Pieper and A. Korpel, “Image processing for extended depth of field,” Appl. Opt. 22(10), 1449–1453 (1983). [CrossRef]  

22. N. Streibl, “Three-dimensional imaging by a microscope,” JOSA A 2(2), 121–127 (1985). [CrossRef]  

23. H. Sundaram and S. Nayar, “Are textureless scenes recoverable?” in IEEE Conf. Comput. Vis. Pattern Recog. (1997), pp. 814–820.

24. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

25. J. Y. Park and M. B. Wakin, “Multiscale algorithm for reconstructing videos from streaming compressive measurements,” Journal of Electronic Imaging 22(2), 021001 (2013). [CrossRef]  

26. C. Shannon, “Communication in the presence of noise,” Proc. IRE 37(1), 10–21 (1949). [CrossRef]  

27. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Transactions on Image Processing 18(11), 2419–2434 (2009). [CrossRef]  

28. M. S. Asif, “Lensless 3D imaging using mask-based cameras,” in IEEE International Conference on Acoustics, Speech and Signal Processing (2018).

29. Y. Hua, S. Asif, and A. C. Sankaranarayanan, “Code and data for numerical experiments with lensless z-stack,” GitHub (2022), https://github.com/Image-Science-Lab-cmu/Lens-free-Z-Stack.

Supplementary Material (2)

Code 1: Code and data used in numerical experiments - Lensless Z-Stack measurements and their 3D reconstructions.
Supplement 1: Details of proofs; details of simulation; additional experiments.

