
Robustness of single random phase encoding lensless imaging with camera noise


Abstract

In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We have used numerical simulation to first obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera. We then used various statistical measures to examine how the shared information content between the noise-free and noisy PSFs is affected as the camera noise becomes stronger. For comparison with lens-based imaging, we have run identical simulations with the diffuser in the lensless SRPE imaging system replaced by lenses. Our results show that SRPE lensless imaging systems are better at retaining information between corresponding noisy and noiseless PSFs under high camera noise than lens-based imaging systems. We have also examined how physical parameters of diffusers, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for the use of lensless SRPE systems to improve imaging in the presence of image sensor noise.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

A cost-effective imaging device that can capture high-fidelity images in the presence of substantial camera noise is of interest in degraded environments such as low-light imaging. Generally, the cost of imaging systems is dictated by the cost of their optical elements (e.g., lenses). In addition, imaging devices tailored for noisy environments may require sophisticated electronic hardware. Moreover, because lenses relay incoming information onto small localities of the image sensor, lens-based systems are vulnerable to noise. Eliminating lenses [1–8] and replacing them with diffusers [1–7] makes imaging devices more compact, more portable, and less expensive. Also, since lensless diffuser-based sensors spread the incoming information widely over the image sensor, we aim to investigate whether they are able to retain information even in the presence of substantial noise.

Diffuser-based lensless imaging systems [2–7] have recently emerged as attractive alternatives to conventional lens-based imaging. In the domain of microscopy, single and double diffuser-based microscopic imaging systems, called single random phase encoding (SRPE) [2,4–6] and double random phase encoding (DRPE) [3] systems respectively, have been shown to be successful at automated disease classification. In such systems, light transmitted through biological cells gets modulated by one or more diffusers, and the resulting speckle intensity patterns are recorded at the image sensor as the optobiological signatures of the cells. These signatures are then fed (without computational reconstruction) to a convolutional neural network (CNN) based classifier to identify diseased cells with impressive accuracy [4,5]. The classification performance of such systems has been shown to be robust to partial obstruction of the optobiological signatures, additive Gaussian noise [4], and reduction of the number of pixels of the captured optobiological signatures by orders of magnitude [5]. Recently, the lateral resolution of SRPE systems has been shown to be robust to pixel size variations and the number of pixels on the image sensor [6].

In this paper, we analyze lensless SRPE systems in the context of macroscopic imaging. This is, in principle, similar to DiffuserCam [7], which has been shown to be very successful at recovering a large number of voxels from a small number of pixels (compressive sensing). However, unlike DiffuserCam, which captures intensity patterns before they become fully formed speckles, we keep the diffuser-to-sensor distance long enough to capture fully formed speckles. We also use diffusers with much larger scattering angles to keep this distance short. Figure 1 shows a schematic diagram of our system.

Fig. 1. A schematic diagram of our macroscopic lensless single random phase encoding imaging system.

This paper aims to investigate the effect of camera noise on macroscopic SRPE systems. Assuming the illumination to be coherent (for ease of analysis), we have obtained the point spread function (PSF) of an SRPE system using numerical simulations. Afterwards, we have simulated an ideal noiseless camera and a series of noisy cameras with increasingly stronger noise to image the PSF. Using various statistical measures of dependence, we have assessed how much information lensless SRPE systems lose due to camera noise. We have run identical simulations for lens-based systems with various focal lengths. We have also analyzed the effect of the physical parameters of the diffuser on the noise robustness of lensless SRPE systems. These results allow us to optimize the performance of a macroscopic SRPE system under degraded conditions.

The rest of the paper is organized as follows. In section 2, we briefly introduce our lensless SRPE imaging system, the mathematical model of a noisy camera, and the statistical measures of dependence used in this study. In section 3, we report and discuss the results obtained through our simulations based on the theory discussed in section 2. Finally, section 4 presents the conclusion of this study.

2. Methodology

2.1 Macroscopic single random phase encoding lensless imaging system

Our macroscopic SRPE lensless imaging system consists of a strong diffuser with an $80^\circ$ scattering angle and a CMOS image sensor. Light from the object plane propagates a distance ${z_1}$ to reach the diffuser in the imaging system, gets spatially modulated by the diffuser, and the modulated field propagates a distance ${z_2}$ to reach the image sensor, where its intensity is recorded. Although this system is not shift-invariant, we analyze the point spread function (PSF) of our system, as the noise susceptibility of the PSF gives an indication of that of the imaging system. We assume the light to be coherent for ease of analysis.

Following [6], we have used angular spectrum propagation [9] to formulate a mathematical model for our lensless SRPE system. As shown in Fig. 1, the coordinates on the object plane are denoted $({x,y})$, those on the diffuser plane are denoted $({\zeta ,\eta })$, and $({\alpha ,\beta })$ represents the points on the image sensor plane. For all the numerical simulations, our input ${u_0}({x,y})$ is a point source centered on the origin of the object plane, i.e.,

$${u_0}({x,y} )= \delta ({x,y} ), $$
where $\delta ({x,y} )$ is a Dirac delta function centered at coordinates $({0,0} )$. The field emanating from the point source propagates a distance ${z_1}$ to reach the diffuser. Throughout this paper, ${\times}$ denotes scalar multiplication and ${\ast}$ denotes convolution. The field immediately before the diffuser can be written as:
$${u_1}({\zeta ,\eta } )= {u_0}({\zeta ,\eta } )\ast {h_D}({\zeta ,\eta } )\ast {p_{{z_1}}}({\zeta ,\eta } ), $$
where ${p_{{z_1}}}({\zeta ,\eta } )$ is the angular spectrum propagation kernel corresponding to a propagation distance of ${z_1}$ and ${h_D}({\zeta ,\eta } )$ is a filter that eliminates the spatial frequencies that the diffuser would not be able to capture due to its finite dimension $({{D_\zeta },{D_\eta }} )$. The cut-off frequencies $({{f_{\zeta m}},{f_{\eta m}}} )$ of this filter can be given as follows:
$${f_{\zeta m}} = \frac{{\frac{{{D_\zeta }}}{2}}}{{\lambda \sqrt {z_1^2 + \frac{{D_\zeta ^2}}{4}} \; }},\; {f_{\eta m}} = \frac{{\frac{{{D_\eta }}}{2}}}{{\lambda \sqrt {z_1^2 + \frac{{D_\eta ^2}}{4}} \; }}. $$

The diffuser modulates the incoming field with a transmittance function ${t_D}({\zeta ,\eta } )$ that imparts a spatially varying random phase. The modulated field $u_1^{\prime}({\zeta ,\eta } )$ can be written as:

$$u_1^{\prime}({\zeta ,\eta } )= {u_1}({\zeta ,\eta } )\; \times {t_D}({\zeta ,\eta } ). $$

One widely accepted form of the transmittance function for thin diffusers [4–6] is as follows:

$${t_D}({\zeta ,\eta } )= \exp ({j\phi ({\zeta ,\eta } )} )\,Rect\left( {\frac{\zeta }{{{D_\zeta }}},\frac{\eta }{{{D_\eta }}}} \right), $$
where $\phi ({\zeta ,\eta } )$ are phase angles uniformly distributed within the range $({ - \pi ,\pi } ]$. This modulated field then travels a further distance ${z_2}$ to reach the image sensor, where its intensity is recorded. The recorded intensity can be written as:
$${u_2}({\alpha ,\beta } )= \; {|{u_1^{\prime}({\alpha ,\beta } )\ast {h_S}({\alpha ,\beta } )\ast {p_{{z_2}}}({\alpha ,\beta } )} |^2}, $$
where ${p_{{z_2}}}({\alpha ,\beta } )$ is the angular spectrum propagation kernel corresponding to distance ${z_2}$ and ${h_S}({\alpha ,\beta } )$ is a filter that eliminates the frequencies that the finite dimension $({{S_\alpha },{S_\beta }} )$ of the image sensor would not be able to capture. The cut-off frequencies $({{f_{\alpha m}},{f_{\beta m}}} )$ of this filter are as follows:
$${f_{\alpha m}} = \frac{{\frac{{{S_\alpha }}}{2}}}{{\lambda \sqrt {z_2^2 + \frac{{S_\alpha ^2}}{4}} \; }},\; {f_{\beta m}} = \frac{{\frac{{{S_\beta }}}{2}}}{{\lambda \sqrt {z_2^2 + \frac{{S_\beta ^2}}{4}} \; }}. $$

Using the above equations, we obtain the PSF of our imaging system during numerical simulations. In the next subsection, we describe how an intensity image of this PSF is obtained by modeling a noisy camera.
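To make the forward model concrete, the following is a minimal 1D NumPy sketch of how Eqs. (1)-(7) could be simulated with the angular spectrum method. This is our own illustrative code, not the authors' implementation; all parameter values (wavelength, distances, grid size, diffuser width) are assumptions, and the sensor-side filter ${h_S}$ is omitted for brevity.

import numpy as np

# Illustrative 1D sketch of the SRPE forward model (Eqs. (1)-(7)).
# All parameter values below are assumptions, not the authors' settings.
wavelength = 632.8e-9        # wavelength [m] (assumed)
tau = wavelength / 2         # lateral sampling rate, half the wavelength
n_pix = 2**13                # simulation grid size (assumed)
z1, z2 = 10.0, 20e-3         # object-to-diffuser and diffuser-to-sensor distances [m]
fx = np.fft.fftfreq(n_pix, d=tau)   # spatial frequency grid [cycles/m]

def angular_spectrum(u, z):
    """Propagate a 1D scalar field u over a distance z (the kernel p_z in Eq. (2))."""
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - fx**2))
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * z))

# Eq. (1): point source at the origin of the object plane
u0 = np.zeros(n_pix, dtype=complex)
u0[n_pix // 2] = 1.0

# Eqs. (2)-(3): propagate to the diffuser and apply the aperture filter h_D
D = 2e-3                                                   # diffuser width [m] (assumed)
f_cut = (D / 2) / (wavelength * np.sqrt(z1**2 + D**2 / 4))
U1 = np.fft.fft(angular_spectrum(u0, z1))
U1[np.abs(fx) > f_cut] = 0.0
u1 = np.fft.ifft(U1)

# Eqs. (4)-(5): thin-diffuser modulation with uniform random phase in (-pi, pi]
rng = np.random.default_rng(0)
t_D = np.exp(1j * rng.uniform(-np.pi, np.pi, n_pix))
u1_mod = u1 * t_D

# Eq. (6): propagate to the sensor and record intensity (filter h_S omitted here)
psf = np.abs(angular_spectrum(u1_mod, z2))**2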

We would like to clarify here that we are not proposing a new model for the diffuser or the lensless SRPE system. The diffuser transmittance model used in Eq. (5) has previously been reported for lensless single random phase encoding [4–6], and Eqs. (1)–(7) follow from the principles of angular spectrum propagation [9]. Our contribution lies not in proposing the model but in carrying out further analysis of it. Although more sophisticated diffuser models exist, the results obtained from this model motivate the design of experiments. In future work, we shall design appropriate experiments and probe the validity of the conclusions formed herein.

2.2 Noise model for a typical CMOS imaging sensor

For this work, we have considered the three most common sources of noise seen in CMOS cameras, namely, shot noise, read noise and quantization noise [10,11]. Figure 2 shows the camera noise model used in this work.

Fig. 2. Camera noise model used in this work.

We start by converting the individual pixel intensities $({{u_2}({\alpha ,\beta } )} )$ to their corresponding numbers of photons $({{I_{ph}}({\alpha ,\beta } )} )$. Since this relation is known to be linear, we simply multiply the intensities by a factor ${k_{ph}}$ that ensures that even the maximum light intensity remains below the saturation level of the camera, i.e.,

$${I_{ph}}({\alpha ,\beta } )= {k_{ph}} \times {u_2}({\alpha ,\beta } ). $$

The arrival of photons in a camera is a Poisson random process. This uncertainty in the number of arrived photons appears as shot noise in the camera model [11]. We obtained the corresponding noisy photon image using the following equation:

$${I_{sn}}({\alpha ,\beta } )= Poisson({{I_{ph}}({\alpha ,\beta } )} ), $$
where $Poisson(\lambda )$ is an operator that samples a Poisson random variable with mean $\lambda$. The photon counts are then converted to the corresponding numbers of electrons ${I_e}({\alpha ,\beta } )$ according to the quantum efficiency ${\eta _{qe}}$ of the camera:
$${I_e}({\alpha ,\beta } )= {\eta _{qe}} \times {I_{sn}}({\alpha ,\beta } ). $$

We ignore the effect of dark noise in the model, since dark current in a camera sensor is typically negligible compared to read noise [11]. Since read noise is additive Gaussian noise, we can sample the noise pattern as follows:

$${I_r}({\alpha ,\beta } )\sim \mathrm{{\cal N}}({0,{\sigma_r}} ), $$
where ${\sigma _r}$ is the standard deviation of the read noise mentioned in the camera specification. The noisy image ${I_{rn}}({\alpha ,\beta } )$ can hence be obtained as:
$${I_{rn}}({\alpha ,\beta } )= {I_e}({\alpha ,\beta } )+ {I_r}({\alpha ,\beta } ). $$

To convert the electron image ${I_{rn}}({\alpha ,\beta } )$ to a digital image, it needs to be multiplied by the analog-to-digital unit (ADU) gain of the camera. The digitized image ${I_{ADU}}({\alpha ,\beta } )$ can be given as [12]:

$${I_{ADU}}({\alpha ,\beta } )= ADU \times {I_{rn}}({\alpha ,\beta } ). $$

To prevent the ADUs from becoming negative for very weak signals, a baseline ADU $({AD{U_{bl}}} )$ is added to the digitized image. The resulting image ${I_{cADU}}({\alpha ,\beta } )$ is given by:

$${I_{cADU}}({\alpha ,\beta } )= {I_{ADU}}({\alpha ,\beta } )+ AD{U_{bl}}. $$

This image is then quantized to one of ${2^b}$ levels where b is the number of bits of the camera. The captured image can be given as:

$${I_{captured}}({\alpha ,\beta } )= Quantize({{I_{cADU}}({\alpha ,\beta } )} ). $$

Here, $Quantize({\cdot} )$ is an operation that performs the abovementioned quantization and introduces uniform quantization noise in the process [13]. For this study, we have simulated captured images for increasing levels of read noise. Hence, the standard deviation ${\sigma _r}$ of the read noise can be given as a collection of k distinct levels ${\sigma _{ri}}$ such that ${\sigma _{ri}} > {\sigma _{rj}}$ for $i > j$:

$${\sigma _r} = [{{\sigma_{r1}},{\sigma_{r2}}, \ldots ,{\sigma_{rk}}} ], $$
where k is the total number of noise levels considered. The image captured at noise level ${\sigma _{ri}}$ would henceforth be called ${I_{ci}}({\alpha ,\beta } )$. The captured image for the case of no noise is denoted as ${I_c}({\alpha ,\beta } )$ (see Fig. 3).
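As an illustration, the noise pipeline of Eqs. (8)-(15) could be sketched as follows. This is a hedged sketch under assumed parameter values; the quantum efficiency, ADU gain, baseline, and bit depth below are placeholders, not the values listed in Table 1.

import numpy as np

rng = np.random.default_rng(0)

def capture(u2, k_ph, eta_qe=0.7, sigma_r=2.63, adu_gain=0.5,
            adu_baseline=100.0, bits=12):
    """Simulate one noisy capture of an intensity pattern u2 (Eqs. (8)-(15))."""
    I_ph = k_ph * u2                                  # Eq. (8): intensity -> photons
    I_sn = rng.poisson(I_ph).astype(float)            # Eq. (9): shot noise
    I_e = eta_qe * I_sn                               # Eq. (10): photons -> electrons
    I_rn = I_e + rng.normal(0.0, sigma_r, u2.shape)   # Eqs. (11)-(12): read noise
    I_adu = adu_gain * I_rn + adu_baseline            # Eqs. (13)-(14): gain + baseline
    return np.clip(np.round(I_adu), 0, 2**bits - 1)   # Eq. (15): quantization

# Eq. (16): sweep over increasing read-noise levels
sigma_levels = np.linspace(2.63, 100.0, 20)
# captures = [capture(psf, k_ph=1e4, sigma_r=s) for s in sigma_levels]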

Fig. 3. Noise robustness analysis performed in this work.

2.3 Statistical measures of dependence

To assess how much information is lost due to noise, we calculate various statistical measures of dependence ${f_{md}}({ \cdot , \cdot } )$ between the noise-free captured image ${I_c}({\alpha ,\beta } )$ and the noisy captured images ${I_{ci}}({\alpha ,\beta } )$ for both lensless and lens-based imaging systems. If an optical system is robust to noise, a noisy intensity pattern acquired through it will maintain a strong statistical dependence with its corresponding noise-free intensity pattern. For this task, we employ two measures of dependence: (a) Mutual Information (MI) [14] and (b) the Hilbert-Schmidt Independence Criterion (HSIC) [15]. Both measures are $0$ when two random variables are statistically independent and increase as the variables become more statistically dependent. We provide a brief discussion of each measure below.

2.3.1. Mutual information

Mutual information (MI) $I({X;Y} )$ [14] between two random variables X and Y is a statistical measure of how much entropy of X is explained by Y, i.e.,

$$I({X;Y} )= H(X )- H(X|Y), $$
where $H(X )$ is the entropy of X and $H(X|Y)$ is the entropy of X given Y. If X and Y are stationary continuous random variables with joint probability density function (PDF) ${f_{X,Y}}({x,y} )$ and marginal PDFs ${f_X}(x )$ and ${f_Y}(y )$ respectively, the MI can be given as below:
$$I({X;Y} )= \iint {f_{X,Y}}({x,y} )\log \left( {\frac{{{f_{X,Y}}({x,y} )}}{{{f_X}(x ){f_Y}(y )}}} \right)dx\,dy. $$

Note that $I({X;Y} )$ is $0$ only when X and Y are statistically independent and increases as they become more dependent. Hence, MI is a measure of statistical dependence between two variables. It can be intuitively interpreted as the Kullback-Leibler divergence between the joint PDF of $({X,Y} )$ and the product of their marginal PDFs. MI also has the attractive property that it is invariant under homeomorphic transformations (e.g., translation, rotation, and scaling) of the underlying random variables.

However, MI is not a normalized measure. Normalization is important since we are using it to perform a comparative study between lens-based and lensless imaging systems. Hence, we obtain a normalized metric using the following equation:

$$\overline {I({X;Y} )} = \frac{{I({X;Y} )}}{{\sqrt {H(X )H(Y )} }}, $$
where $H(X )$ and $H(Y )$ are the entropies of X and Y respectively.

For all our analyses, we have assumed that the 1-dimensional (1D) histogram of a pattern X sufficiently approximates the PDF of X.

Unlike correlation, which measures only linear dependence between two random variables, MI captures both linear and nonlinear relations. Although simple to interpret, MI is remarkably difficult to estimate empirically because it requires joint density estimation. In our simulations, however, MI is much faster to compute than HSIC.
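For illustration, a histogram-based estimate of the normalized MI of Eq. (19) could be sketched as follows; the bin count is an assumption, and this is not necessarily the estimator used by the authors.

import numpy as np

def normalized_mi(x, y, bins=64):
    """Normalized mutual information (Eq. (19)) from a joint 2D histogram."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = pxy / pxy.sum()                          # joint PMF estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginal PMFs
    nz = pxy > 0                                   # avoid log(0)
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))  # entropy H(X)
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))  # entropy H(Y)
    return mi / np.sqrt(hx * hy)                   # normalization of Eq. (19)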

2.3.2. Hilbert-Schmidt independence criterion

The Hilbert-Schmidt Independence Criterion (HSIC) [15] is a kernel-based statistical measure of dependence. For two random variables X and Y, let us define two Reproducing Kernel Hilbert Spaces (RKHS) $\mathrm{{\cal F}}$ and $\mathrm{{\cal G}}$ with functions ${\psi _F}(x )$ and ${\psi _G}(y )$ respectively. Under this scenario, a cross-covariance operator ${C_{xy}}:\mathrm{{\cal G}} \to \mathrm{{\cal F}}$ can be defined such that for all ${\psi _F} \in \mathrm{{\cal F}}$ and ${\psi _G} \in \mathrm{{\cal G}}$,

$${\left\langle {{\psi_F},{C_{xy}}{\psi_G}} \right\rangle _\mathrm{{\cal F}}} = {\mathrm{\mathbb{E}}_{xy}}({[{{\psi_F}(x )- {\mathrm{\mathbb{E}}_x}({{\psi_F}(x )} )} ][{{\psi_G}(y )- {\mathrm{\mathbb{E}}_y}({{\psi_G}(y )} )} ]} ), $$
where ${\left\langle { \cdot , \cdot } \right\rangle _\mathrm{{\cal F}}}$ denotes the inner product defined on $\mathrm{{\cal F}}$. The operator ${C_{xy}}$ generalizes the cross-covariance matrix between random vectors. If $\mathrm{{\cal F}}$ and $\mathrm{{\cal G}}$ are universal RKHS (dense in the space of bounded continuous functions), the largest singular value of ${C_{xy}}$ is zero if and only if X and Y are independent. A more computationally convenient equivalent is the squared Hilbert-Schmidt norm of ${C_{xy}}$ (the sum of its squared singular values), which is what is known as the Hilbert-Schmidt Independence Criterion. It is zero only when X and Y are independent and increases as they become more statistically dependent.

Calculating HSIC does not require joint density estimation. However, it is much more computationally expensive than MI. We use two different measures of dependence in this report because agreement between them makes the conclusions more reliable.
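A minimal sketch of the (biased) empirical HSIC estimator of [15] with Gaussian kernels is given below; the median-heuristic bandwidth is our assumption, not necessarily the authors' choice.

import numpy as np

def _gaussian_gram(z):
    """Gram matrix of a Gaussian kernel with median-heuristic bandwidth."""
    d2 = (z[:, None] - z[None, :])**2        # pairwise squared distances
    sigma2 = np.median(d2[d2 > 0])           # median heuristic (assumed choice)
    return np.exp(-d2 / (2.0 * sigma2))

def hsic(x, y):
    """Biased empirical HSIC between 1D samples x and y: tr(KHLH)/n^2."""
    n = x.size
    K, L = _gaussian_gram(x.ravel()), _gaussian_gram(y.ravel())
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / n**2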

3. Results

The studies performed in this work involve gradually changing the read noise specification of the camera while keeping all other system parameters fixed. The studies discussed later in this section also require gradually changing the parameters of a diffuser; more investigation is needed to determine how this can be achieved in an experimental setting. Hence, in this section, we report and discuss the results obtained through our mathematical simulations. In future work, we shall design appropriate experiments and collect data to test the validity of these conclusions in real experiments.

We have used the angular spectrum propagation method to simulate our optical systems. Our simulated diffusers have high scattering angles and, hence, high spatial frequencies in the exiting field. Proper sampling (under the Nyquist criterion) of the field requires the spatial sampling rate of the simulation to be of the order of the wavelength. This means that sampling a 2D diffuser of practical dimensions (a few centimeters, for example) would require approximately a billion pixels. To avoid such a large computational burden, we restrict our analyses to one dimension (1D) with 2777 image sensor pixels.

The values used for the parameters in our simulations are listed in Table 1. Note that the lateral sampling rate $\tau$ has been kept at half of the wavelength $\lambda$. This ensures that all the optical fields in the simulation (which can have a maximum frequency of $1/\lambda$) are sampled properly. Also, the choice of the camera parameters (such as quantum efficiency, dark noise, etc.) has been inspired by the parameters of actual cameras used in typical optical experiments performed in our laboratory with lensless single random phase encoding systems.

Table 1. Simulation parameters used in this study (m denotes meters)

In all our analyses, we have compared our lensless system with typical lens-based systems. The object-to-diffuser (or object-to-lens) distance ${z_1}$ has been kept at 10 meters.

We have also simulated lenses with three different focal lengths $({{f_l}} )$: 75 mm, 300 mm, and 500 mm. For each comparison, the dimensions of the lens and the diffuser have been kept the same, and both have been placed at the same distance from the object plane. This enables the lens and the diffuser to capture the same incoming frequencies (since the numerical apertures are the same for all the systems simulated in this study). The distance between the diffuser and the sensor (see Table 1) has been kept fixed throughout the study. Figure 4 shows a schematic diagram of the lens-based systems. The separation ${z_{2l}}$ between the lens and the sensor, on the other hand, has always been chosen such that a focused image of the object forms at the sensor. Hence,

$${z_{2l}} = \frac{{{f_l}{z_1}}}{{{z_1} - {f_l}}}. $$

Fig. 4. A schematic diagram of the lens-based system designed to form a baseline for the study of lensless SRPE systems. ${z_{2l}}$ has been chosen according to Eq. (25).

Diffusers scatter light, so the acquired patterns have much lower light levels than those acquired through lens-based systems. For low levels of illumination, this means that some of the pixel intensities that contain useful information about the input field may fall below the read noise level of the camera. To address this while still maintaining a fair comparison, we chose a ${k_{ph}}$ (see Eq. (8)) that is the same for both lens-based and lensless imaging systems and that ensures that enough photons are collected at the camera sensor. In practical experiments, this can be done by adjusting the exposure time of the cameras. In our analysis, we have selected ${k_{ph}}$ such that the maximum intensity of the lensless diffuser-based intensity pattern corresponds to 0.25 times the saturation level of the camera, as sketched below.
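A hedged sketch of this normalization follows; the quantum efficiency value is a placeholder, and psf_lensless refers to the hypothetical lensless PSF array from the earlier sketch.

# Choose k_ph so that the brightest lensless pixel maps to 0.25 of saturation.
saturation_e = 10818                           # saturation level [electrons] (Table 1)
eta_qe = 0.7                                   # assumed quantum efficiency
saturation_photons = saturation_e / eta_qe     # electron saturation expressed in photons
k_ph = 0.25 * saturation_photons / psf_lensless.max()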

Scattering light is a fundamental property of diffusers, and a research effort to increase the light throughput of strongly scattering diffusers would make the noise robustness properties of the diffuser more apparent. However, this report merely serves to show that when lensless diffuser-based intensity patterns are bright enough to be captured by the camera, they maintain good noise robustness compared with equivalent lens-based imaging systems.

3.1 Effect of noise on the information content of the captured pattern

In all our studies, we have assumed that the noise is predominantly read noise. For this analysis, we increased the read noise ${\sigma _r}$ from 2.63 electrons to 100 electrons. For each level of read noise, we simulated the captured image ${I_{ci}}({\alpha ,\beta } )$ and compared it with the corresponding noise-free image ${I_c}({\alpha ,\beta } )$ using the aforementioned measures of dependence.

We also define an important parameter $rsat$ as follows:

$$rsat = \frac{{\textrm{photons corresponding to the maximum captured intensity of lensless systems}}}{{\textrm{photons corresponding to the saturation level of the camera}}}. $$

All the studies in this paper have been performed with $rsat = 0.25$ unless otherwise specified. Figure 5 shows examples of noise-free and noisy images generated in our simulation.

Fig. 5. Sample images from our simulations. The lens has a focal length of 75 mm. The noise standard deviation here is 50 electrons.

In Figs. 6–8, we report how lensless SRPE systems perform compared to typical lens-based systems. The camera saturation level is 10818 electrons, as listed in Table 1. In all these cases, except at small levels of read noise, the lensless SRPE imaging system maintains higher statistical dependence between noise-free and noisy captured intensities than typical lens-based systems. Note that these curves do not attain a value of 1 even at very low levels of read noise. This is due to shot noise, which is present in the camera even when the read noise is very low.

Fig. 6. Comparison between lensless SRPE and lens-based imaging systems. Change in mutual information and Hilbert-Schmidt independence criterion as a function of standard deviation of read noise. The lens has a focal length of 75 mm.

Fig. 7. Comparison between lensless SRPE and lens-based imaging systems. Change in mutual information and Hilbert-Schmidt independence criterion as a function of standard deviation of read noise. The lens has a focal length of 300 mm.

Fig. 8. Comparison between lensless SRPE and lens-based imaging systems. Change in mutual information and Hilbert-Schmidt independence criterion as a function of standard deviation of read noise. The lens has a focal length of 500 mm.

The steep drop in MI and HSIC for the lens-based system can be explained by the fact that lenses (as shown in Fig. 5) concentrate the PSF on only a small area of image sensor pixels, i.e., there is no redundancy in the information, unlike in lensless systems. Hence, when noise creeps into the system, it has a high probability of degrading such concentrated information.

Since diffusers scatter light, we had to maintain $rsat = 0.25$ to ensure that most of the pixel intensities did not fall below the read noise level of the camera. However, for the sake of completeness, we present, in Fig. 9, similar results for $rsat$ varying from 0.001 to 0.5. As one can observe, lensless systems perform poorly for $rsat$ values below 0.05. However, for $rsat$ at and above 0.05, lensless systems perform comparably to or better than lens-based systems at higher levels of read noise.

Fig. 9. A comparison between lens-based and lensless systems in terms of normalized mutual information for different values of $rsat$ (see Eq. (26)). $rsat$ is the ratio of the photons corresponding to the maximum captured intensity of lensless systems and the photons corresponding to the saturation level of the camera. The lens has a focal length of 75 mm. The visible kink in the curve of the lens-based system for $rsat = 0.001$ is an effect of shot noise.

Here, we attempt to provide an intuitive explanation as to why lensless SRPE systems exhibit better noise robustness than equivalent lens-based systems. Due to the scattering nature of diffusers, SRPE systems spread the incoming information over all the image sensor pixels [6]. In a previous work [5] on this topic, SRPE systems were experimentally shown to be capable of successfully classifying diseased and healthy cells even when the number of image sensor pixels was reduced by orders of magnitude. In [7], the researchers were able to recover a large number of voxels from a small number of intensity pixels. In [4], a deep learning based SRPE classifier was able to maintain a high level of accuracy even when the intensity patterns were degraded by additive Gaussian noise and partial occlusion. These observations naturally lead to the hypothesis that diffuser-based lensless systems may be capable of retaining useful information under noise. The results shown above make a case in favor of this hypothesis. Additional experiments would establish the validity of these inferences.

3.2 Effect of diffuser feature size on noise susceptibility

We conjecture that the noise robustness of the diffuser comes largely from its ability to spread the incoming information widely over the image sensor. The diffusion angle ${\theta _D}$ (see Fig. 10(a)), i.e., the maximum spread provided by a diffuser, depends on its feature size. This can be understood by analyzing the diffuser transmittance in Eq. (5). If we assume that the features of the diffuser ${t_D}({\zeta ,\eta } )$ are squares of dimension $({\mathrm{\Delta },\mathrm{\Delta }} )$ centered on coordinates $({m\mathrm{\Delta },n\mathrm{\Delta }} )$ with heights $h({m\mathrm{\Delta },n\mathrm{\Delta }} )$ (see Fig. 10(b)), the transmittance function can be expanded as follows:

$${t_D}({\zeta ,\eta } )= Rect\left( {\frac{\zeta }{{{D_\zeta }}},\frac{\eta }{{{D_\eta }}}} \right)\mathop \sum \limits_{m,n} \exp \left( {j\frac{{2\pi }}{\lambda }h({m\mathrm{\Delta },n\mathrm{\Delta }} )} \right)Rect\left( {\frac{{\zeta - m\mathrm{\Delta }}}{\mathrm{\Delta }},\frac{{\eta - n\mathrm{\Delta }}}{\mathrm{\Delta }}} \right). $$

Fig. 10. (a) Diffusion angle ${\theta _D}$ of a diffuser, (b) features of a diffuser approximated by square columns of dimension $({\mathrm{\Delta },\mathrm{\Delta }} )$, (c) frequency response of diffuser transmittance for various feature sizes, and (d) diffusion angle as a function of diffuser feature size (see Eq. (28)).

Its frequency response can be given as below:

$${T_D}({{f_x},{f_y}} )= {\mathrm{\Delta }^2}Sinc({\mathrm{\Delta }{f_x},\mathrm{\Delta }{f_y}} )\mathop \sum \limits_{m,n} \exp \left( {j\frac{{2\pi }}{\lambda }h({m\mathrm{\Delta },n\mathrm{\Delta }} )} \right)\exp ({ - j2\pi \mathrm{\Delta }({m{f_x} + n{f_y}} )} ). $$

Hence, for a uniform plane wave of unit amplitude traveling along the optical axis and incident on the diffuser, the output field contains plane waves of spatial frequencies $({{f_x},{f_y}} )$. As shown in Fig. 10(c), for a feature size of $\mathrm{\Delta }$, the maximum spatial frequency (considering only the central lobe of the Sinc envelope) in the field exiting the diffuser is $1/\mathrm{\Delta }$. Hence, from the theory of angular spectrum propagation, the diffusion angle ${\theta _D}$ can be written as a function of feature size $\mathrm{\Delta }$ in the following way:

$${\theta _D} = 90^\circ{-} {\cos ^{ - 1}}\left( {\frac{\lambda }{\mathrm{\Delta }}} \right). $$

As we can see from Fig. 10(d), ${\theta _D}$, and hence the spreading capability of the diffuser, drops rapidly with increasing feature size $\mathrm{\Delta }$. A numerical sketch of this relation is given below.
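This is a minimal sketch of Eq. (28), assuming a He-Ne wavelength; the feature-size range mirrors the sweep described in the next paragraph.

import numpy as np

# Diffusion angle vs. feature size (Eq. (28)); the wavelength is an assumption.
wavelength = 632.8e-9
delta = np.logspace(np.log10(0.6e-6), np.log10(900e-6), 200)   # 0.6 um .. 900 um
ratio = np.clip(wavelength / delta, 0.0, 1.0)   # sin(theta_D) cannot exceed 1
theta_D = 90.0 - np.degrees(np.arccos(ratio))   # diffusion angle in degrees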

To study the effect of diffuser feature size on robustness to noise, we have kept the camera (with the parameters listed in Table 1) fixed but gradually increased the feature size of the diffuser from 0.6 microns to 900 microns. Figure 11 shows that as we make the features larger, the statistical dependence between the noise-free image and the noisy images decreases, gradually reducing the noise robustness of our diffuser-based lensless system.

Fig. 11. Change in mutual information and Hilbert-Schmidt independence criterion as a function of diffuser feature size. Camera parameters are in Table 1. Read noise standard deviation has been maintained at 2.63 electrons. The visible kinks when diffuser features are very small are an effect of shot noise.

3.3 Effect of diffuser height variation on noise susceptibility

The strength of the modulation provided by the diffuser depends on its height variation. For a thin diffuser, the phase angle $\phi ({\zeta ,\eta } )$ imparted by a feature of height $h({\zeta ,\eta } )$ located at $({\zeta ,\eta } )$ can be written as:

$$\phi ({\zeta ,\eta } )= \frac{{2\pi }}{\lambda }h({\zeta ,\eta } ). $$

If $h({\zeta ,\eta } )$ is uniformly sampled from $[{ - a\lambda /2,a\lambda /2} ]$ (with $0 \le a \le 1$), the phase $\phi ({\zeta ,\eta } )$ becomes a random variable uniformly distributed between $[{ - a\pi ,a\pi } ]$. As a increases, the modulation by the diffuser increases.
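For illustration, the height-scaled random phase of Eq. (30) could be generated as follows; this is a minimal sketch, and the wavelength and grid size are our assumptions.

import numpy as np

rng = np.random.default_rng(0)
wavelength = 632.8e-9                       # assumed wavelength [m]

def diffuser_phase(n_features, a):
    """Random phase profile for modulation-strength parameter 0 <= a <= 1."""
    h = rng.uniform(-a * wavelength / 2, a * wavelength / 2, n_features)
    return 2 * np.pi / wavelength * h       # Eq. (30): phases uniform in [-a*pi, a*pi]

# Sweep a from 0 (no modulation) to 1 (full +/- pi modulation)
# phases = [diffuser_phase(2777, a) for a in np.linspace(0.0, 1.0, 11)]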

To perform this analysis, we have kept the camera fixed but gradually scaled up the height of the diffuser features until the maximum reached $\lambda /2$, which provides a $\pi$ phase shift. This is mathematically equivalent to increasing a from $0$ to $1$. In Fig. 12, we observe that increasing the feature height (or, alternatively, $a$) appears to improve the noise robustness.

Fig. 12. Change in mutual information and Hilbert-Schmidt independence criterion as a function of maximum feature height of the diffuser.

4. Conclusion

In conclusion, we have performed numerical simulations to analyze the noise robustness of a lensless SRPE imaging system. We have used normalized mutual information and the Hilbert-Schmidt independence criterion to quantify how much the acquired intensity patterns change as camera noise becomes stronger. We have run identical simulations for lens-based imaging systems to form a basis for comparison. Our results indicate that although lens-based systems exhibit robustness to low levels of read noise, SRPE lensless imaging systems are better at retaining information while operating under significant levels of read noise. We have also analyzed the effects of diffuser feature size and feature height variation on noise robustness. Our analyses suggest that diffusers with smaller features and larger height variations, up to $\pi$ phase shift modulation, perform better under camera noise. This analysis provides a way to optimize lensless SRPE systems to improve their performance under noisy camera conditions. In the future, we plan to design experiments and collect data to see whether the simulations match experimental results. There are broad applications of this approach in various domains [16–18].

Appendix A

In Table 2, we list the parameters for simulating the point spread function of the lensless SRPE system and the lens-based systems as well as the parameters used for simulating noisy cameras.

Table 2. Definition of all the parameters used for simulating the point spread function and noisy cameras

Table 3 shows the parameters related to the calculation of the statistical measures of dependence.

Table 3. Definition of all the parameters used in the calculation of measures of dependence

Funding

National Science Foundation (2141473); Office of Naval Research (N000142212349, N000142212375); Air Force Office of Scientific Research (FA9550-21-1-0333); Air Force Research Laboratory (FA8650-21-C-5711).

Acknowledgments

We gratefully acknowledge the Air Force Research Laboratory, Materials and Manufacturing Directorate (AFRL/RXMS) for the support and valuable discussions via Contract No. FA8650-21-C-5711. This document is Distribution A. Approved for public release: distribution unlimited. (AFRL-2022-6033), Date Approved 11-28-2023. B. Javidi acknowledges support under The Office of Naval Research (ONR) (N000142212375; N000142212349); Air-Force Office of Scientific Research (AFOSR) (FA9550-21-1-0333), and National Science Foundation grant # 2141473. The authors are also grateful to Kashif Usmani for the valuable discussions and comments on the manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” J. Display Technol. 3(3), 315–320 (2007). [CrossRef]  

2. B. Javidi, S. Rawat, S. Komatsu, et al., “Cell identification using single beam lensless imaging with pseudo-random phase encoding,” Opt. Lett. 41(15), 3663–3666 (2016). [CrossRef]  

3. B. Javidi, A. Markman, and S. Rawat, “Automatic multicell identification using a compact lensless single and double random phase encoding system,” Appl. Opt. 57(7), B190–B196 (2018). [CrossRef]  

4. T. O’Connor, C. Hawxhurst, L. M. Shor, et al., “Red blood cell classification in lensless single random phase encoding using convolutional neural networks,” Opt. Express 28(22), 33504–33515 (2020). [CrossRef]  

5. P. M. Douglass, T. O’Connor, and B. Javidi, “Automated sickle cell disease identification in human red blood cells using a lensless single random phase encoding biosensor and convolutional neural networks,” Opt. Express 30(20), 35965–35977 (2022). [CrossRef]  

6. S. Goswami, P. Wani, G. Gupta, et al., “Assessment of lateral resolution of single random phase encoded lensless imaging systems,” Opt. Express 31(7), 11213–11226 (2023). [CrossRef]  

7. N. Antipa, G. Kuo, R. Heckel, et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

8. R. Corman, W. Boutu, A. Campalans, et al., “Lensless microscopy platform for single cell and tissue visualization,” Biomed. Opt. Express 11(5), 2806–2817 (2020). [CrossRef]  

9. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2005).

10. “CCD Signal-To-Noise Ratio,” https://www.microscopyu.com/tutorials/ccd-signal-to-noise-ratio.

11. Y. Reibel, M. Jung, M. Bouhifd, et al., “CCD or CMOS camera noise characterisation,” Eur. Phys. J. AP 21(1), 75–80 (2003). [CrossRef]  

12. “EMVA 1288 Overview: Imaging Performance,” https://www.flir.com/discover/iis/machine-vision/emva-1288-overview-imaging-performance.

13. C. Boncelet, Image noise models: The essential guide to image processing (Academic Press, 2009), pp. 143–167.

14. T. M. Cover and J. A. Thomas, Elements of information theory, 2nd ed. (John Wiley & Sons, 1999).

15. A. Gretton, K. Fukumizu, C. H. Teo, et al., “A kernel statistical test of independence,” in 20th Advances in neural information processing systems (2007).

16. Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express 18(14), 15094–15103 (2010). [CrossRef]  

17. A. Stern, Optical Compressive Imaging, 1st ed. (CRC Press, 2016).

18. V. Kravets, B. Javidi, and A. Stern, “Compressive imaging for defending deep neural networks from adversarial attacks,” Opt. Lett. 46(8), 1951–1954 (2021). [CrossRef]  
