
Advantages of holographic imaging through fog

Open Access

Abstract

In this paper, we demonstrate digital holographic imaging through a 27-m-long fog tube filled with ultrasonically generated fog. Its high sensitivity makes holography a powerful technology for imaging through scattering media. With our large-scale experiments, we investigate the potential of holographic imaging for road traffic applications, where autonomous driving vehicles require reliable environmental perception in all weather conditions. We compare single-shot off-axis digital holography to conventional imaging (with coherent illumination) and show that holographic imaging requires 30 times less illumination power for the same imaging range. Our work includes signal-to-noise ratio considerations, a simulation model, and quantitative statements on the influence of various physical parameters on the imaging range.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Sensing through scattering media is still a very prominent field in optical research. In numerous applications, absorbing and scattering particles have detrimental effects on the image formation. In general, there are only two approaches for minimizing these effects: one is to discard the scattered photons from the imaging process by means of their physical properties (i.e., temporal, spatial, coherence, polarization), and the other is to use all photons (i.e., multiply scattered and ballistic photons) for imaging [1]. For the latter approach, the light scattering characteristics of the media have to be known in order to extract object information. Transmission matrix, memory effect, and wavefront shaping approaches, along with the introduction of neural networks, constitute the current research field of scattered-photon imaging [2–4]. However, for long distances (i.e., 1 to 100 m), these techniques become impractical, and one has to resort to using ballistic photons only (the scattered photons are discarded). Despite the dominant scattering of biological tissue in medical applications, optical imaging with ballistic photons has been used very successfully, especially in ophthalmology. Optical coherence tomography (OCT) takes advantage of the coherence property of light to discriminate unwanted multiply scattered photons from ballistic photons. The typical imaging range in medical applications is a few millimeters [5]. For macroscopic scales, the most common approach is time-resolved imaging with active illumination for applications like underwater imaging [6] and autonomous driving vehicles [7]. Fast gated cameras enable the separation of multiply scattered and ballistic photons based on their temporal properties. With pulsed scanning confocal illumination and gated detection, temporal and spatial properties of photons are exploited for discrimination to further increase the imaging range [8].

A different approach for imaging through scattering media is holographic imaging. Stetson showed the principle of coherence gating with holography for imaging through a fog-like medium in 1967 [9]. Holography is a wide-field two-dimensional (2D) coherent detection technology. The work presented by Lohmann and Shuman [10] shows how moving fog particles introduce a frequency shift, thus rendering the scattered photons incoherent. A major advantage of interference-based principles over time-gating is the inherent amplification of a weak signal by a strong reference beam following from the interference of coherent waves. In general, the best possible contrast for the interference fringes is obtained with balanced powers in both beams. In case of weak signals, however, the bigger the imbalance is toward a stronger reference, the bigger the interferometric amplification. This increases the sensitivity and enables digital holography (where the holograms are recorded with a digital sensor, typically a CCD or CMOS camera, and numerically reconstructed) at the shot noise limit [11–13]. Far-infrared digital holography has been used to image humans through smoke and flames [14]. Holography with short coherence light sources enhances digital microscopy [15–17]. Holographic imaging in combination with multi-frame processing and time-gating has been demonstrated to image through extended scattering media [18]. However, coherent averaging requires multiple frames of the same scene, rendering this approach unsuitable for dynamic conditions. Single-shot digital holography with an ultra-short pulsed laser achieves imaging ranges of around 12 attenuation lengths (ALs) and enables high-speed imaging to overcome the problems of mechanical vibrations [19]. One major field in automotive engineering today is the development of optical sensor systems for reliable environmental perception.
Bad weather conditions such as dense fog and heavy rain cause considerable performance drops in the state-of-the-art imaging sensors of autonomous driving vehicles, such as LiDAR and gated cameras [20]. Due to its increased sensitivity and the ability to discard stray light, digital holography is a promising technology that could provide new concepts for automotive sensor systems. In this paper, we investigate the capabilities of holographic imaging through a 27-m-long fog tube and use conventional imaging as a performance reference. The dimensions and the scattering media used in our experiments are close to the conditions in road traffic applications. We focus on the increased sensitivity, discuss the theoretical signal-to-noise ratio (SNR) for both imaging methods, and derive a simulation model that reproduces the behavior observed in the experiment.

2. METHODS

We aim to quantitatively describe the benefit of holographic imaging over conventional imaging (i.e., active illumination with coherent light) through fog. Both methods rely on ballistic photons only. According to Lambert–Beer law [21], the intensity (in our case the number of ballistic photons) of a light beam traveling the distance $d$ through scattering media is reduced to

$$I(d) = {I_0}{e^{- \varepsilon d}},$$
where $I$ is the intensity and $\varepsilon$ is the material and wavelength dependent attenuation coefficient. One AL is the distance over which the signal strength is reduced by a factor of ${\text{e}^{- 1}}$. For convenience, the imaging range in imaging through scattering media can be expressed in ALs with
$$AL = - \text{ln}\left({\frac{I}{{{I_0}}}} \right),$$
where ${I_0}$ is the intensity before and $I$ after traversing the media. With increasing attenuation, the number of ballistic photons decreases until finally the object signal is lost in the noise. The region where there is just enough object signal to separate it from the noise is referred to as the maximum imaging range and expressed in ALs. In our experiment, this distance depends on the number of ballistic photons traversing the fog before hitting the detector and the SNR of the detection system. First, the experimental setup and the theoretical consideration of the SNRs for both imaging methods are described, followed by our simulation model and the description of an evaluation criterion for the imaging range.
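Eqs. (1) and (2) can be expressed in a few lines of code. This is a minimal sketch; the function names and the example numbers are ours, chosen for illustration only:

```python
import numpy as np

def transmitted_intensity(I0, epsilon, d):
    """Lambert-Beer law, Eq. (1): ballistic intensity after distance d."""
    return I0 * np.exp(-epsilon * d)

def attenuation_lengths(I, I0):
    """Eq. (2): express a measured transmission I/I0 in attenuation lengths."""
    return -np.log(I / I0)

# Example: a beam attenuated to 5% of its initial power has
# traversed about 3 ALs.
print(attenuation_lengths(0.05, 1.0))  # ~3.0
```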

A. Experimental Setup

We use an off-axis holographic setup in image plane configuration that allows direct image comparison between holographic reconstructions and conventional images by simply blocking the reference beam. A schematic representation of our experimental setup is shown in Fig. 1. As a light source, we use a continuous wave (cw) laser of type Toptica TA pro 780 with an optical power output of up to 4 W. The wavelength of the cw laser used for illumination is 780 nm, and the coherence length is approximately 150 m. Because our coherence length greatly exceeds the imaging range, we do not need to be concerned about low-coherence effects. For shorter coherence lengths, one would need to match the optical path lengths, for example, with a long loop of optical fiber in the reference arm. The linearly polarized laser beam is divided into two parts (signal and reference beams) by a polarizing beam splitter (PBS). The intensity ratio between signal and reference beams is controlled by rotating the polarization with a $\lambda /{2}$-wave plate. The signal beam enters a diverging lens (L1) that enlarges the beam for properly illuminating the object. Light reflected from the object is collected by the lens (L2), imaging the object onto the camera. The reference beam is coupled into a polarization maintaining single mode fiber with the emitting end located near the entrance pupil plane of the imaging lens. The fiber tip is rotated to match the object beam polarization. The angle between the signal and diverging reference beam emitted from the fiber is adjusted to achieve sufficient sampling of the interference fringes on the detector (at least twice the Nyquist frequency, i.e., more than four pixels per fringe period). Holograms and conventional images are recorded sequentially by opening and closing a shutter in the reference beam path. The camera used in this setup (eco655MVGE SVS-VISTEK, 8-bit mode) has ${2448} \times {2050}$ pixels and a pixel size of ${3.45}\; \times {3.45}\;{{\unicode{x00B5}\text{m}}^2}$.
The exposure time is set to 500 µs. The object is located inside a tube with a length of 27 m and diameter of 0.6 m filled with ultrasonically generated fog. The fog tube is shown in Fig. 2. The overall distance to the camera is 30 m. Our test objects have a size of approximately 20 to 30 cm and are placed at the far end of the tube. The imaging lens L2 has a focal length of 450 mm and a clear aperture of approximately 50 mm. Backscattered light is strongly suppressed due to the small acceptance angle of the detection system (field of view is approximately 0.5 deg) and by the separation of illumination and detection beams (of about 20 cm). For the camera with the lens cap on, we measured a mean camera output of 0.48 (digital counts). With the lens cap off, the tube filled with fog, and the laser illuminating the fog, we measured a mean output of 0.55, which includes the dark noise plus photons scattered by the fog.
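The sampling condition above bounds the admissible off-axis angle. A back-of-the-envelope sketch, using the wavelength and pixel pitch quoted in the setup and assuming a target of four pixels per fringe period (twice the Nyquist rate):

```python
import numpy as np

wavelength = 780e-9      # m, cw laser used in the setup
pixel_pitch = 3.45e-6    # m, camera pixel size
samples_per_fringe = 4   # assumed oversampling (2x Nyquist)

# Fringe period Lambda = wavelength / sin(theta); require Lambda >= 4 px.
theta_max = np.arcsin(wavelength / (samples_per_fringe * pixel_pitch))
print(np.degrees(theta_max))  # ~3.2 deg maximum off-axis angle
```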


Fig. 1. Off-axis digital holographic setup in image plane configuration for single-shot coherent detection. A movable object is located inside a tube with a length of 27 m filled with ultrasonically generated fog.



Fig. 2. (a) View inside the fog tube while the fog is flowing in; (b) 27-m-long fog tube installed in the experimental facility of our neighboring institute.


Fog density is continuously monitored by measuring the beam attenuation of a separate 780 nm diode laser (5 mW output power) with a powermeter (Thorlabs PM160) in one-way propagation. To reduce the amount of stray light on the powermeter, a spatial filter composed of a lens and a pinhole is used to limit the acceptance angle to less than 5 mrad. Due to the quick dissipation, fog as a volume scattering media is very well suited to generate smooth transitions from extremely dense to faint. The tube is filled up completely, which takes about 120 s. The fog settles and establishes an approximately homogeneous scattering body with a slight density gradient in vertical direction. Over the following minutes, the fog density decreases; thus, more and more ballistic photons reach the detector, and the object structures become increasingly visible. The time interval between the acquisitions of hologram and conventional image is small compared to the rate of change of the fog density. The comparison between the maximum imaging ranges of both methods provides us with a quantitative statement about the advantage of holographic imaging.

B. Data Processing for Holographic Image Reconstruction

At first, the 2D fast Fourier transform (FFT) of the digitally recorded hologram is calculated. The result will include the spectrum of the object and reference signal and the modulated object signal (by the reference signal). The modulated signal results in two disks shifted symmetrically away from the zero-frequency location. We refer to each one of these discs as the modulated object spectrum. The diameter of these disks is proportional to the imaging lens aperture. We use a digital binary mask to select one modulated object spectrum, which corresponds to a process of spatial filtering. The chosen mask diameter is equal to the diameter of the modulated object spectrum in order to maximize the object signal strength and spatial resolution. The filtered spectrum is shifted numerically to the zero-frequency point, and an inverse 2D FFT results in the reconstructed image of the object.
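The reconstruction pipeline described above (FFT, spatial filtering with a binary mask, recentering, inverse FFT) can be sketched in a few lines. The mask center and radius are assumed to be calibrated beforehand for the given setup; the function name is ours:

```python
import numpy as np

def reconstruct(hologram, center, radius):
    """Off-axis reconstruction: FFT the hologram, select one
    modulated object spectrum with a binary circular mask, shift it
    to the zero-frequency point, and inverse FFT."""
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.indices(spec.shape)
    mask = (rows - center[0])**2 + (cols - center[1])**2 <= radius**2
    filtered = spec * mask
    # Move the selected spectrum to the center (zero frequency).
    shifted = np.roll(filtered,
                      (spec.shape[0] // 2 - center[0],
                       spec.shape[1] // 2 - center[1]),
                      axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(shifted))
```

The complex-valued result carries both amplitude and phase of the object field; for display, one would take its absolute value.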

C. SNR Comparison of Conventional and Holographic Imaging

Imaging through scattering media requires high sensitivity of the system since the object signal is weak and embedded in a noisy background. In order to quantitatively discuss the increased sensitivity in coherent detection following from the (spatial) heterodyne gain [1,12,13], we compare the SNRs of conventional and holographic imaging. Therefore, we make two assumptions, which will hold in real experiments, as shown later: (I) we assume that the reference beam intensity is much larger than the object signal intensity; (II) we assume that the diffuse photons hitting the detector act as noisy background with a variance not stronger than the camera noise, as demonstrated by the mean intensities quoted in Section 2.A. We will see that the shot noise of the strong reference beam will exceed all other noise sources and that the SNR of the reconstructed object signal is approximately equal to the object wave field amplitude itself. For conventional imaging, the recorded intensity values for each pixel can be expressed as

$${{\bf I}_{{\text{img}}}}(p,q) = |{\bf O}(p,q{)|^2} + {{\bf N}_{{\text{img}}}}(p,q),$$
where ${\bf O}$ is the object field amplitude, and ${{\bf N}_{{\text{img}}}}$ combines the object field shot noise $\sqrt {{\bf |O}{{\bf |}^2}}$, the camera noise ${{\bf N}_{{\text{cam}}}}$ (i.e., dark current, read out, etc.), and the diffuse photons ${{\bf N}_{{\text{fog}}}}$ scattered multiple times in the fog, all in units of photoelectrons (${\text{e}^ -}$). The indices $p,q$ are pixel coordinates that will be omitted in the following notations for the sake of readability. The combination of the different, uncorrelated noise terms can be written as
$${{\bf N}_{{\text{img}}}} = \sqrt {{\bf |O}{{\bf |}^2} + {\bf N}_{{\text{cam}}}^2 + {\bf N}_{{\text{fog}}}^2} .$$

Thus, the SNR of the recorded image is

$${\bf SN}{{\bf R}_{{\text{img}}}} = \frac{{{\bf |O}{{\bf |}^2}}}{{\sqrt {{\bf |O}{{\bf |}^2} + {\bf N}_{{\text{cam}}}^2 + {\bf N}_{{\text{fog}}}^2}}}.$$

The intensity of the recorded hologram is

$$\begin{split}{{{\bf I}_{{\text{holo}}}}}&= {|{\bf R} + {\bf O}{|^2} + {{\bf N}_{{\text{holo}}}}}\\&= {{\bf |R}{{\bf |}^2} + {\bf |O}{{\bf |}^2} + 2{\bf |R||O|}{\cos}(\varphi) + {{\bf N}_{{\text{holo}}}}}\end{split},$$
where ${\bf R}$ denotes the reference wave amplitude, $\varphi$ is the phase difference between the object and reference wave, and ${{\bf N}_{{\text{holo}}}}$ is the combination of the noise sources for the hologram as
$${{\bf N}_{{\text{holo}}}} = \sqrt {{\bf |R}{{\bf |}^2} + {\bf |O}{{\bf |}^2} + 2{\bf |R||O|}{\cos}(\varphi) + {\bf N}_{{\text{cam}}}^2 + {\bf N}_{{\text{fog}}}^2} .$$

According to our assumptions, the expression above can be simplified (since ${\bf |R|}^2 \gg {\bf |O|}^2 + 2{\bf |R||O|} + {\bf N}_{{\text{cam}}}^2 + {\bf N}_{{\text{fog}}}^2$) to

$${{\bf N}_{{\text{holo}}}} \approx {\bf |R|}.$$

In case of holographic reconstruction, the object signal is embedded in the following term:

$${{\bf I}_{{\text{reco}}}} = {\bf |R||O|}{ \cos}(\varphi).$$

Note that this term is much larger than ${\bf |O}{{\bf |}^2}$ since the object signal is boosted by the strong reference due to the coherence of both wave fields. In addition, there is a compression gain since the noise is uniformly distributed in the Fourier space and with the spatial filtering the noise power is reduced by a constant factor [11]. This factor $k$ is determined by the ratio of half the hologram size to the mask size (since we are dealing with a real signal, the spectrum is conjugate symmetric; thus, the negative frequencies do not contain additional information). If we include the additional compression gain, the SNR for the reconstruction is

$${\bf SN}{{\bf R}_{{\text{reco}}}} \approx \frac{{{\bf |R||O|}}}{{{\bf |R|}\;{k^{- 1}}}} = k{\bf |O|}.$$

The size of the mask should equal the pupil of the imaging system in order to achieve the best possible SNR ($k$ large). A larger mask would not increase the signal, but more noise would be included in the reconstruction. From Eq. (10), it follows that the holographic imaging range is only minimally affected by the camera noise and any stray light hitting the detector (as long as the two previously mentioned assumptions are valid). From the comparison of Eqs. (5) and (10), it follows that, for weak object signals (near the maximum imaging range), the SNR of conventional imaging is significantly smaller than the one of holographic imaging since the object signal will be in the same order of magnitude as the camera noise and the stray light. This means that holographic imaging (coherent detection) has a significantly longer imaging range compared to conventional imaging, as illustrated in Fig. 3. With our experiments, we are able to verify this statement and furthermore to give exact quantitative information about the actual advantage of the holographic method in a realistic application.
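A quick numeric illustration of Eqs. (5) and (10). The photoelectron counts below are hypothetical values chosen near the range limit, where the object signal is comparable to the camera noise and stray light:

```python
import numpy as np

# Hypothetical values, in photoelectrons per pixel.
O_amp = 2.0   # object field amplitude |O| (so |O|^2 = 4 e-)
N_cam = 4.0   # camera noise
N_fog = 3.0   # diffuse-photon background
k = 20.0      # assumed compression gain from spatial filtering

snr_img = O_amp**2 / np.sqrt(O_amp**2 + N_cam**2 + N_fog**2)  # Eq. (5)
snr_reco = k * O_amp                                          # Eq. (10)
print(snr_img, snr_reco)  # ~0.74 vs 40.0
```

With these numbers the conventional image is buried in noise while the holographic reconstruction still offers a comfortable margin, mirroring the argument above.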


Fig. 3. Illustration of the imaging range differences of conventional imaging and holographic detection through fog. The difference of 1.7 AL corresponds to approximately 30x less laser illumination power for the coherent detection to achieve the same imaging range.


D. Simulation Model

The previous considerations regarding the SNR are at first qualitatively verified with a simulation. We simulate the influence of crucial parameters like object illumination intensity, reference beam intensity, and noise intensity on the SNR and, thus, on the difference in the imaging range for conventional and holographic imaging. According to our second assumption, we consider the fog as an absorber (decreasing the number of ballistic photons) and neglect diffuse photons beyond their contribution to background noise. As an object signal, we generate a realistic complex optical wavefront including speckle and aperture diffraction. Figure 4 illustrates the process of generating the object image.


Fig. 4. Signal process to generate a realistic image with speckle and diffraction pattern. Random phase values are added to a binary test image to generate speckle. The diffraction effect is generated by spatial low-pass filtering in the Fourier space.


Starting from a binary test image, we simulate a rough object surface producing a speckle pattern by adding a random phase term,

$${{\bf O}_{{\text{speck}}}}(p,q) = {{\bf O}_{{\text{bin}}}}(p,q)\,{e^{2\pi \text{i}\,{\bf j}(p,q)}},$$
where ${\bf j}$ contains uniform distributed random numbers in the interval [0,1]. For the aperture diffraction, we apply a centered circular mask in the Fourier space:
$${\bf O}(p,q) = {{\cal F}^{- 1}}\{{\cal F}\{{{\bf O}_{{\text{speck}}}}(p,q)\} \cdot {\text{mask}}(p^\prime ,q^\prime)\} ,$$
where ${\cal F}$ denotes the 2D Fourier transformation. We then rescale ${\bf |O|}(p,q)$ to the interval [0,1]. The attenuation of the object signal intensity ${I_{{\text{obj}}}}$ caused by the transition through the fog back and forth is calculated according to Lambert–Beer law as
$${I_{{\text{obj}}}} = {I_0}{e^{- 2AL}},$$
where $AL$ specifies the one-way AL. The conventional image is calculated as the absolute square of the product of the object signal amplitude and the complex-valued object field, with a noise term added,
$${{\bf I}_{{\text{img}}}} = {\cal P}\{|\sqrt {{I_{{\text{obj}}}}} \cdot {\bf O}{|^2}\} + {{\bf N}_{}},$$
with the Poisson operator ${\cal P}$ applied on the optical field after traversing the scattering media back and forth and ${\bf N}$ Poisson distributed thermal noise. The hologram is calculated as the superposition of the optical field ${\bf O}$ and a tilted plane wave as reference ${\bf R}$,
$${{\bf I}_{{\text{holo}}}}(p,q) = {\cal P}\{|\sqrt {{I_{{\text{obj}}}}} \cdot {\bf O} + \sqrt {{I_{{\text{ref}}}}} \cdot {\bf R}{|^2}\} + {{\bf N}_{}}.$$
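Eqs. (11)–(15) can be condensed into a short forward model. This is a sketch with assumed parameter values; the dark-noise default of 16.5 e− is the expected value quoted later for the experiment, and the tilt of the reference wave is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fields(O_bin, mask_radius, I_obj, I_ref,
                    n_dark=16.5, tilt=(8, 10)):
    """Forward model sketch of Eqs. (11)-(15): speckle from a random
    phase, aperture diffraction via a circular spectral mask, then
    conventional image and off-axis hologram with Poisson noise."""
    m, n = O_bin.shape
    # Eq. (11): rough surface -> fully developed speckle.
    O_speck = O_bin * np.exp(2j * np.pi * rng.random((m, n)))
    # Eq. (12): low-pass with a centered circular pupil mask.
    rows, cols = np.indices((m, n))
    pupil = (rows - m // 2)**2 + (cols - n // 2)**2 <= mask_radius**2
    spec = np.fft.fftshift(np.fft.fft2(O_speck)) * pupil
    O = np.fft.ifft2(np.fft.ifftshift(spec))
    O = O / np.abs(O).max()  # rescale |O| to [0, 1], keep the phase
    # Tilted plane-wave reference for the off-axis geometry.
    R = np.exp(2j * np.pi * (tilt[0] * rows / m + tilt[1] * cols / n))
    # Eqs. (14)-(15): shot-noise-limited detection plus dark noise.
    I_img = (rng.poisson(np.abs(np.sqrt(I_obj) * O)**2)
             + rng.poisson(n_dark, (m, n)))
    I_holo = (rng.poisson(np.abs(np.sqrt(I_obj) * O + np.sqrt(I_ref) * R)**2)
              + rng.poisson(n_dark, (m, n)))
    return I_img, I_holo
```

Here `I_obj` is assumed to already include the round-trip attenuation factor $e^{-2AL}$ of Eq. (13).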

In the process of analog-to-digital conversion, the continuous values of ${{\bf I}_{{\text{img}}}}$ and ${{\bf I}_{{\text{holo}}}}$ are constrained to discrete integer values. The digitized signal is calculated as

$${{\bf I}_{\text{x,digi}}}(p,q) = \left[{\frac{{{{\bf I}_{\text{x}}}(p,q)}}{{{\mu _{\text{sat}}}}}{{(2}^Q} - 1)} \right]_0^{{2^Q} - 1},$$
where ${\mu _{\text{sat}}}$ is the detector pixel saturation capacity, Q is the number of quantization bits, and the square brackets indicate the integer conversion. Note that this calculation only yields valid results if the camera specific dynamic range is higher than ${2^Q} - 1$. Otherwise, this term needs to be replaced by the value of the dynamic range. The reconstructed image is obtained by inverse Fourier transforming the masked cross correlation term in the spectral domain,
$${{\bf I}_{{\text{reco}}}}(p,q) = {{\cal F}^{- 1}}\{{\cal F}\{{{\bf I}_{\text{holo,digi}}}(p,q)\} \cdot {\text{mask}}(p^\prime ,q^\prime)\} .$$

We calculate the simulated images ${{\bf I}_{\text{img,digi}}}$ and ${{\bf I}_{{\text{reco}}}}$ for a variety of different ALs and compare them to the experimental results. Our strategy for determining the imaging range based on a large amount of images is described in the following subsection.
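The analog-to-digital conversion of Eq. (16) can be sketched as follows, using the camera values quoted in Section 3.A (8,260 e− saturation capacity, 8-bit quantization); the function name is ours:

```python
import numpy as np

def digitize(I, mu_sat, Q=8):
    """Eq. (16): map photoelectron counts to Q-bit integer counts,
    clipping at 0 and 2^Q - 1 (saturation)."""
    levels = 2**Q - 1
    return np.clip(np.rint(I / mu_sat * levels), 0, levels).astype(int)

# Camera values from the experiment: 8,260 e- saturation, 8 bit.
print(digitize(np.array([0.0, 4130.0, 8260.0, 9000.0]), 8260.0))
# -> [  0 128 255 255]
```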

E. Imaging Range Evaluation

To define the maximum imaging range, we evaluate the amount of object information embedded in each image, depending on the current fog density. The amount of signal is determined by calculating the deviation to a ground truth image of the test object taken without fog. A useful method to compare the similarity of two images is the structural similarity index measure (SSIM) developed by Wang et al. [22]. As the amount of ballistic photons hitting the detector decreases, illumination and contrast will change significantly. Note that we are interested in the structural information of the test object that both imaging methods are able to retrieve. Thus, other common techniques like mean squared error (MSE) or peak signal-to-noise ratio (PSNR) are unsuitable since they estimate absolute errors. As mentioned earlier, the fog inside the tube tends to form a vertical density gradient, which has the consequence that there is a significant variance in contrast and illumination across the image in vertical direction, while the object structures are still visible. Therefore, we use a modified version of the MSE algorithm where each image row is rescaled independently before it is compared to the corresponding row in the ground truth image. The error value for one image is calculated as

$${\text{MSE}_{{\text{norm}}}} = \frac{1}{m}\sum\limits_{i = 1}^m \left({\frac{1}{n}\sum\limits_{j = 1}^n {{\left({\frac{{{\bf I}_{\textit{ij}}}}{{\sum\limits_j {{\bf I}_{\textit{ij}}}}} - \frac{{{\hat{\bf I}}_{\textit{ij}}}}{{\sum\limits_j {{\hat{\bf I}}_{\textit{ij}}}}}} \right)}^2}} \right),$$
where $m$ is the number of image lines, $n$ is the number of image columns, $i,j$ are the pixel indices, ${\bf I}$ is the image, and ${\hat{\bf I}}$ is the ground truth. The inner summation describes the line-wise MSE calculation of the rescaled image lines. With the summation in the denominator, each line is scaled according to its signal power. The outer summation describes the calculation of the mean value over all image lines. In this way, the error measure takes the presence of a vertical density gradient into account.
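The row-normalized MSE amounts to a few lines of array code. This is a sketch; in practice, rows summing to zero would need guarding:

```python
import numpy as np

def mse_norm(img, truth):
    """Row-normalized MSE, Eq. (18): each image line is rescaled by
    its own signal power before comparison, making the measure
    insensitive to the vertical fog-density gradient."""
    img = img / img.sum(axis=1, keepdims=True)
    truth = truth / truth.sum(axis=1, keepdims=True)
    return np.mean((img - truth)**2)
```

Because each line is normalized independently, a per-row change in brightness (as caused by the density gradient) leaves the measure unchanged.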

Both measures, the SSIM and our application specific MSE algorithm, are suitable; however, we rely on the latter since it provides more accurate results and use SSIM only as a reference. We use the SSIM algorithm implementation provided by MATLAB with the exponents for illumination and contrast set to zero and adjust the radius for weighting the neighborhood pixels to match the feature size in the image. Before the calculation of the similarity measure, we apply a 2D low-pass filter to slightly smooth the image features; thus, the error measure is less sensitive to small displacements caused by moving parts in the experimental setup. Figure 5 shows the characteristic sigmoidal curves for SSIM and our application specific ${\text{MSE}_{{\text{norm}}}}$ algorithm. Both similarity measures generate smooth transitions as the fog density changes. However, they produce slightly different curves. In the low-AL region, where fog density is very low, only the ${\text{MSE}_{{\text{norm}}}}$ measure shows the real structural difference between holographic reconstruction and conventional image (due to clipping, the diffraction spikes are much more pronounced in the holographic reconstruction; see Fig. 6).


Fig. 5. (a) SSIM and (b) ${\text{MSE}_{{\text{norm}}}}$ values for simulated conventional and holographic images over one measurement cycle plotted over the fog density. The corresponding images are shown in Fig. 6(a).



Fig. 6. (a) Simulated and (b) measured conventional images and holographic reconstructions at four different attenuation lengths. The rectangle indicates the region where the fog density is measured.


For human perception, the object starts to become recognizable at an error value of around 0.85–0.9. We specify the imaging range of the corresponding imaging method as the AL at an error value of 0.85 [see the images in Fig. 6(a) for visual reference].

3. EXPERIMENTAL RESULTS AND COMPARISON WITH SIMULATIONS

We use the experimental setup, the simulation model, and the imaging range evaluation method described above to make an accurate quantitative statement about the benefit of holographic imaging solely based on the interferometric amplification of two coherent light fields. Furthermore, we want to investigate the influence of object and reference beam intensity and the noise on the imaging range difference.

A. Comparison of Simulated and Experimentally Retrieved Images

The simulation parameters for the camera are derived from measurements and specifications of the camera manufacturer so they correspond to the optical experiment. In our experiment, we use a camera with a saturation capacity of $8{,}260 \;{\text{e}^ -}$ (photoelectrons per pixel), and the conversion factor is $32.3 \;{\text{e}^ -}$ per count for 8-bit resolution. The images and reconstructions in Fig. 6(a) are calculated according to Eqs. (13)–(15), respectively, with an expected value of the Poisson distributed noise term of $16.5 \;{\text{e}^ -}$. The object beam intensity before attenuation ${I_{0}}$ is $45{,}000 \;{\text{e}^ -}$, and the reference beam intensity ${I_{{\text{ref}}}}$ is $4{,}130 \;{\text{e}^ -}$ (50% of saturation capacity). The object signal intensity decreases exponentially with increasing AL.

In contrast to the simulation, the illumination in the experiment is inhomogeneous due to the beam profile of the laser. Furthermore, the vertical gradient in fog density present in the experimental images is not included in the simulation model. To reduce the uncertainty of the actual fog density for each pixel row, the region of interest for the imaging range evaluation algorithm is reduced to the region indicated by the rectangle in Fig. 6. The fog density is measured at the same height as the vertical center of this rectangle. The imaging range difference between conventional and holographic imaging derived from the simulation predominantly agrees with the imaging range difference in the experiment.

B. Influence of Object Illumination

An increase in object illumination will also increase the number of ballistic photons reaching the detector; therefore, one expects the benefit to be the same for both imaging methods. According to Eq. (2), an exponential increase in object illumination will result in a linear increase in imaging range. Figure 7 shows the simulated and experimental results for exponentially increasing object beam intensities. For the simulation, the unit is photoelectrons per pixel; in the experiment, we measured the laser intensity with a powermeter before traversing the fog.


Fig. 7. (a) Simulated and (b) measured imaging ranges for conventional and holographic imaging with varying object illumination.


The experimental results consist of six complete measurement cycles per object illumination value with the fog tube carefully preconditioned before each new cycle. The mean values and the standard deviations are shown in the plot. As expected, for simulation and experimental results, the imaging range difference is not affected by an increase in object illumination. From the measurements shown in Fig. 7(b), it follows that a 20-fold increase in the object illumination is not sufficient for conventional imaging to reach the same imaging range as holographic imaging. This is in accordance with the imaging range difference of 1.7 AL, which translates according to Eq. (2) to 30 times less object illumination power required for holographic imaging.
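The quoted factor of 30 follows from the round-trip attenuation in Eq. (13): a range difference of $\Delta AL$ in one-way attenuation lengths corresponds to a power ratio of $e^{2\,\Delta AL}$, since the illumination crosses the fog twice (to the object and back). A one-line check:

```python
import numpy as np

delta_AL = 1.7  # measured one-way imaging-range difference
# Light crosses the fog twice, so the extra power conventional
# imaging needs grows as exp(2 * delta_AL).
power_factor = np.exp(2 * delta_AL)
print(power_factor)  # ~30
```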

C. Influence of the Reference Intensity

As shown in Eq. (9), the object signal is modulated by the reference light; thus, the signal intensity of the interference pattern containing the object information increases with increasing reference intensity. Conventional imaging is not affected by the reference light. The influence of a varying reference beam intensity on the imaging range of holographic imaging is shown in Fig. 8. The reference intensity value is the mean intensity value over all pixels expressed as a percentage of the detector saturation capacity.


Fig. 8. (a) Simulated and (b) measured imaging ranges for conventional and holographic imaging with varying reference intensity.


The experimental results consist of six complete measurement cycles for each reference intensity value with careful preconditioning. The mean values and the standard deviations are shown in the plot. A significant increase in the imaging range with increasing reference intensity for holographic imaging can be observed only in the lower region. For reference intensity values above 30% of detector saturation capacity, simulation and experimental results behave differently. In the simulations, the reference wave is perfectly homogeneous, whereas in the experiment the cover glass of the sensor leads to low-frequency interference fringes. Due to these and other inhomogeneities in the reference beam, some areas of the sensor already saturate at a mean intensity of 20–30%. In these areas, the interference fringes containing the object information can no longer be recorded (referred to as clipping).

D. Influence of Noise

From the SNR considerations for conventional imaging in Eq. (5) and holographic imaging in Eq. (10), it follows that increasing the noise term (which includes all non-intensity dependent noise sources) will have a much bigger effect on conventional imaging. A controlled manipulation of the noise is difficult to realize in the experiment, but it can be easily achieved in the simulation model. In Fig. 9, the influence of varying noise intensity on the imaging range is shown.

Fig. 9. Simulated imaging ranges for conventional imaging and holographic reconstruction as a function of noise intensity.

Increasing noise reduces the maximum imaging range of conventional imaging, while the holographic reconstructions remain unaffected as long as the reference beam shot noise stays dominant. From the simulations, it follows that, of all relevant parameters, the noise term has the most significant influence on the imaging range difference: increasing noise drastically decreases the imaging range of conventional imaging, whereas holographic imaging is immune until the noise intensity approaches the reference beam shot noise intensity.
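The contrast between the two SNR expressions can be sketched numerically; the reference level and the amplification factor k below are illustrative assumptions, not fitted values:

```python
import numpy as np

def snr_conventional(obj, n_cam, n_fog=0.0):
    # Eq. (5): object intensity |O|^2 over the total noise term.
    return obj**2 / np.sqrt(obj**2 + n_cam**2 + n_fog**2)

def snr_holographic(obj, n_cam, ref=100.0, k=10.0):
    # Eq. (10): SNR ~ k|O| while the reference shot noise |R| dominates;
    # once the detection noise exceeds |R|, it takes over as the noise floor.
    noise = np.maximum(ref, n_cam)
    return k * ref * obj / noise

obj = 1.0
for n_cam in (1.0, 10.0, 100.0, 1000.0):
    print(f"N_cam={n_cam:6.0f}: conventional {snr_conventional(obj, n_cam):.3f}, "
          f"holographic {snr_holographic(obj, n_cam):.3f}")
```

The conventional SNR drops with every increase of the noise term, while the holographic SNR stays constant until the detection noise exceeds the assumed reference shot noise level.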

E. Imaging Ranges for Different Objects

The previous measurements were all carried out with a high-reflectance “ITO” logo as the test object. In theory, the object reflectivity influences the imaging ranges of conventional and holographic imaging in the same way, since both methods rely on ballistic photons. However, ballistic photons that have lost their polarization contribute less to the image formation in holographic imaging. To test whether a significant fraction of ballistic photons is depolarized, we carried out measurements with test objects made of different materials, including retroreflective sheeting, paper, and glossy plastic composites. The objects and the corresponding imaging ranges are shown in Fig. 10.

Fig. 10. Imaging ranges for different types of objects: retroreflective logo, paper logo, glossy plastic lamp (side view), and motorcycle helmet (front view).

While the different test objects show a large variation in imaging range, the difference between conventional and holographic imaging varies little. For each object, we conducted a single measurement cycle. This experiment shows that the advantage of holographic imaging through fog also applies to objects relevant to practical applications, such as a motorcycle helmet.

4. DISCUSSION AND CONCLUSION

We compare the sensitivity of holographic imaging to that of conventional imaging through fog. Using spatially separated narrow illumination and imaging cones, the stray light hitting the detector is reduced to the magnitude of the overall detector noise. Thus, we are able to determine the sensitivity difference based on weak-signal recovery alone. Otherwise, one would also need to take into account the inherent suppression of incoherent stray light in holographic imaging, since light scattered by the moving fog particles experiences a change in its physical properties (i.e., polarization and direction) and thus becomes partially incoherent to the reference beam. Sweeping through the fog density, our imaging range measure generates sigmoidal error curves describing the amount of recovered object signal as a function of fog density. We compare the curves of both imaging methods at a value that corresponds to the limit where the object becomes visible to human perception. The imaging ranges are thus determined from a very large number of images, which increases the reliability of our experimental results. The most important experimental finding is the non-linear relation between the reference intensity and the holographic imaging range: a mean reference intensity of 10% of the saturation capacity is enough to provide the maximum imaging range. Even in the simulations, where the reference beam is perfectly homogeneous, values significantly above 10% do not yield a substantially larger imaging range. Another important finding from the simulation model is that, in contrast to conventional imaging, holographic imaging is immune to an increase in detection noise as long as the reference shot noise stays dominant.
This suggests that holographic imaging will still perform well as the amount of stray light hitting the detector increases (e.g., by enlarging the illumination and imaging fields of view). There are many publications on the principles and benefits of holographic imaging through scattering media (see the introduction), especially on the mechanism of stray light rejection based on coherence gating, as well as on the high sensitivity of holography at weak object signals (in particular, holographic imaging at the shot noise level). Our considerations and calculations on the SNR build on these previous works. With our experimental setup, we introduce holographic imaging to the realm of large-scale imaging, such as environmental perception under harsh weather conditions for autonomous driving vehicles. The dimensions and the scattering medium used in our experiments are very close to the conditions in such applications, which underlines the relevance of our own and previous results for the use of holographic imaging as a new sensor concept in automotive engineering. A promising holographic shape measurement technology is two-wavelength holography [23]. Based on the acquisition of two holograms at slightly different wavelengths, it enables remote three-dimensional surface measurement through scattering media with almost all the benefits demonstrated by our experimental results. In a modified version, both holograms can be recorded simultaneously; as a consequence, only half the camera saturation capacity is available for each hologram. However, as described above, a reference intensity of 10% of the saturation capacity is already sufficient to achieve high sensitivity. Of course, laser eye safety remains a challenge to be solved before the system can be used in road traffic applications.
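For orientation, the unambiguous depth range of two-wavelength holography is governed by the synthetic wavelength Λ = λ1·λ2/|λ1 − λ2|; the wavelength values below are illustrative and not those used in [23]:

```python
# Synthetic wavelength of a two-wavelength hologram pair.
# The wavelengths are hypothetical example values, not from the paper.
lam1 = 532.00e-9   # m, first laser wavelength (assumed)
lam2 = 532.10e-9   # m, second, slightly detuned wavelength (assumed)

synthetic = lam1 * lam2 / abs(lam2 - lam1)
print(f"synthetic wavelength: {synthetic * 1e3:.2f} mm")
```

A detuning of only 0.1 nm thus yields a synthetic wavelength of a few millimeters, which sets the scale of the measurable surface depth.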
Combining our results with recently proposed methods to reduce camera noise [24] might further improve the performance of such holographic systems. In ongoing research projects, we apply deep learning to extract further object information from noisy holograms. Another interesting topic is the use of deep learning architectures applied to the holographic reconstructions for automated object recognition; deep neural networks have already been shown to successfully recognize objects partly obscured by scattering media [25].

In conclusion, the results presented here show that the high sensitivity of holographic imaging, which is based on interferometric amplification, leads to significant advantages over conventional imaging techniques, even for large-scale applications. For our investigations, we used a cw laser with an exposure time of 500 µs; for more dynamic scenes, a pulsed laser (with a pulse length of a few nanoseconds) can be used. In our case, holographic imaging through 27 m of fog requires 30 times less illumination power than conventional imaging for the same imaging range.
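The factor of 30 follows directly from the round-trip Beer–Lambert attenuation (cf. Eqs. (1) and (13)): an imaging range difference of 1.7 attenuation lengths corresponds to e^(2·1.7) ≈ 30 in illumination power:

```python
import math

delta_al = 1.7   # imaging range difference in attenuation lengths (cf. Fig. 3)

# The object signal makes a round trip through the fog and is attenuated by
# exp(-2*AL), Eq. (13); bridging delta_al additional attenuation lengths
# therefore requires exp(2*delta_al) times more illumination power.
power_factor = math.exp(2 * delta_al)
print(f"required illumination power factor: {power_factor:.0f}")  # prints 30
```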

Funding

Baden-Württemberg Stiftung.

Acknowledgment

We express our great appreciation to our neighbors at the Institute of Thermodynamics and Thermal Process Engineering ITT for sharing the research facilities needed for this extraordinarily large optical experiment. Special thanks go to Dieter Höhn for his outstanding support in building the test setup. We are also immensely grateful to Thomas Schoder from the Institute of Applied Optics for his contributions to the realization of the project.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. C. Dunsby and P. French, “Techniques for depth-resolved imaging through turbid media including coherence-gated imaging,” J. Phys. D 36, R207–R227 (2003).

2. J. Radford, A. Lyons, F. Tonolini, and D. Faccio, “Role of late photons in diffuse optical imaging,” Opt. Express 28, 29486–29495 (2020).

3. S. Zhu, E. Guo, Q. Cui, L. Bai, J. Han, and D. Zheng, “Locating and imaging through scattering medium in a large depth,” Sensors 21, 90 (2021).

4. Y. Choi, C. Yoon, M. Kim, W. Choi, and W. Choi, “Optical imaging with the use of a scattering lens,” IEEE J. Sel. Top. Quantum Electron. 20, 61–73 (2014).

5. A. G. Podoleanu, “Optical coherence tomography,” J. Microsc. 247, 209–219 (2012).

6. F. Caimi and F. Dalgleish, “Performance considerations for continuous-wave and pulsed laser line scan (LLS) imaging systems,” J. Eur. Opt. Soc. Rap. Publ. 5, 10020s (2010).

7. D. Kijima, T. Kushida, H. Kitajima, K. Tanaka, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Time-of-flight imaging in fog using multiple time-gated exposures,” Opt. Express 29, 6453–6467 (2021).

8. Z. Guo, C. Li, T. Zhou, B. Chen, and M. Cui, “Off-axis spatiotemporally gated multimode detection toward deep fog imaging,” Opt. Express 27, 33326–33332 (2019).

9. K. A. Stetson, “Holographic fog penetration,” J. Opt. Soc. Am. 57, 1060–1061 (1967).

10. A. Lohmann and C. Shuman, “Image holography through convective fog,” Opt. Commun. 7, 93–97 (1973).

11. A. E. Tippie and J. R. Fienup, “Weak-object image reconstructions with single-shot digital holography,” in Biomedical Optics and 3-D Imaging (Optical Society of America, 2012), paper DM4C.5.

12. M. Gross and M. Atlan, “Digital holography with ultimate sensitivity,” Opt. Lett. 32, 909–911 (2007).

13. F. Verpillat, F. Joud, M. Atlan, and M. Gross, “Digital holography at shot noise level,” J. Disp. Technol. 6, 455–464 (2010).

14. M. Locatelli, E. Pugliese, M. Paturzo, V. Bianco, A. Finizio, A. Pelagotti, P. Poggi, L. Miccio, R. Meucci, and P. Ferraro, “Imaging live humans through smoke and flames using far-infrared digital holography,” Opt. Express 21, 5379–5390 (2013).

15. G. Pedrini and H. J. Tiziani, “Short-coherence digital microscopy by use of a lensless holographic imaging system,” Appl. Opt. 41, 4489–4496 (2002).

16. L. Martínez-León, G. Pedrini, and W. Osten, “Applications of short-coherence digital holography in microscopy,” Appl. Opt. 44, 3977–3984 (2005).

17. E. Leith, C. Chen, H. Chen, Y. Chen, D. Dilworth, J. Lopez, J. Rudd, P.-C. Sun, J. Valdmanis, and G. Vossler, “Imaging through scattering media with holography,” J. Opt. Soc. Am. A 9, 1148–1153 (1992).

18. A. V. Kanaev, A. T. Watnik, D. F. Gardner, C. Metzler, K. P. Judd, P. Lebow, K. M. Novak, and J. R. Lindle, “Imaging through extreme scattering in extended dynamic media,” Opt. Lett. 43, 3088–3091 (2018).

19. A. Ziaee, C. Dankwart, M. Minniti, J. Trolinger, and D. Dunn-Rankin, “Ultra-short pulsed off-axis digital holography for imaging dynamic targets in highly scattering conditions,” Appl. Opt. 56, 3736–3743 (2017).

20. M. Kutila, P. Pyykönen, H. Holzhüter, M. Colomb, and P. Duthon, “Automotive lidar performance verification in fog and rain,” in 21st International Conference on Intelligent Transportation Systems (ITSC) (2018), pp. 1695–1701.

21. A. Beer, “Bestimmung der Absorption des rothen Lichts in farbigen Flüssigkeiten,” Ann. Phys. 162, 78–88 (1852).

22. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

23. G. Pedrini, I. Alekseenko, G. Jagannathan, M. Kempenaars, G. Vayakis, and W. Osten, “Feasibility study of digital holography for erosion measurements under extreme environmental conditions inside the international thermonuclear experimental reactor tokamak,” Appl. Opt. 58, A147–A155 (2019).

24. B. Mandracchia, X. Hua, C. Guo, J. Son, T. Urner, and S. Jia, “Fast and accurate sCMOS noise correction for fluorescence microscopy,” Nat. Commun. 11, 94 (2020).

25. V. Bianco, P. Mazzeo, M. Paturzo, C. Distante, and P. Ferraro, “Deep learning assisted portable IR active imaging sensor spots and identifies live humans through fire,” Opt. Lasers Eng. 124, 105818 (2020).

Figures (10)

Fig. 1. Off-axis digital holographic setup in image plane configuration for single-shot coherent detection. A movable object is located inside a tube with a length of 27 m filled with ultrasonically generated fog.

Fig. 2. (a) View inside the fog tube while the fog is flowing in; (b) 27-m-long fog tube installed in the experimental facility of our neighboring institute.

Fig. 3. Illustration of the imaging range differences of conventional imaging and holographic detection through fog. The difference of 1.7 AL corresponds to approximately 30 times less laser illumination power for the coherent detection to achieve the same imaging range.

Fig. 4. Signal processing to generate a realistic image with speckle and diffraction pattern. Random phase values are added to a binary test image to generate speckle. The diffraction effect is generated by spatial low-pass filtering in Fourier space.

Fig. 5. (a) SSIM and (b) $\mathrm{MSE}_{\text{norm}}$ values for simulated conventional and holographic images over one measurement cycle, plotted over the fog density. The corresponding images are shown in Fig. 6(a).

Fig. 6. (a) Simulated and (b) measured conventional images and holographic reconstructions at four different attenuation lengths. The rectangle indicates the region where the fog density is measured.

Fig. 7. (a) Simulated and (b) measured imaging ranges for conventional and holographic imaging with varying object illumination.

Fig. 8. (a) Simulated and (b) measured imaging ranges for conventional and holographic imaging with varying reference intensity.

Fig. 9. Simulated imaging ranges for conventional imaging and holographic reconstruction as a function of noise intensity.

Fig. 10. Imaging ranges for different types of objects: retroreflective logo, paper logo, glossy plastic lamp (side view), and motorcycle helmet (front view).

Equations (18)


(1) $I(d) = I_0\, e^{-\varepsilon d}$

(2) $\mathrm{AL} = -\ln(I/I_0)$

(3) $I_{\text{img}}(p,q) = |O(p,q)|^2 + N_{\text{img}}(p,q)$

(4) $N_{\text{img}} = \sqrt{|O|^2 + N_{\text{cam}}^2 + N_{\text{fog}}^2}$

(5) $\mathrm{SNR}_{\text{img}} = \dfrac{|O|^2}{\sqrt{|O|^2 + N_{\text{cam}}^2 + N_{\text{fog}}^2}}$

(6) $I_{\text{holo}} = |R + O|^2 + N_{\text{holo}} = |R|^2 + |O|^2 + 2|R||O|\cos(\varphi) + N_{\text{holo}}$

(7) $N_{\text{holo}} = \sqrt{|R|^2 + |O|^2 + 2|R||O|\cos(\varphi) + N_{\text{cam}}^2 + N_{\text{fog}}^2}$

(8) $N_{\text{holo}} \approx |R|$

(9) $I_{\text{reco}} = |R||O|\cos(\varphi)$

(10) $\mathrm{SNR}_{\text{reco}} \approx \dfrac{|R||O|}{|R|\,k^{-1}} = k|O|$

(11) $O_{\text{speck}}(p,q) = O_{\text{bin}}(p,q)\, e^{2\pi i\, j(p,q)}$

(12) $O(p,q) = \mathcal{F}^{-1}\{\mathcal{F}\{O_{\text{speck}}(p,q)\} \cdot \text{mask}(p,q)\}$

(13) $I_{\text{obj}} = I_0\, e^{-2\,\mathrm{AL}}$

(14) $I_{\text{img}} = \mathcal{P}\{|\sqrt{I_{\text{obj}}}\, O|^2\} + N$

(15) $I_{\text{holo}}(p,q) = \mathcal{P}\{|\sqrt{I_{\text{obj}}}\, O + \sqrt{I_{\text{ref}}}\, R|^2\} + N$

(16) $I_{x,\text{digi}}(p,q) = \left[\dfrac{I_x(p,q)}{\mu_{\text{sat}}}\,(2^Q - 1)\right]_0^{2^Q - 1}$

(17) $I_{\text{reco}}(p,q) = \mathcal{F}^{-1}\{\mathcal{F}\{I_{\text{holo,digi}}(p,q)\} \cdot \text{mask}(p,q)\}$

(18) $\mathrm{MSE}_{\text{norm}} = \dfrac{1}{m}\sum_{i=1}^{m}\left(\dfrac{1}{n}\sum_{j=1}^{n}\left(\dfrac{i_{ij}}{\sum_j i_{ij}} - \dfrac{\hat{i}_{ij}}{\sum_j \hat{i}_{ij}}\right)^2\right)$
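As a worked illustration, Eqs. (11)–(17) can be condensed into a short simulation sketch; all parameter values are placeholders, and the off-axis carrier fringes are omitted for brevity (the reference is modeled as a plane wave and the sideband filter is applied directly in Fourier space):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hologram(obj_bin, al, i0=1.0, i_ref=0.1, mu_sat=1.0, q=8,
                      noise_sigma=0.01, cutoff=0.25):
    """Toy version of the simulation chain (placeholder parameters)."""
    n = obj_bin.shape[0]
    # Eq. (11): random phase on the binary object creates speckle.
    o_speck = obj_bin * np.exp(2j * np.pi * rng.random(obj_bin.shape))
    # Eq. (12): spatial low-pass in Fourier space models diffraction.
    fx = np.fft.fftfreq(n)
    mask = (np.abs(fx)[:, None] < cutoff) & (np.abs(fx)[None, :] < cutoff)
    o = np.fft.ifft2(np.fft.fft2(o_speck) * mask)
    # Eq. (13): round-trip Beer-Lambert attenuation of the illumination.
    i_obj = i0 * np.exp(-2 * al)
    # Eq. (15): interfere object and reference waves, add shot noise
    # (Poisson, with an assumed photon-count scaling) and camera noise.
    field = np.sqrt(i_obj) * o + np.sqrt(i_ref)
    intensity = rng.poisson(1e4 * np.abs(field)**2) / 1e4 \
                + rng.normal(0, noise_sigma, (n, n))
    # Eq. (16): digitize with clipping at the saturation capacity.
    levels = 2**q - 1
    digi = np.clip(np.round(intensity / mu_sat * levels), 0, levels)
    # Eq. (17): reconstruct by filtering the hologram spectrum.
    return np.fft.ifft2(np.fft.fft2(digi) * mask)

reco = simulate_hologram(np.ones((64, 64)), al=3.0)
print(reco.shape)
```

Sweeping `al` and comparing the reconstruction against the noise-free object via SSIM or Eq. (18) reproduces the sigmoidal curves used for the imaging range evaluation.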