Looking through a diffuser and around an opaque surface: A holographic approach


Abstract

Retrieving information about an object hidden around a corner or obscured by a diffusing surface has a vast range of applications. Over time, many techniques have been tried to make this goal realizable. Here we present yet another approach, which retrieves a 3-D object from the scattered field using digital holography with statistical averaging. The methods are simple, easy to implement, and allow fast image reconstruction because they require no phase correction, complicated image processing, scanning of the object, or wave shaping of any kind. The methods inherit the merit of digital holography that micro-deformations and displacements of the hidden object can also be detected.

© 2014 Optical Society of America

1. Introduction

Is there a way to see an object obscured by a strong diffuser, such as a transmissive ground glass or an opaque plate with a reflectively scattering surface? This question has long been addressed in the context of inverse scattering problems [1], and a technique is known that can detect a 2-D periodic grating structure hidden by a diffuser [2]. Recently, a technique of ultrafast time-of-flight 3-D imaging that can look around a corner using diffusely reflected light was demonstrated [3]. SLM-based techniques that compensate the random phase and permit imaging of a 3-D object through a diffuser have also been reported [4–8]. The applications of such techniques range from medical imaging through turbid media or cells to rescue operations in hazardous conditions.

In this paper, we propose yet another approach to the imaging of a 3-D object obscured by a diffuser or hidden around a corner. We interpret the obscuration of the object image as a loss of phase information caused by scattering at the diffuser. The key to a solution is therefore an imaging technique that can cope with this loss of phase information. Indeed, holography is the technique that can recover phase information that is lost in the intensity recording of conventional photography.

We present a simple solution based on the numerical reconstruction of a 3-D object by digital holography, where the hologram is formed on a transmissive ground glass or a reflectively scattering opaque surface and recorded remotely by a digital camera focused on the hologram. Although our holographic approach is functionally more restrictive than time-of-flight 3-D imaging [3] in that it requires a reference beam for holographic recording, the system is much simpler and requires no special equipment such as a femtosecond laser or a high-speed streak camera. We use a reference beam for holography, just as a reference point source is used for SLM-based random phase compensation [4–8]. Our technique can be realized easily with a common CW laser and a conventional camera, and does not even require an SLM or an iterative search for the phase distribution that compensates the random phase introduced by the diffuser. Already in the late 1960s, two holographic techniques were proposed by Goodman et al. [9] and Kogelnik et al. [10] for imaging through random media. In a sense, our technique may be regarded as reviving their seminal work through the modern technique of digital holography plus imaging optics, thereby providing new functionalities that enable not only remote observation but also remote deformation measurement of an object hidden behind a strong diffuser or around a corner, away from the recording position. The technique of Goodman et al. [9] (which we call the Goodman scheme for short) is simple and convenient, but the hologram has to be recorded in close proximity to the diffuser so that both object and reference beams experience the same phase perturbation. This hinders our objective of remote acquisition of the hidden image information at a distance from the diffuser. The technique of Kogelnik et al. [10] (which we call the Kogelnik scheme for short) solved this problem by using imaging optics that form the image of the diffuser on a plane away from its original position, so as to permit remote recording of the hologram. However, the Kogelnik scheme has the drawback that the hologram must be placed exactly at the object location in order to cancel the random phase introduced by the diffuser. We base our technique on the Goodman scheme but employ imaging optics as in the Kogelnik scheme to gain the advantages of both, and integrate them into a novel scheme of digital holography.

2. Principles

In Fig. 1(a), an object is illuminated by coherent light, but it cannot be seen directly because it is obscured by a diffuser. However, we can make it visible (though indirectly) by superposing a reference beam $u_R(\xi,\eta)$ on the object beam $u_O(\xi,\eta)$ such that

$$u(\xi,\eta) = u_O(\xi,\eta) + u_R(\xi,\eta). \tag{1}$$
The interference between the two beams forms an aerial hologram immediately in front of the diffuser. Figure 1(b) shows the ray diagram of the laboratory setup, where $a$ is the separation between the object and the reference, $s_1$ is the distance from the diffuser to the object, $s_2$ is the distance between the lens and the diffuser, $\theta_1$ is the half angle between the object and the reference wave, $\theta_2$ is the diffraction angle of the diffuser, and $\theta_3$ is the half angle subtended by the imaging lens at the diffuser. When this superposed field $u(\xi,\eta)$ is transmitted by the diffuser, an additional random phase $\phi_r(\xi,\eta)$ is introduced. The field immediately behind the diffuser is given by
$$u_{\mathrm{diff}}(\xi,\eta) = u(\xi,\eta)\exp[i\phi_r(\xi,\eta)]. \tag{2}$$
Note that the diffuser causes a random variation in the phase, but the relative phase information between the object and the reference beams is not lost because both beams experience the same random phase. This assumption of a thin scattering layer is the key to our principle. Instead of directly recording the field immediately behind the diffuser as in the Goodman scheme [9], we image the field onto the CCD using a lens of diameter $D$, as in the Kogelnik scheme [10]. While this permits remote recording of the hologram, the coherent imaging of the diffused field with a lens of limited aperture gives rise to a new problem of speckle formation [11–13], although this was not addressed in the Kogelnik paper [10]. The (complex) amplitude point spread function (PSF) $g(x,y)$ of the imaging optics is given by the Fourier transform of the pupil function of the lens. Thus, according to diffraction theory, the complex amplitude of the scattered field at the CCD plane is given by the convolution of the diffused field and the PSF

Fig. 1 Transmission mode; (a) schematic diagram and (b) ray diagram of the experimental setup.

$$u_{\mathrm{CCD}}(x,y) = \iint \left\{ u(\xi,\eta)\exp[i\phi_r(\xi,\eta)] \right\} g(x-\xi,\,y-\eta)\,d\xi\,d\eta. \tag{3}$$

The intensity distribution recorded on the CCD can be written as

$$|u_{\mathrm{CCD}}(x,y)|^2 = \iiiint u(\xi_1,\eta_1)\,u^*(\xi_2,\eta_2)\exp[i\phi_r(\xi_1,\eta_1)]\exp[-i\phi_r(\xi_2,\eta_2)]
\times g(x-\xi_1,\,y-\eta_1)\,g^*(x-\xi_2,\,y-\eta_2)\,d\xi_1\,d\eta_1\,d\xi_2\,d\eta_2. \tag{4}$$
Now we examine the conditions that influence the imaging performance. The half angle between the object and the reference beam can be written as $\theta_1=\tan^{-1}(a/(2s_1))$. We assume that the diffuser is statistically homogeneous and that the correlation length of its complex transmittance $\exp[i\phi_r(\xi,\eta)]$ is given by $\Delta$. Roughly speaking, most of the power of a ray incident normal to the diffuser is scattered into a cone with cone angle $\theta_2=\sin^{-1}(\lambda/\Delta)$. For rays that come from the edge of the object or from the reference point source and impinge on the diffuser at the incidence angle $\theta_1$, the cone angle of the scattered rays takes the maximum value $\theta_{\max}=\theta_1+\theta_2=\tan^{-1}(a/(2s_1))+\sin^{-1}(\lambda/\Delta)$. We assume that the imaging lens is diffraction-limited and has numerical aperture $\mathrm{NA}=\sin\theta_3$. Then, if $\theta_3>\theta_{\max}$, or $\tan^{-1}(D/(2s_2))>\tan^{-1}(a/(2s_1))+\sin^{-1}(\lambda/\Delta)$, a resolved image of the hologram will be formed on the CCD without any loss of information, including the random phase distribution of the diffuser. If the numerical aperture of the lens is sufficiently large to resolve the maximum spatial frequency of the holographic fringes, such that $\theta_3\geq\theta_1$ or $D/s_2\geq a/s_1$, then we can neglect the loss of OTF (optical transfer function) gain of the diffraction-limited imaging lens in this spatial frequency range and assume $g(x,y)\approx\delta(x,y)$ in Eq. (3). In this case we have $|u_{\mathrm{CCD}}(x,y)|^2=|u(x,y)|^2$, which means that the hologram is identical to one recorded directly in proximity to the diffuser, as in the Goodman scheme, and it gives an ideal reconstructed image. In our remote imaging system, however, the imaging condition $\tan^{-1}(D/(2s_2))>\tan^{-1}(a/(2s_1))+\sin^{-1}(\lambda/\Delta)$ can hardly be satisfied, because we have to record the image of the hologram far from the diffuser, where $D/(2s_2)\ll 1$. Failure to satisfy this imaging condition results in imperfect cancellation of the random phase introduced by the diffuser, which manifests itself as speckle noise that degrades the image to such a degree that it is hard to recognize. To reduce the speckle noise, a time average over the speckle field intensity can be performed by rotating the diffuser. Assuming that the diffuser creates a field that is a stationary, ergodic, and delta-correlated process, we can replace the ensemble average by the time average [13] as
$$\left\langle \exp[i\phi_r(\xi_1,\eta_1,t)]\exp[-i\phi_r(\xi_2,\eta_2,t)] \right\rangle = \delta(\xi_1-\xi_2,\;\eta_1-\eta_2). \tag{5}$$
Thus, the time-averaged intensity of the field at the CCD plane can be written as
$$\left\langle |u_{\mathrm{CCD}}(x,y)|^2 \right\rangle = \iint |u(\xi,\eta)|^2 \times |g(x-\xi,\,y-\eta)|^2\,d\xi\,d\eta, \tag{6}$$
where the first factor in the integrand,
$$|u(\xi,\eta)|^2 = |u_O(\xi,\eta)|^2 + |u_R(\xi,\eta)|^2 + u_O(\xi,\eta)\,u_R^*(\xi,\eta) + u_O^*(\xi,\eta)\,u_R(\xi,\eta), \tag{7}$$
is the hologram, which is convolved with the intensity impulse response $|g(x,y)|^2$ of the imaging lens to produce the intensity distribution at the CCD plane. For a lensless Fourier transform hologram, the image is reconstructed by a Fourier transform of the intensity distribution, i.e.,
$$\mathcal{F}\!\left[\left\langle |u_{\mathrm{CCD}}(x,y)|^2 \right\rangle\right] = \mathcal{F}\!\left[|u(x,y)|^2\right] \times \mathcal{F}\!\left[|g(x,y)|^2\right], \tag{8}$$
where $\mathcal{F}$ represents the Fourier transformation. The first factor on the right-hand side of Eq. (8), $\mathcal{F}[|u(x,y)|^2]$, gives the zeroth- and ±first-order images reconstructed from a conventional lensless Fourier transform hologram. For other kinds of holograms, in which the reference is not a point source located in the object plane, the Fresnel diffraction integral may be used for the reconstruction. The second factor, $\mathcal{F}[|g(x,y)|^2]$, is the optical transfer function (OTF) of the lens. When the resolution of the lens is reduced by defocus or aberration, the OTF falls off rapidly from its high central value. This in turn causes the brightness of the reconstructed holographic images to decrease rapidly as the observation point moves toward the outer regions, which effectively limits the maximum size of the image that can be reconstructed. In other words, object points in the outer regions create finer interference fringes than those in the inner regions, and the fringe contrast is reduced by the low-pass characteristic of the OTF. For those object points for which the OTF of the imaging lens remains nearly unity, the intensity distribution recorded by the CCD is the same as in conventional digital holography, free from the influence of the random phase introduced by the diffuser.
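As a rough numerical illustration of these resolution conditions, the short sketch below evaluates $\theta_{\max}=\theta_1+\theta_2$ and the lens acceptance half angle $\theta_3$ for a set of assumed parameters; all of the numbers ($a$, $s_1$, $s_2$, $D$, $\lambda$, $\Delta$) are hypothetical and are not the values used in our experiments.

```python
import numpy as np

# Hypothetical geometry; not the parameters of the actual experiment.
a     = 0.02     # separation between object and reference point [m]
s1    = 0.50     # diffuser-to-object distance [m]
s2    = 0.30     # lens-to-diffuser distance [m]
D     = 0.036    # lens aperture diameter [m] (roughly f = 50 mm at f/1.4)
lam   = 532e-9   # wavelength [m]
Delta = 5e-6     # correlation length of the diffuser transmittance [m]

theta1 = np.arctan(a / (2 * s1))            # half angle between object and reference
theta2 = np.arcsin(min(lam / Delta, 1.0))   # scattering cone half angle of the diffuser
theta3 = np.arctan(D / (2 * s2))            # half angle subtended by the imaging lens
theta_max = theta1 + theta2

print(f"theta1 = {np.degrees(theta1):.2f} deg, theta2 = {np.degrees(theta2):.2f} deg, "
      f"theta3 = {np.degrees(theta3):.2f} deg")
print("Full imaging condition     theta3 > theta_max:", theta3 > theta_max)
print("Fringe-resolution condition theta3 >= theta1 :", theta3 >= theta1)
```

With numbers of this order, the fringe-resolution condition $\theta_3\geq\theta_1$ is easily met while $\theta_3>\theta_{\max}$ is not, which is precisely the regime in which the speckle averaging described below becomes necessary.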

3. Experiments and results

3.1. Transmission mode

In the experiment, a laser beam is coupled into a fiber and split into two parts, one of which is used to illuminate the object while the other serves as the reference beam. As the object scatters the beam in all directions, a part of it interferes with the reference beam. The schematic diagram of the experimental setup is shown in Fig. 1(a), in which the left part forms a setup for conventional lensless Fourier transform holography. A hologram of the 3-D object is formed immediately in front of the diffuser (here a ground glass serves as a strong, thin diffuser), which randomizes the transmitted field. The field immediately behind the diffuser is imaged by an objective lens (AF NIKKOR, focal length f = 50 mm, f-number 1.4) onto the image sensor with magnification M = 1.5. The aperture of the lens was kept large. An SVS-VISTEK camera with a pixel size of 7.4 × 7.4 µm² and 3280 × 4896 pixels is kept normal to the scatterer to receive the maximum amount of light. Precise focusing is crucial: only the part of the diffusing surface that is in exact focus produces images, because the OTF term decreases rapidly with defocus. An inverse Fourier transformation is then performed on the recorded intensity distribution to reconstruct the object.
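For the lensless Fourier transform geometry, this reconstruction step amounts to a single Fourier transform of the recorded intensity. A minimal sketch, assuming the hologram has been saved as an image file (the file name and crop size are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt
from imageio.v3 import imread   # any image reader would do

# Load the recorded hologram intensity (file name is a placeholder)
hologram = imread("hologram.png").astype(float)

# Crop to the region of interest (e.g. 1500 x 1500 pixels, as in Fig. 2)
h = hologram[:1500, :1500]

# Subtract the mean to suppress the bright zeroth-order term
h = h - h.mean()

# For a lensless Fourier transform hologram the reconstruction is a single
# Fourier transform of the intensity (cf. Eq. (8)); fftshift centers the
# zeroth order so that the +/- first-order images appear on either side.
recon = np.abs(np.fft.fftshift(np.fft.fft2(h)))

# Display on a logarithmic scale to compress the dynamic range
plt.imshow(np.log1p(recon), cmap="gray")
plt.title("Reconstruction of the lensless Fourier transform hologram")
plt.show()
```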

When the diffuser was static, the reconstructed images looked very noisy with speckle and the contrast was poor. This is because the ensemble-averaging operation assumed in our principle was not performed. To replace the ensemble average with a time average, the exposure time of the camera was set to 500 ms and the diffuser was rotated at a very slow speed. As a result, the contrast of the image improved and the speckles were averaged out. Figure 2(a) shows a magnified view of part of the hologram recorded while the diffuser was rotating; Figs. 2(b) and 2(c) show the reconstructed images without and with averaging of the speckle field, respectively. With the speckle field averaged out, the fringes in the hologram gained high visibility and the quality of the reconstructed image improved.
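The benefit of this averaging can also be reproduced in a small simulation of Eqs. (3)–(6): the same hologram field is multiplied by independent random phase screens (standing in for different diffuser positions), imaged through a limited aperture, and the resulting intensities are averaged before reconstruction. This is only an illustrative model; the grid size, object, reference offset, and aperture below are all assumed values, not those of the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Diffusely reflecting square object and off-axis point reference (illustrative)
obj = np.zeros((N, N), complex)
obj[N//2-20:N//2+20, N//2-20:N//2+20] = np.exp(1j * 2 * np.pi * rng.random((40, 40)))
ref = np.zeros((N, N), complex)
ref[N//2, N//2 + 70] = 40.0                     # reference point source

# Field u on the diffuser for a lensless Fourier transform geometry
u = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj + ref)))

# Limited-aperture imaging: circular pupil -> amplitude PSF g
pupil = (x**2 + y**2) < (0.2 * N)**2

def ccd_intensity(field):
    """One exposure: random phase screen, then imaging through the pupil."""
    phase = np.exp(1j * 2 * np.pi * rng.random(field.shape))
    spec = np.fft.fft2(field * phase)
    return np.abs(np.fft.ifft2(spec * np.fft.ifftshift(pupil)))**2

single = ccd_intensity(u)                                        # static diffuser
avg = np.mean([ccd_intensity(u) for _ in range(200)], axis=0)    # rotating diffuser

def reconstruct(holo):
    return np.abs(np.fft.fftshift(np.fft.fft2(holo - holo.mean())))

for name, holo in [("static diffuser", single), ("averaged", avg)]:
    rec = reconstruct(holo)
    image_region = rec[N//2-20:N//2+20, N//2+50:N//2+90].mean()  # +1st-order image
    background   = rec[:N//4, :N//4].mean()
    print(f"{name}: image-to-background ratio = {image_region / background:.1f}")
```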

Fig. 2 Transmission mode; object size: 5.5 cm; number of pixels used: 1500 × 1500. (a) Part of the recorded hologram. (b) Reconstruction without averaging of the speckle field. (c) Reconstructed image while the diffuser is rotating.

3.2. Reflection mode

The setup for the reflection mode is the same as in the transmission mode; only the imaging geometry is different. The experimental setup is shown in Fig. 3(a), where the camera is kept normal to the scattering surface, which in this case is an aluminum plate with a rough surface. A slight movement of the diffuser during the recording was sufficient to reduce the speckle noise. In contrast to the transmission mode, where almost all of the transmitted light reaches the CCD plane, here only a small amount of scattered light reaches the detector (as we carefully avoided the use of specularly reflected light), and the exposure time was increased to 900 ms.

Fig. 3 Reflection mode; object size: 2.7 cm; number of pixels used: 1500 × 1500. (a) Schematic diagram of the experimental setup. (b) Reconstructed image with the static diffuser. (c) Reconstructed image with temporal averaging of the speckle field.

The reconstruction process is the same as described in the previous section. The reconstructed images with the static and the moving diffuser are shown in Figs. 3(b) and 3(c), respectively. Figure 3(c) still contains some speckle because it was not averaged out completely.

4. Measurement of the deformation of objects hidden behind a diffuser

The phase of the light reflected (or transmitted) by an object changes if the object is subjected to a deformation (a path-length change). Since the methods described in this paper allow the reconstruction of the amplitude and phase of an object hidden behind a diffuser or around a corner, they can also be used to obtain its deformation.

Figure 4 shows an example in which a hologram of the object located behind a diffuser was first recorded. The object was then illuminated with an infrared lamp, producing a deformation of the surface, after which another hologram was recorded. The phase difference between the two object wavefronts recorded before and after loading contains the information about the object deformation. Phase unwrapping methods to remove the 2π phase jumps are available and could be used to retrieve the deformation from the phase. Figure 4(a) shows the reconstructed holographic image and Fig. 4(b) the fringes due to the deformation of the object.
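If the two exposures are reconstructed as complex fields, the deformation fringes of Fig. 4(b) are simply the wrapped phase of the field ratio. The following self-contained sketch illustrates the processing chain on synthetic data (a smooth phase bump standing in for the thermally induced deformation); the off-axis demodulation, carrier frequency, wavelength, and use of scikit-image's phase unwrapper are assumptions made for illustration, not details taken from our setup.

```python
import numpy as np
from skimage.restoration import unwrap_phase   # removes the 2*pi phase jumps

N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Synthetic object wavefronts before and after "loading": the deformation
# adds a smooth phase bump (all numbers are illustrative).
amp = np.exp(-(x**2 + y**2) / (2 * 60.0**2))
bump = 6.0 * np.exp(-(x**2 + y**2) / (2 * 30.0**2))   # phase change [rad]
u_before = amp.astype(complex)
u_after = amp * np.exp(1j * bump)

# Off-axis (tilted plane wave) reference used to record both holograms
ref = np.exp(1j * 2 * np.pi * 0.25 * x)

def record_and_demodulate(u_obj):
    """Record an off-axis intensity hologram and recover the complex object field."""
    holo = np.abs(u_obj + ref)**2
    demod = holo * ref                                  # shift the u_obj term to baseband
    spec = np.fft.fftshift(np.fft.fft2(demod))
    spec = spec * ((x**2 + y**2) < (0.1 * N)**2)        # low-pass: keep only that term
    return np.fft.ifft2(np.fft.ifftshift(spec))

U1 = record_and_demodulate(u_before)
U2 = record_and_demodulate(u_after)

# Wrapped phase difference: this is the fringe pattern of Fig. 4(b)
dphi_wrapped = np.angle(U2 * np.conj(U1))

# Continuous phase map after unwrapping; for near-normal illumination and
# observation the out-of-plane displacement is d = lambda * dphi / (4*pi).
dphi = unwrap_phase(dphi_wrapped)
lam = 532e-9                                            # assumed wavelength [m]
displacement = lam * dphi / (4 * np.pi)
print(f"retrieved peak phase change: {dphi.max() - dphi.min():.2f} rad")
```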

Fig. 4 (a) Reconstructed object. (b) Retrieved phase information about the deformation.

5. Dual reference holography method

In the arrangement shown in Fig. 1, the diffuser surface is imaged onto the CCD by a lens. When the imaging condition is not satisfied, the microscopic interference pattern produced on the diffuser is not resolved, and from the blurred image of the hologram it is not possible to reconstruct the wave from the object located behind the diffuser. Figure 5 shows an arrangement without the imaging lens between the diffuser and the CCD sensor. In this setup a second reference (R2) is introduced in order to holographically record the amplitude and phase of the wave transmitted by the diffuser. We thus have two holograms: the first (H1) is created on the diffuser by the interference between the object wave and reference R1, and the second (H2) is formed on the CCD by the interference between the field from the diffuser and the reference R2.

Fig. 5 (a) Object to be imaged (size: 6 mm). (b) Dual reference holography method to image the object behind the diffuser. (c) Reconstructed image.

After the recording we can propagate the wavefront (e.g., by using the Fresnel diffraction integral) to the diffuser plane and reconstruct the hologram H1, from which we obtain the object wavefront. The advantage of this setup is that the distance between the diffuser and the CCD can be chosen arbitrarily and does not need to satisfy any imaging condition, since the focusing necessary to obtain H1 is done numerically. In this case, multiple holograms are recorded by rotating the diffuser after every exposure, and the averaging is performed later over the intensities of the reconstructed images. Figure 5(a) shows a small object (a nut), Fig. 5(b) the schematic diagram of the dual reference holography method, and Fig. 5(c) the reconstructed image of the object.
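The numerical part of this procedure can be sketched as follows, assuming each recorded frame has already been demodulated with R2 into a complex field at the CCD plane (that demodulation step is not shown). The back-propagation uses the angular spectrum method, which is equivalent to the Fresnel integral in the paraxial regime; the wavelength, pixel pitch, propagation distance, and function names are placeholders rather than the values of our experiment.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field by a distance dz (dz < 0 means back-propagation)."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    fx = np.fft.fftfreq(nx, d=dx)[None, :]
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)     # evanescent waves discarded
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Placeholder recording parameters (illustrative, not the experimental values)
wavelength = 532e-9       # [m]
pixel = 7.4e-6            # CCD pixel pitch [m]
z_ccd_to_diffuser = 0.15  # [m]

def reconstruct_dual_reference(u_ccd_frames):
    """Average the image intensities reconstructed from several diffuser positions."""
    avg = None
    for u_ccd in u_ccd_frames:
        # Step 1: numerically focus the measured field back onto the diffuser plane,
        # where the hologram H1 (object wave interfering with R1) was formed.
        u_diff = angular_spectrum_propagate(u_ccd, -z_ccd_to_diffuser, wavelength, pixel)
        # Step 2: the intensity in that plane is H1; for a lensless Fourier transform
        # geometry a single Fourier transform of H1 reconstructs the object image.
        h1 = np.abs(u_diff)**2
        img = np.abs(np.fft.fftshift(np.fft.fft2(h1 - h1.mean())))**2
        # Step 3: average the reconstructed intensities over the diffuser positions.
        avg = img if avg is None else avg + img
    return avg / len(u_ccd_frames)

# Example use with hypothetical data (demodulation with R2 not shown):
# frames = [demodulate_with_R2(raw) for raw in raw_holograms]
# image = reconstruct_dual_reference(frames)
```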

6. Conclusion

We proposed methods to look through a scattering diffuser and around corners by means of digital holography; they are simple and practical, with no need for sequential scanning or iterative numerical computation for image reconstruction. The experimental results verify the validity of our methods. The object sizes used in the first method are significantly larger and more realistic than in previously proposed methods. By virtue of digital holography, these methods are also capable of sensing small deformations and micro-displacements of the object. The separation between the object and the reference can be increased by using infrared sources. The dual reference holography method reduces the complexity of the imaging system because no imaging lens is used. It also opens the possibility of imaging through multiple scattering layers in cascade by using several reference beams.

Acknowledgment

Mitsuo Takeda and Dinesh N. Naik are grateful to the Alexander von Humboldt Foundation for the opportunity of their research stay at ITO, Universität Stuttgart.

References and links

1. H. P. Baltes, Inverse Scattering Problems in Optics (Springer, 1980).

2. J. C. Dainty and D. Newman, “Detection of gratings hidden by diffusers using photon-correlation techniques,” Opt. Lett. 8(12), 608–610 (1983).

3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, “Recovering three dimensional shape around a corner using ultra-fast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).

4. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).

5. A. P. Mosk, “Imaging and focusing through turbid media,” in Proceedings of Novel Techniques in Microscopy (Hawaii, 2013).

6. A. P. Mosk, “High resolution imaging using scattered light,” in Proceedings of Digital Holography and Three-Dimensional Imaging (Hawaii, 2013).

7. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007).

8. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).

9. J. W. Goodman, W. H. Huntley, Jr., D. W. Jackson, and M. Lehmann, “Wavefront-reconstruction imaging through random media,” Appl. Phys. Lett. 8(12), 311–313 (1966).

10. H. Kogelnik and K. S. Pennington, “Holographic imaging through a random medium,” J. Opt. Soc. Am. 58(2), 273–274 (1968).

11. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

12. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company, 2007).

13. J. W. Goodman, Statistical Optics (John Wiley, 1985).
