Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM)

Open Access

Abstract

Fresnel incoherent correlation holography (FINCH) records holograms under incoherent illumination. Here, FINCH was implemented with two diffractive lenses of different focal lengths displayed on a spatial light modulator (SLM). Improved image resolution over previous single-lens systems was observed, even at wider source bandwidths. For a given image magnification and light source bandwidth, FINCH with two lenses of close focal lengths yields a better hologram than single-diffractive-lens FINCH. Three techniques of lens multiplexing on the SLM were tested, and the best method was to distribute the two lenses randomly and uniformly over the SLM pixels. The improved quality of the hologram results from a reduced optical path difference between the interfering beams and from increased efficiency.

©2012 Optical Society of America

1. Introduction

The development of spatial light modulators (SLMs) over the last three decades has boosted the technology of computerized holography. SLMs can function in different ways within holographic systems. In some systems, SLMs are used as input data displays for holographic memory storage [1]. In other systems, SLMs are used as holographic displays of computer generated holograms (CGHs) [2, 3]. An SLM can even be used as a means to synthesize CGHs by iterative algorithms [4], or it can be used as a spatial phase modulator for a phase shifting process in a digital holography recorder [5]. In the Fresnel incoherent correlation holography (FINCH) [6-13] system, which is the main topic of this study, the SLM plays three important roles. First, it splits the light coming from each object point into two separated wavefronts. This splitting is achieved by displaying two different diffractive optical elements on the SLM. Second, the phase shifting procedure, essential for in-line digital holography systems in general, and for FINCH in particular, is implemented by a phase factor in one of the diffractive lenses displayed on the SLM. Third, the SLM is used as an imaging lens which images the observed objects to planes closer to the recording camera. So far, in most of our designs of FINCH [6-11], and in others [12], one diffractive lens and a constant phase mask as the second diffractive element have been multiplexed on the SLM. However, in a recent publication [13] we have shown that maximum image resolution can also be achieved with two diffractive lenses multiplexed on the SLM. In the present study, we extend the topic of dual lens FINCH and show analytically that, for a given source bandwidth and image magnification, this design can be superior to the single diffractive lens configuration.

Before going into the technical details, let's briefly introduce the main principles of FINCH. The setup of the dual lens FINCH, shown schematically in Fig. 1, includes a collimation lens, an SLM and a digital camera. The principle of operation is as follows: incoherent light emitted from each point in the object being imaged is split by two diffractive lenses displayed on the SLM into two beams that interfere with each other. The camera records the entire interference patterns of all the beam pairs emitted from all the object points, creating a hologram. Typically, three such holograms, each with a different phase constant in the pattern of one of the diffractive lenses, are recorded sequentially and are superimposed in order to eliminate the unnecessary bias term and the twin image from the reconstructed scene. The resulting complex-valued Fresnel hologram of the 3D scene is then reconstructed on the computer screen by the standard Fresnel back propagation algorithm [14], revealing the 3D properties of the observed scene.
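
To make this recording-and-reconstruction chain concrete, the following minimal numerical sketch (not code from the original paper) superposes three phase-shifted exposures into a complex hologram using the standard three-step phase-shifting formula referred to later (cf. Eq. (5) of Ref [6]; sign conventions may differ from the paper's), and back-propagates it with a paraxial transfer-function Fresnel propagator. Array sizes, pixel pitch and function names are illustrative assumptions.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Paraxial (Fresnel) propagation over a distance z by the transfer-function method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def complex_hologram(I1, I2, I3, t=(0.0, 2*np.pi/3, 4*np.pi/3)):
    """One common form of the three-step phase-shifting superposition: combine three
    exposures recorded with SLM phase constants t[0], t[1], t[2] into a bias- and
    twin-image-free complex hologram."""
    return (I1 * (np.exp(-1j*t[2]) - np.exp(-1j*t[1]))
            + I2 * (np.exp(-1j*t[0]) - np.exp(-1j*t[2]))
            + I3 * (np.exp(-1j*t[1]) - np.exp(-1j*t[0])))

# Example: reconstruct the plane located a distance z_r from the hologram
# (the sign of z_r selects which of the two possible images is brought into focus).
# H = complex_hologram(I1, I2, I3)
# image = np.abs(fresnel_propagate(H, 650e-9, 6.5e-6, z_r))
```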

Fig. 1 FINCH with two diffractive lenses created by the SLM. L1 - collimating lens. The SLM creates two lenses with focal lengths f1 and f2, with image foci at Image 1 and Image 2, respectively.

Recently, we have demonstrated, theoretically and experimentally, improved resolution beyond the Rayleigh limit with FINCH when compared to conventional glass-lens-based incoherent and coherent imaging systems with the same numerical aperture [13]. It has recently been highlighted that reconstruction of incoherent FINCH holograms is performed using coherent reconstruction techniques [12]. Analysis of FINCH showed it is actually a hybrid between coherent and incoherent imaging systems with better modulation transfer function (MTF) and point spread function (PSF) characteristics than either system alone [13]. This feature is the reason that FINCH has properties of a super-resolution system. We also theoretically showed [13] that generalizing the pattern displayed on the SLM from a single diffractive lens to two multiplexed diffractive lenses can yield similar imaging resolution.

In this study, we implement the concept of displaying two lens patterns on an SLM and demonstrate the advantage of a larger field of view and a more compact optical system. There are many methods to multiplex two lenses on a phase-only SLM. In the experimental section of this study we compare three different methods.

2. Dual lens FINCH analysis

In the analysis herein we follow the main steps of Ref [13], but with the modified setup of a dual diffractive lens, instead of a single lens, displayed on the SLM. FINCH creates incoherent holograms in a single channel system as a result of interference between two waves originating from every object point located in front of a collimating lens. The object is illuminated by an incoherent light source, or it is an incoherent light source by itself, like for instance, in the case of fluorescent objects. Either way, it is assumed that the 3D object is a collection of quasi-monochromatic, spatially incoherent, point sources. The following analysis refers to the system scheme shown in Fig. 1, where the wave from only one infinitesimal point, out of the entire object, is treated. Therefore, the result of this analysis is considered as a PSF. For simplicity, we assume that the object point is located at r̄s = (xs, ys) on the front focal plane of the collimating lens L1. A more general analysis for other transversal planes is given in Ref [9], but for the purpose of the present study the simpler case is enough to learn about the main features of FINCH.

In reference to Fig. 1, an infinitesimal object point with the complex amplitude Is, located at (xs, ys, -fo), induces just before the lens L1, in the paraxial approximation [14], a complex amplitude of Is C(r̄s) L(r̄s/fo) Q(1/fo), where fo is the focal length of lens L1, Q designates the quadratic phase function, such that Q(b) = exp[iπbλ⁻¹(x² + y²)], L denotes the linear phase function, such that L(s̄) = exp[i2πλ⁻¹(sx·x + sy·y)], λ is the central wavelength of the light and C(r̄s) is a complex constant dependent on the location of the point source. This wave is multiplied by the transparency of the lens L1 [i.e., multiplied by Q(−1/fo)], propagates a distance d [convolved with Q(1/d)] and meets the SLM. The SLM transparency is [C1 Q(−1/f1) + C2 exp(iθ) Q(−1/f2)], where C1, C2 are constants, and the angle θ is one of the three angles used in the phase shifting procedure in order to eliminate the bias term and the twin image from the final hologram [6-9]. This SLM transparency is a combination of two diffractive positive spherical lenses of focal lengths f1 and f2. In the past, we presented two different methods to display a constant phase mask together with a lens mask on the same SLM. However, one of these methods, based on the polarization sensitivity of the SLM [9], cannot be applied to the case of multiplexing two diffractive lenses. Therefore, in the experimental part of this study we test the other multiplexing method, using random pixels on the SLM to separate the collimated beam and the spherical wave created by the SLM, and compare its performance with two new multiplexing schemes. Continuing with the description of the wave propagation in reference to Fig. 1, one can see that beyond the SLM there are two different beams propagating an additional distance zh [convolved with Q(1/zh)] until they reach the camera. On the camera detector, only the area of the beam overlap, denoted by the area of the function P(RH), is considered as part of the hologram. P(RH) stands for the limiting aperture of the system, where it is assumed that the aperture is a clear disk of radius RH determined by the overlap area of the two interfering beams on the camera plane. Finally, the magnitude of the interference is squared to yield the intensity distribution IH(u,v) of the recorded hologram, where ρ̄ = (u, v) are the coordinates of the camera plane. This description is formulated in the following equation,

$$I_H(u,v)=\left|\Big\{I_s\,C(\bar r_s)\,L\!\Big(\frac{\bar r_s}{f_o}\Big)Q\!\Big(\frac{1}{f_o}\Big)Q\!\Big(-\frac{1}{f_o}\Big)*Q\!\Big(\frac{1}{d}\Big)\Big\}\times\Big[C_1Q\!\Big(-\frac{1}{f_1}\Big)+C_2\exp(i\theta)\,Q\!\Big(-\frac{1}{f_2}\Big)\Big]*Q\!\Big(\frac{1}{z_h}\Big)P(R_H)\right|^2,\tag{1}$$
where the asterisk denotes a two-dimensional convolution.

Three holograms of the form of Eq. (1) with three different values of the angle θ are recorded and superimposed, according to Eq. (5) of Ref [6], in order to obtain a complex hologram of the object point, given by,

$$H(\bar\rho)=C'I_s\,P(R_H)\,L\!\left(\frac{\bar r_r}{z_r}\right)Q\!\left(\frac{1}{z_r}\right),\tag{2}$$
where C’ is a constant and zr is the reconstruction distance from the hologram to the image point calculated to be,
$$z_r=\pm\left|\frac{(z_h-f_1)(z_h-f_2)}{f_1-f_2}\right|.\tag{3}$$
The ± indicates that there are two possible reconstructed images, although only one of them is chosen to be reconstructed, as desired. r̄r is the transverse location of the reconstructed image point, calculated to be,
$$\bar r_r=(x_r,y_r)=\frac{\bar r_s\,z_h}{f_o}.\tag{4}$$
The precise way by which the results of Eqs. (2)-(4) are calculated from Eq. (1) is described in the appendix of Ref [10]; therefore, the detailed algebra is not repeated herein. From Eq. (4) it is clear that the transverse magnification is MT = zh/fo. The PSF of FINCH is obtained by digitally reconstructing the Fresnel hologram given in Eq. (2) at a distance zr from the hologram plane. The expression of the hologram in Eq. (2) contains a transparency of a lens with focal distance zr and hence, according to Fourier optics theory [14], the reconstructed image is

$$h_F(\bar r)=C'I_s\,\nu\!\left[\frac{1}{\lambda z_r}\right]\mathcal{F}\left\{L\!\left(\frac{\bar r_r}{z_r}\right)P(R_H)\right\}=C''I_s\,\mathrm{Jinc}\!\left(\frac{2\pi R_H}{\lambda z_r}\sqrt{(x-M_T x_s)^2+(y-M_T y_s)^2}\right),\tag{5}$$
where C″ is a constant, ℱ denotes a two-dimensional Fourier transform, ν is the scaling operator such that ν[a]f(x) = f(ax), r̄ = (x, y) are the coordinates of the reconstruction plane and Jinc is defined as Jinc(r) = J1(r)/r, where J1(r) is the Bessel function of the first kind, of order one.

The width of the PSF in every imaging system determines the resolution of the system. This width is chosen herein as the diameter of the Jinc's central lobe defined by the circle of the first zeros of the Jinc function of Eq. (5). This diameter is equal to 1.22λzr/RH in the image plane, or

$$\Delta=\frac{1.22\,\lambda z_r}{R_H M_T}=\frac{1.22\,\lambda z_r f_o}{R_H z_h}\tag{6}$$
in the object plane. The goal here is to find the values of f1 and f2 which yield the minimum PSF diameter. It is easy to prove that the minimum PSF diameter is obtained for a perfect overlap between the two cones of spherical waves, one focused at f1, before the camera plane, and the other at f2, beyond the camera plane, as shown in Fig. 1. Under the condition of perfect overlap of the two beam cones, and based on triangle similarity, it is evident that,
$$R_H=\frac{R_o(z_h-f_1)}{f_1}=\frac{R_o(f_2-z_h)}{f_2},\tag{7}$$
where Ro is the radius of the SLM. Therefore, the SLM-camera distance, zh, under the perfect overlap condition is given by,
$$z_h=\frac{2f_1f_2}{f_1+f_2}.\tag{8}$$
Substituting Eqs. (3), (7) and (8) into Eq. (6) yields that the PSF diameter at the object plane is
$$\Delta=\frac{0.61\,\lambda f_o}{R_o}=\frac{0.61\,\lambda}{\mathrm{NA}}.\tag{9}$$
This diameter value is half of the PSF diameter of a regular glass-lens imaging system with the same numerical aperture (NA). This result looks similar to the Rayleigh resolution criterion [14] for incoherent imaging systems. However, recalling that the Rayleigh criterion is defined as the radius of a PSF whose diameter is 1.22λ/NA, we realize that the resolution of the optimal FINCH is beyond the Rayleigh resolution.
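
As a compact consistency check (our own algebra, not spelled out in the original text), substituting the overlap condition of Eq. (8) into Eqs. (3) and (7) gives zr = f1f2(f2 − f1)/(f1 + f2)² and RH = Ro(f2 − f1)/(f1 + f2), so that

$$\frac{z_r}{R_H}=\frac{f_1f_2}{R_o(f_1+f_2)}=\frac{z_h}{2R_o}\;\;\Longrightarrow\;\;\Delta=\frac{1.22\,\lambda f_o}{z_h}\cdot\frac{z_r}{R_H}=\frac{0.61\,\lambda f_o}{R_o},$$

which is exactly Eq. (9).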

The diameter value of Eq. (9) is equal to the PSF diameter of the special FINCH in which f1 = zh/2 and f2 = ∞ [13]. Note that f2 = ∞ is implemented by a constant phase mask, while f1 = zh/2 is realized by the single diffractive lens, both displayed on the SLM. Practically, it is more efficient to multiplex a constant phase with a single diffractive lens rather than two diffractive lenses. This is because the constant phase is implemented by all the SLM pixels, acting on the polarization component orthogonal to the active polarization of the SLM [9], while the diffractive lens is implemented by the other polarization component, also using all the SLM pixels. In other words, this method of multiplexing by polarization guarantees that each diffractive element is implemented by all the pixels of the entire SLM. On the other hand, multiplexing two diffractive lenses on the same SLM means that different pixels of the SLM are allocated to each lens. Therefore, only half of the SLM pixels are utilized for each lens, and the consequence is some reduction of the efficiency of each lens.

Nevertheless, we find an important advantage in multiplexing two diffractive lenses rather than one diffractive lens and a constant phase mask. The dual lens system has the advantage that the maximum optical path difference (OPD) of this configuration is smaller than that of a single lens system. Moreover, this difference becomes smaller as the distance between the two focuses is reduced. Next, the FINCH configuration shown in Fig. 1 is considered as an interferometer illuminated by a narrow-band point source. According to temporal coherence theory [15,16], the visibility of the interference pattern of such an interferometer is equal to the magnitude of the complex degree of coherence. Moreover, the visibility goes to zero as the OPD between the interferometer channels increases. According to the Wiener-Khintchine theorem, the complex degree of coherence and the source spectrum are a Fourier pair [15,16]. Hence, a smaller OPD means that, for a given light source bandwidth, the hologram visibility is higher and therefore its overall quality is better. In other words, in order to get a hologram with good fringe visibility over its entire area with a light source of bandwidth Δν, the maximum OPD should satisfy the condition [15,16]

$$\delta_{\max}\le\frac{C_o}{\Delta\nu}=\frac{\lambda^2}{\Delta\lambda},\tag{10}$$
where Co is the speed of light in free space and Δλ is the source bandwidth in terms of wavelengths. Note that the maximum OPD can be minimized in the single lens FINCH by increasing the SLM-camera distance, zh. However, by increasing zh, one also increases the image magnification, decreases the field of view and decreases the recorded intensity which, in turn, decreases the signal-to-noise ratio. Therefore, in order to avoid all these negative effects, it is preferred to minimize the maximum OPD by adjusting the distance between the two focuses of the diffractive lenses displayed on the SLM.
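
As a numerical illustration with the source parameters used later in the experiments (central wavelength λ = 650nm and bandwidth Δλ = 80nm), the coherence length that bounds the allowed OPD according to Eq. (10) is only a few micrometers:

$$\delta_{\max}\le\frac{\lambda^2}{\Delta\lambda}=\frac{(650\,\mathrm{nm})^2}{80\,\mathrm{nm}}\approx 5.3\,\mu\mathrm{m}.$$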

Referring to Fig. 1, and in the case of free space, the maximum OPD, for an object point at the origin of the coordinates, is the maximum path difference between the two wavefronts starting on the SLM plane and interfering on the camera plane. Geometrically, the maximum OPD is the difference between the lengths of the line segments BH and AH, as follows:

$$\delta_{\max}=\left|\overline{BH}-\overline{AH}\right|=\sqrt{(R_o+R_H)^2+z_h^2}-\sqrt{(R_o-R_H)^2+z_h^2}.\tag{11}$$
Substituting Eqs. (7) and (8) into Eq. (11) yields,
$$\delta_{\max}=z_h\!\left[\sqrt{\left(\frac{R_o}{f_1}\right)^2+1}-\sqrt{\left(\frac{R_o}{f_2}\right)^2+1}\,\right].\tag{12}$$
Under the condition that f1, f2 >> Ro, the maximum OPD can be approximated as,
$$\delta_{\max}\approx\frac{R_o^2}{f_1f_2}\,\Delta f,\tag{13}$$
where Δf = f2 − f1. From Eq. (13) it is evident that reducing the gap between the focuses, Δf, yields a corresponding reduction of the maximum OPD. Therefore, we expect a continuous improvement in the quality of the images reconstructed from the recorded holograms as the gap between the two focuses is reduced. This is indeed the case, as demonstrated in the experimental section. In the case that one of the diffractive elements on the SLM is a constant phase mask [13], the maximum OPD is δmax ≈ Ro²/f1 = 2Ro²/zh, and it is evident that increasing the distance between the camera and the SLM, zh, is a way to reduce the maximum OPD and to improve the quality of the reconstructed images. However, as mentioned above, increasing zh has several negative effects that should be avoided.
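
The intermediate algebra between Eqs. (11) and (13), omitted above, is straightforward: Eq. (7) gives Ro + RH = Rozh/f1 and Ro − RH = Rozh/f2, which turns Eq. (11) into Eq. (12); expanding the square roots to first order for f1, f2 >> Ro and using Eq. (8) then yields Eq. (13):

$$\delta_{\max}=z_h\!\left[\sqrt{\Big(\tfrac{R_o}{f_1}\Big)^2+1}-\sqrt{\Big(\tfrac{R_o}{f_2}\Big)^2+1}\,\right]\approx z_h\,\frac{R_o^2}{2}\Big(\frac{1}{f_1^2}-\frac{1}{f_2^2}\Big)=\frac{2f_1f_2}{f_1+f_2}\cdot\frac{R_o^2\,(f_2^2-f_1^2)}{2f_1^2f_2^2}=\frac{R_o^2\,\Delta f}{f_1f_2}.$$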

The upper limit for the distance between the two focuses for achieving a high visibility hologram, for a given source bandwidth and image magnification, can be derived by substituting MT = zh/fo, zh² ≈ f1f2 and Eq. (10) into Eq. (13) to obtain:

$$\Delta f_{\max}=\frac{\delta_{\max}f_1f_2}{R_o^2}=\frac{1}{\Delta\lambda}\left(\frac{\lambda M_T f_o}{R_o}\right)^2.\tag{14}$$
Above this limit, parts of the hologram might have low fringe visibility, which means that part of the information on the image might be lost and cannot appear on the reconstructed image.

The lower limit for the distance between the two focuses is obtained from the condition that the distance between each image and the camera should be long enough to justify using the paraxial (Fresnel) approximation. If both images are too close to the camera, below the Fresnel range, the recorded hologram cannot be considered a Fresnel hologram and the analysis based on Eq. (1) is no longer valid. Based on the discussion on page 69 of Ref [14], the lower limit Δfmin, for a given system magnification and maximum object size Omax, is

$$\Delta f_{\min}=\left[\frac{\pi\,(M_T O_{\max})^4}{128\,\lambda}\right]^{1/3}.\tag{15}$$
All the above mentioned analysis is based on the assumption that FINCH is diffraction limited and that the pixel size of the camera does not limit the system resolution. This assumption is fulfilled if the finest fringe of the hologram can be correctly sampled by the camera. Referring to Fig. 1 and recalling that the finest fringe is created by the interfered beams with the largest angle difference between them, the condition that should be satisfied is
$$\tan\varphi=\tan\!\left(\tan^{-1}\frac{R_o+R_H}{z_h}+\tan^{-1}\frac{R_o-R_H}{z_h}\right)\approx\frac{2R_o}{z_h}\le\frac{\lambda}{2\delta_c},\tag{16}$$
where φ is the largest angle difference between the interfered beams in the system and δc is the camera pixel size. The approximation in Eq. (16) is valid for a small angle φ, where Ro << zh, as is satisfied in our experimental system. For a given SLM and digital camera, the only free variable is zh. Therefore, in order to keep the system as close to diffraction limited as possible, the lower limit of the distance between the SLM and the camera should be 4Roδc/λ. Another limitation on the value of zh is due to the SLM pixel size. Let us assume that the minimal focal length to be implemented by the SLM is zh/2; it is well known that the ratio between the radius of a diffractive lens and its minimal focal length is equal to the ratio between the central wavelength and the minimal cycle of the diffractive lens (i.e., Ro/fmin = 2Ro/zh = λ/(2δs), where δs is the SLM pixel size). Therefore, considering the pixel size of both devices, the minimal SLM-camera distance is,
$$z_{h,\min}=\frac{4R_o}{\lambda}\max\{\delta_c,\delta_s\}.\tag{17}$$
As mentioned above, increasing the distance zh has mostly negative effects on the hologram and therefore in general we tend to keep the distance zh as close as possible to the lower limit.

3. Experimental methods and results

The theoretical prediction that the quality of the reconstructed image is improved by narrowing the gap between the two focuses applied to the SLM was tested in the following experiments. Furthermore, we tested three different methods of lens multiplexing in order to find the best among them. The three methods tested for simultaneously applying dual Fresnel patterns to the SLM are: 1) Random pixel method - the phase values of the two quadratic functions are distributed randomly, uniformly and equally among the entire SLM pixels, where each SLM pixel contains only the value of one of the lenses. 2) Phase sum method - the quadratic phase functions of the two lenses are summed and the phase distribution of the sum is displayed on all the pixels of the SLM. 3) Random ring method - the phase values of the two quadratic functions are distributed randomly and uniformly among the SLM rings where each ring on the SLM contains the value of one of the lenses. An example of the SLM phase distribution created by the random pixel method is shown in Fig. 2(a) and 2(d). An example of the SLM phase distribution created by the phase sum method is shown in Fig. 2(b) and 2(e). An example of the SLM phase distribution created by the random ring method is shown in Fig. 2(c) and 2(f).
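
The sketch below (not code from the paper) illustrates one plausible implementation of the three multiplexing schemes. In particular, the phase sum method is interpreted here as displaying the phase of the summed complex lens transmittances, and details of the real masks (e.g., the aspect-ratio correction for the tilted SLM described below) are omitted.

```python
import numpy as np

def lens_phase(n, pitch, f, wavelength=650e-9):
    """Quadratic phase of a positive diffractive lens of focal length f,
    sampled on an n x n grid of SLM pixels of the given pitch."""
    c = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(c, c)
    return -np.pi * (X**2 + Y**2) / (wavelength * f)

def dual_lens_mask(n, pitch, f1, f2, method="random_pixel", theta=0.0, seed=0):
    """Multiplex two diffractive lenses (f1 and f2, the latter carrying the
    phase-shift constant theta) on one phase-only SLM; phases wrapped to [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    p1 = lens_phase(n, pitch, f1)
    p2 = lens_phase(n, pitch, f2) + theta
    if method == "random_pixel":            # each pixel randomly shows one of the lenses
        pick = rng.integers(0, 2, size=(n, n)).astype(bool)
        mask = np.where(pick, p1, p2)
    elif method == "phase_sum":             # phase of the sum of the two lens transmittances
        mask = np.angle(np.exp(1j * p1) + np.exp(1j * p2))
    elif method == "random_ring":           # each one-pixel-wide ring randomly shows one lens
        c = np.arange(n) - n / 2
        r = np.rint(np.hypot(*np.meshgrid(c, c))).astype(int)
        ring_pick = rng.integers(0, 2, size=r.max() + 1).astype(bool)
        mask = np.where(ring_pick[r], p1, p2)
    else:
        raise ValueError(method)
    return np.mod(mask, 2 * np.pi)

# e.g. a 1080 x 1080 mask, 8 um pixels, f1 = 45 cm, f2 = 55 cm, zero phase shift:
# m = dual_lens_mask(1080, 8e-6, 0.45, 0.55, method="random_pixel")
```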

Fig. 2 Three different methods of lens multiplexing: (a) random pixel method, (b) phase sum method and (c) random ring method. Magnified views of each pattern are shown in panels d, e and f below the complete pattern for each method. Only a single phase pattern (0°) is shown, but in practice, three phase patterns of 0°, 120° or 240° phase shift are sequentially displayed on the SLM during capture of the three holograms for each method.

The experiments were carried out on the setup shown in Fig. 3. The tested object, illuminated by white light, was a section of a resolution chart of 2.0mm × 1.0mm in size, which is a combination of a binary grating with a spatial frequency of 10 cycles per mm and the digits '10.0'. The object was placed 25cm before the collimation lens L1 with a focal length of fo = 25cm. The distance between the SLM (Holoeye Pluto, 1920 × 1080 pixels, 8μm pixel pitch, phase only) and the CMOS camera (PCO.EDGE, pixel size δc = 6.5μm, 2560 × 2160 pixel array) was zh = 49.5cm and was kept constant for all the experiments. A polarizer and a chromatic bandpass filter (BPF) with an 80nm bandwidth around a 650nm central wavelength were introduced into the setup between the object and the refractive lens. The polarizer direction was adjusted in order to get maximum modulation from the polarization-dependent SLM. The BPF was introduced in order to increase the temporal coherence and thereby improve the fringe visibility of the hologram. The SLM was placed at an angle of 77.5 degrees with respect to the optical axis, and the aspect ratio of the Fresnel patterns displayed on the SLM was adjusted in one direction for this angle so that the pattern arriving at the camera retained its spherical geometry, as if the SLM were normal to the imaging plane. Experiments were carried out with the three different methods of representing the dual lenses mentioned above. In all the experiments we used the same object and diffractive masks of 1080 × 1080 pixels displayed on the SLM. Complex FINCH holograms, each created from image captures at three different phases, were recorded for each of eight different values of the gap between the two focuses, ranging from Δf = 5.3cm (f1 = 47cm, f2 = 52.3cm) to Δf = 24.9cm (f1 = 40cm, f2 = 64.9cm). This selection of masks satisfied the complete overlap condition formulated by Eq. (8).
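
As a quick check (an illustrative calculation, not taken from the paper), combining the perfect-overlap condition of Eq. (8) with a chosen gap Δf = f2 − f1 gives f1 = [(zh − Δf) + (zh² + Δf²)^½]/2, and plugging in zh = 49.5cm reproduces the focal-length pairs quoted in the text:

```python
import numpy as np

def overlap_lens_pair(z_h, delta_f):
    """Solve Eq. (8), z_h = 2*f1*f2/(f1 + f2), together with f2 = f1 + delta_f."""
    f1 = 0.5 * ((z_h - delta_f) + np.sqrt(z_h**2 + delta_f**2))
    return f1, f1 + delta_f

for df in (0.053, 0.10, 0.249):                  # focal gaps quoted in the text (m)
    f1, f2 = overlap_lens_pair(0.495, df)        # z_h = 49.5 cm
    print(f"df = {100*df:4.1f} cm -> f1 = {100*f1:.1f} cm, f2 = {100*f2:.1f} cm")
# -> approximately (47.0, 52.3), (45.0, 55.0) and (40.0, 64.9) cm, matching the
#    lens pairs used in the experiments.
```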

Fig. 3 Experimental setup. POL - Polarizer, BPF - Bandpass filter.

For each method of lens representation, and for each focus gap, three different holograms were acquired for each dual lens combination, each hologram with a different phase angle of 0°, 120° or 240° applied to one of the lenses. The three recorded holograms for each lens combination were superimposed in the computer, as mentioned above, in order to eliminate the two unwanted terms, the bias and the twin image. The superimposed complex-valued holograms were stored in the computer and later digitally reconstructed.

Results of the best in-focus reconstructed planes, computed by the Fresnel back propagation algorithm [14], for each of the 8 Δf values and for each method, are depicted in Fig. 4. Figures 4(a), 4(b) and 4(c) show the best in-focus reconstructions of the holograms captured with the random pixel method, the phase sum method and the random ring method, respectively.

Fig. 4 Best in-focus reconstructed images from holograms captured with (a) random pixel method, (b) phase sum method and (c) random ring method.

It can be seen that, consistently, the best visual reconstruction results are obtained with the random pixel multiplexing method. It can also be seen that, with the random pixel method, there is a continuous improvement in the reconstruction quality as the focal length difference Δf of the two diffractive lenses is narrowed. This result verifies our prediction that reducing the optical path differences in the system increases the fringe visibility of the recorded holograms.

To evaluate the results, we have compared the reconstructions of each hologram with the image of the object shown in Fig. 5, taken by a conventional dual lens (refractive lens L1 and diffractive lens with f = zh = 49.5cm) imaging system with the same magnification. The mean-square error (MSE) between the best in-focus reconstructed planes from each hologram and the reference image of the object shown in Fig. 5 is used as a quantitative comparison measure. The MSE calculation is defined in the following equation:

$$\mathrm{MSE}=\frac{1}{MK}\sum_{i=1}^{M}\sum_{j=1}^{K}\left[P(i,j)-\xi\tilde P(i,j)\right]^2,\tag{18}$$
where i, j are the coordinates of each pixel, M, K are the dimensions of the considered area, P(i,j) is the reference image shown in Fig. 5, P̃(i,j) is one of the reconstructed images shown in Fig. 4, and ξ is a factor that scales the reconstructed images to minimize the MSE, given by
$$\xi=\left[\sum_{i=1}^{M}\sum_{j=1}^{K}P(i,j)\,\tilde P(i,j)\right]\bigg/\sum_{i=1}^{M}\sum_{j=1}^{K}\tilde P(i,j)\,\tilde P(i,j).\tag{19}$$
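
A direct implementation of this metric might look as follows (a sketch; the function and variable names are ours):

```python
import numpy as np

def scaled_mse(reference, reconstruction):
    """MSE of Eq. (18) between the reference image P and a reconstruction,
    after scaling the reconstruction by the least-squares factor xi of Eq. (19)."""
    P = np.asarray(reference, dtype=float)
    Pt = np.asarray(reconstruction, dtype=float)
    xi = np.sum(P * Pt) / np.sum(Pt * Pt)    # Eq. (19): the scale that minimizes the MSE
    return np.mean((P - xi * Pt) ** 2)       # Eq. (18)
```
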
All graphs of MSE versus Δf for the three multiplexing methods are shown in Fig. 6. The quality improvement is demonstrated both in the visual comparison of Fig. 4 and by the quantitative MSE given in Fig. 6. Higher quality is achieved by using the random pixel method as well as by reducing the focal length difference of the two diffractive lenses on the SLM.

Fig. 5 The reference image of the input object.

Fig. 6 MSE versus the distance between the two focuses, calculated on the images of Fig. 4 in comparison with the reference image shown in Fig. 5.

Based on the experimental parameters given above, one can calculate the limits on the values of the focus gap, Δf, and the SLM-camera distance, zh. Substituting Ro = 4.3mm, λ = 650nm, Δλ = 80nm and zh = 49.5cm into Eq. (14) yields Δfmax ≈ 7.5cm. Substituting MT ≈ 2, Omax = 2.0mm and λ = 650nm into Eq. (15) yields Δfmin ≈ 2cm. The range of our measurements was between Δf = 5.3cm and Δf = 24.9cm, and indeed, as can be seen in Figs. 4 and 6, with the random pixel method the reconstruction quality below Δf = 10cm is relatively good and is further improved as Δf is decreased. Substituting Ro = 4.3mm, λ = 650nm, δc = 6.5μm and δs = 8μm into Eq. (17) yields zh,min = 21.2cm. The value of zh = 49.5cm used in our setup ensures that our holographic system is well within the diffraction-limited regime.
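
The same plug-in can be scripted; the short sketch below recomputes these limits from Eqs. (14), (15) and (17) with the parameters quoted above (an illustration only; small deviations from the rounded values in the text are expected):

```python
import numpy as np

lam, dlam = 650e-9, 80e-9          # central wavelength and source bandwidth (m)
R_o = 4.3e-3                       # radius of the displayed SLM mask (m)
z_h, f_o = 0.495, 0.25             # SLM-camera distance and collimator focal length (m)
M_T = z_h / f_o                    # transverse magnification, about 2
O_max = 2.0e-3                     # maximum object size (m)
d_c, d_s = 6.5e-6, 8e-6            # camera and SLM pixel sizes (m)

df_max = (lam * M_T * f_o / R_o)**2 / dlam                   # Eq. (14)
df_min = (np.pi * (M_T * O_max)**4 / (128 * lam))**(1 / 3)   # Eq. (15)
zh_min = 4 * R_o * max(d_c, d_s) / lam                       # Eq. (17)
print(f"df_max ~ {100*df_max:.1f} cm, df_min ~ {100*df_min:.1f} cm, "
      f"zh_min ~ {100*zh_min:.1f} cm")
```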

In order to exclude the possibility that the poor quality of some hologram reconstructions is caused by poor images created by the corresponding diffractive lenses, we checked these images for two values of Δf. All the images, denoted in Fig. 1 as 'Image 1' and 'Image 2', were recorded while both diffractive lenses were multiplexed on the SLM by one of the three methods. The images were recorded directly by moving the camera, first to a distance f1 from the SLM and then to a distance f2 from the SLM. No holographic process is involved in this part; the aim of this measurement is to support the argument that there is no relation between the quality of the images formed in the system and the quality of the images reconstructed from the various holograms.

Figures 7(a) and 7(b) show the images captured at distances of 40cm and 64.9cm from the SLM, respectively, where the focal lengths of the diffractive lenses multiplexed on the SLM by the three methods were f1 = 40cm and f2 = 64.9cm. This value, Δf = 24.9cm, is the highest point on the graph of Fig. 6, at which the reconstruction quality is the poorest of all, for all three multiplexing methods. Nonetheless, as can be seen from Figs. 7(a) and 7(b), all the images are relatively clear and all the details in the images are revealed. How good these images are can be judged by comparing them to the images for Δf = 10cm, shown in Figs. 7(c) and 7(d), taken at distances of 45cm and 55cm from the SLM, respectively, where the focal lengths of the diffractive lenses multiplexed on the SLM were f1 = 45cm and f2 = 55cm. There is no significant difference in quality between the images of Figs. 7(a) and 7(b) and the images of Figs. 7(c) and 7(d), although the quality of the hologram reconstructions for Δf = 10cm, taken with the random pixel and random ring methods, is much better than that for Δf = 24.9cm.

Fig. 7 Images taken by the dual lens imaging system. The input lens is always L1 with a focal length of fo = 25cm; the output lens is a diffractive lens with a focal length of (a) 40cm, (b) 64.9cm, (c) 45cm and (d) 55cm. The left-hand image in each group was recorded with the random pixel method, the central image with the phase sum method, and the right-hand image with the random ring method.

The conclusion from these observations is that the quality of the hologram reconstruction is by no means related to the quality of the images created in the system during the hologram capture. Also note that there is no significant difference between the images obtained with the different multiplexing methods, indicating that the difference between the holograms recorded with the different multiplexing methods is not caused by the different images involved in the process.

4. Discussion and conclusions

We have analyzed the dual lens FINCH from two perspectives. First, three methods of multiplexing two diffractive lenses have been compared. The results indicate that, among the three methods, randomly distributing the lenses among all the pixels of the SLM is the best. Since, according to Fig. 7, all three methods yield essentially the same two main images before and behind the camera, we conclude that the inferior multiplexing methods generate additional unwanted waves which distort the interference between the two main waves related to the two images. Note that the reconstructions of the holograms taken with the random ring method are much improved for Δf values below 10cm. This indicates that, at least for this type of hologram, the disturbing waves are weakened as the Fresnel numbers of the two diffractive lenses become close to each other.

Nevertheless, we realize that theoretically the random pixel method is less efficient, in the sense that each lens is implemented by only half of the SLM pixels, and that, in practice, the ideal resolution limit given in Eq. (9) might not be achievable. This disadvantage can be eliminated by either increasing the pixel density of the SLM or introducing into FINCH an additional refractive lens between the SLM and the camera, as suggested in Fig. 3(c) of Ref [13]. The refractive lens can be placed at various locations, so several configurations with different features can be obtained. Therefore, this kind of FINCH with an additional refractive lens is too broad a topic to be analyzed in this work, and should be discussed in another publication.

The other perspective of this study is the issue of coherence. FINCH is a spatially incoherent holographic imaging system which can work well only if some level of temporal coherence exists. In the past, with the setup of a single diffractive lens and a constant phase mask, the average OPD in the system was longer than the coherence distance, and hence, in order to get a hologram with reasonable fringe visibility, we took one or both of the following actions: we narrowed the source bandwidth with a chromatic filter in order to increase the temporal coherence of the system, or we increased the SLM-camera distance in order to bring the interfering waves into the high temporal coherence regime. Both actions come at a considerable cost and, as demonstrated here, both can be avoided by use of the dual lens FINCH. The values of the SLM-camera distance and the source bandwidth, in comparison to our previous experiment [13], indicate a considerable improvement: the SLM-camera distance is about 50cm herein instead of 138cm in Ref [13], and the source bandwidth is 80nm herein instead of 10nm in Ref [13]. These improved values enable one to detect weaker radiating objects over a much wider field of view. Once the source bandwidth and the SLM-camera distance (and consequently the image magnification) are given, the upper limit on the value of the gap between the two focuses can easily be calculated from Eq. (14). Working in the range below this upper limit guarantees relatively high coherence and consequently high fringe visibility for the recorded holograms.

Acknowledgments

This work was supported by The Israel Ministry of Science and Technology (MOST) to JR and by NIST ARRA Award No. 60NANB10D008 to GB and by Celloptic, Inc.

References and links

1. J. F. Heanue, M. C. Bashaw, and L. Hesselink, “Volume holographic storage and retrieval of digital data,” Science 265(5173), 749–752 (1994).

2. F. Mok, J. Diep, H.-K. Liu, and D. Psaltis, “Real-time computer-generated hologram by means of liquid-crystal television spatial light modulator,” Opt. Lett. 11(11), 748–750 (1986).

3. J. N. Mait and G. S. Himes, “Computer-generated holograms by means of a magnetooptic spatial light modulator,” Appl. Opt. 28(22), 4879–4887 (1989).

4. J. Rosen, L. Shiv, J. Stein, and J. Shamir, “Electro-optic hologram generation on spatial light modulators,” J. Opt. Soc. Am. A 9(7), 1159–1166 (1992).

5. C.-S. Guo, Z.-Y. Rong, H.-T. Wang, Y. Wang, and L. Z. Cai, “Phase-shifting with computer-generated holograms written on a spatial light modulator,” Appl. Opt. 42(35), 6975–6979 (2003).

6. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007).

7. J. Rosen and G. Brooker, “Fluorescence incoherent color holography,” Opt. Express 15(5), 2244–2250 (2007).

8. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008).

9. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011).

10. B. Katz and J. Rosen, “Super-resolution in incoherent optical imaging using synthetic aperture with Fresnel elements,” Opt. Express 18(2), 962–972 (2010).

11. B. Katz, D. Wulich, and J. Rosen, “Optimal noise suppression in Fresnel incoherent correlation holography (FINCH) configured for maximum imaging resolution,” Appl. Opt. 49(30), 5757–5763 (2010).

12. P. Bouchal, J. Kapitán, R. Chmelík, and Z. Bouchal, “Point spread function and two-point resolution in Fresnel incoherent correlation holography,” Opt. Express 19(16), 15603–15620 (2011).

13. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011).

14. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

15. J. W. Goodman, Statistical Optics (Wiley, 1985).

16. E. Wolf, Introduction to the Theory of Coherence and Polarization of Light (Cambridge University Press, 2007).
