
Ghost imaging schemes: fast and broadband

Open Access

Abstract

In ghost imaging schemes information about an object is extracted by measuring the correlation between a beam that has passed the object and a reference beam. We present a spatial averaging technique that substantially improves the imaging bandwidth of such schemes, which implies that information about high-frequency Fourier components can be observed in the reconstructed diffraction pattern. In the many-photon regime the averaging can be done in parallel, and we show that this leads to a much faster convergence of the correlations. We also consider the reconstruction of the object image, and discuss the differences between a pixel-like detector and a bucket detector in the object arm. Finally, it is shown how spatial filtering of a reconstructed image can be performed non-locally. The results are presented using entangled beams created by parametric down-conversion, but they are general and extend also to the important case of classically correlated thermal-like beams.

©2004 Optical Society of America

1. Introduction

Lately a substantial effort has been put into understanding the physics behind “ghost” imaging [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], also called entangled imaging, two-photon imaging, coincidence imaging, or correlated imaging. The ghost imaging schemes are based on two correlated beams typically originating from parametric down conversion (PDC). One beam travels a path (the test arm) containing an unknown object, while the other beam is sent through a reference optical system (the reference arm). Information about the object is then obtained by measuring the spatial correlation between the beams. This two-arm configuration allows for obtaining different kinds of information by solely operating on the reference arm optical setup while keeping the test arm fixed. In particular, both the image (near-field distribution) and the diffraction pattern (far-field distribution) of the object can be measured [9].

The early studies concentrated on the working principles of the ghost imaging scheme, both in terms of basic formalism [1, 2] and of experimental implementation [3, 4, 5]. A general theory has been developed that discusses how to extract the information, not only in the coincidence counting regime [6, 7, 8], but also in the high-gain regime [9, 10, 11, 12], where recent experimental results showed nonclassical spatial correlations for high-gain PDC [14], which bodes well for an efficient quantum ghost imaging protocol. More recent discussions have instead focused on whether entanglement is necessary for extracting the information; see [6, 7, 8, 9, 10, 11, 13] and references therein. The present paper focuses on how to optimize any source, entangled or not, to give as much image information as possible. The issues of imaging bandwidth and image resolution have been taken up previously [9, 10, 11, 12, 13]. Even in an ideal ghost imaging system the imaging bandwidth is limited by the source generating the correlated beams, characterized by a finite extension of the spatial gain in Fourier space δq source: beyond this value the Fourier frequencies are cut off and this information is lost in the reconstruction of the object diffraction pattern. The image resolution is instead limited by the coherence length x coh, which is given by the characteristic width of the near-field correlation function.

The question is: can we increase the source cutoff value? In the case of PDC the cutoff is determined by the phase-matching conditions [15, 10, 12], while in the case of a classically correlated field generated from chaotic thermal radiation the bandwidth is roughly given by the inverse of the speckle size found from the self-interference of the near field [16] (see also [10, 13]). Thus, the cutoff is an inherent property of the source. However, in this paper we discuss a spatial averaging technique which circumvents this cutoff and improves the imaging bandwidth of the system, regardless of how the correlated beams are created. The spatial average is performed over the position of a pixel-like detector located in the test arm after the object, exploiting the fact that each position of this detector gives access to a different part of the diffraction pattern through the correlations. Thus, by averaging over all possible positions of the test detector it is possible to substantially extend the imaging bandwidth of the scheme.

The spatial averaging technique works particularly well in the high-gain regime where many photon pairs are generated in each pump pulse. Thus many photons per pulse are transmitted by the object in the test arm (in contrast, in the low-gain coincidence counting regime only one photon at a time impinges on the object, and either it is transmitted or it is not), and at the measurement plane they are scattered over the entire transverse plane. The information about the object is then extracted by spatially correlating the intensities recorded in the test and reference arms. In the low-gain regime this means registering coincidences, while in the high-gain regime the correlations are built up by sampling over repeated shots of the pump pulse. Since the spatial averaging technique employs an average over the position of the test detector, within a single shot we can get information about the image from all the test detector positions in the transverse plane containing photons. This implies that in the high-gain regime a much faster convergence rate is obtained using the spatial averaging technique.

The spatial averaging technique was already introduced in Ref. [12] in the case where homodyne detection was used. Since the homodyne measurements give access to orthogonal quadratures, an increased image resolution can be obtained by using the spatial averaging technique to measure the diffraction pattern and then applying an inverse Fourier transform to obtain the image. In the present work we only have access to the field intensity, so we cannot use this method to get an increased image resolution. However, we may use a bucket detector in the test arm when we want to observe the image, which was not possible in the homodyne scheme due to phase-control problems, and this turns out to make the imaging system incoherent. We discuss the benefits and disadvantages in this respect. We will also extend the discussion of [12] and give a more detailed and general picture of why the spatial averaging technique works, and of how much the convergence rate can be improved.

The majority of the results of this paper hold for ghost imaging schemes in general; however, in the following we use a model of PDC as the source of the correlated beams.

The paper starts by presenting the analytical results in Sec. 2, where the spatial averaging scheme is introduced and discussed. The numerical results are contained in Sec. 3; they go beyond the approximations made in the analytical section and validate the results using realistic parameters of current experiments. The paper is summarized in Sec. 4.

2. The model and analytical results

We consider the PDC model for the three-wave quantum interaction inside a χ(2) nonlinear crystal previously discussed in detail in Refs. [12, 15]. The crystal of length l c is cut for type II phase matching, and the model consists of a set of operator equations describing the evolution inside the crystal of the quantum mechanical boson operators a_j(x, z, t) for the signal (j=1) and idler (j=2) fields, which at a given z obey the commutator relations $[a_i(z,\mathbf{x},t),a_j^\dagger(z,\mathbf{x}',t')]=\delta_{ij}\,\delta(\mathbf{x}-\mathbf{x}')\,\delta(t-t')$, i, j=1, 2. In the stationary and plane-wave pump approximation (SPWPA) unitary input-output transformations can be written relating the field operators in q and Ω space at the output face of the crystal, $b_j(\mathbf{q},\Omega)\equiv a_j(z=l_c,\mathbf{q},\Omega)$, to those at the input face, $a_j^{\rm in}(\mathbf{q},\Omega)\equiv a_j(z=0,\mathbf{q},\Omega)$, as follows

$$ b_j(\mathbf{q},\Omega)=U_j(\mathbf{q},\Omega)\,a_j^{\rm in}(\mathbf{q},\Omega)+V_j(\mathbf{q},\Omega)\,a_k^{\rm in\,\dagger}(-\mathbf{q},-\Omega),\qquad j\neq k=1,2. \tag{1} $$

The gain functions Uj and Vj can for example be found in [12, 15].

The system setup we consider is the same as in Refs. [9, 10, 11, 12] and has the characteristic two-arm configuration of the ghost imaging scheme, see Fig. 1. The signal and idler exiting from the crystal are separated with a polarizing beam splitter (PBS). The object is located in the test (signal) arm immediately after the PBS, and the transmitted field then propagates through an f-f lens system: a lens placed at a distance f from the object as well as from the measurement plane, where D 1 (an array of pixels or a CCD camera) is recording the intensity of the field. Thus D 1 is in the focal plane of the lens, but it is important to stress that it cannot on its own measure the interference pattern of the object because the signal beam itself is incoherent. This test arm setup is kept fixed, while in the reference (idler) arm two different configurations are used. In Fig. 1(a) the reference arm also contains an f-f lens system, and by recording the idler field intensity with D 2 (another CCD camera) and correlating it with the signal intensity, information about the object diffraction pattern can be extracted from the correlations. Conversely, in Fig. 1(b) a so-called telescope setup is used consisting of two lenses with focal length f placed 4 f from each other and 2 f from the PBS and D 2, respectively. This setup allows for retrieval of the object image distribution from the correlations.

Fig. 1. System setup. In (a) the setup for reconstructing the diffraction pattern of the object is shown, where the reference arm is in the f-f setup. In (b) the reference arm is changed to a telescope setup, which is used to reconstruct the object near field. PBS: polarizing beam splitter. L: lens of focal length f. T obj: object. D 1, D 2: arrays of detectors.

The analytical results for the correlations in the two cases of Fig. 1 were derived in Refs. [9, 10], and we review them in the following. For simplicity we neglect for the moment the temporal argument, which corresponds to using a narrow frequency filter after the crystal. The propagation of the fields from the PBS to the measurement planes is described by the kernels $h_j(\mathbf{x}_j,\mathbf{x}_j')$. The fields at the measurement planes, $c_j(\mathbf{x}_j)$, are then found from the Fresnel transformation $c_j(\mathbf{x}_j)=\int d\mathbf{x}_j'\,h_j(\mathbf{x}_j,\mathbf{x}_j')\,b_j(\mathbf{x}_j')+L_j(\mathbf{x}_j)$, where $L_j(\mathbf{x}_j)$ represents the losses in the propagation; it is proportional to vacuum field operators and is uncorrelated with $b_j(\mathbf{x}_j')$. The field intensities at the measurement planes, $I_j(\mathbf{x}_j)=c_j^\dagger(\mathbf{x}_j)c_j(\mathbf{x}_j)$, are detected with D j, and result in the correlation $\langle I_1(\mathbf{x}_1)I_2(\mathbf{x}_2)\rangle=\langle c_1^\dagger(\mathbf{x}_1)c_1(\mathbf{x}_1)\,c_2^\dagger(\mathbf{x}_2)c_2(\mathbf{x}_2)\rangle$. The information about the object is obtained by subtracting the background term, which leaves the intensity fluctuation correlation

$$ G(\mathbf{x}_1,\mathbf{x}_2)=\langle I_1(\mathbf{x}_1)I_2(\mathbf{x}_2)\rangle-\langle I_1(\mathbf{x}_1)\rangle\langle I_2(\mathbf{x}_2)\rangle. \tag{2} $$
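In practice (cf. Sec. 3, where the correlations are built up over repeated pump shots), Eq. (2) is estimated by averaging the product of the recorded intensity fluctuations over the shots. The following is a minimal numpy sketch of such an estimator, not the authors' code; the array names and shapes (n_shots shots on two 1D pixel arrays of N pixels) are assumptions for illustration.

```python
import numpy as np

def correlation_G(I1_shots, I2_shots):
    """Estimate G(x1, x2) = <I1(x1) I2(x2)> - <I1(x1)><I2(x2)> [Eq. (2)]
    from repeated pump shots.

    I1_shots, I2_shots: real arrays of shape (n_shots, N) holding the
    intensities recorded by D1 and D2 (1D pixel arrays) in each shot.
    Returns an (N, N) array G[x1, x2]."""
    dI1 = I1_shots - I1_shots.mean(axis=0)   # intensity fluctuations, test arm
    dI2 = I2_shots - I2_shots.mean(axis=0)   # intensity fluctuations, reference arm
    return dI1.T @ dI2 / I1_shots.shape[0]   # shot average of dI1(x1) dI2(x2)
```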

It is then straightforward to show the following essential result [9, 10]

$$ G(\mathbf{x}_1,\mathbf{x}_2)=\left|\int d\mathbf{x}_1'\,d\mathbf{x}_2'\;h_1(\mathbf{x}_1,\mathbf{x}_1')\,h_2(\mathbf{x}_2,\mathbf{x}_2')\,\Gamma(\mathbf{x}_1'-\mathbf{x}_2',\Omega=0)\right|^2, \tag{3} $$

where $\Gamma(\mathbf{x}_1,\mathbf{x}_2,\Omega)$ is the near-field correlation at the crystal exit; in the SPWPA, and with $a_j^{\rm in}$ in the vacuum state, it is found from Eq. (1) as

$$ \Gamma(\mathbf{x}_1,\mathbf{x}_2,\Omega)\equiv\int d\tau\,e^{i\Omega\tau}\,\langle b_1(\mathbf{x}_1,t_1)\,b_2(\mathbf{x}_2,t_1+\tau)\rangle=\int\frac{d\mathbf{q}}{(2\pi)^2}\,e^{i\mathbf{q}\cdot(\mathbf{x}_1-\mathbf{x}_2)}\,\gamma(\mathbf{q},\Omega), \tag{4} $$

where we have introduced the gain function γ(q,Ω)=U 1(q,Ω)V 2(-q,-Ω). Since Eq. (4) depends only on x 1-x 2 (because of the SPWPA) we will use the notation Γ(x 1-x 2,Ω)=Γ(x 1,x 2,Ω) in the following.
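For completeness, here is a sketch of this step (using Eq. (1), the commutators quoted above, and vacuum inputs): the only nonvanishing signal-idler moment of the output operators is

$$ \langle b_1(\mathbf{q},\Omega)\,b_2(\mathbf{q}',\Omega')\rangle = U_1(\mathbf{q},\Omega)\,V_2(\mathbf{q}',\Omega')\,\langle a_1^{\rm in}(\mathbf{q},\Omega)\,a_1^{\rm in\,\dagger}(-\mathbf{q}',-\Omega')\rangle = \gamma(\mathbf{q},\Omega)\,\delta(\mathbf{q}+\mathbf{q}')\,\delta(\Omega+\Omega'), $$

and transforming this back to the transverse coordinates at the crystal output face gives the single q-integral of Eq. (4).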

We should mention that when the temporal argument is taken into account, Eq. (3) is no longer valid. We consider the intensities averaged over the detection time $T_D$, $I_j(\mathbf{x}_j)=T_D^{-1}\int_{T_D}dt\,c_j^\dagger(\mathbf{x}_j,t)\,c_j(\mathbf{x}_j,t)$. When $T_D$ is much larger than the coherence time of the source τ coh (which for PDC is typically the case) the following expression holds instead [11]

$$ G(\mathbf{x}_1,\mathbf{x}_2)=\frac{1}{T_D}\int\frac{d\Omega}{2\pi}\left|\int d\mathbf{x}_1'\,d\mathbf{x}_2'\;h_1(\mathbf{x}_1,\mathbf{x}_1')\,h_2(\mathbf{x}_2,\mathbf{x}_2')\,\Gamma(\mathbf{x}_1'-\mathbf{x}_2',\Omega)\right|^2. \tag{5} $$

This form will be used later for comparing with the numerics.

We fix the test arm once and for all as shown in Fig. 1, so $h_1(\mathbf{x}_1,\mathbf{x}_1')\propto e^{-i\frac{k}{f}\mathbf{x}_1\cdot\mathbf{x}_1'}\,T_{\rm obj}(\mathbf{x}_1')$, where k is the free-space wave number of the degenerate down-converted fields. To extract information about the diffraction pattern the reference arm is set in the f-f setup of Fig. 1(a), $h_2(\mathbf{x}_2,\mathbf{x}_2')\propto e^{-i\frac{k}{f}\mathbf{x}_2\cdot\mathbf{x}_2'}$, and Eq. (3) then straightforwardly implies the correlation [9, 10, 11]

$$ G_f(\mathbf{x}_1,\mathbf{x}_2)\propto\left|\gamma\!\left(-\frac{k}{f}\mathbf{x}_2,\Omega=0\right)\tilde{T}_{\rm obj}\!\left[\frac{k}{f}(\mathbf{x}_1+\mathbf{x}_2)\right]\right|^2, \tag{6} $$

where $\tilde{T}_{\rm obj}(\mathbf{q})=\int\frac{d\mathbf{x}}{2\pi}\,e^{-i\mathbf{q}\cdot\mathbf{x}}\,T_{\rm obj}(\mathbf{x})$ is the amplitude of the object diffraction pattern. The subscript “f” denotes that the reference arm is in the f-f configuration. The correlation provides information about the diffraction pattern of the object if we fix x 1 and scan x 2, but since the gain also depends on x 2 there is a limit to the information we can extract. Precisely, the gain γ(-x 2 k/f,Ω=0) introduces a cutoff of the reproduced spatial Fourier frequencies at a certain characteristic value, which we denote δq PDC, the imaging bandwidth of the PDC source.
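As a consistency check of Eq. (6) (a sketch only, using the kernels and sign conventions written above), inserting Eq. (4) into Eq. (3) the x 2′ integral produces a delta function that fixes q, and the x 1′ integral then builds the Fourier transform of the object:

$$ \int d\mathbf{x}_2'\,e^{-i\frac{k}{f}\mathbf{x}_2\cdot\mathbf{x}_2'}\,e^{-i\mathbf{q}\cdot\mathbf{x}_2'}\propto\delta\!\Big(\mathbf{q}+\frac{k}{f}\mathbf{x}_2\Big),\qquad \int d\mathbf{x}_1'\,T_{\rm obj}(\mathbf{x}_1')\,e^{-i\frac{k}{f}(\mathbf{x}_1+\mathbf{x}_2)\cdot\mathbf{x}_1'}\propto\tilde T_{\rm obj}\!\Big[\frac{k}{f}(\mathbf{x}_1+\mathbf{x}_2)\Big], $$

so that the surviving gain factor is γ(-x 2 k/f,Ω=0), which together give Eq. (6).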

We will now show how to circumvent this source-related limitation on the imaging bandwidth. As previously shown [12] we may in a suitable way average over the position of the test arm detector x 1: first, a change of variables is introduced as x ≡ x 1+x 2, and then an average over x 1 is performed. We then obtain

$$ G_{f,\rm SA}(\mathbf{x})\equiv\int d\mathbf{x}_1\,G_f(\mathbf{x}_1,\mathbf{x}-\mathbf{x}_1)\propto\int d\mathbf{x}_1\left|\gamma\!\left[\frac{k}{f}(\mathbf{x}_1-\mathbf{x}),\Omega=0\right]\tilde{T}_{\rm obj}\!\left(\frac{k}{f}\mathbf{x}\right)\right|^2 $$
$$ =\left|\tilde{T}_{\rm obj}\!\left(\frac{k}{f}\mathbf{x}\right)\right|^2\int d\mathbf{x}_1\left|\gamma\!\left[\frac{k}{f}(\mathbf{x}_1-\mathbf{x}),\Omega=0\right]\right|^2\simeq{\rm const}\times\left|\tilde{T}_{\rm obj}\!\left(\frac{k}{f}\mathbf{x}\right)\right|^2, \tag{7} $$

where the subscript “SA” indicates that a spatial average has been carried out. The final approximation in Eq. (7) relies on $|\gamma(\mathbf{q},\Omega=0)|^2$ being confined well inside the detection range of x 1, so that the integral evaluates to a constant independent of x. Thus, there is now no gain cutoff of the diffraction pattern, and the imaging bandwidth becomes practically infinite (apart from limitations arising from the finite size of the optical elements).1 Note that this average over x 1 does not correspond to D 1 being a bucket detector. Instead, the change of variables x ≡ x 1+x 2 implies that the spatial averaging technique is a convolution of the signal and idler intensity fluctuations, which in practice is easily calculated using fast Fourier transform techniques.
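The following is a minimal numerical sketch, not the authors' code, of how this FFT-based convolution could be carried out on recorded or simulated intensities; it assumes 1D pixel arrays of equal size on D 1 and D 2 and periodic boundaries, as in the simulations of Sec. 3.

```python
import numpy as np

def spatially_averaged_correlation(I1_shots, I2_shots):
    """Estimate G_f,SA(x) of Eq. (7), i.e. the sum over x1 of G_f(x1, x - x1),
    by circularly convolving the per-shot intensity fluctuations of the two
    arms with FFTs and then averaging over the shots.

    I1_shots, I2_shots: arrays of shape (n_shots, N)."""
    dI1 = I1_shots - I1_shots.mean(axis=0)           # fluctuations on D1
    dI2 = I2_shots - I2_shots.mean(axis=0)           # fluctuations on D2
    # circular convolution along the pixel axis:
    # conv[s, x] = sum_x1 dI1[s, x1] * dI2[s, x - x1]
    conv = np.fft.ifft(np.fft.fft(dI1, axis=1) * np.fft.fft(dI2, axis=1), axis=1).real
    return conv.mean(axis=0)                         # average over pump shots
```

Because each shot already contains contributions from all occupied x 1 positions, a single shot contributes many independent terms to the average; this is the convergence speedup quantified by Eq. (8) below.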

Fig. 2. The spatial averaging technique explained by the Klyshko picture. The lower sketch shows the unfolded version of Fig. 1(a). The red line represents the ray from the detector at position $\mathbf{x}_1^0$ as it travels through the system and reaches its conjugate point in the reference plane on the right side. The upper plot shows the source gain curve in Fourier space and the position of a typical diffraction pattern for the current value of $\mathbf{x}_1^0$; only the black part is amplified while the gray part is not amplified. As the test detector position is changed to $\mathbf{x}_1^0+\Delta\mathbf{x}$ the dashed red ray comes into play, and the diffraction pattern in the upper plot moves a corresponding amount (also dashed). The gain instead remains the same, and hence a different part of the diffraction pattern is amplified. (Movie size 449 KB).

In Fig. 2 we give an intuitive explanation of why the bandwidth is extended by averaging over x 1, based on the Klyshko picture [1] (see also [17]). This states that the propagation in the two distinct arms can be viewed in an “unfolded” scheme having the nonlinear crystal as the hinge point of the signal and idler rays, i.e. the crystal can be treated as a geometrical reflection mirror. This is a consequence of momentum conservation in the PDC process, so a signal ray with momentum q implies an idler ray with momentum -q. Thus, we may treat the scheme starting from the signal detector, with the signal ray passing backwards in time through the test arm until it encounters the crystal and becomes the idler ray traveling forward in time. The point is that the action of the crystal is to fix the idler ray propagation direction through momentum conservation, and in the unfolded version of Fig. 2 the idler ray is a straight-line continuation of the signal ray. According to the Klyshko picture the detector in the test arm acts as a point-like source emitting a spherical wave from x 1. This wave is transformed by the f-f lens system into a tilted plane wave with transverse wave vector q=-x 1 k/f, which is then diffracted upon impinging on the object. Inside the crystal the diffracted signal ray is “converted” into the idler ray with a “transmission” function γ(q,Ω=0) determined by the phase-matching conditions. More precisely, this transmission function amplifies certain components of the signal ray and damps others (gray area in Fig. 2), and this is the origin of the infamous cutoff. Therefore, as the idler ray propagates forward through the f-f lens system in the reference arm, we can with D 2 only detect a bandwidth-limited version of the diffraction pattern, $|\gamma(-\frac{k}{f}\mathbf{x}_2,\Omega=0)\,\tilde T_{\rm obj}[\frac{k}{f}(\mathbf{x}_2+\mathbf{x}_1)]|^2$. The spatial averaging technique exploits the fact that as we move to a new detector position $\mathbf{x}_1^0+\Delta\mathbf{x}$ the center of the diffraction pattern moves correspondingly to $\mathbf{x}_2=-(\mathbf{x}_1^0+\Delta\mathbf{x})$, while the gain remains fixed [due to the structure of Eq. (6)]. Thus, by changing the test detector position, we measure a different part of the spatial Fourier spectrum, as shown in Fig. 2. Therefore, by performing a suitable average as described in Eq. (7) we cover the entire Fourier plane, and effectively the bandwidth has become unlimited.

Additionally, the spatial averaging technique gives an increased convergence rate of the correlation. This is again related to the fact that when the test detector position is changed from $\mathbf{x}_1^0$ to $\mathbf{x}_1^0+\Delta\mathbf{x}$ the gain does not change position but the diffraction pattern does. Assuming that the PDC gain extension is much larger than the extension of a pixel, the shifted diffraction pattern at $\mathbf{x}_1^0+\Delta\mathbf{x}$ overlaps quite substantially with the previous one. Thus, as x 1 is scanned, a given position of the diffraction pattern has as many independent contributions as there are independent modes inside the gain bandwidth, which to a good estimate is given by the ratio of the PDC bandwidth δq PDC to the pump bandwidth δq p=2/w 0 [15], where w 0 is the pump waist. A measure of the speedup in the correlation convergence in each transverse dimension is therefore given by

$$ \rho_{\rm SA}=\frac{\delta q_{\rm PDC}}{\delta q_p}. \tag{8} $$

In the simulations shown later concerning the convergence rate, we used a temporal grid that corresponds to a 22 nm interference filter. In that case δq PDC≃6q 0, where $q_0=\sqrt{k/l_c}$ is the natural bandwidth of PDC at a given temporal frequency [15]. The pump size in the numerics was chosen so that δq p=q 0/18 (corresponding to a pump size of 600 µm), implying that we should expect a convergence rate speedup of ρ SA≃10².
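For concreteness, inserting these numbers into Eq. (8) gives, per transverse dimension,

$$ \rho_{\rm SA}=\frac{\delta q_{\rm PDC}}{\delta q_p}\simeq\frac{6\,q_0}{q_0/18}=108\approx 10^2, $$

and in 2D each of the two transverse dimensions contributes such a factor (cf. the 1D vs. 2D comparison in Sec. 3).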

Note that instead of fixing x 1 and scanning x 2, we may scan x 1 and fix x 2. In this case Eq. (6) reveals that the gain no longer limits the imaging bandwidth, and therefore it is in principle not necessary to do a spatial average to overcome the gain limitation. The physical explanation is again found from the Klyshko picture: fixing x 2 at a suitable position (i.e. at the position of maximum gain, cf. Fig. 2), scanning x 1 will move the diffraction pattern seen at this position over the whole range, giving an unlimited imaging bandwidth. However, scanning x 1 and fixing x 2 does not benefit from the improved convergence. To achieve this a spatial average must be carried out, whereby the method amounts to the same operation as described previously.

Let us now turn to the problem of reconstructing the object image. Keeping the test arm fixed and changing the reference arm to the telescope setup [see Fig. 1(b)], $h_2(\mathbf{x}_2,\mathbf{x}_2')=\delta(\mathbf{x}_2-\mathbf{x}_2')$ and Eq. (3) becomes [9, 10, 11]

$$ G_T(\mathbf{x}_1,\mathbf{x}_2)\propto\left|\int d\mathbf{x}_1'\,\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega=0)\,T_{\rm obj}(\mathbf{x}_1')\,e^{-i\frac{k}{f}\mathbf{x}_1\cdot\mathbf{x}_1'}\right|^2 \tag{9} $$
$$ \simeq\left|T_{\rm obj}(\mathbf{x}_2)\right|^2\left|\gamma\!\left(\frac{k}{f}\mathbf{x}_1,\Omega=0\right)\right|^2. \tag{10} $$

Thus, fixing x 1 the object can be observed by scanning x 2. The approximation that leads to Eq. (10) holds as long as the smallest length scale of the object is larger than the coherence length x coh of $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega=0)$. This is because $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega=0)$ is nonzero in a region of size x coh around $\mathbf{x}_1'=\mathbf{x}_2$, so that T obj changes slowly with respect to this function and may consequently be pulled out of the integration. In general, Eq. (9) is a convolution between the correlation function and the object, and hence the width x coh of the near-field correlation function $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega=0)$ determines the resolution of the reconstructed image; it typically has a value of x coh=1/q 0.

In reconstructing the image, a technique corresponding to the spatial average used for the diffraction pattern would be desirable, since it would result in a largely increased image resolution. Unfortunately, it is not possible to carry out such a spatial average here. However, if we make a simple sum over all the positions of D 1 (corresponding to D 1 being a bucket detector), instead of Eq. (9) we have the following exact expression2

$$ \bar{G}_T(\mathbf{x}_2)\equiv\int d\mathbf{x}_1\,G_T(\mathbf{x}_1,\mathbf{x}_2)\propto\int d\mathbf{x}_1'\,\left|\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega=0)\right|^2\left|T_{\rm obj}(\mathbf{x}_1')\right|^2, \tag{11} $$

where the bar denotes that D 1 is a bucket detector and therefore that x 1 has been integrated out. Eq. (11) has the form of an incoherent imaging scheme, while Eq. (9) has the form of a coherent imaging scheme (the same conclusion, that using a bucket detector can make the imaging incoherent, was reached in Ref. [8] in the low-gain regime). As we shall see later, the incoherent form has both advantages and drawbacks compared to the coherent case.
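At the level of data processing, the bucket-detector estimator of Eq. (11) is particularly simple: the fluctuation of the total intensity on D 1 is correlated with each pixel of D 2. Summing Eq. (2) over x 1 and exchanging sum and shot average shows that this is the same quantity as Eq. (11) up to discretization. A minimal sketch, not the authors' code, with hypothetical array names and shapes:

```python
import numpy as np

def bucket_ghost_image(I1_shots, I2_shots):
    """Estimate the incoherent image of Eq. (11): the fluctuation of the
    *total* intensity on D1 (bucket detector) is correlated with the
    intensity fluctuation on each pixel of D2.

    I1_shots: (n_shots, N1) intensities on D1; I2_shots: (n_shots, N2) on D2."""
    bucket = I1_shots.sum(axis=1)                  # total counts on D1, shot by shot
    d_bucket = bucket - bucket.mean()              # bucket fluctuations
    dI2 = I2_shots - I2_shots.mean(axis=0)         # pixel-wise fluctuations on D2
    return d_bucket @ dI2 / I1_shots.shape[0]      # <dN1 dI2(x2)> over the shots
```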

3. Numerical examples

The imaging performance of the system was checked through numerical simulations of the model described in detail in Ref. [15]. It includes spatial and temporal dispersion up to second order, as well as a Gaussian shape in space and time of the pump beam. The simulations with 1 transverse dimension (1D) were calculated including the temporal argument (using a grid of N x=512 and N t=64), while the ones with 2 transverse dimensions (2D) had a more qualitative nature since they neglected the time argument (a grid of N x=N y=256 was used).3 Each pump shot was propagated along the crystal in N z=200 steps. The Gaussian pump profile had a waist w 0=600 µm and a duration of τ 0=1.5 ps, and the other parameters were, as in [12, 15], chosen to closely mimic those of an l c=4 mm long BBO crystal used in a current experiment in Como [14]. The characteristic space and time units of the PDC source are x coh=1/q 0≃16 µm and τ coh=1/Ω 0≃0.96 ps. The pulsed character of the pump is important because the CCD cameras typically used in experiments have very slow response times. If the measurement time becomes too long with respect to τ coh, the visibility of the correlation becomes very poor (see also Refs. [6, 10, 11]). This only affects the case when the intensity is detected in the high-gain regime, i.e. when the background term in Eq. (2) is substantial, while it does not affect the coincidence counting regime or the case when homodyne measurements are performed as in Ref. [12]. Note that in the telescope setup the performance was optimized by taking the imaging plane of the telescope inside the crystal by the amount Δz=-(1/n 1+1/n 2) tanh(σ p l c)/σ p [12, 15], where n j are the refractive indices and σ p is a gain parameter. In this way the quadratic phase dependence of the gain is cancelled. For more technical details on this and the numerics we refer to Refs. [12, 15].

Let us show quantitatively how the object diffraction pattern is reconstructed by using a double slit as the object in the setup of Fig. 1(a). The correlations as calculated in 1D, both with and without the spatial average, are presented in Fig. 3(a)–(b). The correlation without spatial average in (a) clearly suffers from a limited bandwidth, and from the animation a slow convergence rate is evident. In contrast, the correlation with spatial average in (b) is able to reproduce the entire spectrum of the diffraction pattern, and after far fewer repeated pump shots. To check the convergence rate, we have calculated the root-mean-square error of the correlation $G_f^n$ (the correlation after averaging over n shots) relative to the analytically calculated correlation G f. G f was calculated using semi-analytical methods, including the analytical SPWPA gain as well as integrating over time [see Eq. (5)]. The error is then evaluated as

$$ \varepsilon_f(n)=\left(\sum_{\mathbf{x}}\left|G_f^n(\mathbf{x})-G_f(\mathbf{x})\right|^2\right)^{1/2}, \tag{12} $$

where at each shot the maximum of $G_f^n$ is rescaled to the maximum of G f. Figure 3(c) shows ε(n), and clearly the fixed-detector case (dashed) converges much more slowly than the spatial average (full). The red curves are fits to the function

$$ \varepsilon_{\rm fit}(n)=(d_0\,n)^{-1/2}+d_1, \tag{13} $$

with n being the number of shots. The factor d 0 has the dimension of shot⁻¹, and the ratio between the d 0 values obtained using the spatial average and using the fixed detector should then give an indication of the increased convergence rate. From the fits of Fig. 3(c), $d_0^{\rm SA,1D}/d_0^{\rm fixed,1D}\simeq125$, which corresponds well to the value ρ SA≃10² estimated in the previous section. The factor d 1 indicates the error remaining after convergence, but in this case the correlations have not converged sufficiently within the number of shots shown. Now, according to what was predicted in Eq. (8), using the spatial average in 2D should benefit from an additional factor of ρ SA in convergence speedup compared to using the spatial average in the 1D case (due to the extra dimension to average over). In Fig. 3(c) the result of such a 2D simulation is also shown,4 and the improved convergence rate of the 2D simulation is evident: from the fits $d_0^{\rm SA,2D}/d_0^{\rm SA,1D}\simeq73$, again corresponding well to the predicted estimate of ρ SA≃10². A minor problem in this comparison is that the 2D results saturate very quickly at a rather high error level of 1%, while the 1D results go as low as 0.1%. This difference turned out to be caused by the Gaussian shape of the pump field and the object extension: the object is very localized in the x direction, and therefore we get a low error in the 1D case; in the 2D case, however, the object is quite extended in the y direction, and the average error reported in the ε value is therefore higher. This quick saturation in 2D made it difficult to fit the data well to Eq. (13), so the d 0 value obtained should be taken cautiously. However, there is no doubt about the main point: going from 1D to 2D the spatial averaging technique speeds up further.
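For reference, the convergence analysis of Eqs. (12) and (13) could be reproduced along the following lines; this is a sketch with hypothetical array names, not the code used to produce Fig. 3(c).

```python
import numpy as np
from scipy.optimize import curve_fit

def convergence_error(G_running, G_ref):
    """Eq. (12): RMS error between the correlation averaged over the first
    n shots and the (semi-analytical) reference, after rescaling the running
    maximum to the reference maximum as described in the text.

    G_running: (n_shots, N) array, row n holding the correlation after n+1 shots.
    G_ref: (N,) reference correlation. Returns eps(n) as an (n_shots,) array."""
    scale = G_ref.max() / G_running.max(axis=1, keepdims=True)
    return np.sqrt((np.abs(G_running * scale - G_ref) ** 2).sum(axis=1))

def fit_convergence(n, eps):
    """Fit eps(n) to Eq. (13), eps_fit(n) = (d0*n)**(-1/2) + d1."""
    model = lambda n, d0, d1: (d0 * n) ** -0.5 + d1
    (d0, d1), _ = curve_fit(model, n, eps, p0=(1.0, eps.min()))
    return d0, d1
```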

Fig. 3. Reconstructing the diffraction pattern using the f-f setup in the reference arm. (a) shows the correlation from a fixed x 1 after averaging over 10000 shots, while (b) shows the correlation using the spatial averaging technique and averaging over 1000 shots. In (a) and (b) black curves are numerics, while dashed red curves are $|\tilde T_{\rm obj}|^2$. In (a) the blue curve is the analytically calculated correlation in the SPWPA in arbitrary units [Eq. (6) including integration over time, as shown in Eq. (5)]. (c) shows ε(n) from Eq. (12) in a double-logarithmic plot using the spatial averaging technique (full) and a fixed detector (dashed), while the red curves are fits to Eq. (13). Double slit parameters: aperture 14 µm (5 pixels) and aperture center distance 87 µm (30 pixels). [Movie sizes: (a) 1 MB and (b) 933 KB].

Fig. 4. Reconstructing the diffraction pattern of the object shown in (a), which has spatial Fourier components both inside and outside the PDC bandwidth. The real part of the inverse Fourier transform of the reconstructed diffraction pattern using the f-f setup is shown with (b) fixed x 1 and (c) using the spatial averaging technique. 35000 shots were used.

Let us show an example where the reconstructed diffraction pattern using the f-f setup leads to completely different results depending on whether the spatial average is applied or not. The object is shown in Fig. 4(a) and is a 2D mask T obj(x)=[1+cos(xq 0)][1+cos(3yq 0)]/4, consisting of 4 main sidebands, 2 located at q/q 0=±e x and 2 located at q/q 0=±3e y. We have chosen the peaks in the y-direction so that they lie outside the imaging bandwidth of the source, while the peaks in the x-direction lie inside the bandwidth. In the numerics the diffraction pattern was reconstructed from the correlation with x 1 fixed as well as by employing the spatial averaging technique. Instead of showing these data, we have in Fig. 4(b) and (c) shown the results after taking the inverse Fourier transform (IFT) of the reconstructed diffraction patterns. In this way we can reconstruct to some extent the nature of the near-field mask (some phase information is lost, obviously, but in this simple case the lost phase information is not so important). Figure 4(b) shows Re(IFT[G f(x 1,x 2)]), i.e. without spatial average, and only the roll pattern in the x-direction is reproduced. Instead, the correct object is reproduced by Re(IFT[G f,SA(x)]) obtained using the spatial average, as shown in Fig. 4(c).

The imaging system with the reference arm set in the f-f setup is able to reconstruct the diffraction pattern of a pure phase object,5 both with and without the spatial averaging technique applied (the latter was already demonstrated in Ref. [10]). The chosen object [see Fig. 5(a)] was a pure phase object modulated by a Gaussian, so as to avoid a cosmetically disturbing DC peak [the analytical Fourier transform is shown in Fig. 5(b)]. An imaging scheme that is unable to reconstruct phase information would therefore only show the Fourier transform of the Gaussian, which is another Gaussian. Figure 5(c) shows the result of a numerical simulation in 2D where the spatial averaging technique was used, and it closely follows the analytical diffraction pattern of the object in Fig. 5(b), confirming that phase information is preserved.

Fig. 5. The reconstructed diffraction pattern of a phase object with the spatial averaging technique. (a) shows Re(T obj), consisting of 4×4 square holes (with the value T obj=-1 inside and T obj=+1 outside the holes) modulated by a Gaussian. (b) analytical $|\tilde T_{\rm obj}|^2$. (c) G f,SA after 2000 shots.

The image of the object can be reconstructed by merely changing the setup in the reference arm to the telescope setup [see Fig. 1(b)]. Figure 6 compares using (a) a pixel-like detector and (b) a bucket detector in the test arm for this case. In both cases the double slit is reproduced, albeit with the characteristic blurring of the sharp contours of the double slit apertures because of the finite resolution. In fact, the aperture size of the double slit (14 µm) is at the limit of the system’s resolution x coh≃16 µm. The convergence rates as observed in the animations seem identical, which is confirmed by the error plot on the right [showing the image version of Eq. (12), i.e. $\varepsilon_T(n)=\big(\sum_{\mathbf{x}}|G_T^n(\mathbf{x})-G_T(\mathbf{x})|^2\big)^{1/2}$].

From the semi-analytical calculations an interesting observation about the resolution abilities of the imaging scheme emerged when varying the interference filter after the PDC crystal, which filters the frequency range [-δΩ,δΩ]. In the case considered in the analytical section δΩ=0, which in practice corresponds to a <1 nm interference filter. The numerics with time used δΩ=40Ω 0, i.e. a 22 nm filter. Figure 7(a) shows the case where D 1 is a pixel-like detector; the result remains more or less unchanged as the interference filter changes, and no noticeable difference is observed between the 22 nm and 100 nm filters. In contrast, when D 1 is a bucket detector the result is much more sensitive; as shown in Fig. 7(b), the narrow interference filter gives the best resolution, while taking a broader filter gives a deteriorating resolution. The explanations for these different behaviours are found by considering the two explicit expressions for the correlations integrated over time. Namely, from Eq. (5) we find, instead of Eqs. (9) and (11),

$$ G_T(\mathbf{x}_1,\mathbf{x}_2)\propto\int d\Omega\left|\int d\mathbf{x}_1'\,\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)\,T_{\rm obj}(\mathbf{x}_1')\,e^{-i\frac{k}{f}\mathbf{x}_1\cdot\mathbf{x}_1'}\right|^2,\qquad\text{coherent (pixel-like }D_1), \tag{14} $$
$$ \bar G_T(\mathbf{x}_2)\propto\int d\Omega\int d\mathbf{x}_1'\,\left|\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)\right|^2\left|T_{\rm obj}(\mathbf{x}_1')\right|^2,\qquad\text{incoherent (bucket }D_1). \tag{15} $$

Thus, in the coherent case $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ acts at each Ω as an imaging kernel, which is shown in Fig. 7(d) for different values of Ω. As Ω increases $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ becomes broader and the sidebands become pronounced. This can be understood by noting that Eq. (4) expresses $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ as the inverse Fourier transform of γ(q,Ω), which is shown in Fig. 7(c): as Ω increases γ(q,Ω) becomes double-peaked, giving the sideband oscillations. All these contributions should then be added up as the integration in Ω is carried out. It is therefore important to notice that $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ becomes weaker as Ω is increased. Therefore, despite the fact that the broader sidebands in $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ imply a deteriorating resolution, the reduced peak value of $\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)$ implies that the end result is affected less and less as Ω is increased. Thus, the coherent nature of the imaging explains why the image does not change much as the bandwidth is increased in Fig. 7(a). In the bucket detector case the situation is completely different. Instead of having many contributions from different Ω, the incoherence of the imaging system implies that the temporal integration defines a new imaging kernel as

$$ \Gamma_B(\mathbf{x}_1'-\mathbf{x}_2)\equiv\int\frac{d\Omega}{2\pi}\left|\Gamma(\mathbf{x}_1'-\mathbf{x}_2,\Omega)\right|^2, \tag{16} $$

so that $\bar G_T(\mathbf{x}_2)\propto\int d\mathbf{x}_1'\,\Gamma_B(\mathbf{x}_1'-\mathbf{x}_2)\,|T_{\rm obj}(\mathbf{x}_1')|^2$. Thus, $\Gamma_B(\mathbf{x}_1'-\mathbf{x}_2)$ uniquely determines the resolution of the incoherent imaging system, and from Fig. 7(e) we see that it becomes broader as the frequency bandwidth is increased, implying a decrease in resolution, which is exactly what was seen in Fig. 7(b). We should stress that this result is a particular consequence of the PDC phase-matching conditions and does therefore not necessarily apply to ghost imaging schemes using other sources for the correlated beams.
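To make Eq. (16) concrete, the incoherent kernel can be built numerically from a sampled gain function γ(q,Ω) by combining Eq. (4) (an inverse Fourier transform over q) with the frequency integration. The following 1D sketch is not the authors' code; uniform grids, a centered (fftshifted) ordering of the arrays, and the array names are assumptions.

```python
import numpy as np

def incoherent_kernel(gamma, q, omega):
    """Compute Γ_B(x) of Eq. (16) from a sampled gain γ(q, Ω).

    gamma: complex array of shape (Nq, NΩ) sampling γ(q, Ω) on the uniform,
    centered grids q and omega (only the frequencies passed by the filter).
    Returns Γ_B on the x grid conjugate to q."""
    dq, dOmega = q[1] - q[0], omega[1] - omega[0]
    # Eq. (4): Γ(x, Ω) is the inverse Fourier transform of γ(q, Ω) over q
    Gamma = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(gamma, axes=0), axis=0), axes=0)
    Gamma *= dq * gamma.shape[0] / (2 * np.pi)   # continuum normalization of the q integral
    # Eq. (16): integrate |Γ(x, Ω)|² over the filtered temporal frequencies
    return (np.abs(Gamma) ** 2).sum(axis=1) * dOmega / (2 * np.pi)
```

Feeding in gains sampled over increasingly wide frequency ranges reproduces the trend of Fig. 7(e): a broader filter gives a broader Γ_B and hence a poorer resolution.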

Fig. 6. The reconstructed image using the telescope setup with (a) D 1 a pixel-like detector and (b) D 1 a bucket detector. The black curves are numerics, the dashed red curves are T obj, while the blue curves are the analytically calculated correlations in the SPWPA in arbitrary units [Eqs. (14) and (15)]. (c) shows ε T(n) (full: bucket detector; dashed: pixel detector), and the red curves are fits to Eq. (13). The same parameters and notation as in Fig. 3. [Movie sizes: (a) 748 KB and (b) 1.25 MB].

In 2D the advantage of using the bucket detector for the image reconstruction becomes more obvious. As an example of a reasonably complicated object, we have in Fig. 8 reconstructed a mask with the letters “INFM”. The difference between the pixel-detector case (a) and the bucket-detector case (b) is again related to coherent vs. incoherent imaging.6 The semi-analytical calculations in (d) in the SPWPA confirm this; whereas the pixel-detector result suffers from a speckle-like effect [16], this effect is absent in the bucket-detector result. Note also the different results obtained with a narrow and a broad bandwidth filter, as discussed in the previous paragraph [the numerics in Fig. 8(a)–(b) are without time, so they should be compared to the <1 nm filter results of Fig. 8(d)]. Finally, (c) shows ε(n), and it is evident that the bucket-detector case (full lines) converges faster than the pixel-detector case (dashed lines); from the fits we obtain $d_0^{\rm bucket}/d_0^{\rm pixel}\simeq20$. This speedup in convergence is a consequence of the bucket detector D 1 making a spatial average over the field at the test arm detection plane, and the magnitude of the speedup is therefore related to the number of independent modes recorded by D 1. In 1D this number is much smaller than in 2D, which is why no significant speedup was observed in the 1D case (Fig. 6).

Fig. 7. The reconstructed image from semi-analytical calculations with (a) D 1 a pixel-like detector and (b) D 1 a bucket detector, using different interference filters (different temporal bandwidths). The blue curves are semi-analytical calculations in the SPWPA in arbitrary units [Eqs. (14) and (15)]. (c) shows |γ(q x,Ω)| for the chosen PDC setup. (d) shows |Γ(x 1-x 2,Ω)| for different values of Ω. (e) shows Γ B(x 1-x 2) for different interference filters.

If filters are inserted in the reference arm the reconstructed object can be manipulated. As a simple example of this, using the telescope setup in the reference arm we have inserted a filter in the focal plane of the first lens [see Fig. 9(a)]. The object in the test arm was a square pattern T obj(x)=[1+cos(3xq 0/2)][1+cos(3yq 0/2)]/4, consisting of 4 main sidebands, 2 located at q/q 0=±(3/2)e x and 2 located at q/q 0=±(3/2)e y. Filtering in the y-direction we observe only rolls in the x-direction [Fig. 9(b)], while with the filter removed the square pattern is seen in the correlations [Fig. 9(c)]. This shows that the image Fourier components can be manipulated non-locally.

4. Conclusion

The ghost imaging schemes rely on two spatially correlated beams. These are generated by a source with a limited spatial bandwidth in Fourier space, which determines the highest Fourier component in the diffraction pattern of the object that can be reproduced (i.e., the imaging bandwidth is limited). In turn, the resolution of the reconstructed image is limited by the characteristic near-field coherence length of the source.

Fig. 8. The reconstructed near field of a mask with the letters “INFM” using the telescope setup. In the numerical simulations of (a) D 1 was a pixel-like detector, while in (b) D 1 was a bucket detector. (c) shows the convergence rate of the two cases (same notation as in Fig. 6). 10⁵ shots were used, as well as a larger pump waist than usual (corresponding to 800 µm). (d) shows semi-analytical calculations without time (<1 nm filter) and with time (22 nm filter).

Fig. 9. Non-local filtering using the telescope setup with a bucket detector in the test arm. The setup (a) was as in Fig. 1(b) but with a spatial filter inserted in the focal plane of the first lens in the reference arm. (b) shows the correlation using the filter, while (c) shows the correlation without the filter. 15000 shots were used.

In this paper we have analyzed in detail, through theory and numerics, a technique (already presented by us in [12] for a different imaging scheme) that improves the imaging bandwidth by implementing a spatial average over the test arm detector position. This technique makes the imaging bandwidth of the reconstructed diffraction pattern virtually infinite (apart from limitations arising from lenses and apertures), as well as making the correlations converge much faster. We showed that the speed-up of this convergence is related to the number of statistically independent modes inside the source bandwidth. When reconstructing the object image, no improvement of the image resolution could be obtained by doing a spatial average. However, by merely using a bucket detector collecting all photons in the test arm, the imaging becomes incoherent. Whether coherent or incoherent imaging is advantageous obviously depends on the problem at hand [16], but we showed examples where the speckle problem of coherent imaging could be avoided using a bucket detector. On the other hand, when using a bucket detector it turned out to be important to use a narrow-band interference filter on the down-converted beams, since otherwise a strong degradation in resolution was observed. Finally, we also showed that by inserting a filter in the focal plane of a lens in the reference arm, the reconstructed image could be non-locally filtered; the filter never interferes with the field containing the object information, but through the correlations the image is filtered nonetheless.

We have in this paper chosen to focus on an imaging scheme using spatially entangled PDC beams. However, we should stress that practically all the results shown are general for any imaging system, quantum or classical, and are therefore relevant also for the important case when classical spatially correlated beams are created by splitting thermal-like radiation on a beam splitter [10, 13]. In that case, e.g., the near-field correlation function (4) is instead governed by the second-order correlation of the thermal field [10], but the overall results of this paper remain the same. Thus, the spatial averaging technique may still be used to extend the bandwidth and speed up the convergence. In this connection it should be stressed that with thermal-like beams the reconstructed object image has exactly the same properties as when using PDC beams: it is coherent if one uses a pixel-like detector, while it becomes incoherent if a bucket detector is used.

Acknowledgments

This project has been carried out in the framework of the FET project QUANTIM of the EU, of the PRIN project MIUR “Theoretical study of novel devices based on quantum entanglement”, and of the INTAS project “Non-classical light in quantum imaging and continuous variable quantum channels”. M.B. acknowledges financial support from the Danish Technical Research Council (STVF) and the Carlsberg Foundation.

Footnotes

1. We should mention that when x is taken to the boundary of the detection range, the integral is no longer a constant. Thus at the boundaries the bandwidth slowly dies out, but it is a quite small effect, only affecting a range of δq PDC there. In our numerical simulations we use periodic boundary conditions, so this limit does not come into play.
2. The expression is exact because no approximations have been made about object length scales vs. x coh.
3. This was done to save CPU time, since many thousands of repeated pump shots were needed for the correlations to converge, and corresponds to using a narrow-bandwidth interference filter after the crystal. In fact, the temporal grid acts just as an interference filter by providing a cutoff in frequency space. In the simulations with time the cutoff was chosen so it corresponds to having a 22 nm interference filter after the BBO crystal, while the simulations neglecting time are equivalent to placing a <1 nm interference filter after the crystal.
4. This was a full 3+1D simulation with N x=256, N y=128, N z=200 and N t=32. The slits had the same extension in the x-direction as in the 1D case and were 580 µm (90 pixels) long in the y direction.
5. Ref. [18] shows an experimental implementation of imaging the diffraction pattern of a phase object in the coincidence counting regime.
6. The curvature of the reproduced images is due to the Gaussian profile of the signal field impinging on the object.

References and links

1. D. N. Klyshko, “Effect of focusing on photon correlation in parametric light scattering,” Zh. Eksp. Teor. Fiz. 94, 82–90 (1988) [Sov. Phys. JETP 67, 1131–1135 (1988)].

2. A. V. Belinskii and D. N. Klyshko, “2-photon optics - diffraction, holography and transformation of 2-dimensional signals,” Zh. Eksp. Teor. Fiz. 105, 487–493 (1994) [Sov. Phys. JETP 78, 259–262 (1994)].

3. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, “Observation of two-photon ‘ghost’ interference and diffraction,” Phys. Rev. Lett. 74, 3600–3603 (1995).

4. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995).

5. P. H. Souto Ribeiro, S. Padua, J. C. Machado da Silva, and G. A. Barbosa, “Controlling the degree of visibility of Young’s fringes with photon coincidence measurements,” Phys. Rev. A 49, 4176–4179 (1994).

6. B. E. A. Saleh, A. F. Abouraddy, A. V. Sergienko, and M. C. Teich, “Duality between partial coherence and partial entanglement,” Phys. Rev. A 62, 043816 (2000).

7. A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, “Role of entanglement in two-photon imaging,” Phys. Rev. Lett. 87, 123602 (2001).

8. A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, “Entangled-photon Fourier optics,” J. Opt. Soc. Am. B 19, 1174–1184 (2002).

9. A. Gatti, E. Brambilla, and L. A. Lugiato, “Entangled imaging and wave-particle duality: from the microscopic to the macroscopic realm,” Phys. Rev. Lett. 90, 133603 (2003).

10. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70, 013802 (2004); “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93, 093602 (2004).

11. A. Gatti, E. Brambilla, and L. A. Lugiato, “Correlated imaging with entangled light beams,” in Quantum Communications and Quantum Imaging, R. E. Meyers and Y. Shih, eds., Proc. SPIE 5161, 192–203 (2004).

12. M. Bache, E. Brambilla, A. Gatti, and L. A. Lugiato, “Ghost imaging using homodyne detection,” Phys. Rev. A 70, 023823 (2004).

13. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost imaging and ghost diffraction experiments with classical thermal light,” submitted to Phys. Rev. Lett., quant-ph/0408021 (2004).

14. O. Jedrkiewicz, Y. Jiang, E. Brambilla, A. Gatti, M. Bache, L. A. Lugiato, and P. Di Trapani, “Detection of sub-shot-noise spatial correlation in high-gain parametric down-conversion,” to appear in Phys. Rev. Lett., quant-ph/0407211 (2004).

15. E. Brambilla, A. Gatti, M. Bache, and L. A. Lugiato, “Simultaneous near-field and far-field spatial quantum correlations in the high-gain regime of parametric down-conversion,” Phys. Rev. A 69, 023802 (2004).

16. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968).

17. E.-K. Tan, J. Jeffers, S. M. Barnett, and D. T. Pegg, “Retrodictive states and two-photon quantum imaging,” Eur. Phys. J. D 22, 495–499 (2003).

18. A. F. Abouraddy, P. R. Stone, A. V. Sergienko, B. E. A. Saleh, and M. C. Teich, “Entangled-photon imaging of a pure phase object,” Phys. Rev. Lett. 93, 213903 (2004).

