
Quantifying high-dimensional spatial entanglement with a single-photon-sensitive time-stamping camera


Abstract

High-dimensional entanglement is a promising resource for quantum technologies. Being able to certify it for any quantum state is essential. However, to date, experimental entanglement certification methods are imperfect and leave some loopholes open. Using a single-photon-sensitive time-stamping camera, we quantify high-dimensional spatial entanglement by collecting all output modes and without background subtraction, two critical steps on the route toward assumptions-free entanglement certification. We show position-momentum Einstein–Podolsky–Rosen (EPR) correlations and quantify the entanglement of formation of our source to be larger than 2.8 along both transverse spatial axes, indicating a dimension higher than 14. Our work overcomes important challenges in photonic entanglement quantification and paves the way toward the development of practical quantum information processing protocols based on high-dimensional entanglement.

© 2023 Optica Publishing Group

High-dimensional entangled states offer several advantages over qubits. They have a greater information capacity [1], improved computational power [2], and increased resistance to noise and losses [3]. They are also good candidates for device-independent quantum communication protocols [4]. In particular, pairs of photons entangled in their transverse spatial degree of freedom possess such a high-dimensional structure. When harnessed in the continuous bases of position and momentum [5], these states enable many applications, such as quantum key distribution [6], continuous-variable quantum computation [7], and quantum imaging [8].

To take full advantage of high-dimensional entanglement, it is essential to certify its presence using as few measurements as possible and without making any assumptions. The most common experimental method consists of analyzing the quantum state sequentially using single-outcome projective measurements [9]. In particular, this method has been used to measure high-dimensional entanglement through different approaches, such as measuring Einstein–Podolsky–Rosen (EPR)-type correlations [10,11], compressive sensing [12], quantum steering [13], or detection of photons in mutually unbiased bases (MUBs) [14].

However, this approach has major shortcomings. First, it is time consuming. For example, for a bipartite state with local dimension $d$, it requires performing at least $2 d^2$ measurements (e.g., using two MUBs), making the task impractical in high dimensions and effectively limiting the key rate in quantum communication scenarios [15]. Second, it necessarily leaves the fair-sampling loophole open; for the latter to be closed, one must ensure that all possible output states are measured simultaneously [16].
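To give a sense of this scaling, the following minimal sketch (illustrative dimensions only, not values taken from the experiment) tabulates the minimum number of single-outcome settings required when probing two MUBs as the local dimension grows:

```python
# Minimum number of single-outcome projective settings when measuring a
# bipartite state of local dimension d in two mutually unbiased bases.
for d in (2, 16, 64, 256):
    print(f"d = {d:3d}  ->  at least {2 * d**2:6d} measurement settings")
# d =   2  ->  at least      8 measurement settings
# d = 256  ->  at least 131072 measurement settings
```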

Due to the limitations of single-outcome projective measurements, researchers are exploring new approaches to detect all outputs simultaneously rather than measuring them one at a time. One promising path is to use single-photon-sensitive cameras to detect photons in all spatial modes in parallel. High-dimensional spatial entanglement was thus recently measured using electron-multiplying charge-coupled-device (EMCCD) cameras [17–20], intensified scientific complementary metal-oxide-semiconductor (i-sCMOS) cameras [21,22], and single-photon avalanche diode (SPAD)–CMOS sensor cameras [23,24]. However, all of these methods require post-processing of the recorded data, which necessarily opens a loophole in the certification protocol [25]. Specifically, none of the results reported in Refs. [17–24] were sufficient to violate an entanglement criterion without modeling and subtracting accidental coincidences.

The presence of such a high rate of accidental coincidences is due to the technical imperfections of these camera technologies, such as their low temporal resolution (EMCCD), poor quantum efficiency (SPAD array), or the presence of noise (CMOS) [8]. In our work, we quantify high-dimensional spatial entanglement without post-processing the data using a recently developed single-photon-sensitive time-stamping camera [26]. We achieve violation of an EPR-type criterion and measure an entanglement of formation (EoF) of $2.81(3)$ and $3.02(3)$ along the $x$ and $y$ transverse spatial axes, respectively, thus showing an entanglement dimension higher than $14$.

Figure 1(a) shows the experimental setup. Spatially entangled photon pairs are produced via type-I spontaneous parametric downconversion (SPDC) in a $\beta$-barium-borate (BBO) crystal using a $405$-nm continuous-wave pump laser. After magnification, a beam splitter divides the optical path into two paths imaged onto two separate parts of a single-photon-sensitive time-stamping camera. Two distinct imaging configurations are used. To detect photons in the momentum basis, the Fourier plane of the crystal surface is imaged onto the camera by the lens $f_4$ [far-field (FF) configuration]. To detect photons in the position basis, we insert the lens $f_3$ to form a $4f$ telescope (i.e., $f_3$ and $f_4$) and image the output surface of the crystal onto the camera [near-field (NF) configuration]. Figures 1(b) and 1(c) show intensity images measured by the camera using the FF and NF configurations, respectively.
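For orientation, the standard thin-lens Fourier-imaging relations (textbook optics, not a calibration quoted in this work) connect detected pixel coordinates to the measured variables: in the FF configuration a lens of focal length $f$ maps transverse momentum to position in its back focal plane, while in the NF configuration the $4f$ telescope images the crystal plane with a magnification set by the focal-length ratio,
$$X_{\mathrm{cam}} \simeq \frac{\lambda f}{2\pi}\,k_x \quad (\text{FF, momentum basis}), \qquad M_{\mathrm{NF}} = -\frac{f_4}{f_3} = -\frac{150\ \mathrm{mm}}{30\ \mathrm{mm}} = -5 \quad (\text{NF, position basis}).$$
Any magnification between the crystal and the Fourier lens (here the $\times 2$ imaging onto plane $P$) rescales these relations accordingly.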


Fig. 1. Experimental setup. (a) A horizontally polarized, collimated, continuous-wave $405$-nm laser diode illuminates a $0.5$-mm-long $\beta$-barium-borate (BBO) crystal to produce spatially entangled photon pairs by type-I spontaneous parametric downconversion (SPDC). The pump laser power is $50$ mW. The crystal surface is imaged and magnified ($\times 2$) onto an intermediate optical plane $P$ (dashed line) by the lenses $f_1$ and $f_2$. A beam splitter (BS) separates the photon-pair beam into two beams that are re-positioned side by side using a half-wave plate at $45^\circ$ and a polarizing beam splitter (PBS). In the far-field (FF) configuration, the crystal is Fourier imaged onto the camera by lens $f_4$. In the near-field (NF) configuration, the crystal surface is imaged onto the camera using lenses $f_3=30$ mm and $f_4=150$ mm. The camera is composed of an image intensifier and a Tpx3Cam time-stamping camera. Long-pass (LPF) and bandpass (BPF) filters block the pump after the crystal and select near-degenerate photon pairs at $810 \pm 5$ nm. (b) and (c) Intensity images measured on the camera in the FF and NF configurations, respectively. For clarity, the spatial positions with an index “1” correspond to pixels on the left side of the camera and those with an index “2” to pixels on the right side. Spatial axes are in pixels and intensity is in photon counts. The acquisition time is $200$ seconds. The losses in the experiment are mainly due to the quantum efficiency of the image intensifier, which is approximately $0.2$.


A central element in our experiment is the single-photon-sensitive time-stamping camera. It consists of a Tpx3Cam camera (Amsterdam Scientific Instruments) and an image intensifier (Photonis). In contrast to CCD and CMOS cameras, which acquire images frame by frame, the Tpx3Cam is an event-driven camera that time-stamps single photons continuously. After an acquisition and post-processing of the raw data with a centroiding algorithm [27], a list of all recorded single photons together with their time-stamps and spatial coordinates is formed. The list is then filtered using a pairing algorithm to keep only coincidence events, i.e., pairs of photons detected within a $6$-ns time window (see Supplement 1 for more details). Similar approaches were used in recent quantum and correlation imaging experiments [27–32]. In our work, we use it to retrieve the full spatial two-photon joint probability distribution (JPD) of the detected photons. In the FF configuration, it is denoted $\Gamma (k_{x_1},k_{y_1},k_{x_2},k_{y_2})$ and represents the joint probability of detecting a photon at spatial position $(k_{x_1},k_{y_1})$ (left half of the camera) and a photon at position $(k_{x_2},k_{y_2})$ (right half). In the NF configuration, $\Gamma ({x_1},{y_1},{x_2},{y_2})$ is the joint probability of detecting a photon at position $({x_1},{y_1})$ (left half) and a photon at position $({x_2},{y_2})$ (right half).
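As an illustration of this processing chain, the following minimal Python sketch pairs a time-sorted list of single-photon events within the $6$-ns window and histograms the resulting pairs into a JPD. The function names, the greedy nearest-neighbor pairing, and the sensor geometry (a split at column `split_col`, coordinates already cropped to the region of interest) are assumptions for illustration; this is not the authors' actual pipeline, which is described in Supplement 1.

```python
import numpy as np

def find_coincidences(t_ns, x, y, window_ns=6.0, split_col=128):
    """Pair consecutive time-sorted photon events closer than `window_ns`,
    keeping only pairs with one photon on each half of the sensor."""
    order = np.argsort(t_ns)
    t, x, y = t_ns[order], x[order], y[order]
    pairs = []
    for i in range(len(t) - 1):
        if t[i + 1] - t[i] > window_ns:
            continue                                  # not coincident in time
        a, b = (i, i + 1) if x[i] < x[i + 1] else (i + 1, i)
        if x[a] < split_col <= x[b]:                  # one photon on each half
            pairs.append((x[a], y[a], x[b] - split_col, y[b]))
    return np.asarray(pairs, dtype=int)

def build_jpd(pairs, size=64):
    """Histogram coincidences into the 4D joint distribution Gamma(x1,y1,x2,y2).
    Assumes all coordinates lie in [0, size)."""
    jpd = np.zeros((size, size, size, size))
    for x1, y1, x2, y2 in pairs:
        jpd[x1, y1, x2, y2] += 1
    return jpd / jpd.sum()                            # normalize to probabilities
```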

Figures 2(a)–2(h) show bi-dimensional projections of the measured JPDs. In the momentum basis (FF configuration), the JPD projections in Figs. 2(b)–2(d) show that the photons are anti-correlated on the camera, i.e., when a photon is detected at position $(k_{x_2},k_{y_2})$, its twin is detected with high probability at $(k_{x_1},k_{y_1}) = (-k_{x_2},-k_{y_2})$. In the position basis (NF), we observe that the photons are strongly correlated, i.e., they are always detected next to each other [Figs. 2(f)–2(h)]. Because the position and momentum bases are mutually unbiased, the existence of strong correlations in both suggests the presence of spatial entanglement [33]. Formally, its presence can be demonstrated using separability criteria. In our work, we use the EPR-Reid criterion [34] based on the inequality

$$\Delta_{min}[x]\Delta_{min}[k_x] \geq \frac{1}{2},$$
where $\Delta _{min}[x]$ and $\Delta _{min}[k_x]$ are the minimum inferred uncertainties for position and momentum measurements, respectively. The same inequality also holds for the $y$ axis after substituting $x \rightarrow y$ and $k_x \rightarrow k_y$. The uncertainties are obtained from measurable quantities through the following definition:
$$\Delta^{2}_{min}[x] = \int d x_2\, \Gamma_m(x_2)\, \Delta^2[x_1|x_2],$$
where $\Delta [x_1|x_2]$ is the uncertainty in detecting a photon at $x_1$ conditioned on a detection at $x_2$, and $\Gamma _m(x_2)$ is the marginal probability of detecting a photon at $x_2$. The same definition applies for $\Delta _{min}[y]$, $\Delta _{min}[k_x]$, and $\Delta _{min}[k_y]$. Conditional uncertainties are estimated from conditional probability distributions, such as those shown in Figs. 2(b) and 2(f), by Gaussian fitting [35]. All uncertainty values are reported in Table 1. In particular, we find $\Delta _{min}[x] \Delta _{min}[k_x] = 0.0333(6)$ and $\Delta _{min}[y] \Delta _{min}[k_y] = 0.0366(6)$, showing a clear violation of the inequality in Eq. (1) along both spatial axes. This demonstrates the presence of EPR correlations, and thus spatial entanglement, in the detected quantum state.
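A minimal numerical sketch of Eqs. (1) and (2) is given below, assuming a two-dimensional histogram $\Gamma(x_1,x_2)$ obtained by projecting the measured JPD onto one transverse axis. For brevity it uses the raw conditional variances rather than the Gaussian fits employed in this work; the coordinate array must be in the physical units of the corresponding basis (position or momentum).

```python
import numpy as np

def min_inferred_variance(gamma_x1x2, coords):
    """Eq. (2): average over x2 of the variance of x1 conditioned on x2,
    weighted by the marginal probability Gamma_m(x2)."""
    p = gamma_x1x2 / gamma_x1x2.sum()        # normalized joint distribution
    marg_x2 = p.sum(axis=0)                  # Gamma_m(x2)
    var_min = 0.0
    for j, p2 in enumerate(marg_x2):
        if p2 == 0:
            continue
        cond = p[:, j] / p2                  # Gamma(x1 | x2)
        mean = np.sum(coords * cond)
        var_min += p2 * np.sum(cond * (coords - mean) ** 2)
    return var_min

# EPR-Reid criterion, Eq. (1): entanglement is detected when the product
# of the inferred uncertainties drops below 1/2, e.g.:
# dx  = np.sqrt(min_inferred_variance(gamma_nf, x_coords))   # NF data
# dkx = np.sqrt(min_inferred_variance(gamma_ff, k_coords))   # FF data
# epr_violation = dx * dkx < 0.5
```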


Fig. 2. Bi-dimensional projections of the measured spatial joint probability distribution (JPD). (a) and (e) Marginal probability distributions $\Gamma _m(k_{x_2},k_{y_2})$ and $\Gamma _m(x_2,y_2)$ of detecting one photon of the pair in the FF and NF configurations, respectively. (b) and (f) Conditional probability distributions $\Gamma (k_{x_2},k_{y_2}|k_{x_1},k_{y_1})$ and $\Gamma ({x_2},{y_2}|{x_1},{y_1})$ of detecting one photon of a pair when its twin was detected at pixel $(k_{x_1},k_{y_1})=({x_1},{y_1})=(-25,-10)$ (white dashed lines) in the FF and NF configurations, respectively. (c) and (g) Joint probability distributions $\Gamma (k_{y_2},k_{y_1},k_{x_2},k_{x_1})$ and $\Gamma ({y_2},{y_1},{x_2},{x_1})$ of detecting photon pairs on the central column of the sensor, i.e., $(k_{x_1},k_{x_2}) = (0,0)$ and $(x_1,x_2) = (0,0)$, in the FF and NF configurations, respectively. (d) Sum-coordinate projection $\Gamma _+(k_{x_1}+k_{x_2},k_{y_1}+k_{y_2})$ of the JPD measured in the FF configuration. (h) Minus-coordinate projection $\Gamma _-({x_1}-{x_2},{y_1}-{y_2})$ of the JPD measured in the NF configuration. Definitions of $\Gamma _m$, $\Gamma _-$, and $\Gamma _+$ are provided in Supplement 1. Inset images are zooms around the peaks ($10\times 10$ pixels). The acquisition time is $200$ seconds in each configuration. Spatial axes are in pixels. The measured JPDs are not normalized and their units are numbers of coincidences. Totals of $1.4\times 10^6$ and $2.1\times 10^6$ coincidences were detected by the sensor during acquisition in the FF and NF configurations, respectively.



Table 1. Measured Values of Uncertainties, Lower Bounds of Entanglement of Formation, and Lower Bound of Dimension

To quantify high-dimensional entanglement, we measure the EoF. The EoF measures how many Bell states would be needed to prepare a single copy of our high-dimensional entangled state using only local operations and classical communication. As described in Ref. [36], a lower bound of the EoF along the $x$ axis is expressed as follows:

$$E_x \geq -\log_2\!\left(e\, \Delta[x_1-x_2]\, \Delta[k_{x_1}+k_{x_2}]\right),$$
where $\Delta [x_1-x_2]$ and $\Delta [k_{x_1}+k_{x_2}]$ are the uncertainties for measurements in the transformed variables $x_1-x_2$ and $k_{x_1}+k_{x_2}$, respectively. The same inequality exists for $E_y$, the lower bound of the EoF along the $y$ axis, after substituting $x_1-x_2 \rightarrow y_1-y_2$ and $k_{x_1}+k_{x_2} \rightarrow k_{y_1}+k_{y_2}$. Uncertainty values are estimated by Gaussian fitting from the sum- and minus-coordinate JPD projections shown in Figs. 2(d) and 2(h), respectively. In our work, we find $-\log_2 (e \Delta [x_1-x_2] \Delta [k_{x_1}+k_{x_2}]) = 3.03(2)$ and $-\log_2 (e \Delta [y_1-y_2] \Delta [k_{y_1}+k_{y_2}]) = 2.81(2)$. The EoF values then provide lower bounds for the entanglement dimensionality, $d_x \geq 8$ ($x$ axis) and $d_y \geq 6$ ($y$ axis), using the formula $d \geq 2 ^ {E}$ [36]. The dimension of the measured state is thus higher than $14$.
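As a minimal sketch, the bound of Eq. (3) can be evaluated directly from the fitted widths; the function name is hypothetical, and the rounding of $2^E$ to an integer dimension is left implicit:

```python
import numpy as np

def eof_lower_bound(delta_minus, delta_plus):
    """Eq. (3): lower bound on the entanglement of formation along one axis,
    from the fitted widths of the minus-coordinate (position) and
    sum-coordinate (momentum) projections of the JPD."""
    return -np.log2(np.e * delta_minus * delta_plus)

# The dimension bound d >= 2**E then follows directly, e.g.:
# E_x = eof_lower_bound(dx_minus, dkx_plus)
# d_x_bound = 2 ** E_x      # the text reports d_x >= 8 for E_x = 3.03(2)
```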

We demonstrated the use of a single-photon-sensitive time-stamping camera to quantify high-dimensional spatial entanglement without performing accidental background subtraction. Our approach improves on all previous camera-based techniques [17–24], in which such a post-processing step is required to demonstrate the presence of entanglement. Furthermore, by accurately detecting all output modes in parallel, we come closer to an ideal experimental configuration for certifying high-dimensional entanglement without assumptions, a situation that would be unattainable using single-outcome measurement approaches. From a practical point of view, we achieved high-dimensional entanglement quantification without accidental subtraction in $400$ seconds by detecting more than $6000$ spatial output modes (i.e., illuminated pixels) in two bases, which is, to our knowledge, one of the fastest approaches in terms of acquisition time per number of modes.

Nevertheless, our approach still needs improvement to reach assumptions-free high-dimensional entanglement certification. On the theory side, it is important to note that the entanglement criteria used in our study [i.e., Eqs. (1) and (3)] do not enable certification of high-dimensional entanglement in the strict sense of the term, because they rely on assumptions about the state, e.g., purity and the double-Gaussian approximation [35,36]. In our work, we therefore prefer to use the terms detection and quantification. To achieve proper certification, one would need to adapt more robust protocols, such as those developed for single-outcome measurements [14], to the case of camera detection. This is not straightforward, but could be done by specific design of the measurement bases [37] and by using multi-plane light converters [38]. On the technical side, the image intensifier that we use has a quantum efficiency of approximately $20\%$, which is insufficient to close the fair-sampling loophole [16]. Commercial image intensifiers do not achieve quantum efficiencies above $40\%$, so one would instead need to employ a single-photon-sensitive sensor with higher quantum efficiency, for example single-photon avalanche diodes (SPADs) [39] or superconducting nanowire single-photon detectors (SNSPDs) [40]. These detectors would need to be combined with a data-driven readout, as in the Tpx3Cam.

Furthermore, even though their rates are very low, we still detect accidental coincidences in the measured JPDs: approximately $1.4{\times }10^{-4}$ accidental coincidences per pixel pair per second in the FF configuration, and $1.8{\times }10^{-4}$ in the NF configuration. These accidentals increase the measured uncertainties and therefore decrease the entanglement dimension that can be quantified. They could be further reduced by diminishing the system losses, e.g., using a type-II SPDC source to split the photon paths with a polarizing beam splitter, and by enhancing the effective camera temporal resolution [32].
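For context, the standard estimate of the accidental-coincidence rate between two uncorrelated detection channels is the product of their singles rates and the coincidence window. The sketch below uses illustrative singles rates of the same order of magnitude as the rates quoted above, not measured values, and is not the authors' noise model:

```python
def accidental_rate(singles_1_hz, singles_2_hz, window_s=6e-9):
    """Standard estimate of the accidental-coincidence rate between two
    uncorrelated detection channels for a coincidence window `window_s`."""
    return singles_1_hz * singles_2_hz * window_s

# e.g., two pixels each registering ~150 counts/s within the 6-ns window:
# accidental_rate(150, 150) = 1.35e-4 accidental coincidences per second
```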

Funding

Brookhaven National Laboratory (LDRD grant 22-22); Centrum pokročilých aplikovaných přírodních věd (CZ.02.1.01/0.0/0.0/16-019/0000778); European Research Council (101039375, 724473).

Acknowledgments

We thank Rene Glazenborg and the company Photonis for loaning us the image intensifier.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. H. Bechmann-Pasquinucci and W. Tittel, Phys. Rev. A 61, 062308 (2000).
2. C. Reimer, S. Sciara, P. Roztocki, M. Islam, L. Romero Cortés, Y. Zhang, B. Fischer, S. Loranger, R. Kashyap, A. Cino, S. T. Chu, B. E. Little, D. J. Moss, L. Caspani, W. J. Munro, J. Azaña, M. Kues, and R. Morandotti, Nat. Phys. 15, 148 (2019).
3. S. Ecker, F. Bouchard, L. Bulla, F. Brandt, O. Kohout, F. Steinlechner, R. Fickler, M. Malik, Y. Guryanova, and R. Ursin, Phys. Rev. X 9, 041042 (2019).
4. A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, Phys. Rev. Lett. 98, 230501 (2007).
5. S. P. Walborn, C. H. Monken, S. Pádua, and P. H. Souto Ribeiro, Phys. Rep. 495, 87 (2010).
6. M. P. Almeida, S. P. Walborn, and P. H. Souto Ribeiro, Phys. Rev. A 72, 022313 (2005).
7. D. S. Tasca, R. M. Gomes, F. Toscano, P. H. Souto Ribeiro, and S. P. Walborn, Phys. Rev. A 83, 052325 (2011).
8. P.-A. Moreau, E. Toninelli, T. Gregory, and M. J. Padgett, Nat. Rev. Phys. 1, 367 (2019).
9. A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Nature 412, 313 (2001).
10. J. C. Howell, R. S. Bennink, S. J. Bentley, and R. W. Boyd, Phys. Rev. Lett. 92, 210403 (2004).
11. L. Achatz, E. A. Ortega, K. Dovzhik, R. F. Shiozaki, J. Fuenzalida, S. Wengerowsky, M. Bohmann, and R. Ursin, Phys. Scr. 97, 015101 (2022).
12. J. Schneeloch, C. C. Tison, M. L. Fanto, P. M. Alsing, and G. A. Howland, Nat. Commun. 10, 2785 (2019).
13. S. Designolle, V. Srivastav, R. Uola, N. H. Valencia, W. McCutcheon, M. Malik, and N. Brunner, Phys. Rev. Lett. 126, 200404 (2021).
14. J. Bavaresco, N. Herrera Valencia, C. Klöckl, M. Pivoluska, P. Erker, N. Friis, M. Malik, and M. Huber, Nat. Phys. 14, 1032 (2018).
15. N. H. Valencia, V. Srivastav, M. Pivoluska, M. Huber, N. Friis, W. McCutcheon, and M. Malik, Quantum 4, 376 (2020).
16. B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K. Shalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. W. Lim, N. Gisin, and P. G. Kwiat, Phys. Rev. Lett. 111, 130406 (2013).
17. D. S. Tasca, F. Izdebski, G. S. Buller, J. Leach, M. Agnew, M. J. Padgett, M. P. Edgar, R. E. Warburton, and R. W. Boyd, Nat. Commun. 3, 984 (2012).
18. P.-A. Moreau, J. Mougin-Sisini, F. Devaux, and E. Lantz, Phys. Rev. A 86, 010101 (2012).
19. P.-A. Moreau, F. Devaux, and E. Lantz, Phys. Rev. Lett. 113, 160401 (2014).
20. M. Reichert, H. Defienne, and J. W. Fleischer, Sci. Rep. 8, 7925 (2018).
21. M. Dabrowski, M. Parniak, and W. Wasilewski, Optica 4, 272 (2017).
22. M. Dabrowski, M. Mazelanik, M. Parniak, A. Leszczynski, M. Lipka, and W. Wasilewski, Phys. Rev. A 98, 042126 (2018).
23. B. Ndagano, H. Defienne, A. Lyons, I. Starshynov, F. Villa, S. Tisa, and D. Faccio, npj Quantum Inf. 6, 94 (2020).
24. B. Eckmann, B. Bessire, M. Unternährer, L. Gasparini, M. Perenzoni, and A. Stefanov, Opt. Express 28, 31553 (2020).
25. F. Zhu, M. Tyler, N. H. Valencia, M. Malik, and J. Leach, AVS Quantum Sci. 3, 011401 (2021).
26. A. Nomerotski, Nucl. Instrum. Methods Phys. Res., Sect. A 937, 26 (2019).
27. C. Ianzano, P. Svihra, M. Flament, A. Hardy, G. Cui, A. Nomerotski, and E. Figueroa, Sci. Rep. 10, 6181 (2020).
28. P. Svihra, Y. Zhang, P. Hockett, S. Ferrante, B. Sussman, D. England, and A. Nomerotski, Appl. Phys. Lett. 117, 044001 (2020).
29. Y. Zhang, D. England, A. Nomerotski, and B. Sussman, Opt. Express 29, 28217 (2021).
30. X. Gao, Y. Zhang, A. D’Errico, F. Hufnagel, K. Heshami, and E. Karimi, Opt. Express 30, 21276 (2022).
31. Y. Zhang, A. Orth, D. England, and B. Sussman, Phys. Rev. A 105, L011701 (2022).
32. V. Vidyapin, Y. Zhang, D. England, and B. Sussman, Sci. Rep. 13, 1009 (2023).
33. C. Spengler, M. Huber, S. Brierley, T. Adaktylos, and B. C. Hiesmayr, Phys. Rev. A 86, 022311 (2012).
34. M. D. Reid, P. D. Drummond, W. P. Bowen, E. G. Cavalcanti, P. K. Lam, H. A. Bachor, U. L. Andersen, and G. Leuchs, Rev. Mod. Phys. 81, 1727 (2009).
35. J. Schneeloch and J. C. Howell, J. Opt. 18, 053501 (2016).
36. J. Schneeloch and G. A. Howland, Phys. Rev. A 97, 042338 (2018).
37. D. S. Tasca, P. Sánchez, S. P. Walborn, and Ł. Rudnicki, Phys. Rev. Lett. 120, 040403 (2018).
38. G. Labroille, B. Denolle, P. Jian, P. Genevaux, N. Treps, and J.-F. Morizur, Opt. Express 22, 15599 (2014).
39. M.-J. Lee and E. Charbon, Jpn. J. Appl. Phys. 57, 1002A3 (2018).
40. E. E. Wollman, V. B. Verma, A. E. Lita, W. H. Farr, M. D. Shaw, R. P. Mirin, and S. W. Nam, Opt. Express 27, 35279 (2019).
