Optica Publishing Group

Quantum random number generator with discarding-boundary-bin measurement and multi-interval sampling

Open Access

Abstract

A quantum random number generator (QRNG) provides a reliable means for the generation of true random numbers. The inherent randomness of vacuum fluctuations makes the quantum vacuum state a superior source of entropy. In practice, however, the raw sequences of a QRNG are inevitably contaminated by classical technical noise, which compromises its security. Min-entropy conditioned on the classical noise is a useful measure that quantifies the side-information-independent randomness. To improve the randomness extractable from the raw sequences of a quantum-vacuum-based QRNG, we propose and experimentally demonstrate two approaches, discarding-boundary-bin measurement and multi-interval sampling. The first increases the conditional min-entropy at a low quantum-to-classical-noise ratio. The second exploits parallel sampling with multiple analog-to-digital converters (ADCs) and effectively overcomes the finite resolution and uniform sampling of a single ADC. Combining the two approaches, the maximum average conditional min-entropy reaches 9.2 bits per sample, in contrast to 6.93 bits with a single 8-bit ADC.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A quantum random number generator (QRNG) plays an essential role in various applications, such as simulation, cryptography, and fundamental science [1,2]. For instance, quantum key distribution requires a reliable randomness source to guarantee its information-theoretical security [3–7]. If the quantum states are not prepared in a truly random manner, the quantum key distribution protocol will inevitably suffer from attacks, as has been shown for the BB84 protocol [8]. Therefore, in such applications, the random sequence must be genuinely unpredictable and sufficiently secure.

The quality of the entropy source fundamentally determines the performance of a random number generator. Random number generators based on noise in electronic circuits can generate high-speed random numbers and are easily integrated into application systems [9,10]. However, the randomness of most of the noise in electronic circuits is not theoretically provable. More precisely, the noise in such systems comes basically in two types, shot noise and thermal noise. Shot noise arises from quantum effects due to the discrete nature of the charge carriers in the current, and thus shows quantum fluctuations. In contrast, thermal noise arises from the thermal agitation of the carriers and is subject to thermally activated statistical motion. In practice, it is difficult to characterize precisely the amount of quantum fluctuations inside the electronic noise.

QRNG offers a perfect source of entropy due to the inherent randomness of quantum mechanics [11–14]. At present, various optical QRNGs have been proposed and implemented, based on, e.g., vacuum fluctuations [15–18], phase noise [19–21], stimulated Raman scattering [22], photon arrival times [23–26], photon path [27], and photon number counting [28,29]. These QRNGs can generate information-theoretical randomness only when the model assumptions are fulfilled by the practical implementation. Imperfections of the practical devices of a QRNG, such as classical noise in the detector or the unused port of the interferometer, may leak side-information correlated with the random sequence [2]. Such side-information could be exploited by a potential attacker to guess the measurement outcomes and compromise the security.

To address the security issues, several protocols have been proposed. The device-independent (DI) QRNG utilizes the violation of a loophole-free Bell inequality to generate true randomness in a self-testing way [30–33]. However, the extremely low rate and the sophisticated experimental setup of DI-QRNG severely limit its practical applications. The semi-device-independent (SDI) QRNG [11,18,34–40] relies on weaker assumptions to bound the side-information and can achieve a higher generation rate than DI-QRNG. Different from the DI-QRNGs and SDI-QRNGs, practical QRNGs calibrate the devices in a fully trusted way to bound the classical side-information and ensure the security of the random sequence [41,42].

A typical QRNG consists of three parts: quantum entropy source, measurement of quantum states, and extraction of true random numbers. To extract genuine randomness independent of classical noise from a practical QRNG, one should properly model the implementation of the QRNG. Ma et al. presented a framework for quantifying the quantum randomness by min-entropy and provided information-theoretically provable randomness extractors for post-processing [43]. Haw et al. proposed a method to maximize the conditional min-entropy from a given quantum-to-classical-noise ratio [42]. Huang et al. systematically analyzed the effects of the local oscillator fluctuation under imbalanced homodyne detection on the security of QRNG [44].

In this work, we propose and demonstrate two approaches to improve the extractable randomness in a QRNG based on quantum vacuum fluctuations. The first is the discarding-boundary-bin measurement (DBBM), which increases the upper bound of the conditional min-entropy. The other is the multi-interval sampling (MIS) technique, which uses parallel sampling based on multiple analog-to-digital converters (ADCs) and effectively overcomes the finite resolution limit and uniform sampling of a single ADC. By combining these two methods, we show that the conditional min-entropy is significantly improved and that more secure randomness can be extracted from the output signal of the homodyne detector.

2. Randomness and conditional min-entropy

2.1 Randomness quantification

The min-entropy of variable $X$ with a probability distribution ${{P}_{X}}({{x}_{i}})$ is defined by [45]

$${{H}_{\min }}(X)={-}\log_{2}\Big[\max_{x_{i}\in X}{{P}_{X}}({{x}_{i}})\Big],$$
which is widely used for quantifying randomness and corresponds to the maximum probability with which an adversary can guess the value of $X$.
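
As a concrete illustration (a minimal sketch of ours, not code from the original work), the min-entropy of Eq. (1) can be estimated directly from an empirical distribution:

```python
import math
from collections import Counter

def min_entropy(samples):
    """H_min(X) = -log2( max_x P_X(x) ) for an empirical distribution."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A source that is uniform over 256 outcomes carries 8 bits per sample.
print(min_entropy(list(range(256))))  # 8.0
```

A uniform $n$-bit source gives $H_{\min}=n$ bits per sample; any bias lowers the value.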

In practical QRNGs, the randomness of the entropy source can be evaluated by the conditional min-entropy, which depends on how the adversary is assumed to interact with the classical side-information. We assume that the adversary has unlimited computational power and knows the classical noise with arbitrary precision. Two attack scenarios (worst-case scenario and average-case scenario) are considered hereafter. In the worst-case scenario, the adversary has full control of the classical noise to maximize the ability to predict random sequences. In this case, the maximum conditional probability is exploited to estimate the amount of information the adversary successfully captures, and the worst-case min-entropy of random sequences conditioned on the classical side-information is given by [46]

$${{\tilde{H}}_{\min }}(X|E)={-}\log_{2}\Big[\max_{e_{j}\in [e_{\min },e_{\max }]}\,\max_{x_{i}\in X}{{P}_{X|E}}({{x}_{i}}|{{e}_{j}})\Big].$$

In the average-case scenario, the adversary can freely monitor the classical noise with infinite sampling range and digitization resolution but cannot modify it. Therefore, we estimate the extractable randomness of the device using the average conditional min-entropy

$${{\bar{H}}_{\min }}(X|E)={-}\log_{2}[{{P}_{guess}}({{X}_{dis}}|E)],$$
where ${{P}_{guess}}({{X}_{dis}}|E)$ is the average probability that an adversary with the best strategy correctly guesses the outcome [45],
$${{P}_{guess}}({{X}_{dis}}|E)=\int_{-\infty }^{\infty }{{P}_{E}}(e)\,\max_{x_{i}\in X_{dis}}{{P}_{{{X}_{dis}}|E}}({{x}_{i}}|e)\,de.$$

According to Eqs. (2)–(4), the extractable secure randomness is bounded by the worst-case or the average conditional min-entropy, which directly relates to the maximum (average) guessing probability of an adversary conditioned on the classical noise. A smaller maximum conditional probability means that the adversary has less knowledge of the random sequence, and therefore more randomness is extractable. Given the entropy source, the guessing probability of an adversary strongly depends on the measurement model, as we show in the following section.

2.2 Conventional measurement model

For a QRNG based on vacuum fluctuations, the output signal $M$ of the balanced homodyne detector (BHD) is a mixture of the vacuum fluctuation noise $Q$ and the classical electronic noise $E$ [15]. We suppose both noises are time-independent and follow Gaussian distributions centered at zero, $N(0,\sigma _{E}^{2})$ and $N(0,\sigma _{Q}^{2})$, where $\sigma _{Q}^{2}$ is the variance of the quantum noise and $\sigma _{E}^{2}$ is the variance of the classical electronic noise. The quantum-to-classical-noise ratio is defined as QCNR $=10{{\log }_{10}}(\sigma _{Q}^{2}/\sigma _{E}^{2})$ dB.
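
In practice the two variances come from two separate acquisitions (LO blocked, then LO on), and the QCNR follows from their difference. A small sketch with made-up variance values for illustration:

```python
import math

def qcnr_db(var_total, var_electronic):
    """QCNR = 10*log10(sigma_Q^2 / sigma_E^2); the quantum variance is
    estimated as the measured total variance minus the electronic variance."""
    var_quantum = var_total - var_electronic
    return 10.0 * math.log10(var_quantum / var_electronic)

# e.g. total variance 101, electronic-noise variance 1  ->  QCNR = 20 dB
print(qcnr_db(101.0, 1.0))  # 20.0
```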

The probability density functions (PDF) of the vacuum quantum noise $Q$ and the classical noise $E$ can be expressed as

$${{p}_{Q}}\left( q \right)=\frac{1}{\sqrt{2\pi }{{\sigma }_{Q}}}\exp \left[ -\frac{{{q}^{2}}}{2\sigma _{Q}^{2}} \right], {{p}_{E}}\left( e \right)=\frac{1}{\sqrt{2\pi }{{\sigma }_{E}}}\exp \left[ -\frac{{{e}^{2}}}{2\sigma _{E}^{2}} \right].$$

By performing a convolution of ${{p}_{Q}}$ and ${{p}_{E}}$, the PDF of the measurements $M$ is

$${{p}_{M}}\left( m \right)=\frac{1}{\sqrt{2\pi }{{\sigma }_{M}}}\exp \left[ -\frac{{{m}^{2}}}{2\sigma _{M}^{2}} \right]=\frac{1}{\sqrt{2\pi (\sigma _{Q}^{2}+\sigma _{E}^{2})}}\exp \left[ -\frac{{{m}^{2}}}{2(\sigma _{Q}^{2}+\sigma _{E}^{2})} \right].$$

Then the PDF of the measurements $M$ conditional on the classical noise $E$ is given by

$${{p}_{M|E}}\left( m|e \right)=\frac{1}{\sqrt{2\pi \left( \sigma _{M}^{2}-\sigma _{E}^{2} \right)}}\exp \left[ -\frac{{{\left( m-e \right)}^{2}}}{2\left( \sigma _{M}^{2}-\sigma _{E}^{2} \right)} \right]=\frac{1}{\sqrt{2\pi }{{\sigma }_{Q}}}\exp \left[ -\frac{{{\left( m-e \right)}^{2}}}{2\sigma _{Q}^{2}} \right].$$

Hereafter we normalize all the relevant quantities by the quantum noise ($\sigma _{Q}^{2}=1$). When we sample the output signal of the BHD with an $n$-bit ADC of dynamical range $[-R+\delta /2,R-3\delta /2]$, the conditional probability distribution of the measurements $M$ on the classical noise $E$ is [42]

$${{P}_{{{M}_{dis}}|E}}\left( {{m}_{i}}|e \right)=\begin{cases} \int_{-\infty }^{-R+\delta /2}{{p}_{M|E}}\left( m|e \right)\,dm, & i={{i}_{\min }}, \\ \int_{{{m}_{i}}-\delta /2}^{{{m}_{i}}+\delta /2}{{p}_{M|E}}\left( m|e \right)\,dm, & {{i}_{\min }}+1\le i\le {{i}_{\max }}-1, \\ \int_{R-3\delta /2}^{\infty }{{p}_{M|E}}\left( m|e \right)\,dm, & i={{i}_{\max }}. \end{cases}$$

Combining Eqs. (2) and (8), the worst-case min-entropy can be expressed as

$${{\tilde{H}}_{\min }}({{M}_{dis}}|E)={-}\log_{2}[\max ({{c}_{1}},{{c}_{2}})],$$
where ${{c}_{1}}$ is the maximum conditional probability of the boundary bins ($i={{i}_{\max }},{{i}_{\min }}$) and ${{c}_{2}}$ is the maximum conditional probability within the sampling range (${{i}_{\min }}+1\le i\le {{i}_{\max }}-1$). Starting from Eq. (9) and given the QCNR, we find that ${{c}_{1}}$ and ${{c}_{2}}$ are the two key parameters that determine the maximum conditional min-entropy [42]. Besides, the classical noise $e$ in Eq. (2) can be bounded for practical purposes in the worst-case scenario. When the range of $e$ is bounded by $[-5{{\sigma }_{E}},5{{\sigma }_{E}}]$, the bound is valid 99.9999% of the time, which is sufficient for the practical scenario. Here, we chose $[-5{{\sigma }_{E}},5{{\sigma }_{E}}]$ as the smallest confidence interval of the classical noise $e$. By optimizing the ADC dynamical range $R$, we can maximize the worst-case conditional min-entropy, which is obtained when ${{c}_{1}}={{c}_{2}}$.
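
This optimization can be sketched numerically. In the worst case, $c_2$ is attained when $e$ sits on a bin centre, $c_2=\operatorname{erf}[\delta/(2\sqrt{2})]$, and $c_1$ when $e$ is pushed to its bound. The sketch below is our own illustration (with the classical noise bounded to $\pm 10\sigma_E$, the bound used in Fig. 2) scanning $R$ for $n=16$ and QCNR = 20 dB:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def worst_case_hmin_cm(R, n=16, qcnr_db=20.0, k=10.0):
    """Worst-case conditional min-entropy of the conventional model,
    with sigma_Q normalised to 1 and |e| <= k*sigma_E."""
    sigma_E = 10.0 ** (-qcnr_db / 20.0)
    delta = R / 2 ** (n - 1)                       # bin width
    c2 = math.erf(delta / (2.0 * math.sqrt(2.0)))  # e aligned with a bin centre
    c1 = 1.0 - Phi(R - 1.5 * delta - k * sigma_E)  # boundary bin, e at its bound
    return -math.log2(max(c1, c2))

# scan R; the optimum sits where c1 = c2
R_opt = max((3.0 + 0.001 * i for i in range(4001)), key=worst_case_hmin_cm)
print(round(R_opt, 2), round(worst_case_hmin_cm(R_opt), 2))  # ≈ 4.85 14.05
```

With these assumptions the scan reproduces the conventional-model values quoted in Fig. 2 ($R=4.85$, min-entropy 14.05).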

3. Modified model of entropy quantification

In this section, we present a modified model of entropy quantification for a practical QRNG. Two approaches are employed in the modified model, DBBM and MIS. We will show that the proposed methods effectively improve the extractable randomness from the entropy source in comparison with the conventional model (CM) of entropy quantification.

3.1 Discarding-boundary-bin measurement

From the previous section, we know that the maximum conditional min-entropy relies on the maximum conditional probability of the boundary bins, ${{c}_{1}}$, and the maximum conditional probability within the sampling range, ${{c}_{2}}$. In the CM, the first bin $i={{i}_{\min }}$ and the last bin $i={{i}_{\max }}$ of an ADC cover input signals in the ranges $(-\infty ,-R+\delta /2]$ and $[R-3\delta /2,\infty )$, respectively. If we discard the boundary bins, the conditional probability distribution of the measurements $M$ on the classical noise $E$ changes from Eq. (8) to Eq. (10). By further optimizing $R$, we can set the maximum conditional probability of the boundary bins ${{c}_{1}}$ equal to ${{c}_{2}}$. In this case, the new ${{c}_{1}}$ will be lower than before, which results in a higher maximum conditional min-entropy. The basic reason is that the boundary bins cover an infinite range of input signals. If we discard them, we can decrease the optimized $R$, which results in a lower maximum conditional probability, and therefore a higher maximum conditional min-entropy can be obtained. The schematic of the DBBM is depicted in Fig. 1.

Fig. 1. Discarding-boundary-bin measurement model. The output signal of the BHD is discretized into ${{2}^{n}}$ sampling bins by an $n$-bit ADC with dynamical range $[-R-\delta /2,R-\delta /2]$. The bin width and discrete measurement outcomes are $\delta =R/{{2}^{n-1}}$ and ${{m}_{i}}=\delta \times i$. The measured data distributed in the boundary bins (shaded) are discarded. Therefore, the first bin ${{i}_{L}}$ and the last bin ${{i}_{H}}$ of the remaining bins are centered around $-R+\delta$ and $R-2\delta$, respectively.

Based on the DBBM, the conditional probability of measurements ${{M}_{dis}}$ on classical noise $E$ is given by

$${{P}_{{{M}_{dis}}|E}}\left( {{m}_{i}}|e \right)=\int_{{{m}_{i}}-\delta /2}^{{{m}_{i}}+\delta /2}{{p}_{M|E}}\left( m|e \right)\,dm,\qquad {{i}_{L}}\le i\le {{i}_{H}},$$
where ${{i}_{L}}=-{{2}^{n-1}}+1$ and ${{i}_{H}}={{2}^{n-1}}-2$.

For each measurement, there is a probability that the outcome falls in the boundary bins (${{i}_{\min }}$ and ${{i}_{\max }}$). Such outcomes are discarded, and the total conditional probability of the effective measurements is

$${{P}_{T}}({{M}_{dis}}|e)=1-{{P}_{{{M}_{dis}}|E}}\left( {{m}_{{{i}_{\max }}}}|e \right)-{{P}_{{{M}_{dis}}|E}}\left( {{m}_{{{i}_{\min }}}}|e \right),$$
where ${{i}_{\min }}=-{{2}^{n-1}}$ and ${{i}_{\max }}={{2}^{n-1}}-1$. Then, the normalized conditional probability of the effective measurements is
$$P_{{{M}_{dis}}|E}^{T}\left( {{m}_{i}}|e \right)={{P}_{{{M}_{dis}}|E}}\left( {{m}_{i}}|e \right)/{{P}_{T}}({{M}_{dis}}|e).$$

In terms of Eq. (12), the maximum conditional probability distribution is written as

$$\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{T}\left( {{m}_{i}}|e \right)=\max \{P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{mid}}}}|e \right),P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{L}}}}|e \right),P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{H}}}}|e \right)\},$$
where
$$P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{mid}}}}|e \right)=\begin{cases} \operatorname{erf}[\delta /(2\sqrt{2})]/{{P}_{T}}({{M}_{dis}}|e), & {{i}_{L}}<{{i}_{mid}}<{{i}_{H}}, \\ 0, & \text{otherwise}. \end{cases}$$

Given the bound of the classical noise $[{{e}_{\min }},{{e}_{\max }}]$, which can be manipulated by an adversary, Eq. (13) can be rewritten as

$$\max_{e\in [e_{\min },e_{\max }]}\,\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{T}\left( {{m}_{i}}|e \right)=\max \Big\{\max_{e\in [e_{\min },e_{\max }]}P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{mid}}}}|e \right),\,\max_{e\in [e_{\min },e_{\max }]}P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{L}}}}|e \right),\,\max_{e\in [e_{\min },e_{\max }]}P_{{{M}_{dis}}|E}^{T}\left( {{m}_{{{i}_{H}}}}|e \right)\Big\}.$$

Since the DBBM approach decreases the number of effective measurements, we should carefully choose the optimal $R$ to balance the conditional min-entropy against the amount of effective data and thus maximize the throughput of our QRNG. To resolve this problem, we introduce the concept of equivalent conditional min-entropy.

In the worst-case scenario, the equivalent conditional min-entropy is expressed as

$$\tilde{H}_{eq\min }^{T}({{M}_{dis}}|E)=\tilde{H}_{\min }^{T}({{M}_{dis}}|E)\times {{\tilde{P}}_{T}}({{M}_{dis}}|E),$$
where ${{\tilde {P}}_{T}}({{M}_{dis}}|E)$ is the total probability of the effective measurements corresponding to the maximum conditional min-entropy $\tilde {H}_{\min }^{T}({{M}_{dis}}|E)$,
$$\tilde{H}_{\min }^{T}({{M}_{dis}}|E)={-}\log_{2}\Big\{\max_{e\in [e_{\min },e_{\max }]}\,\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{T}({{m}_{i}}|e)\Big\}.$$

In the average-case scenario, the equivalent conditional min-entropy is

$$\bar{H}_{eq\min }^{T}({{M}_{dis}}|E)=\bar{H}_{\min }^{T}({{M}_{dis}}|E)\times {{\bar{P}}_{T}}({{M}_{dis}}|E),$$
where ${{\bar {P}}_{T}}({{M}_{dis}}|E)$ is the mean of the total probability of valid measurements, averaged over the classical noise $e$,
$${{\bar{P}}_{T}}({{M}_{dis}}|E)=\int_{-\infty }^{+\infty }{{p}_{E}}(e)\,{{P}_{T}}({{M}_{dis}}|e)\,de,$$
and
$$\bar{H}_{\min }^{T}({{M}_{dis}}|E)={-}\log_{2}\Big[\int_{-\infty }^{+\infty }{{p}_{E}}(e)\,\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{T}({{m}_{i}}|e)\,de\Big].$$

From Eqs. (14)–(16), we can optimize the ADC dynamical range $R$ and obtain the maximum value of the worst-case equivalent conditional min-entropy in the DBBM. When the confidence interval of the classical noise $e$ is $[-10{{\sigma }_{E}},10{{\sigma }_{E}}]$ ($[-20{{\sigma }_{E}},20{{\sigma }_{E}}]$), the worst-case equivalent conditional min-entropy of the DBBM is 14.43 (14.06) for $n = 16$ and QCNR = 20 dB. In Fig. 2, we plot the discretized conditional probability distribution ${{P}_{{{M}_{dis}}|E}}({{m}_{i}}|e)$ with optimized dynamical range $R$ for the CM (Fig. 2(a)) and the DBBM (Fig. 2(b)). Compared to the CM, the maximum conditional probability in the DBBM is effectively reduced from $5.91\times {{10}^{-5}}$ to $4.05\times {{10}^{-5}}$, which increases the worst-case conditional min-entropy from 14.05 to 14.43. Furthermore, we notice that, unlike in the CM, the maximum conditional probability ${{c}_{1}}$ of the boundary bins is no longer equal to the maximum conditional probability ${{c}_{2}}$ within the sampling range for the DBBM. Instead, the amount of effective data and the conditional min-entropy are balanced to maximize the throughput of the QRNG.
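
These DBBM numbers can be cross-checked with a short numerical sketch. It is our own simplified model, not the authors' code: the inner-bin maximum is taken at a bin centre, and the edge bins and the kept-sample probability $P_T$ are evaluated at the noise bound $|e|=10\sigma_E$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dbbm_equivalent_hmin(R, n=16, qcnr_db=20.0, k=10.0):
    """Worst-case equivalent conditional min-entropy of the DBBM,
    H * P_T as in Eq. (15), with sigma_Q = 1 and |e| <= k*sigma_E."""
    e_max = k * 10.0 ** (-qcnr_db / 20.0)
    delta = R / 2 ** (n - 1)
    # probability that a sample is kept (not in a boundary bin), worst over e
    P_T = Phi(R - 1.5 * delta - e_max) - Phi(-R + 0.5 * delta - e_max)
    # best inner bin: e aligned with a bin centre
    p_mid = math.erf(delta / (2.0 * math.sqrt(2.0))) / P_T
    # outermost kept bin (centre -R + delta), e pushed towards it
    lo = -R + 0.5 * delta
    p_edge = (Phi(lo + delta + e_max) - Phi(lo + e_max)) / P_T
    return -math.log2(max(p_mid, p_edge)) * P_T

R_opt = max((2.5 + 0.001 * i for i in range(2001)), key=dbbm_equivalent_hmin)
print(round(dbbm_equivalent_hmin(R_opt), 2))  # ≈ 14.43, at R ≈ 3.3
```

Under these assumptions the scan lands on the values quoted above (equivalent min-entropy 14.43 near $R=3.3$), confirming that a smaller $R$ becomes optimal once the boundary bins are discarded.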

Fig. 2. The discretized worst-case conditional probability distribution ${{P}_{{{M}_{dis}}|E}}({{m}_{i}}|e)$ with optimized dynamical range $R$ for (a) the CM and (b) the DBBM. The parameters used are $e=\{-10{{\sigma }_{E}},0,10{{\sigma }_{E}}\}$ (from left to right), QCNR = 20 dB, and $n = 16$. Given the optimized dynamical range $R=4.85$, the maximum conditional probability of the CM is $5.91\times {{10}^{-5}}$ and the corresponding conditional min-entropy is 14.05. Given the optimized dynamical range $R=3.3$, the maximum conditional probability of the DBBM is $4.05\times {{10}^{-5}}$, and the optimized equivalent conditional min-entropy is 14.43.

Combining Eqs. (17)–(19), we calculate the optimized equivalent conditional min-entropy for the DBBM with $n = 8$ and $n = 16$ (average-case scenario). The simulated results are listed in Table 1. For a 16-bit ADC and QCNR = 20 dB, the equivalent conditional min-entropy reaches 14.79 for the DBBM instead of 14.28 with the CM. In the case of QCNR = 0 dB, the equivalent conditional min-entropy obtained using the DBBM is 14.30, an improvement of 5.38% over the CM. Note that the improvement in average min-entropy decreases with increasing QCNR, and the improvement is larger for a higher-resolution ADC. These results indicate that more secure randomness can be extracted using the DBBM approach.
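
The average-case CM entry of Table 1 can likewise be cross-checked by directly integrating the average guessing probability over the classical noise (our own sketch, with the inner maximum taken over the bin nearest to $e$):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def avg_hmin_cm(R, n=16, qcnr_db=20.0):
    """Average conditional min-entropy of the conventional model:
    -log2[ integral p_E(e) * max_i P(m_i|e) de ], with sigma_Q = 1."""
    sE = 10.0 ** (-qcnr_db / 20.0)
    delta = R / 2 ** (n - 1)
    N = 4001
    de = 16.0 * sE / (N - 1)          # e-grid over [-8 sigma_E, 8 sigma_E]
    acc = 0.0
    for j in range(N):
        e = -8.0 * sE + j * de
        m = delta * round(e / delta)  # centre of the bin nearest to e
        p_in = Phi(m + 0.5 * delta - e) - Phi(m - 0.5 * delta - e)
        p_hi = 1.0 - Phi(R - 1.5 * delta - e)   # upper boundary bin
        p_lo = Phi(-R + 0.5 * delta - e)        # lower boundary bin
        w = math.exp(-0.5 * (e / sE) ** 2) / (math.sqrt(2.0 * math.pi) * sE)
        acc += w * max(p_in, p_hi, p_lo) * de
    return -math.log2(acc)

R_grid = [3.9 + 0.01 * i for i in range(41)]
R_opt = max(R_grid, key=avg_hmin_cm)
print(round(avg_hmin_cm(R_opt), 2))  # ≈ 14.28 for n = 16, QCNR = 20 dB
```

With these assumptions the optimum lands close to the 14.28 quoted above for the CM with a 16-bit ADC.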

Table 1. Optimized ${{\bar {H}}_{\min }}({{M}_{dis}}|E)$ (and $R$) for 8-bit and 16-bit ADCs.

3.2 Multi-interval sampling model

Figure 3 shows the schematic of our MIS model. The output signal of the BHD is divided into $L$ measurement intervals and sampled synchronously by $L$ parallel ADCs with the total sampling range $2R=\sum \limits_{w=1}^{L}{2{{R}_{w}}}$. By using the MIS model, an equivalent ADC resolution of ${{n}_{eq}}$ can be obtained, where ${{2}^{{{n}_{eq}}}}=\sum \limits_{w=1}^{L}{{{2}^{{{n}_{w}}}}}$ and ${{n}_{w}}$ is the ADC resolution of each channel. In this way, we can overcome the finite resolution limit of a single ADC.
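
For example, four 8-bit channels give $2^{n_{eq}}=4\times 2^{8}=2^{10}$, i.e., an equivalent 10-bit resolution:

```python
import math

def equivalent_resolution(bits_per_channel):
    """n_eq from 2**n_eq = sum_w 2**n_w (Section 3.2)."""
    return math.log2(sum(2 ** n for n in bits_per_channel))

print(equivalent_resolution([8, 8, 8, 8]))      # 10.0
print(equivalent_resolution([16, 16, 16, 16]))  # 18.0
```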

Fig. 3. The multi-interval sampling model. The output signal of the BHD is divided into $L$ measurement ranges (in our experiment we set $L=4$) and sampled synchronously by $L$ parallel ADCs with the total sampling range $2R=\sum \limits_{w=1}^{L}{2{{R}_{w}}}$. The total equivalent ADC resolution is given by ${{2}^{{{n}_{eq}}}}=\sum \limits_{w=1}^{L}{{{2}^{{{n}_{w}}}}}$, where ${{n}_{w}}$ is the ADC resolution of each channel.

Notice that the conditional min-entropy is directly related to the conditional probability distribution function, which is associated with the PDF of the sampled signal and the bin width $\delta$. According to Eq. (19), for a given QCNR and sampling range $R$, a larger bin width leads to a higher conditional probability and thus a lower conditional min-entropy. To suppress the conditional probability and enhance the randomness, it is desirable to have a small bin width in the high-probability-density region and a large bin width in the low-probability-density region. Such an unequal-bin-width sampling configuration can be implemented using the MIS but is impossible in the CM, where only equal-bin sampling is feasible.

For MIS, the conditional probability distribution function $P_{{{M}_{dis}}|E}^{1}\left ( {{m}_{i}}|e \right )$ of the measurement outcomes for the first sampling range is given by

$$P_{{{M}_{dis}}|E}^{1}\left( {{m}_{i}}|e \right)=\begin{cases} \int_{-\infty }^{-R+\delta /2}{{p}_{M|E}}(m|e)\,dm, & i=i_{\min }^{1}, \\ \int_{m_{i}^{1}-\delta /2}^{m_{i}^{1}+\delta /2}{{p}_{M|E}}(m|e)\,dm, & i_{\min }^{1}<i<i_{\max }^{1}, \\ 0, & i=i_{\max }^{1}. \end{cases}$$

Accordingly, the conditional probability distribution function $P_{{{M}_{dis}}|E}^{L}\left ( {{m}_{i}}|e \right )$ of the measurement outcomes for the last sampling interval is

$$P_{{{M}_{dis}}|E}^{L}\left( {{m}_{i}}|e \right)=\begin{cases} 0, & i=i_{\min }^{L}, \\ \int_{m_{i}^{L}-\delta /2}^{m_{i}^{L}+\delta /2}{{p}_{M|E}}(m|e)\,dm, & i_{\min }^{L}<i<i_{\max }^{L}, \\ \int_{R-3\delta /2}^{+\infty }{{p}_{M|E}}(m|e)\,dm, & i=i_{\max }^{L}. \end{cases}$$

For other sampling ranges, the conditional probability distribution function can be modeled as the DBBM.

Finally, the worst-case (average) conditional min-entropy of the MIS model can be expressed as

$${{\tilde{H}}_{\min }}({{M}_{dis}}|E)={-}\log_{2}\Big[\max_{w\in [1,L]}\,\max_{e\in [e_{\min },e_{\max }]}\,\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{w}({{m}_{i}}|e)\Big],$$
and
$${{\bar{H}}_{\min }}({{M}_{dis}}|E)={-}\log_{2}\Big[\int_{-\infty }^{+\infty }{{p}_{E}}(e)\,\max_{w\in [1,L]}\,\max_{m_{i}\in M_{dis}}P_{{{M}_{dis}}|E}^{w}({{m}_{i}}|e)\,de\Big].$$

Based on the MIS, we calculate the conditional min-entropy when $L=4$ and the corresponding digitization resolutions are $\{{{n}_{1}}={{n}_{2}}={{n}_{3}}={{n}_{4}}=8\}$. We have maximized the extractable min-entropy by optimizing the total sampling range $R$ and the scale factor $k$ of the measurement interval ${{R}_{2}}={{R}_{3}}=k\times {{R}_{1}}=k\times {{R}_{4}}$.

To illustrate the improvement of the conditional min-entropy using the MIS in the worst-case scenario, we plot the conditional probability distributions of both the CM and the MIS with optimized $R$ and $k$ in Fig. 4. The optimized worst-case conditional min-entropy is obtained when the maximum conditional probability of the right-hand boundary bin is equal to that in the middle interval. We can see that the maximum conditional probability of the MIS (Fig. 4(c)) is smaller than that of the CM with a single ADC (Figs. 4(a) and 4(b)). Notice that the optimized bin widths of the measurement intervals from left to right are 0.0083, 0.0066, 0.0066, and 0.0083. This verifies that non-uniform sampling is superior to uniform sampling.
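
The advantage of narrower central bins can be seen in a toy worst-case calculation. This is our own sketch, not the full optimization behind Fig. 4: boundary tails are assumed discarded, $\sigma_Q=1$, $|e|\le 1$, the total half-range $R=3.8$ is an assumed value, and four 8-bit intervals with $R_2=R_3=kR_1$ partition $[-R,R]$ so that the central intervals cover $[-2R_2,2R_2]$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def max_prob_uniform(R, n_bits):
    """Max conditional probability for uniform bins (e on a bin centre)."""
    delta = 2.0 * R / 2 ** n_bits
    return math.erf(delta / (2.0 * math.sqrt(2.0)))

def max_prob_mis(R, k, e_max, n_bits=8):
    """Max conditional probability for four intervals, R1 = R4, R2 = R3 = k*R1."""
    R1 = R / (2.0 + 2.0 * k)
    R2 = k * R1
    d_in = 2.0 * R2 / 2 ** n_bits    # narrow central bins
    d_out = 2.0 * R1 / 2 ** n_bits   # wide outer bins
    p_in = math.erf(d_in / (2.0 * math.sqrt(2.0)))
    # first outer bin starts at 2*R2; e pushed to e_max favours it most
    p_out = Phi(2.0 * R2 + d_out - e_max) - Phi(2.0 * R2 - e_max)
    return max(p_in, p_out)

h_uni = -math.log2(max_prob_uniform(3.8, 10))
h_mis = -math.log2(max_prob_mis(3.8, 0.8, 1.0))
print(round(h_uni, 2), round(h_mis, 2))  # 8.4 8.57
```

Even in this simplified setting the unequal-bin layout beats a uniform 10-bit one with the same total number of bins, consistent with the trend reported in Fig. 4.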

Fig. 4. The worst-case ${{P}_{{{M}_{dis}}|E}}\left ( m|e \right )$ for (a) the CM ($n=8$), (b) the CM ($n=10$), and (c) the MIS (${{n}_{eq}}=10$) with $e=\{-10{{\sigma }_{E}},0,10{{\sigma }_{E}}\}$, QCNR = 20 dB, and optimized $R$. In (a), the maximum conditional probability is 0.0104 and the corresponding optimized conditional min-entropy is 6.58 (the optimized dynamical range $R$ is 3.35). In (b), the maximum conditional min-entropy that can be extracted is 8.41 (the optimized dynamical range $R$ is 3.77). In (c), with the optimized total sampling range $R$ of 3.39 and the scale factor $k={{R}_{2}}/{{R}_{1}}$ (${{R}_{1}}={{R}_{4}},{{R}_{2}}={{R}_{3}}$) of 0.8, the maximum conditional probability is 0.00265 and the corresponding conditional min-entropy is 8.56, which is higher than that of the CM with a single 8- or 10-bit ADC.

Table 2 lists the simulated optimized average conditional min-entropy of the CM ($n=8$ and $n=10$) and the MIS (${{n}_{eq}}=10$). The optimized average conditional min-entropy of our MIS is larger than that of the CM with a single 8- or 10-bit ADC. For example, when QCNR = 20 dB, the average conditional min-entropy extractable by the MIS (${{n}_{eq}}=10$) reaches 9.11, which is higher than the 6.93 of the CM ($n=8$), an improvement of about 31.46%. Compared with the average conditional min-entropy of the CM ($n=10$), our MIS (${{n}_{eq}}=10$) still improves the average conditional min-entropy by 4.47%. Therefore, the MIS not only breaks the finite resolution limit of a single ADC but also improves the robustness of the QRNG against potential attacks, and more randomness independent of classical noise can be extracted. Note that the improvement in average min-entropy by the MIS decreases with increasing QCNR. Similar to the worst-case scenario, non-uniform sampling is beneficial to the extraction of randomness in the average-case scenario.

Table 2. Optimized ${{\bar {H}}_{\min }}({{M}_{dis}}|E)$ (and $k$, $R$) for 8-bit and 10-bit ADCs.

4. Experimental implementation and results

To verify the proposed approaches, we experimentally demonstrate the QRNG with both the DBBM and the MIS (DBB + MIS). As shown in Fig. 5(a), the continuous-wave beam from a fiber-coupled laser at 1550 nm acts as the local oscillator (LO) and is injected into one port of the 50:50 fiber beam splitter (BS). The vacuum fluctuations interfere with the LO beam, and the output beams are coupled to a fiber-based BHD and converted into random electric signals. When the LO is physically blocked, the output signal of the BHD is classical electronic noise; when the LO is on, it is a mixture of quantum vacuum fluctuations and classical electronic noise. Figure 5(b) shows the power spectral density of the electric signal with the LO on and blocked. To accurately estimate the QCNR of the output signal of the BHD and reduce the interference of low-frequency signals on the randomness, we select a 200 MHz sideband signal centered around 300 MHz, where the average QCNR is approximately 10 dB.

Fig. 5. (a) Schematic diagram of our QRNG based on quantum vacuum fluctuations. (b) Spectrum of the technical noise and the vacuum fluctuation noise obtained from the balanced homodyne detector (BHD). The spectral range highlighted in orange, with a clearance of 10 dB between the vacuum noise and the technical noise, is used for the generation of random numbers.

The signal generated by the BHD is mixed down at 300 MHz and low-pass filtered with a cutoff frequency of 100 MHz. Using a power splitter, the filtered signal is split into four output signals, which are simultaneously sampled and digitized by a multi-channel high-bandwidth oscilloscope at a sampling rate of 200 MSample/s. For the given number of sampling intervals and digitization resolution in our model, the sampling range of each interval is optimized to improve the maximum conditional min-entropy.

In the data post-processing of DBB + MIS, the data from the boundary bins of each sampling interval are discarded by the data filter unit. The data output from each measurement interval do not include the position information of the sampling intervals, which would result in a data overlap problem and a degradation of the randomness. To resolve this issue, a data mixer unit is constructed, which inserts two flag bits before the most significant bit of each datum according to the position of the corresponding sampling interval. In this way, the data acquired from the different sampling channels are identified and mixed together to generate the raw data. Finally, the random bits are extracted with the Toeplitz extractor based on the conditional min-entropy [43].
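
The flag-bit idea can be sketched as follows (a hypothetical layout for illustration, not the authors' exact bit format: a 2-bit interval index is prepended before the MSB of each $n$-bit sample, so words from different channels never collide):

```python
def tag_sample(sample, interval, n_bits=8):
    """Prepend a 2-bit interval flag before the MSB of an n-bit sample."""
    assert 0 <= sample < 2 ** n_bits and 0 <= interval < 4
    return (interval << n_bits) | sample

# samples 0..255 from interval 2 map to the disjoint range 512..767
print(tag_sample(0, 2), tag_sample(255, 2))  # 512 767
```

Because every interval maps to a disjoint code range, the mixed stream retains the position information needed for entropy accounting.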

In our experiment, we investigate two different digitization resolutions, ${{n}_{1}}={{n}_{2}}={{n}_{3}}={{n}_{4}}=8$ (${{n}_{eq}}=10$) and ${{n}_{1}}={{n}_{2}}={{n}_{3}}={{n}_{4}}=16$ (${{n}_{eq}}=18$). Figure 6 plots the extractable randomness independent of classical noise, in terms of the equivalent conditional min-entropy, for the worst case under confidence intervals of $5{{\sigma }_{E}}\le \left | e \right |\le 20{{\sigma }_{E}}$. For comparison, the optimized worst-case conditional min-entropy for the CM is also shown [42]. As we can see from Figs. 6(a) and 6(b), the DBB + MIS method effectively improves the worst-case conditional min-entropy. For both $n = 10$ and $n = 18$, the improvement of the worst-case conditional min-entropy is more obvious at higher QCNR. We also notice that a higher digitization resolution $n$ is beneficial to the extractable secure randomness.

Fig. 6. The extractable secure randomness of the CM and DBB + MIS as a function of QCNR for (a) 10-bit and (b) 18-bit measurements in the worst case under confidence intervals of $5{{\sigma }_{E}}\le \left | e \right |\le 20{{\sigma }_{E}}$.

Figure 7 depicts the extractable randomness independent of classical noise, in terms of the equivalent conditional min-entropy, for the average case. We find that there exist upper bounds on the extractable secure randomness in both the average case and the worst case in the high-QCNR regime, and the two bounds become approximately equal when QCNR$\to \infty$. Compared with the CM for $n = 8$ ($n = 16$), the upper bound of the conditional min-entropy of the DBB + MIS for ${{n}_{eq}}=10$ (${{n}_{eq}}=18$) increases by approximately 31.30% (19.62%). For the same digitization resolution, the DBB + MIS method still achieves improvements of 4.77% for $n = 10$ and 5.71% for $n = 18$, respectively. The final random bit generation rate in our experiment reaches 3.2 Gbits/s with ${{n}_{eq}}=18$ and QCNR = 10 dB.


Fig. 7. The extractable secure randomness of CM and DBB + MIS in the average-case.


To evaluate the randomness of the extracted bit sequence from our QRNG, we calculate the autocorrelation coefficients of a typical record of $5\times {{10}^{7}}$ bits. As shown in Fig. 8, both the average case and the worst case are plotted for a 10-bit ADC and a QCNR of 10 dB, with a classical-noise confidence interval of $-5{{\delta }_{E}}\le e\le 5{{\delta }_{E}}$. The low autocorrelation coefficients indicate that the extracted bitstream from our QRNG has good randomness and is close to independent and identically distributed.
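The autocorrelation check can be sketched with the standard normalized estimator (variable names are ours):

```python
# Normalized autocorrelation coefficients of a bit sequence, used to check
# that the extracted bitstream is close to i.i.d.
def autocorrelation(bits, max_lag):
    n = len(bits)
    mean = sum(bits) / n
    var = sum((b - mean) ** 2 for b in bits)
    return [
        sum((bits[i] - mean) * (bits[i + k] - mean) for i in range(n - k)) / var
        for k in range(1, max_lag + 1)
    ]

# For truly random bits the coefficients scatter around zero with standard
# deviation about 1/sqrt(N) -- the dashed lines in Fig. 8.
```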


Fig. 8. Autocorrelation coefficients calculated from $5\times {{10}^{7}}$ extracted random bits for (a) the average-case and (b) the worst-case when QCNR = 10 dB and $-5{{\delta }_{E}}\le e\le 5{{\delta }_{E}}$. The theoretical standard deviation of the autocorrelation for truly random $5\times {{10}^{7}}$ bits is shown as dashed lines.


To further verify the randomness, we employ the NIST SP 800-22 suite [47] for statistical testing of the data. A total of 1 Gbit of extracted random numbers is used for the test. As shown in Fig. 9, all tests of the NIST statistical suite are passed, which indicates that the generated random sequence has good statistical characteristics.
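As a flavor of what the suite checks, here is the first test of NIST SP 800-22, the frequency (monobit) test: a sequence passes when the p-value $\operatorname{erfc}(|S_n|/\sqrt{2n})$ exceeds the 0.01 significance level. This is the published test definition, not the evaluation code used in the experiment:

```python
import math

# NIST SP 800-22 frequency (monobit) test: checks that the numbers of ones
# and zeros are statistically balanced.
def monobit_pvalue(bits):
    s = sum(1 if b else -1 for b in bits)      # map 0/1 -> -1/+1 and sum
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))     # two-sided Gaussian tail

def passes_monobit(bits, alpha=0.01):
    return monobit_pvalue(bits) >= alpha
```

A maximally balanced sequence such as `[0, 1] * 5000` gives a p-value of 1 and passes, while an all-zeros sequence fails.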


Fig. 9. The NIST test results.


5. Conclusion

In this work, we propose two novel approaches to improve the extractable randomness, independent of classical noise, of a vacuum-fluctuation-based QRNG in both the worst-case and average-case scenarios. By discarding the boundary bins after discretization of the vacuum fluctuations, we suppress the influence of the boundary-bin conditional probabilities on the conditional min-entropy and enhance the randomness of the output sequence. Since it is challenging to operate a single ADC at both high speed and high resolution, we relieve the bottleneck of its finite resolution by adopting multi-interval sampling. Combined with unequal-bins sampling, the conditional probability distributions of the different sampling intervals are well balanced; we show that unequal-bins sampling provides more randomness independent of classical noise than equal-bins sampling. Finally, we apply the two approaches to a random number generator based on quantum vacuum fluctuations. The secure random bit generation rate of our QRNG reaches 3.2 Gbits/s. Compared with previous QRNGs based on a homodyne measurement of the vacuum state, our methods extract more secure randomness from the output signal of the balanced homodyne detector.

For further study, it will be interesting to apply the proposed protocol to other kinds of QRNGs, such as the source-DI QRNG introduced in Ref. [38], where the bound on the conditional min-entropy of the random numbers is determined by the resolution of the trusted measurement apparatus.

Funding

Key Research and Development Program of Guangdong Province (2020B0303040002); Key Research and Development Projects of Shanxi Province (201803D121065); Shanxi 1331KSC; National Key Research and Development Program of China (2016YFA0301403); National Natural Science Foundation of China (11774209).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. M. Herrero-Collantes and J. C. Garcia-Escartin, “Quantum random number generators,” Rev. Mod. Phys. 89(1), 015004 (2017).

2. X. Ma, X. Yuan, Z. Cao, B. Qi, and Z. Zhang, “Quantum random number generation,” npj Quantum Inf. 2(1), 16021 (2016).

3. N. Wang, S. Du, W. Liu, X. Wang, Y. Li, and K. Peng, “Long-distance continuous-variable quantum key distribution with entangled states,” Phys. Rev. Appl. 10(6), 064028 (2018).

4. E. Diamanti, H.-K. Lo, B. Qi, and Z. Yuan, “Practical challenges in quantum key distribution,” npj Quantum Inf. 2(1), 16025 (2016).

5. F. Xu, X. Ma, Q. Zhang, H.-K. Lo, and J.-W. Pan, “Secure quantum key distribution with realistic devices,” Rev. Mod. Phys. 92(2), 025002 (2020).

6. Y. Zhou, J. Yu, Z. Yan, X. Jia, J. Zhang, C. Xie, and K. Peng, “Quantum secret sharing among four players using multipartite bound entanglement of an optical field,” Phys. Rev. Lett. 121(15), 150502 (2018).

7. S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. L. Pereira, M. Razavi, J. S. Shaari, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, and P. Wallden, “Advances in quantum cryptography,” Adv. Opt. Photonics 12(4), 1012–1236 (2020).

8. H. Li, Z. Yin, S. Wang, Y. Qian, W. Chen, G. Guo, and Z. Han, “Randomness determines practical security of BB84 quantum key distribution,” Sci. Rep. 5(1), 16200 (2015).

9. P. Z. Wieczorek and K. Gołofit, “True random number generator based on flip-flop resolve time instability boosted by random chaotic source,” IEEE Trans. Circuits Syst. I 65(4), 1279–1292 (2018).

10. M. Bucci and R. Luzzi, “Fully digital random bit generators for cryptographic applications,” IEEE Trans. Circuits Syst. I 55(3), 861–875 (2008).

11. B. Xu, Z. Chen, Z. Li, J. Yang, Q. Su, W. Huang, Y. Zhang, and H. Guo, “High speed continuous variable source-independent quantum random number generation,” Quantum Sci. Technol. 4(2), 025013 (2019).

12. M. Fiorentino, C. Santori, S. M. Spillane, R. G. Beausoleil, and W. J. Munro, “Secure self-calibrating quantum random-bit generator,” Phys. Rev. A 75(3), 032334 (2007).

13. Q. Zhang, X. Deng, C. Tian, and X. Su, “Quantum random number generator based on twin beams,” Opt. Lett. 42(5), 895–898 (2017).

14. C. S. Calude, M. J. Dinneen, M. Dumitrescu, and K. Svozil, “Experimental evidence of quantum randomness incomputability,” Phys. Rev. A 82(2), 022102 (2010).

15. C. Gabriel, C. Wittmann, D. Sych, R. Dong, W. Mauerer, U. L. Andersen, C. Marquardt, and G. Leuchs, “A generator for unique quantum random numbers based on vacuum states,” Nat. Photonics 4(10), 711–715 (2010).

16. Z. Zheng, Y. Zhang, M. Huang, Z. Chen, S. Yu, and H. Guo, “Bias-free source-independent quantum random number generator,” Opt. Express 28(15), 22388–22398 (2020).

17. Y. Shi, B. Chng, and C. Kurtsiefer, “Random numbers from vacuum fluctuations,” Appl. Phys. Lett. 109(4), 041101 (2016).

18. D. G. Marangon, G. Vallone, and P. Villoresi, “Source-device-independent ultrafast quantum random number generation,” Phys. Rev. Lett. 118(6), 060503 (2017).

19. F. Xu, B. Qi, X. Ma, H. Xu, H. Zheng, and H.-K. Lo, “Ultrafast quantum random number generation based on quantum phase fluctuations,” Opt. Express 20(11), 12366–12377 (2012).

20. C. Abellán, W. Amaya, M. Jofre, M. Curty, A. Acín, J. Capmany, V. Pruneri, and M. W. Mitchell, “Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode,” Opt. Express 22(2), 1645–1654 (2014).

21. J. Yang, J. Liu, Q. Su, Z. Li, F. Fan, B. Xu, and H. Guo, “5.4 Gbps real time quantum random number generator with simple implementation,” Opt. Express 24(24), 27475–27481 (2016).

22. P. J. Bustard, D. G. England, J. Nunn, D. Moffatt, M. Spanner, R. Lausten, and B. J. Sussman, “Quantum random bit generation using energy fluctuations in stimulated raman scattering,” Opt. Express 21(24), 29350–29357 (2013).

23. Y. Nie, H. Zhang, Z. Zhang, J. Wang, X. Ma, J. Zhang, and J. Pan, “Practical and fast quantum random number generation based on photon arrival time relative to external reference,” Appl. Phys. Lett. 104(5), 051110 (2014).

24. M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, and O. Benson, “An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements,” Appl. Phys. Lett. 98(17), 171105 (2011).

25. L. Nguyen, P. Rehain, Y. M. Sua, and Y.-P. Huang, “Programmable quantum random number generator without postprocessing,” Opt. Lett. 43(4), 631–634 (2018).

26. P. Bierhorst, E. Knill, S. Glancy, Y. Zhang, A. Mink, S. Jordan, A. Rommal, Y.-K. Liu, B. Christensen, S. W. Nam, M. J. Stevens, and L. K. Shalm, “Experimentally generated randomness certified by the impossibility of superluminal signals,” Nature 556(7700), 223–226 (2018).

27. T. Jennewein, U. Achleitner, G. Weihs, H. Weinfurter, and A. Zeilinger, “A fast and compact quantum random number generator,” Rev. Sci. Instrum. 71(4), 1675–1680 (2000).

28. M. R. Coleman, K. G. Ingalls, J. T. Kavulich, S. J. Kemmerly, N. C. Salinas, E. V. Ramirez, and M. Schlosshauer, “Parity-based, bias-free optical quantum random number generation with min-entropy estimation,” J. Opt. Soc. Am. B 37(7), 2088–2094 (2020).

29. H. Fürst, H. Weier, S. Nauerth, D. G. Marangon, C. Kurtsiefer, and H. Weinfurter, “High speed optical quantum random number generation,” Opt. Express 18(12), 13029–13037 (2010).

30. T. Lunghi, J. B. Brask, C. C. W. Lim, Q. Lavigne, J. Bowles, A. Martin, H. Zbinden, and N. Brunner, “Self-testing quantum random number generator,” Phys. Rev. Lett. 114(15), 150501 (2015).

31. Y. Liu, X. Yuan, M. Li, W. Zhang, Q. Zhao, J. Zhong, Y. Cao, Y. Li, L. Chen, H. Li, T. Peng, Y. Chen, C. Peng, S. Shi, Z. Wang, L. You, X. Ma, J. Fan, Q. Zhang, and J. Pan, “High-speed device-independent quantum random number generation without a detection loophole,” Phys. Rev. Lett. 120(1), 010503 (2018).

32. P. R. Smith, D. G. Marangon, M. Lucamarini, Z. L. Yuan, and A. J. Shields, “Simple source device-independent continuous-variable quantum random number generator,” Phys. Rev. A 99(6), 062326 (2019).

33. B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K. Shalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. Lim, N. Gisin, and P. G. Kwiat, “Detection-loophole-free test of quantum nonlocality, and applications,” Phys. Rev. Lett. 111(13), 130406 (2013).

34. Z. Cao, H. Zhou, X. Yuan, and X. Ma, “Source-independent quantum random number generation,” Phys. Rev. X 6(1), 011020 (2016).

35. P. Mironowicz, G. Cañas, J. Cariñe, E. S. Gómez, J. F. Barra, A. Cabello, G. B. Xavier, G. Lima, and M. Pawłowski, “Quantum randomness protected against detection loophole attacks,” Quantum Inf. Process. 20(1), 39 (2021).

36. J. B. Brask, A. Martin, W. Esposito, R. Houlmann, J. Bowles, H. Zbinden, and N. Brunner, “Megahertz-rate semi-device-independent quantum random number generators based on unambiguous state discrimination,” Phys. Rev. Appl. 7(5), 054018 (2017).

37. D. Rusca, T. van Himbeeck, A. Martin, J. B. Brask, W. Shi, S. Pironio, N. Brunner, and H. Zbinden, “Self-testing quantum random-number generator based on an energy bound,” Phys. Rev. A 100(6), 062338 (2019).

38. M. Avesani, D. G. Marangon, G. Vallone, and P. Villoresi, “Source-device-independent heterodyne-based quantum random number generator at 17 Gbps,” Nat. Commun. 9(1), 5365 (2018).

39. F. Xu, J. H. Shapiro, and F. N. C. Wong, “Experimental fast quantum random number generation using high-dimensional entanglement with entropy monitoring,” Optica 3(11), 1266–1269 (2016).

40. T. Michel, J. Y. Haw, D. G. Marangon, O. Thearle, G. Vallone, P. Villoresi, P. K. Lam, and S. M. Assad, “Real-time source-independent quantum random-number generator with squeezed states,” Phys. Rev. Appl. 12(3), 034017 (2019).

41. Y. Shen, L. Tian, and H. Zou, “Practical quantum random number generator based on measuring the shot noise of vacuum states,” Phys. Rev. A 81(6), 063814 (2010).

42. J. Y. Haw, S. M. Assad, A. M. Lance, N. H. Y. Ng, V. Sharma, P. K. Lam, and T. Symul, “Maximization of extractable randomness in a quantum random-number generator,” Phys. Rev. Appl. 3(5), 054004 (2015).

43. X. Ma, F. Xu, H. Xu, X. Tan, B. Qi, and H.-K. Lo, “Postprocessing for quantum random-number generators: Entropy evaluation and randomness extraction,” Phys. Rev. A 87(6), 062327 (2013).

44. W. Huang, Y. Zhang, Z. Zheng, Y. Li, B. Xu, and S. Yu, “Practical security analysis of a continuous-variable quantum random-number generator with a noisy local oscillator,” Phys. Rev. A 102(1), 012422 (2020).

45. R. Konig, R. Renner, and C. Schaffner, “The operational meaning of min- and max-entropy,” IEEE Trans. Inf. Theory 55(9), 4337–4347 (2009).

46. M. Tomamichel, C. Schaffner, A. Smith, and R. Renner, “Leftover hashing against quantum side information,” IEEE Trans. Inf. Theory 57(8), 5524–5535 (2011).

47. E. Barker and J. Kelsey, “Recommendation for the entropy sources used for random bit generation,” NIST Draft Special Publication 800-90B, second draft (2016).



Figures (9)

Fig. 1. Discarding-boundary-bin measurement model. The output signal of the BHD is discretized into $2^{n}$ sampling bins by an $n$-bit ADC with dynamical range $[-R-\delta /2,R-\delta /2]$. The bin width and discrete measurement outcomes are $\delta =R/2^{n-1}$ and $m_{i}=\delta \times i$. The measured data distributed in the boundary bins (shaded) are discarded. Therefore, the first bin $i_{L}$ and the last bin $i_{H}$ of the remaining bins are centered around $-R+\delta$ and $R-2\delta$, respectively.

Fig. 2. The discretized worst-case conditional probability distribution $P_{M_{dis}|E}(m_{i}|e)$ with optimized dynamical range $R$ for (a) CM and (b) DBBM. The parameters used are $e=\{-10\delta _{E},0,10\delta _{E}\}$ (from left to right), QCNR = 20 dB, and $n = 16$. Given the optimized dynamic range $R=4.85$, the maximum conditional probability of the CM is $5.91\times 10^{-5}$ and the corresponding conditional min-entropy is 14.05. Given the optimized dynamic range $R=3.3$, the maximum conditional probability of the DBBM is $4.05\times 10^{-5}$, and the optimized equivalent conditional min-entropy is 14.43.

Fig. 3. The multi-interval sampling model. The output signal of the BHD is divided into $L$ measurement ranges (in our experiment $L=4$) and sampled synchronously by $L$ parallel ADCs with the total sampling range $2R=\sum _{w=1}^{L}2R_{w}$. The total equivalent ADC resolution is $2^{n_{eq}}=\sum _{w=1}^{L}2^{n_{w}}$, where $n_{w}$ is the ADC resolution of each channel.

Fig. 4. The worst-case $P_{M_{dis}|E}\left ( m|e \right )$ for (a) the CM ($n=8$), (b) the CM ($n=10$), and (c) the MIS ($n_{eq}=10$) with $e=\{-10\delta _{E},0,10\delta _{E}\}$, QCNR = 20 dB, and optimized $R$. In (a), the maximum conditional probability is 0.0104 and the corresponding optimized conditional min-entropy is 6.58 (the optimized dynamical range $R$ is 3.35). In (b), the optimized maximum conditional probability reaches 0.0104 and the maximum conditional min-entropy that can be extracted is 8.41 (the optimized dynamical range $R$ is 3.77). In (c), with the optimized total sampling range $R$ of 3.39 and the scale factor $k=R_{2}/R_{1}$ ($R_{1}=R_{4},R_{2}=R_{3}$) of 0.8, the maximum conditional probability is 0.00265 and the corresponding conditional min-entropy is 8.56, which is higher than that of the CM with a single 8- or 10-bit ADC.

Fig. 5. (a) Schematic diagram of our QRNG based on quantum vacuum fluctuations. (b) Spectrum of the technical noise and vacuum fluctuation noise obtained from the balanced homodyne detector (BHD). The spectrum range highlighted in orange, with a clearance of 10 dB between the vacuum noise and the technical noise, is used for the generation of random numbers.

Tables (2)

Table 1. Optimized $\bar{H}_{\min }(M_{dis}|E)$ (and $R$) for 8-bit and 16-bit ADCs.

Table 2. Optimized $\bar{H}_{\min }(M_{dis}|E)$ (and $k$, $R$) for $n$ = 8-bit and 10-bit.

Equations (23)


$$H_{\min }(X)=-\log _{2}\Big[\max _{x_{i}\in X}P_{X}(x_{i})\Big],$$

$$\tilde{H}_{\min }(X|E)=-\log _{2}\Big[\max _{e_{j}\in [e_{\min },e_{\max }]}\max _{x_{i}\in X}P_{X|E}(x_{i}|e_{j})\Big].$$

$$\bar{H}_{\min }(X|E)=-\log _{2}\big[P_{guess}(X_{dis}|E)\big],$$

$$P_{guess}(X_{dis}|E)=\int P_{E}(e)\max _{x_{i}\in X_{dis}}P_{X_{dis}|E}(x_{i}|e)\,de.$$

$$p_{Q}(q)=\frac{1}{\sqrt{2\pi }\sigma _{Q}}\exp \Big[-\frac{q^{2}}{2\sigma _{Q}^{2}}\Big],\quad p_{E}(e)=\frac{1}{\sqrt{2\pi }\sigma _{E}}\exp \Big[-\frac{e^{2}}{2\sigma _{E}^{2}}\Big].$$

$$p_{M}(m)=\frac{1}{\sqrt{2\pi }\sigma _{M}}\exp \Big[-\frac{m^{2}}{2\sigma _{M}^{2}}\Big]=\frac{1}{\sqrt{2\pi (\sigma _{Q}^{2}+\sigma _{E}^{2})}}\exp \Big[-\frac{m^{2}}{2(\sigma _{Q}^{2}+\sigma _{E}^{2})}\Big].$$

$$p_{M|E}(m|e)=\frac{1}{\sqrt{2\pi (\sigma _{M}^{2}-\sigma _{E}^{2})}}\exp \Big[-\frac{(m-e)^{2}}{2(\sigma _{M}^{2}-\sigma _{E}^{2})}\Big]=\frac{1}{\sqrt{2\pi }\sigma _{Q}}\exp \Big[-\frac{(m-e)^{2}}{2\sigma _{Q}^{2}}\Big].$$

$$P_{M_{dis}|E}(m_{i}|e)=\begin{cases}\int _{-\infty }^{-R+\delta /2}p_{M|E}(m|e)\,dm,& i=i_{\min },\\[2pt] \int _{m_{i}-\delta /2}^{m_{i}+\delta /2}p_{M|E}(m|e)\,dm,& i_{\min }+1\le i\le i_{\max }-1,\\[2pt] \int _{R-3\delta /2}^{\infty }p_{M|E}(m|e)\,dm,& i=i_{\max }.\end{cases}$$

$$\tilde{H}_{\min }(M_{dis}|E)=-\log _{2}\big[\max (c_{1},c_{2})\big],$$

$$P_{M_{dis}|E}(m_{i}|e)=\int _{m_{i}-\delta /2}^{m_{i}+\delta /2}p_{M|E}(m|e)\,dm,\quad i_{L}\le i\le i_{H},$$

$$P_{T}(M_{dis}|e)=1-P_{M_{dis}|E}(m_{i_{\max }}|e)-P_{M_{dis}|E}(m_{i_{\min }}|e),$$

$$P_{M_{dis}|E}^{T}(m_{i}|e)=P_{M_{dis}|E}(m_{i}|e)/P_{T}(M_{dis}|e).$$

$$\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{T}(m_{i}|e)=\max \Big\{P_{M_{dis}|E}^{T}(m_{i_{mid}}|e),\,P_{M_{dis}|E}^{T}(m_{i_{L}}|e),\,P_{M_{dis}|E}^{T}(m_{i_{H}}|e)\Big\},$$

$$\max _{e\in [e_{\min },e_{\max }]}\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{T}(m_{i}|e)=\max \Big\{\max _{e\in [e_{\min },e_{\max }]}P_{M_{dis}|E}^{T}(m_{i_{mid}}|e),\,\max _{e\in [e_{\min },e_{\max }]}P_{M_{dis}|E}^{T}(m_{i_{L}}|e),\,\max _{e\in [e_{\min },e_{\max }]}P_{M_{dis}|E}^{T}(m_{i_{H}}|e)\Big\}.$$

$$\tilde{H}_{eq\text{-}\min }^{T}(M_{dis}|E)=\tilde{H}_{\min }^{T}(M_{dis}|E)\times \tilde{P}_{T}(M_{dis}|E),$$

$$\tilde{H}_{\min }^{T}(M_{dis}|E)=-\log _{2}\Big\{\max _{e\in [e_{\min },e_{\max }]}\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{T}(m_{i}|e)\Big\}.$$

$$\bar{H}_{eq\text{-}\min }^{T}(M_{dis}|E)=\bar{H}_{\min }^{T}(M_{dis}|E)\times \bar{P}_{T}(M_{dis}|E),$$

$$\bar{P}_{T}(M_{dis}|E)=\int _{-\infty }^{+\infty }p_{E}(e)\,P_{T}(M_{dis}|e)\,de,$$

$$\bar{H}_{\min }^{T}(M_{dis}|E)=-\log _{2}\Big[\int _{-\infty }^{+\infty }p_{E}(e)\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{T}(m_{i}|e)\,de\Big].$$

$$P_{M_{dis}|E}^{1}(m_{i}|e)=\begin{cases}\int _{-\infty }^{-R+\delta /2}p_{M|E}(m|e)\,dm,& i=i_{\min }^{1},\\[2pt] \int _{m_{i}^{1}-\delta /2}^{m_{i}^{1}+\delta /2}p_{M|E}(m|e)\,dm,& i_{\min }^{1}<i<i_{\max }^{1},\\[2pt] 0,& i=i_{\max }^{1}.\end{cases}$$

$$P_{M_{dis}|E}^{L}(m_{i}|e)=\begin{cases}0,& i=i_{\min }^{L},\\[2pt] \int _{m_{i}^{L}-\delta /2}^{m_{i}^{L}+\delta /2}p_{M|E}(m|e)\,dm,& i_{\min }^{L}<i<i_{\max }^{L},\\[2pt] \int _{R-3\delta /2}^{+\infty }p_{M|E}(m|e)\,dm,& i=i_{\max }^{L}.\end{cases}$$

$$\tilde{H}_{\min }(M_{dis}|E)=-\log _{2}\Big[\max _{w\in [1,L]}\max _{e\in [e_{\min },e_{\max }]}\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{w}(m_{i}|e)\Big],$$

$$\bar{H}_{\min }(M_{dis}|E)=-\log _{2}\Big[\int _{-\infty }^{+\infty }p_{E}(e)\max _{w\in [1,L]}\max _{m_{i}\in M_{dis}}P_{M_{dis}|E}^{w}(m_{i}|e)\,de\Big].$$