Optical authentication scheme based on all-optical neural network

Open Access

Abstract

The diffractive deep neural network is an architecture built on the principles of neural networks: it consists of multiple diffraction layers and can perform machine learning tasks at the speed of light. In this paper, a novel optical authentication system based on the diffractive deep neural network is presented. By manipulating a light beam with both a public key and a private key, a unique and secure image representation is generated at a precise distance. The generated image can then be authenticated by passing it through the proposed authentication system. Because the system uses invisible terahertz light, it is inherently concealed and offers enhanced security. Moreover, the entire authentication process operates solely by manipulating the light beam, eliminating the need for electronic computation, so authentication is fast. The proposed optical authentication scheme is validated through computer simulations, which demonstrate its robust security and high precision. This method holds considerable potential for applications in optical neural network authentication.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Given the rapid advancement of information technology, safeguarding digital information through information security technology has attracted increasing attention. In 1995, the introduction of double random phase encoding (DRPE) [1] prompted researchers to explore the distinct advantages of optical image encryption and authentication technology for information security. Building upon this foundation, Suzuki et al. integrated DRPE into an authentication system by implementing a random phase mode, with favorable outcomes [2]. Nevertheless, further research revealed that DRPE, being a linear process, lacks sufficient security against certain types of attack [3–5]. Consequently, alternative optical authentication technologies have emerged, such as multi-factor optical encryption authentication (MOEA) [6–9], asymmetric cryptographic systems based on phase-truncated Fourier transforms [10–12], optical near-field authentication schemes [13–15], ghost-imaging-based authentication schemes [16–18], and single-pixel-imaging-based optical authentication schemes [19,20]. These advancements reflect the ongoing pursuit of more robust and effective optical authentication methods in information security.

In recent years, artificial neural networks have advanced significantly thanks to rapid progress in computer performance. These networks have found wide application in fields such as natural language processing [21], target detection [22], and medical diagnosis [23]. The powerful data-learning capability of neural networks has also led to their use in optical image encryption systems [24–28]. However, as the complexity of neural network structures and the number of parameters have increased, traditional electronic computing struggles to meet the escalating demand for neural network computing resources. As a result, optical computing systems have gained popularity among researchers for their low power consumption and ability to process data with light beams. The exploration of metamaterials has opened up possibilities for optical computing chips based on silicon photonics [29], which may overcome the limitations of traditional computing components and introduce novel concepts. However, current photonic neural networks are primarily capable of basic matrix computations [30]. In 2018, Lin et al. presented a diffractive deep neural network model [31] and demonstrated its capability to process intricate optical images and effectively classify the MNIST handwritten digit dataset [33]. This research demonstrates that optical methods can efficiently handle complex neural network computations. Their diffractive deep neural network is composed of five layers of passive diffractive structures, whose layer data are obtained by computer computation and then transformed into physical structures using 3D printing. Upon its publication, the diffractive deep neural network garnered widespread attention and became a prominent research topic. It has since achieved significant advances in integrated photonic circuits [32] and has been extended into the Fourier domain [34]. Moreover, the concept of convolution operations has been introduced [35]. To cater to the diverse requirements of different tasks, spatial light modulators have been proposed [36]; these modulators can adjust diffraction modulation parameters based on specific task requirements.

This paper introduces an optical image authentication technique based on a diffractive deep neural network. The process begins by selecting a group of images with similar characteristics to form the database. A random phase plate matching the size of the images is designated as the public key, and a series of private keys is generated through calculation, each key corresponding to an image in the database. When the public and private keys are arranged in a specific configuration, a beam passing through them generates, at a specific location, an image that matches one in the database. Using terahertz light keeps the resulting image invisible, enhancing the system's security and making it difficult for potential attackers to detect. The network is then trained so that image beams with the above characteristics at its input layer converge toward a specific point on its output layer. An optical sensor is positioned at this point, and authentication succeeds when the sensor detects a sufficient light intensity. This study incorporates the in situ propagation algorithm introduced by Zhou et al. [37] into the diffractive deep neural network (D2NN) framework and uses the adaptive moment estimation (Adam) optimizer [38] during training. Thanks to these enhancements, training is faster and the overall accuracy surpasses that of other methods.

2. Basic principles of the authentication system

Figure 1 shows the device for optical authentication. The encryption system consists of a public key, a private key, and a diffractive deep neural network that performs the authentication operation. The public key is openly available as ciphertext, while the private key used for authentication is provided only to authorized individuals. The authentication operation of the diffractive deep neural network uses both the public and private keys. First, a set of images with identical characteristics is selected to undergo authentication; in this study, a series of handwritten digit "1" images is selected from the MNIST handwritten digit database. The pixel dimensions of each authenticated image are 28 × 28. A public key R(x, y) is then generated using a 28 × 28 pixel random phase plate. The crucial element of authentication, the private key, is generated from the authentication image and the public key, with the same pixel dimensions as the public key. A terahertz laser emits a beam of light; after the beam passes through the specifically arranged public and private keys, the authentication image is formed at a predetermined distance and input to the authentication D2NN. For every authentication image Ik(ξ, η), the corresponding private key Rk(u, v) satisfies the following relationship:

$${I_k}(\xi ,\eta ) = |{FS{T_{{D_2}}}} \{ FS{T_{{D_1}}}[R(x,y)]{R_k}(u,v) \} |, $$
where FST denotes Fresnel diffraction, D1 and D2 are the Fresnel diffraction distances, and k is the serial number of the private key. The private key is generated by computer calculation; the specific steps are as follows (a code sketch of the full loop appears after the list):
  • (1) A complex amplitude field with randomly generated phase and amplitude (φ, ρ) is taken as the initial beam information; t is set to 1, and an appropriate number of iterations N and a target value R of the mean square error are set.
  • (2) Since the public key and diffraction distance have been determined, the optical field before reaching the private key is fixed. Calculate the optical field U(u, v) and record the amplitude distribution A(u, v) and phase distribution P1(u, v), which can be expressed as:
    $$U(u,v) = FS{T_{{D_1}}}[{{e^{j\varphi }}\ast R(x,y)} ], $$
    $$A(u,v) = |{U(u,v)} |, $$
    $${P_1}(u,v) = \arg [{U(u,v)} ], $$
    where “arg” denotes the phase extraction operation.
  • (3) The beam is propagated forward to the input layer of the D2NN; the complex amplitude of the optical field is calculated by Eq. (5), and the root mean square error (RMSE) between the beam amplitude and the authentication image is calculated by Eq. (6).
    $${U_t}(\xi ,\eta ) = FS{T_{{D_2}}}[{U(u,v)} ], $$
    $$RMSE({U_t},{I_k})\textrm{ = }\sqrt {\frac{1}{{mn}}\sum\limits_{p = 1}^n {\sum\limits_{q = 1}^m {{{|{|{{U_t}(p,q)} |- {I_k}(p,q)} |}^2}} } }, $$
    where m and n are the pixel dimensions of the matrices Ut and Ik.
  • (4) Writing $U_t=\rho_t e^{j\varphi_t}$, apply the constraint of the D2NN input layer: keep the beam phase unchanged and replace the amplitude at the D2NN input layer with the authentication image, as shown in Eq. (7).
    $${U^{\prime}_t} = {I_k}\ast \frac{{{\rho _t}{e^{j{\varphi _t}}}}}{{|{{\rho_t}{e^{j{\varphi_t}}}} |}}\textrm{ = }{I_k}{e^{j{\varphi _t}}}. $$
  • (5) Propagate the beam to the private key position in reverse, and output the beam phase distribution, as shown in Eq. (8).
    $${P_2}(u,v) = \arg \{{IFS{T_{{D_2}}}[{{{U^{\prime}}_t}(\xi ,\eta )} ]} \}. $$
  • (6) If the RMSE is less than R or t is greater than N, the iteration stops and the private key Rk(u, v) is calculated using Eq. (9); otherwise, the iteration continues.
    $${R_k}(u,v) = {P_2}(u,v) + 2\pi - {P_1}(u,v). $$
  • (7) Apply the constraint at the private-key plane: keep the beam phase unchanged and replace the amplitude at the private-key plane with A(u, v), as shown in Eq. (10).
    $$U(u,v) = A(u,v){e^{j{P_2}(u,v)}}. $$
  • (8) Set t = t + 1 and return to step (3).
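To make steps (1)–(8) concrete, the private-key generation loop can be sketched in NumPy. This is a minimal illustration under our own assumptions: Fresnel propagation is realized with a transfer-function propagator, the public key is treated as a pure phase map following Eq. (2), and all names (fresnel, make_private_key, R_target, and so on) are ours rather than the paper's.

```python
import numpy as np

def fresnel(u, d, wavelength, dx):
    """Fresnel propagation over distance d (transfer-function form); d < 0 gives the inverse transform IFST."""
    fx = np.fft.fftfreq(u.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * d / wavelength) \
        * np.exp(-1j * np.pi * wavelength * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def make_private_key(I_k, R_pub, D1, D2, wavelength, dx, N=500, R_target=1e-3):
    """Iterative phase retrieval for the private key R_k, following steps (1)-(8)."""
    n = I_k.shape[0]
    rng = np.random.default_rng(0)
    # (1) random initial beam phase (Eq. (2) uses a phase-only initial field)
    u0 = np.exp(1j * 2 * np.pi * rng.random((n, n)))
    # (2) the field reaching the private-key plane is fixed by the public key, Eqs. (2)-(4)
    U = fresnel(u0 * np.exp(1j * R_pub), D1, wavelength, dx)
    A, P1 = np.abs(U), np.angle(U)
    for t in range(1, N + 1):
        # (3) propagate to the D2NN input plane, Eq. (5), and evaluate the RMSE, Eq. (6)
        Ut = fresnel(U, D2, wavelength, dx)
        rmse = np.sqrt(np.mean((np.abs(Ut) - I_k) ** 2))
        # (4) input-plane constraint: keep the phase, impose the image amplitude, Eq. (7)
        Ut = I_k * np.exp(1j * np.angle(Ut))
        # (5) back-propagate to the private-key plane, Eq. (8)
        P2 = np.angle(fresnel(Ut, -D2, wavelength, dx))
        # (6) stop when converged; Eq. (9) then gives the key phase
        if rmse < R_target:
            break
        # (7)-(8) private-key-plane constraint, Eq. (10), then next iteration
        U = A * np.exp(1j * P2)
    return np.mod(P2 + 2 * np.pi - P1, 2 * np.pi)
```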

Fig. 1. Installation diagram of the authentication scheme. Phase plate 1 is the public key, generated from a random phase plate. Phase plate 2 is the private key, generated from the authentication image and the public key.

Following the preceding steps, a private key is generated from a randomly generated public key. Figure 2 shows the public key used by the optical authentication scheme, generated programmatically. Figure 3 shows the private keys generated from the public key and the corresponding images to be authenticated.

Fig. 2. Public key used by the authentication scheme.

Fig. 3. The private keys generated from the public key and their corresponding images to be authenticated. (a)-(c) are the private keys; (d)-(f) are the images to be authenticated, generated by (a)-(c) respectively.

Figure 4 represents a fully connected neural network, and Fig. 5 depicts a diffractive deep neural network built on a similar principle. Although they share similar principles, the fully connected neural network suffers slower computation when run on electronic computers, owing to the separation of storage and computation and to poor parallelism. In contrast, the diffractive deep neural network circumvents the limitations of electronic computing platforms, offering greater neuronal capacity, lower energy consumption, and more convenient deployment. The D2NN is composed of multiple diffraction layers, with each neuron in one layer diffracting onto every neuron in the next layer, with a strength that depends on the phase; this paper therefore adopts a phase-only modulated D2NN architecture. According to Huygens' principle, every point on a spherical wavefront is a source of a secondary spherical wavelet, so every neuron can be regarded as a wavelet source. The Rayleigh–Sommerfeld diffraction equation is used to describe the wavelet generated by the neuron with index i at position (xi, yi, zi) in layer l:

$$U_l^i(x,y,z) = M_l^i({x_i},{y_i},{z_i})\ast W_l^i(x,y,z)\ast \sum\nolimits_n {U_{l - 1}^n} ({x_i},{y_i},{z_i}), $$

Here $M_l^i({x_i},{y_i},{z_i})$ represents the modulation applied by the neuron, i.e., the phase change imparted to the signal; $W_l^i(x,y,z)$ represents the Rayleigh–Sommerfeld diffraction of the neuron; and $\sum\nolimits_n {U_{l - 1}^n} ({x_i},{y_i},{z_i})$ represents the sum of the inputs from all neurons in the preceding layer, which is the input to the neuron. These quantities can be written as:

$$M_l^i({x_i},{y_i},{z_i})\textrm{ = }{e^{j\varphi _l^i({x_i},{y_i},{z_i})}}, $$
$$W_l^i(x,y,z)\textrm{ = }\frac{{z - {z_i}}}{{{r^2}}}(\frac{1}{{2\pi r}} + \frac{1}{{j\lambda }}){e^{\frac{{j2\pi r}}{\lambda }}}, $$
$$r = \sqrt {{{({x - {x_i}} )}^2} + {{({y - {y_i}} )}^2} + {{({z - {z_i}} )}^2}}, $$
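As an illustration, the diffraction weight of Eqs. (13) and (14) maps directly onto a short function; a minimal NumPy sketch, with our own function name and argument order:

```python
import numpy as np

def rs_weight(x, y, z, xi, yi, zi, wavelength):
    """Rayleigh-Sommerfeld coupling W_l^i from a neuron at (xi, yi, zi) to the point (x, y, z)."""
    r = np.sqrt((x - xi)**2 + (y - yi)**2 + (z - zi)**2)               # Eq. (14)
    return (z - zi) / r**2 * (1 / (2 * np.pi * r) + 1 / (1j * wavelength)) \
           * np.exp(1j * 2 * np.pi * r / wavelength)                   # Eq. (13)
```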

Fig. 4. Fully connected neural network model.

Fig. 5. D2NN model.

For an L-layer D2NN, the optical field on the output surface can be expressed as:

$$U_L^i = \sum\nolimits_i {W_L^i(\prod\limits_{n = L - 1}^1 {W_n^iM_n^i} )U_0^i}, $$
where $U_0^i$ represents the complex value of the i-th element of the input optical field matrix.
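Building on the weight function above, Eq. (15) amounts to alternating dense diffraction matrices with elementwise phase masks. The sketch below uses our own naming; fields are flattened to vectors, and building the dense matrix this way is only practical for small grids.

```python
import numpy as np  # rs_weight is the function from the previous sketch

def propagation_matrix(coords_in, coords_out, wavelength):
    """Dense layer-to-layer coupling matrix whose entries are the weights of Eq. (13)."""
    W = np.empty((len(coords_out), len(coords_in)), dtype=complex)
    for a, (x, y, z) in enumerate(coords_out):
        for b, (xi, yi, zi) in enumerate(coords_in):
            W[a, b] = rs_weight(x, y, z, xi, yi, zi, wavelength)
    return W

def d2nn_forward(u0, Ws, phases):
    """Eq. (15): L diffraction matrices Ws and L-1 phase masks phi; u0 is the flattened input field."""
    u = u0
    for W, phi in zip(Ws[:-1], phases):
        u = np.exp(1j * phi) * (W @ u)   # diffract to the layer, then phase-modulate
    return Ws[-1] @ u                    # final diffraction to the output surface
```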

The modulation parameters, in general both amplitude and phase, are then trained to complete a specific task; in this paper only the phase is adjusted, to simplify the system. Each neuron in each layer is iteratively adjusted by the algorithm so that the output meets expectations. This is formulated as an optimization problem: the error function is minimized by adjusting each neuron's parameter. The adaptive moment estimation (Adam) algorithm is used to solve this problem; it automatically adapts the step size and copes with noisy gradients. The specific steps are as follows (a code sketch of one update appears after the list).

  • (1) Taking the RMSE as the loss function, set t = 1 and calculate the error between the output-surface optical field UL and the target optical field intensity G,
    $$L({U_L},G)\textrm{ = }\sqrt {\frac{1}{n}\sum\nolimits_n {({{|{U_L^n} |}^2}} - {G^n}{)^2}}, $$
    then the optimization problem required in the solution can be expressed as:
    $$\mathop {\min }\limits_{\varphi _l^i \in [{0,} 2\pi )} L({U_L},G). $$
  • (2) Use Eq. (18) to calculate the partial derivative gt of the loss function with respect to the phase modulation value of each neuron, and judge whether the error function has converged. If it has, output the parameters; otherwise, continue.
    $${g_t}\textrm{ = }\frac{{\partial L}}{{\partial \varphi _l^i}}. $$
  • (3) Equation (19) is used to calculate the first-moment estimate of the gradient at step t in momentum form, where β1 is the exponential decay rate of the first-moment estimate and m0 = 0 is the initial value,
    $${m_t} = {\beta _1}{m_{t - 1}} + (1 - {\beta _1}){g_t}. $$
  • (4) Equation (20) is used to calculate the second-moment estimate of the gradient in momentum form, where β2 is the exponential decay rate of the second-moment estimate and v0 = 0 is the initial value,
    $${v_t} = {\beta _2}{v_{t - 1}} + (1 - {\beta _2}){g^2}_t. $$
  • (5) Equation (21) is used to correct the bias of the first-moment estimate,
    $${\hat{m}_t} = \frac{{{m_t}}}{{1 - {\beta _1}^t}}. $$
  • (6) Equation (22) is used to correct the bias of the second-moment estimate,
    $${\hat{v}_t} = \frac{{{v_t}}}{{1 - {\beta _2}^t}}. $$
  • (7) Update the parameters, namely the phase modulation value of each neuron, using Eq. (23), and set t = t + 1; here η is the step size and ε is a small constant that prevents the denominator from being zero,
    $$\varphi _l^i = \varphi _l^i - \frac{\eta }{{\sqrt {{{\hat{v}}_t} + \varepsilon } }}{\hat{m}_t}. $$
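A minimal NumPy sketch of one Adam update of the phase map, following Eqs. (19)–(23); the hyperparameter defaults shown are common choices, not values stated in the paper:

```python
import numpy as np

def adam_step(phi, g, m, v, t, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of the neuron phases phi given the gradient g of Eq. (18)."""
    m = beta1 * m + (1 - beta1) * g                   # Eq. (19): first-moment estimate
    v = beta2 * v + (1 - beta2) * g**2                # Eq. (20): second-moment estimate
    m_hat = m / (1 - beta1**t)                        # Eq. (21): bias-corrected first moment
    v_hat = v / (1 - beta2**t)                        # Eq. (22): bias-corrected second moment
    phi = phi - eta * m_hat / np.sqrt(v_hat + eps)    # Eq. (23): parameter update
    return np.mod(phi, 2 * np.pi), m, v               # keep phases in [0, 2*pi)
```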

3. Authentication results and analysis

This scheme uses MATLAB 2020 to calculate the D2NN parameters; training was performed on a computer with 8 GB of RAM and an NVIDIA GTX 1050 graphics card. A diffractive deep neural network with four diffraction layers is designed to complete the authentication task. A 0.4 THz beam is used for illumination, and the maximum half-cone diffraction angle can be calculated from the following equation:

$${\varphi _{\max }} = {\sin ^{ - 1}}(\lambda {f_{\max }}) = {\sin ^{ - 1}}(\frac{\lambda }{{2{d_f}}}). $$
where fmax is the maximum spatial frequency and df is the size of the diffraction unit. The side length of each diffractive neuron is therefore set to 400 µm, each network layer measures 8 cm × 8 cm, and the spacing between adjacent layers is 7 cm. Each diffraction layer has 200 × 200 neurons, giving a total of 160,000 neurons in the four-layer D2NN. Since every neuron in one diffraction layer can connect to every neuron in the adjacent layer, a total of (200 × 200)² × 4, i.e., 6.4 billion, neural connections are formed from the input layer to the output layer, providing the D2NN with powerful computing capability.
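As a quick numerical check of the angle formula above (our own arithmetic), a 0.4 THz beam has a wavelength of 750 µm, so a 400 µm neuron pitch gives a half-cone angle of roughly 69.6°:

```python
import numpy as np

wavelength = 3e8 / 0.4e12          # 0.4 THz illumination -> 750 um wavelength
d_f = 400e-6                       # diffraction-unit (neuron) side length, 400 um
phi_max = np.degrees(np.arcsin(wavelength / (2 * d_f)))
print(round(phi_max, 1))           # ~69.6 deg: a wide cone that couples adjacent layers
```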

In the training stage, 5000 images labeled "1" selected from the MNIST handwritten digit database were used as the authenticated images, whose light energy is to be concentrated at the center of the output surface; another 5000 images were selected as the training set of non-authenticated images. The latter set includes other digits, MNIST fashion images, and randomly generated images. Because the optical energy of an authenticated image is concentrated at the center of the output surface, a sensor that can receive terahertz beam information is placed there; if the optical energy received by the sensor exceeds a threshold, authentication is considered to have passed. In addition, to avoid interference from images that fail authentication, the light-energy concentration points of these images are set at the four corners of the output surface. Figure 6(a) is the training target image corresponding to an image that passes authentication, Fig. 6(b) is the training target image corresponding to an image that does not, and the black blocks mark where the light energy is concentrated.
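For reference, the target patterns of Fig. 6 can be sketched as follows; the 2 × 2 block size is our inference from the center-4-pixels measurement described later, not a value stated here.

```python
import numpy as np

def target_image(n=28, b=2, authorized=True):
    """Target intensity: a lit center block for authorized images, four lit corner blocks otherwise (cf. Fig. 6)."""
    G = np.zeros((n, n))
    c = n // 2
    if authorized:
        G[c - b // 2:c + b - b // 2, c - b // 2:c + b - b // 2] = 1.0   # center block
    else:
        G[:b, :b] = G[:b, -b:] = G[-b:, :b] = G[-b:, -b:] = 1.0         # four corners
    return G
```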

Fig. 6. The training target images. (a) The training target image corresponding to an image that passes authentication; (b) the training target image corresponding to an image that does not.

After 10,000 training iterations with the parameters adopted in this scheme, the relationship between the loss function and the number of iterations is shown in Fig. 7; the loss function converges well. After training, other images from the MNIST database, including images of "1" and of other handwritten digits, were used to test the authentication system; Fig. 8 shows the results. Figures 8(a) and 8(d) are wrong private keys; Figs. 8(b) and 8(e) are the corresponding input images, which were not used to train the D2NN; and Figs. 8(c) and 8(f) are the corresponding results. Figures 8(g) and 8(j) are correct private keys, Figs. 8(h) and 8(k) are the corresponding input images, and Figs. 8(i) and 8(l) are their authentication results. The proposed system authenticates the input images correctly.

Fig. 7. Convergence of the loss function with the number of training iterations.

Fig. 8. Test results of the authentication system.

In practical applications the system will inevitably be disturbed by noise, which may disrupt its normal operation. Gaussian noise offers a simple way to emulate different illumination conditions, so this scheme tests stability in the presence of Gaussian noise; the results are shown in Fig. 9. Figure 9(a) is the image to be authenticated without noise; Fig. 9(b) is the same image with Gaussian noise superimposed; Fig. 9(c) is the authentication result for Fig. 9(a); and Fig. 9(d) is the authentication result for Fig. 9(b). Under Gaussian noise with a mean of 1 and a variance of 0.25, the center of the authentication area still shows a high intensity distribution even though the image to be authenticated is disturbed by noise.

Fig. 9. Anti-noise test results of the certification system.

To demonstrate the security and authentication capability of the proposed system, a series of experiments was conducted. 1000 samples were decrypted and authenticated using wrong private keys and correct private keys, respectively. For each sample, the squared amplitude summed over the center 4 pixels was compared, expressed as a fraction of the total over all 784 output pixels. A light-power threshold of 0.2 was then used as the decision criterion: with the input light intensity normalized to 1, a detected power of at least 0.2 is considered authenticated. The results of this comparative experiment are presented in Fig. 10: the energy ratios in the detection region obtained by decrypting images with incorrect keys are consistently lower than those obtained with the correct keys. The accuracy is nearly 100%, indicating a near-flawless ability to distinguish images recovered with the correct private keys from those that are not.
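The decision rule reduces to a few lines; a minimal sketch, assuming a 28 × 28 output field with the total detected power taken as the normalization reference:

```python
import numpy as np

def authenticate(u_out, threshold=0.2):
    """Pass/fail: fraction of total output power in the center 2x2 pixels, compared with the 0.2 threshold."""
    I = np.abs(u_out) ** 2                            # detected intensity on the output surface
    c = I.shape[0] // 2
    ratio = I[c - 1:c + 1, c - 1:c + 1].sum() / I.sum()   # center 4 pixels of 784
    return ratio >= threshold
```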

Fig. 10. Detected-energy ratios for samples authenticated with correct and incorrect private keys.

4. Conclusions

This paper presents the design of an authentication system based on a diffractive deep neural network. Once its parameters have been calculated on a computer, the system can be 3D printed, eliminating the need for electronic computation during authentication. The proposed diffractive deep neural network can authenticate and recognize image features, enabling the system to assess the authenticity of input images. Because the authentication process is carried out with light beams rather than electronic calculations, it operates quickly. Training of the diffractive deep neural network incorporates the in situ propagation principle and the Adam algorithm to optimize the parameters, accelerating training. Computer simulations demonstrate the system's robustness: it can authenticate images even when the ciphertext is contaminated by noise. This approach of employing a diffractive deep neural network to design an optical authentication system brings optical arithmetic operations to image authentication and encryption, offering a promising avenue for enhancing optical information security.

Funding

National Natural Science Foundation of China (61505046); Natural Science Foundation of Zhejiang Province (LY19A040010); National Key Research and Development Program of China (2022YFA1104600).

Acknowledgement

This work was supported by the National Key Research and Development Program of China under Grant No. 2022YFA1104600, the Natural Science Foundation of Zhejiang Province under Grant No. LY19A040010, and the National Natural Science Foundation of China under Grant No. 61505046.

Disclosures

The authors declare that they have no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995). [CrossRef]  

2. H. Suzuki, M. Yamaguchi, M. Yachida, et al., “Experimental evaluation of fingerprint verification system based on double random phase encoding,” Opt. Express 14(5), 1755–1766 (2006). [CrossRef]  

3. Y. Frauel, A. Castro, and T. Naughton, “Security analysis of optical encryption,” Proc. SPIE 5986(3), 598603 (2005). [CrossRef]  

4. X. Peng, P. Zhang, and H. Wei, “Known-plaintext attack on optical encryption based on double random phase keys,” Opt. Lett. 31(8), 1044–1046 (2006). [CrossRef]  

5. U. Gopinathan, D. Monaghan, and T. Naughton, “A known-plaintext heuristic attack on the Fourier plane encryption algorithm,” Opt. Express 14(8), 3181–3186 (2006). [CrossRef]  

6. X. Peng, H. Wei, and P. Zhang, “Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel domain,” Opt. Lett. 31(22), 3261–3263 (2006). [CrossRef]  

7. B. Javidi and J. L. Horner, “Optical pattern recognition for validation and security verification,” Opt. Eng. 33(6), 1752–1756 (1994). [CrossRef]  

8. E. Perez-Cabre, M. S. Millan, and B. Javidi, “Near infrared multifactor identification tags,” Opt. Express 15(23), 15615–15627 (2007). [CrossRef]  

9. M. S. Millan and E. Perez-Cabre, “Multifactor authentication reinforces optical security,” Opt. Lett. 31(6), 721–723 (2006). [CrossRef]  

10. W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett. 35(2), 118–120 (2010). [CrossRef]  

11. A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photon. 1(3), 589–636 (2009). [CrossRef]  

12. Y. Qin, Y. H. Wan, Q. Gong, et al., “Deep-learning-based cross-talk free and high-security compressive encryption with spatially incoherent illumination,” Opt. Express 31(6), 9800–9816 (2023). [CrossRef]  

13. M. Naruse, N. Tate, and M. Ohtsu, “Optical security based on near-field processes at the nanoscale,” J. Opt. 14(9), 094002 (2012). [CrossRef]  

14. N. Tate, H. Sugiyama, M. Naruse, et al., “Quadrupole–dipole transform based on optical near-field interactions in engineered nanostructures,” Opt. Express 17(13), 11113–11121 (2009). [CrossRef]  

15. A. Carnicer, A. Hassanfiroozi, P. Latorre-Carmona, et al., “Security authentication using phase-encoded nanoparticle structures and polarized light,” Opt. Lett. 40(2), 135–138 (2015). [CrossRef]  

16. W. Chen and X. Chen, “Ghost imaging for three-dimensional optical security,” Appl. Phys. Lett. 103(22), 221106 (2013). [CrossRef]  

17. J. Y. Ma, Z. Li, S. M. Zhao, et al., “Encrypting orbital angular momentum holography with ghost imaging,” Opt. Express 31(7), 11717–11728 (2023). [CrossRef]

18. L. J. Kong, Y. Li, S. X. Qian, et al., “Encryption of ghost imaging,” Phys. Rev. A 88(1), 013852 (2013). [CrossRef]  

19. W. Chen and X. Chen, “Grayscale object authentication based on ghost imaging using binary signals,” Europhys. Lett. 110(4), 44002 (2015). [CrossRef]  

20. W. Chen and X. Chen, “Marked ghost imaging,” Appl. Phys. Lett. 104(25), 251109 (2014). [CrossRef]  

21. R. Collobert and J. A. Weston, “A unified architecture for natural language processing: deep neural networks with multitask learning,” in Proceedings of the 25th International Conference on Machine Learning, 160–167 (2008).

22. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]

23. G. Litjens, T. Kooi, B. E. Bejnordi, et al., “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017). [CrossRef]  

24. L. F. Chen, B. Y. Peng, W. W. Gan, et al., “Plaintext attack on joint transform correlation encryption system by convolutional neural network,” Opt. Express 28(19), 28154–28163 (2020). [CrossRef]  

25. S. S. Lin, X. G. Wang, A. G. Zhu, et al., “Steganographic optical image encryption based on single-pixel imaging and an untrained neural network,” Opt. Express 30(20), 36144–36154 (2022). [CrossRef]  

26. P. F. Jiang, J. L. Liu, L. Wu, et al., “Fourier single pixel imaging reconstruction method based on the U-net and attention mechanism at a low sampling rate,” Opt. Express 30(11), 18638–18654 (2022). [CrossRef]  

27. Y. Qin, H. Wan, and Q. Gong, “Learning-based chosen-plaintext attack on diffractive-imaging-based encryption scheme,” Opt. Lasers Eng. 127, 105979 (2020). [CrossRef]

28. X. Wang, W. Wang, and H. Wei, “Holographic and speckle encryption using deep learning,” Opt. Lett. 46(23), 5794–5797 (2021). [CrossRef]  

29. T. W. Hughes, M. Minkov, M. Shi, et al., “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5(7), 864–871 (2018). [CrossRef]  

30. Y. Zhang, J. Zhou, and J. He, “Temperature sensor with enhanced sensitivity based on silicon Mach-Zehnder interferometer with waveguide group index engineering,” Opt. Express 26(20), 26057–26064 (2018). [CrossRef]  

31. X. Lin, Y. Rivenson, N. T. Yardimci, et al., “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

32. T. Yan, R. Yang, Z. Y. Zheng, et al., “All-optical graph representation learning using integrated diffractive photonic computing units,” Sci. Adv. 8(24), eabn7630 (2022). [CrossRef]  

33. L. Deng, “The MNIST database of handwritten digit images for machine learning research,” IEEE Signal Process. Mag. 29(6), 141–142 (2012). [CrossRef]  

34. T. Yan, J. Wu, T. Zhou, et al., “Fourier-space diffractive deep neural network,” Phys. Rev. Lett. 123(2), 023901 (2019). [CrossRef]  

35. K. Liao, Y. Chen, Z. Yu, et al., “All-optical computing based on convolutional neural networks,” Opto-Electron. Adv. 4(11), 200060 (2021). [CrossRef]  

36. S. Li, B. Ni, X. Feng, et al., “All-optical image identification with programmable matrix transformation,” Opt. Express 29(17), 26474–26485 (2021). [CrossRef]  

37. T. Zhou, L. Fang, T. Yan, et al., “In situ optical backpropagation training of diffractive optical neural networks,” Photonics Res. 8(6), 940–953 (2020). [CrossRef]  

38. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (2015).
