Optica Publishing Group

Optical encryption via monospectral integral imaging

Open Access

Abstract

Optical integral imaging (II) uses a lenslet array and a CCD sensor as the 3D acquisition device, in which multispectral information is acquired through a color filter array (CFA). However, color crosstalk in the CFA diminishes the color gamut and reduces resolution. In this paper, we present a monospectral II encryption approach based on a monospectral camera array (MCA). The monospectral II system captures images with the MCA, which eliminates color crosstalk among adjacent spectral channels. Notably, the elemental images (EIs) captured from a colored scene are grayscale, so color image encryption is converted into grayscale encryption. Consequently, this approach significantly reduces the computational load of image encoding and decoding (by nearly two-thirds) compared with similar works. An optimized super-resolution reconstruction algorithm is then introduced to improve the viewing resolution.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical techniques for image security have attracted much interest owing to their unique advantages, such as parallel processing and multi-dimensional capabilities [1–5]. Previous optical image encryption approaches have been shown to be secure and robust against common image-processing attacks [6–8]. However, these algorithms share a common problem when used to encrypt optical color images: the encoding algorithm must process each channel of the color image separately to achieve complete encryption [9–17], which adds time consumption to the encoding algorithm itself.

Integral imaging (II) [18–20] is a true three-dimensional (3D) imaging technique based on integral photography, in which multiple two-dimensional (2D) images with different perspectives of a 3D scene are captured using a lenslet array and CCD sensor, or a camera array (see Fig. 1). These 2D images are referred to as elemental images (EIs) and contain the direction and intensity information of the 3D scene. The 3D scene can be reconstructed from the recorded EIs by optical or computational methods. In recent years, optical image encryption based on integral imaging has attracted increasing attention due to the distributed memory characteristic of the EIs [21,22]. However, the published literature shows two thorny issues that need to be resolved in II-based encryption algorithms: the resolution of the decrypted image and the encryption calculation load.


Fig. 1 Pickup device of the conventional optical integral imaging system.


In lenslet-based II systems, the achievable resolution is limited by the size of the lenslets and the number of pixels allocated to each lenslet. In essence, the resolution of each elemental image is limited by three parameters: the pixel size, the lenslet point spread function, and the lenslet depth of focus [23–27]. In addition, aberrations and diffraction are significant because the lenslets are relatively small. In contrast to lenslet-based systems, II can also be performed in a synthetic aperture mode with an array of high-resolution (HR) imaging sensors [28,29]. Each perspective image is then recorded by a full-size CCD or CMOS sensor, in which the multispectral information is captured by a filter array. The most common array is the Bayer color filter array (CFA), with patterns such as RGGB, GRBG, or BGGR (see Fig. 2(a)). The CFA allows only one color to be measured at each pixel, so the camera must estimate the other two color values at each pixel. This estimation process is known as demosaicking. However, demosaicking introduces color crosstalk, which diminishes the color signal of the affected channels and increases the overlap between the spectral responses of the different color channels.
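The mosaic-then-interpolate pipeline described above can be sketched as follows. This is an illustrative toy implementation (RGGB tiling, bilinear interpolation of same-color neighbors), not the demosaicking algorithm of any particular camera: flat regions are recovered exactly, while edges — where interpolation mixes values across the transition — exhibit exactly the crosstalk errors discussed above.

```python
import numpy as np

def conv_same(img, k):
    # 3x3 "same"-size convolution with zero padding (kernel is symmetric)
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bayer_mosaic(rgb):
    # sample one color per pixel following an RGGB Bayer pattern
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return mosaic

def demosaic_bilinear(mosaic):
    # estimate the two missing colors at each pixel from same-color neighbors
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1
    masks[0::2, 1::2, 1] = 1
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1
    kernel = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        num = conv_same(mosaic * masks[..., c], kernel)
        den = conv_same(masks[..., c], kernel)
        out[..., c] = num / np.maximum(den, 1e-12)   # normalized interpolation
    return out

flat = np.zeros((8, 8, 3))
flat[..., 0], flat[..., 1], flat[..., 2] = 0.2, 0.5, 0.8
flat_err = np.max(np.abs(demosaic_bilinear(bayer_mosaic(flat)) - flat))

step = np.zeros((8, 8, 3)); step[:, 4:, :] = 1.0      # a sharp vertical edge
edge_err = np.max(np.abs(demosaic_bilinear(bayer_mosaic(step)) - step))
print(flat_err, edge_err)   # flat regions recover exactly; the edge does not
```

The nonzero `edge_err` is the demosaicking crosstalk that the MCA design avoids by never mixing spectral channels in the first place.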


Fig. 2 Principle of Bayer CFA and MCA capturing: (a) Bayer CFA, (b) MCA.


In this work, we study three major improvements addressing the above difficulties in II-based image encryption. First, to eliminate the color crosstalk that is endemic to all sensors with a Bayer CFA, each camera in this work is sensitive to a single spectral band and is optically isolated from its neighboring cameras. We use a monospectral camera array to capture the image information, which effectively removes the color crosstalk present in a Bayer CFA. The resulting images form a sub-image array composed of red, green, and blue spectral information. Second, we present a newly designed image reconstruction technique to enhance the viewing quality of the reconstructed images. Finally, because the encoding algorithm is applied directly to the monospectral EIs, color image encoding is converted into grayscale encoding, which reduces the computational load of color image encryption.

2. Pickup by heterogeneous monospectral cameras

To eliminate the color crosstalk that is endemic to all color image sensors with a CFA, we propose an image reconstruction algorithm based on a monospectral imaging system. The system consists of 8 × 8 low-cost cameras assembled in a 2D plane, as shown in Fig. 3. Each camera has a focal length of 2.8 mm and provides 128 × 128 pixels with the same aspect ratio. Adjacent camera centers are separated by 4.5 cm. Each camera is optically isolated from its neighbors and sensitive to only one spectral band (see Fig. 2(b)): red, green, or blue. This eliminates color crosstalk, improves color fidelity, and increases the resolution of the color image. The relatively narrow spectral sensitivity of each monospectral image sensor is specified as blue = 400–500 nm, green = 480–580 nm, and red = 560–670 nm. A PC host controls the system configuration (e.g., initialization and synchronous triggering) and data post-processing (e.g., image stitching). Sixty-four plastic filters with red, green, or blue passbands are mounted on the cameras (see Fig. 3) to capture the monospectral information of the color object. Because the human eye is more sensitive to the green spectrum than to any other, half of the monospectral cameras in this array are green-spectrum cameras.
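The paper does not specify the exact filter arrangement of the 8 × 8 array. One plausible layout consistent with the statement that half of the cameras are green-spectrum simply tiles the RGGB unit cell (this layout is our illustrative assumption, not the paper's documented design):

```python
import numpy as np

# Hypothetical filter assignment: tiling the 2x2 RGGB unit cell over the
# 8x8 array yields 16 red, 32 green, and 16 blue cameras.
unit = np.array([['R', 'G'],
                 ['G', 'B']])
layout = np.tile(unit, (4, 4))          # 8x8 filter map

counts = {c: int(np.sum(layout == c)) for c in 'RGB'}
print(counts)
```

Any arrangement with the same per-color counts would satisfy the paper's description; the tiled cell is just the most regular choice.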


Fig. 3 The real 8 × 8 planar camera array.


Because the imaging process is always influenced by the optical devices and outside conditions, each low-resolution (LR) image captured by the monospectral imaging sensor can be considered a blurred, downsampled, and noisy image. Image reconstruction in this study is implemented by the proposed optimized super-resolution (SR) method, in which the LR EIs are used to construct an output HR image. Many SR techniques have been reported, among which non-uniform interpolation, frequency-domain methods, projection onto convex sets, iterative back projection (IBP), and moving least squares are preeminent [30–33]. More recent efforts applying SR to integral imaging have also been reported [31–33]. In [31], a sequence of EIs was captured by a moving lenslet array, and the reconstruction was achieved digitally using the IBP algorithm. An edge-directive moving least squares SR method for computational II reconstruction has been presented in [33]. The advantage of the IBP algorithm is its capability to reconstruct an optimal image from incomplete data. However, the algorithm has no unique solution due to the ill-posed nature of the inverse problem [30]. Its main drawback is that the deblurring kernel p [34] makes it very sensitive to noise, which results in ringing artifacts in the reconstructed HR image.

3. Reconstruction by monochromatic integral imaging

To address the noise sensitivity caused by the deblurring kernel p of IBP, we propose a new optimized iterative algorithm based on wavelet filter banks. First, we import the wavelet filter banks [35,36] into the iterative process. From [35], when a signal x passes through a multiple-channel wavelet filter bank, the resulting signal can be expressed as

$$\hat{x} = g * \left[ \left[ l * x \right]_{\downarrow M} \right]_{\uparrow M} + q * \left[ \left[ h * x \right]_{\downarrow M} \right]_{\uparrow M}, \tag{1}$$
where l and h represent the low-pass and high-pass filters of the analysis bank, and g and q are the dual low-pass and dual high-pass filters of the synthesis bank, respectively. $[\cdot]_{\downarrow M}$ and $[\cdot]_{\uparrow M}$ denote the downsampling and upsampling operators with rate M, and '∗' denotes convolution.
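Equation (1) states that a signal split by the analysis pair, decimated, re-expanded, and recombined through the synthesis pair is recovered exactly. A minimal numerical check, assuming orthonormal Haar filters, M = 2, and circular convolution (all three are assumptions of this sketch, not requirements of the paper):

```python
import numpy as np

def cconv(x, f):
    # circular convolution, so the finite-length demo has no border effects
    fp = np.zeros(len(x)); fp[:len(f)] = f
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(fp)))

def down(v): return v[::2]                  # [.]_{downarrow 2}
def up(v):
    y = np.zeros(2 * len(v)); y[::2] = v    # [.]_{uparrow 2}
    return y

s = 1 / np.sqrt(2)
l, h = np.array([s, s]), np.array([s, -s])  # Haar analysis pair (l, h)
g, q = np.array([s, s]), np.array([-s, s])  # matching synthesis pair (g, q)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# Eq. (1): low-pass branch plus high-pass branch
xhat = cconv(up(down(cconv(x, l))), g) + cconv(up(down(cconv(x, h))), q)
print(np.allclose(xhat, np.roll(x, 1)))     # perfect reconstruction, up to a one-sample delay
```

With this Haar pair the reconstruction is exact apart from a one-sample circular delay, which is the usual harmless delay of a causal two-channel bank.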

Given multiple LR signals yk with different motions, each LR signal yk obtained from the analysis bank can be expressed as

$$y_k = \left[ l * x \right]_{\downarrow M} + \Delta_k \left[ h * x \right]_{\downarrow M}, \tag{2}$$

where $\Delta_k$ denotes the motion (shift) operator associated with the k-th LR signal.

Combining Eqs. (1) and (2), the wavelet-based iterative reconstruction after the n-th iteration using the LR signals can be expressed as

$$\hat{x}^{(n+1)} = q * \left[ \left[ h * \hat{x}^{(n)} \right]_{\downarrow M} \right]_{\uparrow M} + g * \left[ \frac{1}{K} \sum_{k=1}^{K} \left( y_k - \Delta_k \left[ h * \hat{x}^{(n)} \right]_{\downarrow M} \right) \right]_{\uparrow M}. \tag{3}$$
Substituting the identity of Eq. (1) for the first term, the optimized SR iteration used in our work can be rewritten as

$$\hat{x}^{(n+1)} = \hat{x}^{(n)} - g * \left[ \frac{1}{K} \sum_{k=1}^{K} \left[ l * \hat{x}^{(n)} \right]_{\downarrow M} \right]_{\uparrow M} + g * \left[ \frac{1}{K} \sum_{k=1}^{K} \left( y_k - \Delta_k \left[ h * \hat{x}^{(n)} \right]_{\downarrow M} \right) \right]_{\uparrow M} = \hat{x}^{(n)} + g * \left[ \frac{1}{K} \sum_{k=1}^{K} \left( y_k - y_k^{(n)} \right) \right]_{\uparrow M}, \tag{4}$$

where $y_k^{(n)} = \left[ l * \hat{x}^{(n)} \right]_{\downarrow M} + \Delta_k \left[ h * \hat{x}^{(n)} \right]_{\downarrow M}$ is the k-th LR signal predicted from the current estimate.
The error function used in our iterative algorithm is calculated as

$$e^{(n)} = \frac{1}{K} \sum_{k=1}^{K} \left\| y_k - y_k^{(n)} \right\|. \tag{5}$$

The iteration stops when the relative error falls below a given threshold, or when the iteration count reaches a given maximum.
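Equations (2)–(5) can be exercised end to end in a small 1D sketch. Here the Haar pair again stands in for (l, h), g is the corresponding synthesis low-pass filter, the motion operators Δk are circular shifts on the LR grid, and the error is measured on the averaged residual that drives the Eq. (4) update; all of these choices, and the 1D setting itself, are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, K = 64, 2, 3                       # HR length, decimation rate, number of LR views

def cconv(x, f):
    # circular convolution keeps the sketch free of border bookkeeping
    fp = np.zeros(len(x)); fp[:len(f)] = f
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(fp)))

def down(v): return v[::M]
def up(v):
    y = np.zeros(M * len(v)); y[::M] = v
    return y

s = 1 / np.sqrt(2)
l, h = np.array([s, s]), np.array([s, -s])   # analysis pair (Haar, assumed)
g = np.array([s, s])                          # synthesis (deblurring) low-pass kernel

x_true = np.cumsum(rng.standard_normal(N))    # ground-truth HR signal

def forward(x, k):
    # Eq. (2): low-pass band plus the k-shifted high-pass band, both decimated
    return down(cconv(x, l)) + np.roll(down(cconv(x, h)), k)

ys = [forward(x_true, k) for k in range(K)]   # the K observed LR signals

xh = np.zeros(N)                              # initial HR estimate
errs = []
for _ in range(100):
    res = sum(ys[k] - forward(xh, k) for k in range(K)) / K
    errs.append(np.linalg.norm(res))          # norm of the averaged residual (cf. Eq. (5))
    xh = xh + cconv(up(res), g)               # Eq. (4) update
print(errs[0], errs[-1])
```

With this filter choice the residual contracts at every iteration and falls by several orders of magnitude over the run, matching the qualitative convergence behavior the paper reports for the optimized method.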

Next, we compare our proposed iterative algorithm in Eq. (4) with Irani's iterative algorithm [34], whose estimate of the HR image after the n-th iteration is

$$\hat{x}^{(n+1)} = \hat{x}^{(n)} + \frac{1}{K} \sum_{k=1}^{K} T_k^{-1} \left( \left[ y_k - y_k^{(n)} \right]_{\uparrow M} * p \right), \tag{6}$$
where $T_k^{-1}$ is the inverse of the geometric transformation operator $T_k$.

The deblurring kernels g and p used in our proposed algorithm and in Irani's algorithm [34] are different: Irani's method requires p to be approximately the inverse kernel of l, i.e., ‖p ∗ l‖ = 1, whereas from [35] the filters (l, h) and (g, q) need only satisfy ‖g ∗ l‖ = ‖q ∗ h‖ = 1/2. Our method therefore makes it possible to design a deblurring kernel g with a much smaller norm than the kernel p in [34]. This leads to much better noise resistance, because noise propagation is proportional to the norm of the deblurring kernel; moreover, the smaller the norm of the deblurring kernel, the faster the algorithm converges [35].

The monospectral reconstruction results obtained with the proposed monospectral II are shown in Fig. 4. Figures 4(a) and 4(b) show an image captured with the Bayer CFA pattern and the image recovered by the demosaicking algorithm, respectively. Figure 4(c) shows one elemental image captured by a red-spectrum camera, and Fig. 4(d) shows the HR image (red-spectrum component) reconstructed by the optimized SR technique. The final color HR image is composed by synthesizing the three single-spectrum HR images.


Fig. 4 (a) Captured image by a Bayer CFA pattern, and (b) demosaicked image; (c) captured image by one red-spectral LR camera from the monospectral camera array, and (d) HR image reconstructed from red-spectral LR images by the optimized SR algorithm.


The convergence error curves of the two SR image reconstruction algorithms are shown in Fig. 5. The results show that the proposed optimized method converges much faster than the conventional IBP method [32]: our method reaches its optimum at 16 iterations, whereas the conventional iterative method requires more. Meanwhile, the lower convergence error also indicates that the optimized method effectively avoids the ringing artifacts reported in [31,32].


Fig. 5 Error-curves of increasing iterative times for one spectral image (red spectrum).


To further validate the proposed monospectral II algorithm, we experimentally compare it with the computational II algorithm. Reconstruction results are shown in Fig. 6. Figure 6(a) shows the EIs captured by a lenslet array, and Fig. 6(b) shows the 8 × 8 EIs captured by the MCA. Figure 6(c) shows the image reconstructed with the computational II algorithm, and Fig. 6(d) shows the image reconstructed with our proposed monospectral II algorithm after 16 iterations. The increase in resolution and color fidelity is evident.


Fig. 6 Reconstruction results of 2D plane image (1024 ×1024): (a) EIs captured by a lenslet array; (b) EIs captured by the designed MCA; (c) reconstructed color image by the computational II algorithm; (d) reconstructed color image by the proposed monospectral II algorithm.


The structural similarity index (SSIM) models image distortion as a combination of three factors: loss of structure, luminance distortion, and contrast distortion. Figures 7(a) and 7(b) show the degradation in visual quality of the plane images reconstructed with the two methods (see Figs. 6(c) and 6(d)). It is clear from Fig. 7 that the image quality is greatly improved.
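A single-window (global) form of the index, written out in NumPy, makes the three factors explicit; the SSIM maps of Fig. 7 evaluate this same statistic over local windows rather than over the whole image:

```python
import numpy as np

def ssim_global(a, b, peak=1.0, k1=0.01, k2=0.03):
    # single-window SSIM: luminance, contrast and structure terms combined;
    # peak is the dynamic range of the images (here assumed to be [0, 1])
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(5)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)
print(ssim_global(clean, clean), ssim_global(clean, noisy))
```

An undistorted image scores exactly 1, and any distortion pulls the score below 1, which is how the 0.8008 versus 0.9380 values of Fig. 7 should be read.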


Fig. 7 SSIM value maps: (a) Computational II algorithm (SSIM = 0.8008); (b) proposed monospectral II algorithm (SSIM = 0.9380).


4. Simulation results and discussion

To confirm the advantage of the proposed monospectral II in encryption, we adopt the widely used discrete fractional Fourier transform (FrFT) algorithm [37–42] to encrypt the captured EIs. The proposed color image encoding and decoding processes are shown in Fig. 8. The color scene is first captured by the MCA to obtain the monochromatic EIs. The EIs are multiplied by a random matrix and then transformed by the FrFT. Figure 9(a) shows the encrypted image with fractional orders (0.21, 0.25). Figure 9(b) shows the image reconstructed with the correct keys. Figures 9(c) and 9(d) show the color images decrypted with the incorrect orders (0.22, 0.25) and (0.21, 0.255), respectively. The results show that the proposed monospectral II encoding algorithm provides high security.
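The encrypt/decrypt chain — random-mask multiplication followed by FrFTs of orders (a1, a2) along the two axes — can be sketched with a discrete FrFT built as a fractional matrix power of the unitary DFT. This eigendecomposition construction is only one of several discrete FrFT definitions, and the paper does not specify which discretization it uses, so treat this as an illustrative stand-in:

```python
import numpy as np

def frft_matrix(N, a):
    # Discrete FrFT of order a as the a-th fractional power of the unitary
    # DFT matrix (eigendecomposition definition; an assumed discretization).
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    w, V = np.linalg.eig(F)
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

def encrypt(img, a1, a2, rng):
    mask = np.exp(2j * np.pi * rng.random(img.shape))   # random phase key
    cipher = frft_matrix(img.shape[0], a1) @ (img * mask) @ frft_matrix(img.shape[1], a2).T
    return cipher, mask

def decrypt(cipher, mask, a1, a2):
    rec = frft_matrix(cipher.shape[0], -a1) @ cipher @ frft_matrix(cipher.shape[1], -a2).T
    return np.real(rec / mask)

rng = np.random.default_rng(9)
ei = rng.random((16, 16))                    # stand-in for grayscale elemental images
cipher, mask = encrypt(ei, 0.21, 0.25, rng)
good = decrypt(cipher, mask, 0.21, 0.25)     # correct orders and mask
bad = decrypt(cipher, mask, 0.80, 0.25)      # wrong fractional order
print(np.max(np.abs(good - ei)), np.max(np.abs(bad - ei)))
```

The correct keys invert the chain exactly (F^{-a} is the exact inverse of F^{a} by construction), while a wrong order leaves the residual phase scrambling in place.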


Fig. 8 The encoding and decoding processes of the proposed color encryption method.



Fig. 9 Security analysis with the encryption keys: (a) Encrypted image; (b) reconstructed image with correct keys; (c) and (d) reconstructed images with the incorrect fractional orders (0.22, 0.25) and (0.21, 0.255).


Statistical analysis of the encoding algorithm demonstrates its strong confusion and diffusion properties, which defend against statistical attacks. The autocorrelations of the grayscale EIs before and after encryption are shown in Figs. 10(a) and 10(b), respectively. The simulation results show strong self-correlation in the plaintext, whereas Fig. 10(b) shows that the correlation of the ciphertext is much weaker. The encryption method therefore possesses excellent decorrelation capability to resist statistical attacks.
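The autocorrelation comparison of Fig. 10 can be reproduced in a few lines via the Wiener–Khinchin theorem; here a smooth sinusoidal pattern stands in for the plaintext EIs and white noise stands in for a well-encrypted image (both stand-ins are assumptions of this sketch):

```python
import numpy as np

def autocorr2d(img):
    # circular autocorrelation via the Wiener-Khinchin theorem,
    # zero-lag peak normalized to 1 and shifted to the array center
    z = img - img.mean()
    r = np.real(np.fft.ifft2(np.abs(np.fft.fft2(z)) ** 2))
    return np.fft.fftshift(r / r.flat[0])

n = 32
smooth = np.tile(np.sin(2 * np.pi * np.arange(n) / n), (n, 1))  # plaintext-like
rng = np.random.default_rng(2)
noise = rng.standard_normal((n, n))                             # ciphertext-like

lag1 = lambda im: autocorr2d(im)[n // 2, n // 2 + 1]            # one-pixel lag
print(lag1(smooth), lag1(noise))
```

The smooth pattern keeps nearly all of its correlation at one pixel of lag, while the noise image drops to near zero — the qualitative difference between Figs. 10(a) and 10(b).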


Fig. 10 Autocorrelations analysis. (a) Autocorrelation of elemental images; (b) autocorrelation of the elemental images after encryption.


Figures 11(a) and 11(b) show the histograms of the encrypted and decrypted images. The histogram of the cipher image is significantly different from that of the decrypted image, so the encoding method provides no clue for mounting a statistical attack on the proposed encryption scheme. Meanwhile, unlike previous color image encryption methods, our encrypted 'color' image is grayscale and provides only one-channel histogram information (see Fig. 11(a)). This means that our color image encoding method not only reduces the computational complexity but also avoids needless repeated operations in verification.


Fig. 11 (a) Histogram of the encrypted ‘color’ image, (b) histogram of the decrypted color image.


Simulations were also performed to test the robustness of the proposed method against additive noise attacks. Figures 12(a) and 12(b) show the color images reconstructed under Gaussian noise and salt & pepper noise attacks, respectively.
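The two attack models of Fig. 12 and a PSNR quality measure can be sketched as follows; applying the noise to an actual ciphertext and then decrypting is omitted here, so the stand-in image and the PSNR bookkeeping are assumptions of this sketch:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    # peak signal-to-noise ratio in dB
    return 10 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))

def add_gaussian(img, var, rng):
    # zero-mean Gaussian noise with the given variance, clipped to [0, 1]
    return np.clip(img + rng.normal(0.0, np.sqrt(var), img.shape), 0, 1)

def add_salt_pepper(img, density, rng):
    # flip a 'density' fraction of pixels to pure black or pure white
    out = img.copy()
    hit = rng.random(img.shape) < density
    out[hit] = rng.integers(0, 2, img.shape)[hit].astype(float)
    return out

rng = np.random.default_rng(4)
decrypted = rng.random((64, 64))     # stand-in for a decrypted plane
print(psnr(decrypted, add_gaussian(decrypted, 0.05, rng)),
      psnr(decrypted, add_salt_pepper(decrypted, 0.05, rng)))
```

The variance of 0.05 and noise density of 0.05 match the attack parameters quoted in the Fig. 12 caption.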


Fig. 12 Robustness test against noise attacks (a) Gaussian noise with the zero mean and the variance of 0.05; (b) salt & pepper noise with the noise density of 0.05.


Finally, the computational cost of the proposed color encryption method is tested. The statistical analysis of the encrypted image (see Fig. 10(b) and Fig. 11(a)) shows that the encrypted 'color' image is grayscale, i.e., a single 2D image. Because the captured 2D monospectral EIs carry all three spectral components (R, G, and B), the color image can be reconstructed from them directly, whereas a conventional color encryption algorithm generally must encrypt each of the three channels (R, G, and B) separately. To estimate the time consumption of the encryption algorithm, we compare our proposed method with the FrFT-based color image encryption method [37] and the Fresnel transform (FT) based encryption method [43]. For a fair comparison, the same color scene is captured by each method, and the average encryption times are recorded in Table 1. Compared with the previous color image encryption methods, the proposed method significantly reduces the encryption calculation, which benefits from the grayscale property of the captured monospectral EIs.


Table 1. Time consumption of the different encryption methods.

5. Conclusion

In conclusion, we have applied monochromatic II to image encryption and reconstruction. Image quality is improved by combining monochromatic II with the optimized SR method in image reconstruction. Importantly, with the monochromatic II algorithm, color image encryption is converted into grayscale image encryption; unlike most color image encryption methods, we do not need to process three channels of the color image. Because monochromatic II is easy to combine with many current optical imaging systems, it has potential for varied security purposes, such as 3D information encryption and watermarking.

Funding

National Natural Science Foundation of China (NSFC) (61705146, 61535007); Equipment Research Program in Advance of China (JZX2016-0606/Y267).

References and links

1. Z. Liu and S. Liu, “Double image encryption based on iterative fractional Fourier transform,” Opt. Commun. 275(2), 324–329 (2007). [CrossRef]  

2. Y. Zhang, D. Xiao, W. Wen, and H. Liu, “Vulnerability to chosen-plaintext attack of a general optical encryption model with the architecture of scrambling-then-double random phase encoding,” Opt. Lett. 38(21), 4506–4509 (2013). [CrossRef]   [PubMed]  

3. Y. Shi, T. Li, Y. Wang, Q. Gao, S. Zhang, and H. Li, “Optical image encryption via ptychography,” Opt. Lett. 38(9), 1425–1427 (2013). [CrossRef]   [PubMed]  

4. W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett. 35(2), 118–120 (2010). [CrossRef]

5. A. Alfalou and C. Brosseau, “Dual encryption scheme of images using polarized light,” Opt. Lett. 35(13), 2185–2187 (2010). [CrossRef]   [PubMed]  

6. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption system that uses phase conjugation in a photorefractive crystal,” Appl. Opt. 37(35), 8181–8186 (1998). [CrossRef]  

7. Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express 18(14), 15094–15103 (2010). [CrossRef]   [PubMed]  

8. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical multiple-image encryption in diffractive-imaging-based scheme using spectral fusion and nonlinear operation,” Opt. Express 24(23), 26877–26886 (2016). [CrossRef]  

9. D. Kong, L. Cao, G. Jin, and B. Javidi, “Three-dimensional scene encryption and display based on computer-generated holograms,” Appl. Opt. 55(29), 8296–8300 (2016). [CrossRef]   [PubMed]  

10. D. Kong, L. Cao, X. Shen, H. Zhang, and G. Jin, “Image encryption based on interleaved computer-generated holograms,” IEEE Trans. Ind. Inform. (to be published).

11. X. Li, D. Xiao, and Q. Wang, “Error-free holographic frames encryption with CA pixel-permutation encoding algorithm,” Opt. Laser. Eng. 100, 200–207 (2018). [CrossRef]  

12. W. Chen, X. Chen, and C. Sheppard, “Optical image encryption based on diffractive imaging,” Opt. Lett. 35(22), 3817–3819 (2010). [CrossRef]   [PubMed]  

13. W. Chen and X. Chen, “Space-based optical image encryption,” Opt. Express 18(26), 27095–27104 (2010). [CrossRef]  

14. X. Wang and D. Zhao, “Amplitude-phase retrieval attack free cryptosystem based on direct attack to phase-truncated Fourier-transform-based encryption using a random amplitude mask,” Opt. Lett. 38(18), 3684–3686 (2013). [CrossRef]   [PubMed]  

15. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016). [CrossRef]   [PubMed]  

16. Z. Liu, Q. Guo, L. Xu, M. Ahmad, and S. Liu, “Double image encryption by using iterative random binary encoding in gyrator domains,” Opt. Express 18(11), 12033–12043 (2010). [CrossRef]   [PubMed]  

17. B. Yang, Z. Liu, B. Wang, Y. Zhang, and S. Liu, “Optical stream-cipher-like system for image encryption based on Michelson interferometer,” Opt. Express 19(3), 2634–2642 (2011). [CrossRef]   [PubMed]  

18. B. Javidi, R. Ponce-Díaz, and S. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31(8), 1106–1108 (2006). [CrossRef]   [PubMed]  

19. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]   [PubMed]  

20. D. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007). [CrossRef]   [PubMed]  

21. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]   [PubMed]  

22. X. Li, S. Kim, and Q. Wang, “Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm,” Opt. Express 25(15), 17076–17098 (2017). [CrossRef]   [PubMed]  

23. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013). [CrossRef]   [PubMed]  

24. J. Kim, J. Jung, Y. Jeong, K. Hong, and B. Lee, “Real-time integral imaging system for light field microscopy,” Opt. Express 22(9), 10210–10220 (2014). [CrossRef]   [PubMed]  

25. I. Muniraj, B. Kim, and B. Lee, “Encryption and volumetric 3D object reconstruction using multispectral computational integral imaging,” Appl. Opt. 53(27), G25–G32 (2014). [CrossRef]   [PubMed]  

26. X. Li and I. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015). [CrossRef]   [PubMed]  

27. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 5(1), 332–335 (2014). [CrossRef]  

28. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]   [PubMed]  

29. Y. Wang, Y. Shen, Y. Lin, and B. Javidi, “Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens,” Opt. Lett. 40(15), 3564–3567 (2015). [CrossRef]   [PubMed]  

30. S. Park, M. Park, and M. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Sig. Proc. Mag. 20(3), 21–36 (2003). [CrossRef]  

31. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42(35), 7036–7042 (2003). [CrossRef]   [PubMed]  

32. J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39(24), 6855–6858 (2014). [CrossRef]   [PubMed]  

33. H. Kim, S. Lee, T. Ryu, and J. Yoon, “Superresolution of 3-D computational integral imaging based on moving least square method,” Opt. Express 22(23), 28606–28622 (2014). [CrossRef]   [PubMed]  

34. M. Irani and S. Peleg, “Motion analysis for image enhancement: Resolution, occlusion, and transparency,” J. Vis. Commun. Image R. 4(4), 324–335 (1993). [CrossRef]  

35. S. Mallat, A wavelet tour of signal processing, (Academic, 1999).

36. S. Narang and A. Ortega, “Perfect reconstruction two-channel wavelet filter banks for graph structured data,” IEEE Trans. Sig. Proc. 60(6), 2786–2799 (2012). [CrossRef]  

37. L. Sui, M. Xin, and A. Tian, “Multiple-image encryption based on phase mask multiplexing in fractional Fourier transform domain,” Opt. Lett. 38(11), 1996–1998 (2013). [CrossRef]  

38. R. Tao, J. Lang, and Y. Wang, “Optical image encryption based on the multiple-parameter fractional Fourier transform,” Opt. Lett. 33(6), 581–583 (2008). [CrossRef]   [PubMed]  

39. R. Tao, Y. Xin, and Y. Wang, “Double image encryption based on random phase encoding in the fractional Fourier domain,” Opt. Express 15(24), 16067–16079 (2007). [CrossRef]   [PubMed]  

40. X. Wang, W. Chen, and X. Chen, “Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding,” Opt. Express 22(19), 22981–22995 (2014). [CrossRef]   [PubMed]  

41. S. Liu, J. Xu, Y. Zhang, L. Chen, and C. Li, “General optical implementations of fractional Fourier transforms,” Opt. Lett. 20(9), 1053–1055 (1995). [CrossRef]

42. Z. Liu and S. Liu, “Random fractional Fourier transform,” Opt. Lett. 32(15), 2088–2090 (2007). [CrossRef]

43. W. Chen, X. Chen, and C. J. R. Sheppard, “Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain,” Opt. Express 20(4), 3853–3865 (2012). [CrossRef] [PubMed]
