Optica Publishing Group

Ownership protection of holograms using quick-response encoded plenoptic watermark

Open Access

Abstract

In practical applications of three-dimensional (3D) holographic display, holograms need to be efficiently stored and transmitted over networks. There is therefore an urgent demand for protecting the ownership of holograms against piracy and malicious manipulation. This paper realizes ownership protection for holograms by embedding a watermark into optimized cellular automata (OCA) domains. The proposed approach simultaneously improves imperceptibility, by selecting the “best” rule and OCA domains for watermark embedding, and increases robustness, via the distributed-memory property of the plenoptic image. We present experimental results on the visual quality of watermarked holograms and the robustness of the 3D watermark to verify the performance of the proposed watermarking method. The results confirm both the imperceptibility and the robustness of the proposed method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography is regarded as an attractive approach to true three-dimensional (3D) display because it produces full depth cues without requiring special glasses. In general, it allows us to record a 3D scene as a two-dimensional (2D) hologram using a recording device, and to reconstruct the scene numerically from the hologram. In recent years, with increased public awareness of and interest in 3D display, the security of multimedia on the Internet is drawing more and more attention [1–9]. In real-world applications, recorded holograms need to be preserved and transmitted over networks, which raises issues of storage capacity, information security, and ownership protection, thereby motivating the need for image compression, encryption, and watermarking techniques [10–18].

Recently, many approaches to hologram compression have been proposed. Among them, the most popular are scalar quantization and vector quantization. These methods can effectively speed up the transmission of holograms and reduce the memory required for archival storage [19, 20]. Meanwhile, many approaches have applied holographic techniques to data-security problems [21–26].

Ownership protection is gaining importance for holograms owing to the anticipated popularity of the Internet for 3D multimedia distribution. However, ownership protection schemes aimed specifically at holograms are scarcely reported. Watermarking is a promising solution that protects the copyright of multimedia data by embedding a watermark into the content to declare ownership. Watermarks can be divided into visible and invisible types, and a visual watermark can take two formats: binary and non-binary. A binary watermark offers better computational performance but usually reduced robustness. A non-binary watermark (a gray-level or 3D watermark) provides high robustness, but its imperceptibility is weak. On the embedding side, transform-domain watermarking is most commonly encountered. Among practical schemes, the discrete wavelet transform, discrete cosine transform, and discrete Fourier transform are the most popular encodings. However, because the number of transform planes these methods provide is limited, the watermark can easily be divulged by malicious manipulation.

1.1. Cellular automata transform: motivation and background

Cellular automata have been proposed as an interesting and promising technique for watermarking because they provide a family of different transform planes, one per cellular automata rule. In [27], X. Li and I.-K. Lee proposed a watermarking method that embeds multiple watermarks into the low frequencies of cellular automata domains, effectively addressing the low robustness of multiple watermarks. Additionally, we presented an elemental image array ownership protection method that combines the hypercomplex Fourier transform (HFT) with a 2-level cellular automata transform [28]. Two advantages of these methods are noteworthy. First, HFT can extract the features of each elemental image, so the adaptive watermark data can be embedded into less noticeable features to increase the robustness of the watermarked image. Second, cellular automata transform based watermarking has a large number of transform planes available, unlike previous transform-based watermarking methods, which usually provide only one transform plane for data embedding. We have previously proposed and confirmed an optical watermarking method based on the cellular automata transform. However, these earlier cellular automata based watermarking methods have one major disadvantage: the visual quality of the watermarked image remains poor when a large-data watermark is embedded into the transform domains. Another drawback is that watermark extraction requires the original image; in other words, the original data must be retained at all times, which creates a serious security problem in practical applications. The motivation of the present work is to remedy these drawbacks by applying an optimization algorithm to watermark embedding.

1.2. Optimization of the proposed method

In this work we propose an optimized cellular automata (OCA) encoding algorithm to improve the performance of the watermarking system. To the best of our knowledge, this is the first paper to report ownership protection for holograms using OCA and a 3D plenoptic image. The main advantage of OCA in our work is that it helps us choose the “best” cellular automata rule and select the OCA domains so as to improve robustness and imperceptibility simultaneously. Importantly, we can implement 3D watermark embedding with semi-blind extraction. Most previous gray-level or 3D watermarking methods cannot achieve semi-blind extraction because the large amount of watermark data is very difficult to detect without the original image. The reason for using a 3D plenoptic image for ownership protection is that the distributed-memory property of the plenoptic image greatly increases the robustness of the watermarking algorithm: because the plenoptic image consists of many elemental images, even if most of the elemental-image data are lost, the 3D scene can still be reconstructed from the remaining elemental images. We wish to point out that the proposed watermark embedding and extraction procedures differ from those we considered in earlier work [29], where the watermark data was embedded into the low-frequency 3D cellular automata transform domains; in that work, the gray watermark could not be reconstructed without the original image information. We find that the OCA encoding method provides a good trade-off between imperceptibility and robustness. In addition, an optical watermark can be embedded and successfully reconstructed in 3D. The results presented here confirm that a large-data watermark (even 3D data) can be reconstructed without the original information.

2. Description of the proposed watermark embedding algorithm

The watermark embedding process of the proposed method consists of three main procedures: first, the generation of the 3D plenoptic image; second, watermark embedding by the GA-optimized multiple cellular automata encoding algorithm; and third, the extraction and reconstruction of the watermark. The detailed description of the proposed watermarking method is presented below.

2.1. Analysis of 3D plenoptic image generation and reconstruction

In our work, plenoptic image generation is implemented using a camera array. As shown in Fig. 1, a set of elemental images [30–35] with different perspectives of the 3D scene is recorded by the camera array. In the capturing process, the optical axes of all cameras in the array should converge to one point, and the central depth plane (CDP) is determined by this convergence point [36–39]. In a real capture process, however, the convergence point is very difficult to control, and undesirable parallax images from other directions cause crosstalk. To reduce the crosstalk, the cameras need to be calibrated; our previously proposed geometry calibration method [40] is used to align the captured planar parallax images. The calibrated parallax images (elemental images) are then interleaved to generate the plenoptic image. The plenoptic image 𝒲p can be calculated as follows:

$$\mathcal{W}_p(x,y)=\left(\frac{p}{\Delta s}\right)\sum_{(m,n)}\sum_{(i,j)}E_{m,n}\!\left(\frac{p\times i}{\Delta s}-m+T,\;\frac{p\times j}{\Delta s}-n+T\right)\times\sigma\!\left(x-\frac{p\times i}{\Delta s}+mT,\;y-\frac{p\times j}{\Delta s}+nT\right),$$
$$i=0,1,\ldots,\operatorname{floor}\!\left(\frac{M\times\Delta s}{p}\right)-1,\qquad j=0,1,\ldots,\operatorname{floor}\!\left(\frac{N\times\Delta s}{p}\right)-1,$$
where Em,n denotes the (m, n)th calibrated elemental image, Δs denotes the pixel size of the elemental images, p represents the pitch of each virtual lenslet, the size of each elemental image is M × N, T equals (p + Δs)/Δs, and σ is the impulse function. The 3D watermark can be optically displayed from the plenoptic watermark on integral imaging display equipment. Alternatively, it can be digitally reconstructed by computer simulation. Assuming the resolution of the reconstructed watermark at depth z is the same as that of each captured elemental image, the 3D watermark at depth z is digitally reconstructed as follows:
$$w(x,y,z)=\frac{1}{\psi(x,y)}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}E_{m,n}\!\left(x-\frac{mM\times p}{c_x\times\gamma},\;y-\frac{nN\times p}{c_y\times\gamma}\right),$$
where w(x, y, z) denotes the digitally reconstructed 3D watermark at the depth z, cx and cy denote the size of imaging sensor, ψ is the superimposed number matrix, γ denotes the magnification factor and equals to z/g, and g is the focal length.
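As an illustration, the shift-and-sum reconstruction described above can be sketched in a few lines of NumPy. This is a simplified sketch, not the authors' code: `pitch_px` is a hypothetical stand-in for the combination of p, cx, and cy, shifts are rounded to integer pixels, and the elemental images are assumed pre-calibrated and stacked in a 4D array.

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, gamma):
    """Shift-and-sum computational reconstruction at one depth plane.

    elemental : (M, N, h, w) array of calibrated elemental images.
    pitch_px  : lenslet pitch in sensor pixels (illustrative stand-in
                for p, c_x, c_y in the reconstruction formula above).
    gamma     : magnification factor z / g.
    """
    M, N, h, w = elemental.shape
    plane = np.zeros((h, w))
    count = np.zeros((h, w))  # psi(x, y): number of superimposed images
    for m in range(M):
        for n in range(N):
            # Each elemental image is shifted in proportion to its index
            # and inversely to the magnification, then superimposed.
            dy = int(round(m * pitch_px / gamma))
            dx = int(round(n * pitch_px / gamma))
            shifted = np.roll(np.roll(elemental[m, n], dy, axis=0), dx, axis=1)
            plane += shifted
            # With periodic rolls every image covers every pixel, so the
            # superposition count is uniform in this simplified sketch.
            count += 1
    return plane / np.maximum(count, 1)
```

Objects at the depth whose shift matches `pitch_px / gamma` add coherently and appear in focus; other depths blur out, which is what lets the 3D watermark be examined plane by plane.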

Fig. 1 The camera array used to generate the plenoptic image.

2.2. Analysis of the watermark embedding and extraction

In subsection 2.1, we described the generation and reconstruction algorithms of the plenoptic image. In this subsection, we analyze the watermark embedding and extraction procedures in detail. To determine the optimal or near-optimal cellular automata rule and OCA domains for watermark embedding, we use the cellular automata-based watermarking method [28] with an important modification. Watermark insertion is performed in multiple OCA domains obtained from a multi-level cellular automata transform decomposition. It is well known that if the watermark is embedded in the higher frequency bands, the watermarked image quality is good but the watermark is vulnerable to common attacks; conversely, if the watermark is embedded in the lower frequency bands, it is robust against common attacks but degrades the visual quality of the watermarked image. This follows from the fact that the energy of most natural images is concentrated in the lower frequency bands, and human eyes are more sensitive to the noise caused by modifying the lower frequency coefficients. Therefore, we must find optimized cellular automata domains for watermark embedding; with the help of the OCA encoding method, this trade-off can be resolved. Next, we briefly describe the proposed watermark embedding method.

2.2.1. Plenoptic image embedding algorithm

At present, it is very difficult to embed an image directly into a QR code because of data-capacity restrictions. Instead, an image is generally linked to a QR code via a hyperlink: the user scans the QR code containing the hyperlink, which automatically directs the user to the image [41]. To achieve semi-blind extraction and reconstruction of the 3D watermark, before watermark embedding the 3D plenoptic image is entered into software that generates a quick response (QR) code. In our experiment, this generated QR code serves as the watermark to be embedded and extracted. After the QR code is extracted, the 3D plenoptic image appears when the QR code containing the hyperlink is scanned by a smartphone. We can thus employ the QR code as a “container” for binary watermark insertion. A very important attribute of the QR code is that it provides a safe environment for the coded input [42, 43]. Much software and many online QR code generators exist to create QR codes. As shown in Fig. 2, a QR image is generated using a free QR code generator, and when it is read by a smartphone, the plenoptic image appears.

Fig. 2 (a) QR code generated by the plenoptic image and (b) the outcome when scanning the QR code with a smartphone.

In designing an optical 3D image watermarking system, two conflicting objectives must be balanced: the visual quality of the watermarked holograms and the robustness of the 3D watermark under attacks. This work employs a genetic algorithm (GA) to search for the OCA domains and the best cellular automata rule that achieve the optimum performance of the proposed 3D watermarking algorithm. In the following, we briefly describe the proposed watermark embedding process performed by the GA and illustrated in Fig. 3.

Fig. 3 Flow chart for illustrating the proposed watermarking method.

Step 1 The color host image (hologram) is converted into the YCbCr color space, and the Y component is decomposed by multiple cellular automata transforms to provide cellular automata domains for watermark embedding. After embedding, the RGB watermarked image is obtained by converting the embedded YCbCr image back. Cellular automata can provide many transform planes for watermark embedding, one for each CA rule. For an M-state, N-site CA, there are $M^{M^N}$ rules. In this work, we use an 8-bit, 2-state, 3-site cellular automata transform for the host holograms, which yields $2^{2^3} = 256$ rules, numbered from 0 to 255.
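To make the rule count concrete: for a 2-state, 3-site CA each of the 2³ = 8 neighborhoods maps to one of 2 states, giving 2⁸ = 256 rules. A small sketch (hypothetical helper names, Wolfram-style rule numbering; not the authors' transform code) that expands a rule number into its update table and runs one CA step:

```python
def rule_table(rule_number, states=2, sites=3):
    """Expand a CA rule number into its lookup table.

    For an M-state, N-site CA there are M**(M**N) possible rules; with
    M = 2, N = 3 that is 2**8 = 256, numbered 0..255.
    """
    n_neighborhoods = states ** sites  # 8 for a 2-state, 3-site CA
    digits = []
    for _ in range(n_neighborhoods):
        digits.append(rule_number % states)
        rule_number //= states
    # digits[k] is the new state for the neighborhood whose base-M value is k
    return digits

def step(cells, table, sites=3):
    """One synchronous update of a 1D binary CA with periodic boundaries."""
    n = len(cells)
    half = sites // 2
    out = []
    for i in range(n):
        idx = 0
        for offset in range(-half, half + 1):
            idx = idx * 2 + cells[(i + offset) % n]  # neighborhood as a binary number
        out.append(table[idx])
    return out
```

For example, `rule_table(110)` returns `[0, 1, 1, 1, 0, 1, 1, 0]`, the familiar rule-110 update table; iterating `step` evolves the lattice, and the CA transform builds its basis functions from such evolutions.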

The OCA domains 𝒪kl with the rule Rx can be calculated by

$$\mathcal{O}_{kl}(R_x)=\frac{1}{\lambda_{kl}}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}I_{ij}F_{ijkl}(R_x),\qquad \lambda_{kl}=\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}F_{ijkl}^{2}(R_x),$$
where Fijkl denotes the 2D basis functions of the cellular automata, Rx is cellular automata rule number x, and Iij denotes the original image.

Step 2 The obtained OCA domains are divided into Mo × No blocks, each of size P × Q, and a portion of the blocks is selected for embedding the watermark information. The watermark is divided into Mw × Nw blocks; each watermark block has the same size as an OCA-domain block, namely P × Q. Before embedding, to keep the watermark secure, the Arnold pixel scrambling method [44] is used to scramble the watermark pixels with a predetermined key: 𝒲s = permute(𝒲q, key), where 𝒲q is the watermark signal generated from the QR code. The scrambled watermark 𝒲s is embedded as follows:

$$\mathcal{O}'_i=\mathcal{O}_i+\alpha\,\mathcal{W}_s(i),$$
where 𝒪i and 𝒪′i denote the OCA-domain coefficients of the original and watermarked images, respectively, and α is the embedding strength that controls the watermark energy to be inserted.
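A minimal sketch of the scrambling and additive embedding steps above, assuming a square binary watermark and the classical Arnold cat map (x, y) → (x + y, x + 2y) mod n; the authors' exact permutation and key handling may differ.

```python
import numpy as np

def arnold_scramble(img, iterations):
    """Arnold cat map on a square image: (x, y) -> (x + y, x + 2y) mod n.

    The map is periodic, so applying it T - key more times (T the period
    for this image size) undoes a key-iteration scramble.
    """
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def embed(oca_block, watermark_bits, alpha=0.25):
    """Additive embedding O'_i = O_i + alpha * W_s(i)."""
    return oca_block + alpha * watermark_bits
```

For a 4 × 4 image the map's period happens to be 3, so three iterations restore the original; larger sizes (such as the 128 × 128 QR image in the paper, with T = 96) have their own periods.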

Next, the generated QR code, serving as the plenoptic image “container”, is embedded into the GA-optimized multi-CA domains. The GA controls and optimizes the watermark embedding. The GA starts by defining the optimization parameters and stops when the fitness value reaches a predetermined value. Its three major operations are selection, crossover, and mutation [45]. The selection function chooses parents from the blocks of the OCA domains; a parent is one block of the OCA domains. In our work, we randomly select Mw × Nw blocks (parents) from the OCA domains for watermark data insertion, and these selected blocks constitute an individual; each individual thus consists of Mw × Nw blocks selected from the OCA domains, and the best individuals are determined by the highest fitness value. The crossover function generates new individuals for watermark embedding in the next generation. The mutation function makes small changes in the individuals by mutating positions in the chromosomes. With these control parameters, the GA can be employed in our watermarking method.
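The GA block-selection loop described above can be sketched as follows. This is a schematic, not the authors' implementation: `fitness` is a placeholder for the SSIM/PSNR/BCR objective the text defines, and an individual is simply a tuple of OCA block indices.

```python
import random

def genetic_search(fitness, n_blocks, n_select, pop_size=6, generations=50,
                   crossover_rate=0.25, mutation_rate=0.05):
    """Toy GA: evolve index sets (individuals) that pick which OCA blocks
    receive watermark bits. `fitness` maps an index tuple to a score."""
    pop = [tuple(random.sample(range(n_blocks), n_select))
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Selection: keep the better half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: max(2, pop_size // 2)]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = list(a)
            # Crossover: take a prefix of one parent, fill from the other.
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_select)
                head = list(a[:cut])
                tail = [g for g in b if g not in head][: n_select - cut]
                child = head + tail
            # Mutation: replace one index with a fresh random block.
            if random.random() < mutation_rate:
                child[random.randrange(n_select)] = random.randrange(n_blocks)
            children.append(tuple(child))
        pop = children
        best = max([best] + pop, key=fitness)
    return best
```

The defaults mirror the parameter values reported in Section 3 (population 6, crossover 0.25, mutation 0.05, 50 iterations), but the operators themselves are generic textbook choices.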

The goal of the GA in the proposed watermarking method is to search for the OCA domains and the “best” cellular automata rule for watermark embedding. Imperceptibility and robustness serve as the objectives to be optimized. For the imperceptibility measure, we calculate the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) of the watermarked image. The SSIM has been shown to be a better objective quality metric because it exploits structural similarity in the viewing field. The SSIM is calculated as

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\delta_{x,y}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\delta_x^2+\delta_y^2+c_2)},\qquad c_1=(k_1 S)^2,\quad c_2=(k_2 S)^2,$$
where μx and μy are the means of x and y, respectively, δx² and δy² are the variances, δx,y is the covariance of x and y, S is the dynamic range of the image pixel values, and k1 = 0.01 and k2 = 0.03 by default.
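For illustration, a single-window NumPy version of the SSIM formula above (the published index is normally averaged over local sliding windows; this global variant just mirrors the formula term by term):

```python
import numpy as np

def global_ssim(x, y, S=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM computed over the whole image pair."""
    c1, c2 = (k1 * S) ** 2, (k2 * S) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()          # delta_x^2, delta_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # delta_{x,y}
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Identical images give exactly 1; any luminance, contrast, or structure mismatch pulls the value below 1, which is what the fitness function rewards.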

The other quality metric, the PSNR, is obtained from

$$\mathrm{PSNR}(x,y)=10\log_{10}\frac{255^2}{\mathrm{MSE}(x,y)},\qquad \mathrm{MSE}(x,y)=\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[O(x,y)-O'(x,y)\right]^2.$$

To evaluate the robustness of the watermark, the bit correction ratio (BCR) is used to measure the difference between the original and extracted watermarks; it is calculated as

$$\mathrm{BCR}=1-\frac{\sum_{i=1}^{L_M}\mathcal{W}(x,y)\oplus\mathcal{W}'(x,y)}{L_M},$$
where LM denotes the pixel size of the watermark, ⊕ denotes exclusive OR, and 𝒲 and 𝒲′ represent the original and extracted watermarks.
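A sketch of the BCR for binary watermarks, assuming the comparison between 𝒲 and 𝒲′ is a bitwise exclusive OR:

```python
import numpy as np

def bcr(w, w_extracted):
    """Bit correction ratio: 1 - (number of flipped bits) / L_M."""
    w = np.asarray(w, dtype=int)
    w_extracted = np.asarray(w_extracted, dtype=int)
    flipped = np.count_nonzero(w ^ w_extracted)  # XOR counts disagreements
    return 1.0 - flipped / w.size
```

A perfect extraction gives BCR = 1; each flipped bit lowers the ratio by 1/LM.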

By suitably setting the parameters and fitness function of the GA, the visual quality of the watermarked holograms and the robustness of the 3D watermark can be improved simultaneously. In our work, the fitness function of the training process after the kth iteration is given by

$$f_k=\alpha_k\,\mathrm{SSIM}_k+\beta_k\,\mathrm{PSNR}_k+\frac{1}{L}\sum_{l=1}^{L}\gamma_{k,l}\,\mathrm{BCR}_{k,l},$$
where L is the number of attacks, and αk, βk, and γk,l are the weighting factors for the SSIM, PSNR, and BCR values, respectively. In the training process, we consider only the Gaussian noise and occlusion attacks, so L is set to 2. The weighting factors αk, βk, and γk,l play an important role in balancing the conflict between imperceptibility and robustness. From experiments, we find that values of αk and γk,l in the range 50 to 100, with βk close to 1, provide desirable results.

2.2.2. Watermark extraction and reconstruction algorithm

Semi-blind watermark extraction is achieved without reference to the original image. Gray-level or 3D watermarking algorithms usually find semi-blind extraction very hard to implement, because the large amount of watermark data cannot be detected without the original image. In our work, thanks to the QR code, the large-data watermark (the plenoptic image) is first encoded as a QR code, and the QR code itself is embedded and extracted. The QR code can be semi-blindly extracted as follows.

Watermark detection is implemented by comparing a correlation value to an adaptive threshold θ. For the predetermined watermarked coefficients 𝒪′ and the embedded watermark 𝒲s, the correlation ρ is calculated as

$$\rho=\frac{\sum_{i=0}^{N-1}\left(\mathcal{O}'_i-\bar{\mathcal{O}}'\right)\left(\mathcal{W}_s(i)-\bar{\mathcal{W}}_s\right)}{\sqrt{\sum_{i=0}^{N-1}\left(\mathcal{O}'_i-\bar{\mathcal{O}}'\right)^2}\sqrt{\sum_{i=0}^{N-1}\left(\mathcal{W}_s(i)-\bar{\mathcal{W}}_s\right)^2}},$$
where 𝒪̄′ and 𝒲̄s are the mean values of 𝒪′ and 𝒲s, respectively, and N is the pixel size of the embedded watermark. Once the OCA domains are determined in the embedding process, the best CA rule Rx and the corresponding GA individual Ix must be retained as the secret key for watermark extraction. In extraction, the same CA rule Rx is used to transform the watermarked image, after which the GA individual Ix determines the positions of the watermarked blocks (the watermarked coefficients 𝒪′).

Thereby, the pre-embedded scrambled watermark 𝒲s can be detected by comparing the correlation ρ with the adaptive threshold θ:

$$\mathcal{W}_s=\begin{cases}1, & \text{if }\rho>\theta,\\[2pt] 0, & \text{if }\rho\le\theta,\end{cases}$$
where the adaptive threshold θ is determined from the mean value of ρ as θ = α × ρ̄, with α chosen in the range 2 to 2.5.
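The normalized correlation used for detection can be sketched as below; the adaptive threshold θ = α × ρ̄ is then applied to these per-block values (the threshold bookkeeping is omitted here).

```python
import numpy as np

def correlation(coeffs, pattern):
    """Normalized correlation rho between watermarked coefficients and
    the (scrambled) watermark pattern, as in the formula above."""
    c = coeffs - coeffs.mean()
    p = pattern - pattern.mean()
    return float((c * p).sum() / np.sqrt((c ** 2).sum() * (p ** 2).sum()))
```

Blocks that carry the pattern correlate strongly with it (ρ near 1 when the pattern dominates), while unrelated blocks hover near 0, which is what makes the threshold test decode each bit without the original image.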

After the scrambled watermark is detected, the QR code is recovered by 𝒲q = permute(𝒲s, T − key), where T is the period of the iteration. The extracted plenoptic image is then decoded by reading the recovered QR code with reader software. Using the extracted plenoptic image, the 3D watermark scene can be computationally or optically reconstructed with the corresponding reconstruction methods (see subsection 2.1).

3. Performance analysis of the proposed algorithm

In this section, experimental results are presented to demonstrate the effectiveness of the proposed watermarking scheme. The experimental setup for 3D plenoptic image generation is shown in Fig. 1. The camera array consists of 64 network cameras, each with 540 × 760 pixels. The distance between adjacent cameras is set to 8 cm both horizontally and vertically. The 3D scene used for the experiment is composed of two real objects, the dolls “Tree” and “House”, placed at distances of 550 mm and 600 mm from the camera array, respectively. In the simulation experiments, holograms are used as the host images; we take two kinds of host holograms, generated by the well-known Fresnel transform (FrT) [23] and Fourier transform (FT) computer-generated holographic algorithms [24, 25], with the 2D images of the 3D scene “Train” and of “Lena” as the original sources. The size of the host holograms is 512 × 512 pixels, and the pixel size of the plenoptic image is 1024 × 768 pixels. The QR code used in our work has 29 × 29 modules. In the watermark embedding process, to better embed the QR code into the host hologram, the QR code image is resized to 128 × 128 pixels. The embedding strength α is 0.25 and the size of each block is P × Q = 8 × 8 = 64 pixels. The scrambling key is 33 and the period of the iteration T is 96. The fitness value is determined by the three weighting factors; for simplicity, we use αk = 60, βk = 1, and γk,l = 80 in Eq. (13). In the GA training process, we select six individuals for each iteration, with a crossover rate of 0.25, a mutation rate of 0.05, a selection rate of 0.5, and 50 training iterations. Figures 4(a), 4(b), and 4(c) show the reconstructed images from the watermarked hologram at the 5th, 20th, and 50th GA iterations.

Fig. 4 Reconstructed images from the watermarked holograms with different iterations: (a) 5th iteration, (b) 20th iteration, (c) 50th iteration.

3.1. Evaluation of imperceptibility

We test the visual quality of the output image after watermarking with our proposed method, using SSIM and PSNR to compare the visual quality of the images reconstructed from holograms with and without the watermark data. We emphasize that we do not evaluate the advantages of the holographic technique itself; we only analyze the visual-quality difference between the host and the watermarked holograms. Figure 5 shows the host and watermarked holograms generated by FT-based holography. Figures 5(a) and 5(b) show the host and the watermarked holograms, respectively; the average PSNR over the three color channels is 43.03 dB. Figure 5(c) shows the SSIM map between the host and the watermarked holograms. Figures 5(d) and 5(e) show the images reconstructed from the host and watermarked holograms, respectively, with an average PSNR over the three color channels of 42.98 dB. Figure 5(f) shows the SSIM map between the images reconstructed from the host and the watermarked holograms.

Fig. 5 Imperceptibility test of the proposed method with FT generated host hologram “Train”: (a) the host hologram, (b) the watermarked hologram, (c) the calculated SSIM from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the calculated SSIM from (e).

Figure 6 shows another simulation result, with host holograms generated by FrT. Figures 6(a) and 6(b) show the host and watermarked holograms, and Fig. 6(c) shows the corresponding SSIM map; the average PSNR over the three color channels between Figs. 6(a) and 6(b) is 46.31 dB. Figures 6(d) and 6(e) show the images reconstructed from the host and watermarked holograms, and Fig. 6(f) shows the corresponding SSIM map; the average PSNR over the three color channels between Figs. 6(d) and 6(e) is 44.83 dB. From the results in Figs. 5 and 6, the watermarked holograms are not distinguishable from the host holograms, and the visual quality of the image reconstructed from the watermarked hologram is satisfactory.

Fig. 6 Imperceptibility test of the proposed method with FrT generated host hologram “Train”: (a) the host hologram, (b) the watermarked hologram, (c) the calculated SSIM from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the calculated SSIM map from (d) and (e).

Figures 7 and 8 show another imperceptibility test, with the image “Lena”. Figures 7(a) and 7(b) show the FT-generated host hologram and the corresponding watermarked hologram, respectively, with an average PSNR over the three color channels of 41.26 dB. Figures 7(d) and 7(e) show the images reconstructed from the host and watermarked holograms, respectively, with an average PSNR over the three color channels of 41.67 dB. Figures 7(c) and 7(f) show the SSIM maps calculated from the watermarked hologram and the reconstructed image, respectively. Figure 8 shows the corresponding results for “Lena” with the FrT-generated hologram.

Fig. 7 Imperceptibility test of the proposed method with FT generated host hologram “Lena”: (a) the host hologram, (b) the watermarked hologram, (c) the calculated SSIM from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the calculated SSIM from (e).

Fig. 8 Imperceptibility test of the proposed method with FrT generated host hologram “Lena”: (a) the host hologram, (b) the watermarked hologram, (c) the calculated SSIM from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the calculated SSIM from (e).

We also compare the performance of our watermarking method with two similar previous methods: the hypercomplex Fourier transform method [28] and the computational integral imaging method [27]. For convenience, only the FT-generated hologram is tested as the watermarking object. Figure 9(a) shows the image reconstructed from the host hologram without embedded watermark information. Figures 9(b) and 9(e) show the images reconstructed from the watermarked holograms obtained with the methods of [28] and [27], respectively, and Fig. 9(d) shows the image reconstructed with the proposed method. Figures 9(c) and 9(f) show the SSIM maps for the methods of [28] and [27], respectively. Compared with the results in Fig. 5, it is clear that the proposed method outperforms the other two methods.

Fig. 9 Comparison analysis with the FT generated hologram: (a) the reconstructed image from the host hologram without watermark embedded, (b) the reconstructed image from the watermarking method [28], (c) the calculated SSIM map by [28], (d) the reconstructed image by our proposed method, (e) the reconstructed image by [27], (f) the calculated SSIM map by [27].

The receiver operating characteristic (ROC) curve is a very useful tool for evaluating the overall behavior and reliability of a watermarking algorithm. Figure 10 shows the ROC results for the proposed method and the methods of [27] and [28]. The area under the curve (AUC) represents the performance of each model: a larger AUC means the watermarking algorithm performs better. The results in Fig. 10 show that the proposed algorithm outperforms the methods of [27] and [28]. For a fair comparison, the same host image, QR code, and extraction equation are used in all three methods; only the embedding step differs, each method using its own algorithm.

Fig. 10 ROC analysis with three different methods.

3.2. Evaluation of robustness

To investigate the robustness of the proposed watermarking method, the watermarked holograms are subjected to Gaussian noise and occlusion attacks. Figure 11 shows the QR codes extracted from the attacked host holograms. Figures 11(a)–11(c) show the QR codes extracted after Gaussian noise attacks with variance 0, 0.1, and 0.2, respectively. Figures 11(d)–11(f) show the QR codes extracted after occlusion attacks with occlusion sizes of 10%, 30%, and 50%, respectively.

Fig. 11 Extracted QR codes under the Gaussian noise and occlusion attacks: (a)–(c) Gaussian noise with v = 0, 0.1, and 0.2, respectively; (d)–(f) occlusion attack with the occlusion size 10%, 30%, and 50%, respectively.

Next, we check the robustness of the 3D watermark. The 3D watermarks reconstructed at different depths from the unattacked watermarked holograms are shown in Fig. 12, which presents the reconstructed 3D watermarks “Tree” and “House” at depths of 500 mm, 550 mm, 600 mm, and 650 mm. The simulation results show that “Tree” and “House” are clearly reconstructed at depths of 550 mm and 600 mm, respectively.

Fig. 12 Reconstructed 3D watermark without attack, which is located at the depths from 500 mm to 650 mm.

Noise addition is an important test of watermark robustness. In our experiment, we apply Gaussian noise to the watermarked holograms. Figure 13 verifies the robustness of the watermark under a Gaussian noise attack with mean 0 and variance 0.2. Figures 13(a) and 13(b) show the reconstructed watermarks at depths of 550 mm and 600 mm, respectively. The results clearly show that the proposed watermarking method provides high robustness even under high-intensity Gaussian noise. We also apply an occlusion attack to the watermarked holograms: Figs. 13(c) and 13(d) show the recovered 3D watermarks when the watermarked holograms are 50% occluded. The robustness of the host images “Train” and “Lena” is also evaluated; Figs. 14 and 15 show the simulation results for these host images under noise attack. The results show that the proposed method has good resistance to this kind of attack.

Fig. 13 Robustness analysis of the reconstructed 3D watermarks against attacks: (a) and (b) Gaussian noise with the parameters of mean = 0 and variance = 0.2, (c) and (d) occlusion attack with the occluded size 50%.

Fig. 14 Reconstructed image “Train” from Gaussian noise attacked host holograms: (a)–(c) FT generated holograms with variance = 0.02, 0.1, and 0.2, respectively; (d)–(f) FrT generated holograms with variance = 0.02, 0.1, and 0.2, respectively.


Fig. 15 Reconstructed image “Lena” from Gaussian noise attacked host holograms: (a)–(c) FT generated holograms with variance = 0.02, 0.1, and 0.2, respectively; (d)–(f) FrT generated holograms with variance = 0.02, 0.1, and 0.2, respectively.


3.3. Optical display of the extracted 3D watermark

Finally, we test the optical display of the reconstructed 3D watermark. The extracted watermark is displayed on the integral imaging equipment. The parallax views from left (−10°, 0°) to right (0°, 10°) are shown in Fig. 16. The optical results show that our method provides good performance for 3D watermark optical display.


Fig. 16 3D watermarks optically displayed on the integral imaging device from left view to right view.


4. Conclusion

In conclusion, we have investigated the problem of ownership protection for holograms. Importantly, this study realizes a large-capacity (3D) watermark that is semi-blindly extracted from the watermarking system. Meanwhile, with the help of the OCA encoding method, the proposed method improves imperceptibility and robustness simultaneously. Compared with other cellular automata-based watermarking methods, our method significantly improves the robustness against various attacks while preserving the visual quality of the watermarked holograms. The simulation results confirm that our method provides a new optical watermarking strategy for ownership protection of holograms.

Funding

National Key R&D Program of China (2017YFB1002900); National Natural Science Foundation of China (NSFC) (61705146, 61535007); Fundamental Research Funds for the Central Universities (YJ201637).

References

1. S. Liu, Q. Mi, Y. Zhang, and B. Zhu, “Optical image encryption with multistage and multichannel fractional Fourier-domain filtering,” Opt. Lett. 26(16), 1242–1244 (2001). [CrossRef]  

2. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption by double-random phase encoding in the fractional Fourier domain,” Opt. Lett. 25(12), 887–889 (2000). [CrossRef]  

3. S. Liu, L. Yu, and B. Zhu, “Optical image encryption by cascaded fractional Fourier transforms with random phase filtering,” Opt. Commun. 187(1), 57–63 (2001).

4. H. Singh, A. Yadav, S. Vashisth, and K. Singh, “Double phase-image encryption using gyrator transforms, and structured phase mask in the frequency plane,” Opt. Laser Eng. 67, 145–156 (2015). [CrossRef]  

5. W. Chen, B. Javidi, and X. Chen, “Advances in optical security systems,” Adv. Opt. Photonics 6(2), 120–155 (2014). [CrossRef]  

6. A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photonics 1(3), 589–636 (2009). [CrossRef]  

7. A. Alfalou and C. Brosseau, “Dual encryption scheme of images using polarized light,” Opt. Lett. 35(13), 2185–2187 (2010). [CrossRef]   [PubMed]  

8. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]   [PubMed]  

9. Y. Shi, T. Li, Y. Wang, Q. Gao, S. Zhang, and H. Li, “Optical image encryption via ptychography,” Opt. Lett. 38(9), 1425–1427 (2013). [CrossRef]   [PubMed]  

10. Z. Liu, Q. Guo, L. Xu, M. Ahmad, and S. Liu, “Double image encryption by using iterative random binary encoding in gyrator domains,” Opt. Express 18(11), 12033–12043 (2010). [CrossRef]   [PubMed]  

11. L. Shui, M. Xin, and A. Tian, “Multiple-image encryption based on phase mask multiplexing in fractional Fourier transform domain,” Opt. Lett. 38(11), 1996–1998 (2013). [CrossRef]  

12. W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett. 35(2), 118–120 (2010).

13. Y. Qin, Q. Gong, Z. Wang, and H. Wang, “Optical multiple-image encryption in diffractive-imaging-based scheme using spectral fusion and nonlinear operation,” Opt. Express 24(23), 26877–26886 (2016). [CrossRef]   [PubMed]  

14. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016). [CrossRef]   [PubMed]  

15. N. Zhou, S. Pan, S. Cheng, and Z. Zhou, “Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing,” Opt. Laser Technol. 82, 121–133 (2016). [CrossRef]  

16. W. Chen, X. Chen, and C. J. R. Sheppard, “Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain,” Opt. Express 20(4), 3853–3865 (2012). [CrossRef]   [PubMed]  

17. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004). [CrossRef]   [PubMed]  

18. G. Situ and J. Zhang, “Multiple-image encryption by wavelength multiplexing,” Opt. Lett. 30(11), 1306–1308 (2005). [CrossRef]   [PubMed]  

19. A. Shortt, T. Naughton, and B. Javidi, “Compression of digital holograms of three-dimensional objects using wavelets,” Opt. Express 14(7), 2625–2630 (2006). [CrossRef]   [PubMed]  

20. L. Bang, Z. Ali, P. Quang, J. Park, and N. Kim, “Compression of digital hologram for three-dimensional object using Wavelet-Bandelets transform,” Opt. Express 19(9), 8019–8031 (2011). [CrossRef]  

21. X. Peng, H. Wei, and P. Zhang, “Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel domain,” Opt. Lett. 31, 3261–3263 (2006). [CrossRef]  

22. D. Kong, L. Cao, G. Jin, and B. Javidi, “Three-dimensional scene encryption and display based on computer-generated holograms,” Appl. Opt. 55(29), 8296–8300 (2016). [CrossRef]   [PubMed]  

23. L. Chen and D. Zhao, “Optical color image encryption by wavelength multiplexing and lensless Fresnel transform holograms,” Opt. Express 14(19), 8552–8560 (2006). [CrossRef]   [PubMed]  

24. X. Wang, D. Zhao, F. Jing, and X. Wei, “Information synthesis (complex amplitude addition and subtraction) and encryption with digital holography and virtual optics,” Opt. Express 14(4), 1476–1486 (2006). [CrossRef]   [PubMed]  

25. X. Wang and D. Zhao, “Amplitude-phase retrieval attack free cryptosystem based on direct attack to phase-truncated Fourier-transform-based encryption using a random amplitude mask,” Opt. Lett. 38(18), 3684–3686 (2013). [CrossRef]   [PubMed]  

26. J. Li, “An optimized watermarking scheme using an encrypted gyrator transform computer generated hologram based on particle swarm optimization,” Opt. Express 22(8), 10002–10016 (2014). [CrossRef]   [PubMed]  

27. X. Li and I. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015). [CrossRef]   [PubMed]  

28. X. Li, S. Kim, and Q. Wang, “Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm,” Opt. Express 25(15), 17076–17098 (2017). [CrossRef]   [PubMed]  

29. X. Li, S. Kim, and Q. Wang, “Designing three-dimensional cellular automata based video authentication with an optical integral imaging generated memory-distributed watermark,” IEEE J. Sel. Top. Signal Process. 11(7), 1200–1212 (2017). [CrossRef]  

30. D. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007). [CrossRef]   [PubMed]  

31. J. Kim, J. Jung, Y. Jeong, K. Hong, and B. Lee, “Real-time integral imaging system for light field microscopy,” Opt. Express 22(9), 10210–10220 (2014). [CrossRef]   [PubMed]  

32. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 1(5), 332–335 (2014). [CrossRef]  

33. J. Wang, X. Xiao, and B. Javidi, “Three-dimensional integral imaging with flexible sensing,” Opt. Lett. 39(24), 6855–6858 (2014). [CrossRef]   [PubMed]  

34. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]   [PubMed]  

35. X. Li, Y. Wang, Q. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Laser Eng. 112, 162–169 (2019). [CrossRef]  

36. I. Muniraj, B. Kim, and B. Lee, “Encryption and volumetric 3D object reconstruction using multispectral computational integral imaging,” Appl. Opt. 53(27), G25–G32 (2014). [CrossRef]   [PubMed]  

37. A. Ansari, S. Hong, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Ownership protection of plenoptic images by robust and reversible watermarking,” Opt. Laser Eng. 107, 325–334 (2018). [CrossRef]  

38. Y. Xing, Q. Wang, Z. Xiong, and H. Deng, “Encrypting three-dimensional information system based on integral imaging and multiple chaotic maps,” Opt. Eng. 55(2), 023107 (2016). [CrossRef]  

39. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

40. Z. Xiong, Q. Wang, Y. Xing, H. Deng, and D. Li, “An active integral imaging system based on multiple structured light method,” Opt. Express 23(21), 27095–27104 (2015). [CrossRef]  

41. A. Markman, B. Javidi, and M. Tehranipoor, “Photon-counting security tagging and verification using optically encoded QR codes,” IEEE Photonics J. 6(1), 1–9 (2014). [CrossRef]  

42. J. Barrera, A. Mira, and R. Torroba, “Optical encryption and QR codes: secure and noise-free information retrieval,” Opt. Express 21(5), 5373–5378 (2013). [CrossRef]   [PubMed]  

43. S. Jiao, W. Zou, and X. Li, “QR code based noise-free optical encryption and decryption of a gray scale image,” Opt. Commun. 387, 235–240 (2017). [CrossRef]  

44. X. Shi and D. Zhao, “Color image hiding based on the phase retrieval technique and Arnold transform,” Appl. Opt. 50(14), 2134–2139 (2011). [CrossRef]   [PubMed]  

45. Y. Chen and C. Huang, “Coevolutionary genetic watermarking for owner identification,” Neural Comput. Appl. 26(2), 291–298 (2015). [CrossRef]  



Figures (16)

Fig. 1 The camera array used to generate the plenoptic image.
Fig. 2 (a) QR code generated by the plenoptic image and (b) the outcome when scanning the QR code with a smartphone.
Fig. 3 Flow chart illustrating the proposed watermarking method.
Fig. 4 Reconstructed images from the watermarked holograms with different iterations: (a) 5th iteration, (b) 20th iteration, (c) 50th iteration.
Fig. 5 Imperceptibility test of the proposed method with the FT generated host hologram “Train”: (a) the host hologram, (b) the watermarked hologram, (c) the SSIM map calculated from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the SSIM map calculated from (e).
Fig. 6 Imperceptibility test of the proposed method with the FrT generated host hologram “Train”: (a) the host hologram, (b) the watermarked hologram, (c) the SSIM map calculated from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the SSIM map calculated from (d) and (e).
Fig. 7 Imperceptibility test of the proposed method with the FT generated host hologram “Lena”: (a) the host hologram, (b) the watermarked hologram, (c) the SSIM map calculated from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the SSIM map calculated from (e).
Fig. 8 Imperceptibility test of the proposed method with the FrT generated host hologram “Lena”: (a) the host hologram, (b) the watermarked hologram, (c) the SSIM map calculated from (b), (d) the reconstructed image from (a), (e) the reconstructed image from (b), (f) the SSIM map calculated from (e).
Fig. 9 Comparison analysis with the FT generated hologram: (a) the reconstructed image from the host hologram without watermark embedded, (b) the reconstructed image from the watermarking method of [28], (c) the SSIM map calculated for [28], (d) the reconstructed image from our proposed method, (e) the reconstructed image from [27], (f) the SSIM map calculated for [27].
Fig. 10 ROC analysis of the three different methods.
Fig. 11 Extracted QR codes under the Gaussian noise and occlusion attacks: (a)–(c) Gaussian noise with v = 0, 0.1, and 0.2, respectively; (d)–(f) occlusion attack with occlusion sizes of 10%, 30%, and 50%, respectively.
Fig. 12 Reconstructed 3D watermark without attack, located at depths from 500 mm to 650 mm.
Fig. 13 Robustness analysis of the reconstructed 3D watermarks against attacks: (a) and (b) Gaussian noise with mean = 0 and variance = 0.2, (c) and (d) occlusion attack with an occluded size of 50%.
Fig. 14 Reconstructed image “Train” from Gaussian noise attacked host holograms: (a)–(c) FT generated holograms with variance = 0.02, 0.1, and 0.2, respectively; (d)–(f) FrT generated holograms with variance = 0.02, 0.1, and 0.2, respectively.
Fig. 15 Reconstructed image “Lena” from Gaussian noise attacked host holograms: (a)–(c) FT generated holograms with variance = 0.02, 0.1, and 0.2, respectively; (d)–(f) FrT generated holograms with variance = 0.02, 0.1, and 0.2, respectively.
Fig. 16 3D watermarks optically displayed on the integral imaging device from left view to right view.

Equations (15)


$$\mathcal{W}_p(x,y)=\left(\frac{p}{\Delta r}\right)\sum_{(m,n)}\sum_{(i,j)}E_{m,n}\!\left(\frac{p\times i}{\Delta s}-m+T,\ \frac{p\times j}{\Delta s}-n+T\right)\times\sigma\!\left(x-\frac{p\times i}{\Delta s}+mT,\ y-\frac{p\times j}{\Delta s}+nT\right),\tag{1}$$

$$i=0,1,\ldots,\operatorname{floor}\!\left(\frac{M\times\Delta s}{p}\right)-1,\tag{2}$$

$$j=0,1,\ldots,\operatorname{floor}\!\left(\frac{N\times\Delta s}{p}\right)-1,\tag{3}$$

$$w(x,y,z)=\frac{1}{\psi(x,y)}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}E_{m,n}\!\left(x-m\,\frac{M\times p}{c_x\times\gamma},\ y-n\,\frac{N\times p}{c_y\times\gamma}\right),\tag{4}$$

$$\mathcal{O}_{kl}(R_x)=\frac{1}{\lambda_{kl}}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}I_{ij}\,F_{ijkl}(R_x),\tag{5}$$

$$\lambda_{kl}=\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}F_{ijkl}^{2}(R_x),\tag{6}$$

$$\mathcal{O}_i'=\mathcal{O}_i+\alpha\,\mathcal{W}_s(i),\tag{7}$$

$$\operatorname{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\delta_{x,y}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\delta_x^2+\delta_y^2+c_2)},\tag{8}$$

$$c_1=(k_1S)^2,\quad c_2=(k_2S)^2,\tag{9}$$

$$\operatorname{PSNR}(x,y)=10\log_{10}\frac{255^2}{\operatorname{MSE}(x,y)},\tag{10}$$

$$\operatorname{MSE}(x,y)=\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[O(x,y)-O'(x,y)\right]^2.\tag{11}$$

$$\operatorname{BCR}(x,y)=\left(1-\frac{\sum_{i=1}^{L_M}\mathcal{W}(x,y)\oplus\mathcal{W}'(x,y)}{L_M}\right),\tag{12}$$

$$f_k=\alpha_k\operatorname{SSIM}_k+\beta_k\operatorname{PSNR}_k+\frac{1}{L}\sum_{l=1}^{L}\left(\gamma_{k,l}\operatorname{BCR}_{k,l}\right),\tag{13}$$

$$\rho=\frac{\sum_{i=0}^{N-1}\left(\mathcal{O}_i'-\overline{\mathcal{O}'}\right)\left(\mathcal{W}_s(i)-\overline{\mathcal{W}_s}\right)}{\sqrt{\sum_{i=0}^{N-1}\left(\mathcal{O}_i'-\overline{\mathcal{O}'}\right)^2}\,\sqrt{\sum_{i=0}^{N-1}\left(\mathcal{W}_s(i)-\overline{\mathcal{W}_s}\right)^2}},\tag{14}$$

$$\mathcal{W}_s'=\begin{cases}1,&\text{if }\rho>\theta\\0,&\text{if }\rho\le\theta\end{cases}\tag{15}$$
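The additive embedding rule and the correlation-threshold detection shown above can be read concretely as follows. This is a minimal sketch under stated assumptions: `alpha` and `theta` are hypothetical values, and the coefficient vector is stand-in random data rather than actual OCA-domain coefficients.

```python
import numpy as np

def embed(coeffs, pattern, alpha=0.5):
    # additive embedding: O'_i = O_i + alpha * W_s(i)
    return coeffs + alpha * pattern

def detect_bit(coeffs_w, pattern, theta=0.0):
    # normalized correlation rho between the (possibly watermarked)
    # coefficients and the spreading pattern W_s
    o = coeffs_w - coeffs_w.mean()
    w = pattern - pattern.mean()
    rho = (o * w).sum() / np.sqrt((o ** 2).sum() * (w ** 2).sum())
    # threshold the correlation to recover the watermark bit
    return (1 if rho > theta else 0), rho

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=1024)   # spreading pattern W_s
coeffs = rng.normal(size=1024)                 # stand-in coefficients O_i
bit, rho = detect_bit(embed(coeffs, pattern), pattern)
```

Detection is semi-blind in the sense that the detector needs the spreading pattern but not the original coefficients: a watermarked vector yields a large positive correlation, while an unwatermarked one stays near zero.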