
Ownership protection for light-field 3D images: HDCT watermarking

Open Access

Abstract

Watermarking plays an important role in ownership protection. The embedding strength of the watermark determines its robustness and also affects its imperceptibility, so setting appropriate embedding parameters can balance the two. Considering the high-dimensional characteristics of the light-field 3D image, we extract rich features from the light-field image to control the embedding parameters accurately and improve the visual quality of the watermarked image. In this paper, we present a method of ownership protection for the light-field image based on high-dimensional color transform (HDCT) watermarking. Rather than selecting a specific color space for processing, we introduce an HDCT space that unifies multiple color spaces. By mapping low-dimensional RGB colors into this high-dimensional color space, the resulting feature vectors can linearly separate the salient region from the background, so the embedding parameters can be extracted accurately. The experimental results show the superiority of the proposed algorithm over existing light-field watermarking algorithms in both imperceptibility and robustness.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a novel three-dimensional (3D) imaging technology, the light-field image can be viewed as an array of images captured by a group of cameras facing the scene [1]. Integral imaging is a practical 3D imaging means of the light field [2]. High-quality 3D models receive considerable attention, and 3D images have been widely used in various fields. With the development and wide application of light-field imaging technology, more and more researchers focus on the field of light-field imaging [3–6]. Furthermore, as computer and multimedia technologies continue to advance, images are becoming one of the main media through which people spread information, and their safe propagation on public channels has become an issue of great concern [7,8]. If these important data are not protected, they can easily be stolen without the permission of the copyright owner, causing serious losses. Therefore, it is urgent to develop a strongly robust copyright protection method for light-field images to strengthen ownership protection.

A watermark is a kind of hidden and indiscernible texture that is embedded in an appropriate way so as not to degrade the quality of the content [9]. Watermarks are widely used in multimedia information security to enhance copyright protection [10,11], and watermarking is one of the technologies used to protect the copyright of 3D models. The embedding and extraction stages of a watermarking algorithm can protect the copyright information of a light-field image. Two-dimensional (2D) digital watermarking is a mature technology, implemented with either frequency-domain or spatial-domain algorithms according to the embedding approach [12,13]. Although a great deal of research has been done on 2D image watermarking [9,12–15], watermarking for light-field images is rarely addressed. Given the characteristics of light-field 3D images, existing digital watermarking technology for 2D images cannot be applied directly to light-field 3D imaging: on the one hand, light-field images capture the whole light-field information, which is quite different from the images obtained by a traditional 2D camera; on the other hand, traditional watermarking algorithms are more complex, which may introduce more noise and degrade the quality of the original data. Therefore, it is of great practical significance to propose a watermarking algorithm for light-field 3D images.

Generally, regardless of the purpose of the watermark, a watermarking system for light-field 3D images should satisfy three requirements: capacity, robustness, and imperceptibility [16]. In [17], Ansari et al. considered the inherent 4D spatial and angular characteristics of the light field and proposed a 4D wavelet watermarking method that converts the strongly correlated RGB channels into YUV. In [18], the authors proposed a spatiotemporally consistent holographic video watermarking algorithm that considers the temporal dimension and the spatial embedding intensity. In [19], the authors proposed an optimized cellular automaton (CA) code to copyright hologram images, which improved the imperceptibility and robustness of the watermarking system compared with the method in [20].

In watermarking algorithms, a parameter called the embedding parameter is usually set to control the strength of watermark embedding. The embedding parameter is the decisive factor for robustness, and it also affects the imperceptibility of the watermark. Regarding the essential characteristics of light-field 3D images, the elemental image array is the collection of images of a 3D object captured through the corresponding lens array. Each elemental image can be regarded as an ordinary 2D image obtained by resampling the 3D image [20]. Obviously, there is a great amount of visual similarity between adjacent elemental images in the horizontal, vertical, and diagonal directions. Generally speaking, to preserve the visual quality of the 3D model, it is very important to set appropriate watermark embedding parameters for light-field 3D images.

Therefore, because light-field 3D images contain the whole high-dimensional light-field information, this paper proposes a new method of ownership protection for light-field images based on high-dimensional color transform (HDCT) watermarking. Specifically, the performance of the watermark can be improved by embedding it in regions with small saliency values. To improve the visual quality of the watermarked light-field 3D images, we extract richer features from them to precisely control the watermark embedding parameters. We introduce an HDCT space that unifies multiple color spaces for better feature representation, instead of selecting a specific color space for processing [21–25]. To the best of our knowledge, this is the first time the HDCT method has been introduced into the field of watermarking. The motivation for adopting HDCT is to extract and utilize rich color features to separate the background and foreground regions of light-field 3D images effectively; the core idea is to use informative feature representations to increase the distinction between the foreground and background regions. Compared with previous saliency extraction methods [20,23], the HDCT method maps low-dimensional RGB colors to feature vectors in a high-dimensional color space, which can linearly separate salient regions from the background and extract the watermark embedding parameters accurately. The main measure of our method is to set appropriate embedding parameters to balance the robustness and imperceptibility of the watermark. The simulation results show that the algorithm guarantees strong robustness as well as imperceptibility.

2. Theoretical analysis

2.1 Embedding parameter via HDCT

The light-field 3D image has the special characteristic of local autocorrelation, so we need to consider the features of each elemental image when embedding the watermark. Generally, the performance of the watermark can be improved by selecting regions with small saliency values and embedding the watermark there. Following this principle, we extract rich features from the light-field image to precisely control the watermark embedding intensity and improve the visual quality of the watermarked image. To reduce computation, we segment the image at the super-pixel level: super-pixel segmentation groups adjacent pixels with similar color, brightness, and texture into small regions. For foreground detection, it is important that super-pixels in the foreground usually have larger feature values than those in the background. Our saliency detection method computes a feature vector for each super-pixel and feeds it into a machine learning algorithm, so that the image is clearly divided into different regions and the salient region is effectively separated from the background by its color features. The watermark embedding parameters are then calculated from the resulting salient region. The experimental flowchart is shown in Fig. 1. Specifically, the procedure can be divided into the following steps: 1) input the original 3D scene; 2) generate the light-field 3D image; 3) construct the HDCT space; 4) obtain the saliency map; 5) compute the watermark embedding parameters.

Fig. 1. The watermarking embedding parameters generation process by the proposed method.

As color is considered the most discriminating visual cue for humans, many image segmentation techniques depend heavily on detecting color differences in images. Each color space has a different color similarity measure, so we introduce the HDCT to unify many different color spaces instead of selecting a specific one for processing. The HDCT is a global approach that converts color information into a high-dimensional color space. Note that the feature vector is composed of location, color, texture, color histogram, and color contrast features.

Firstly, the x and y positions of the super-pixel are concatenated into the feature vector. Location features are used because people tend to focus on objects near the center of the image; the location feature is the average of the normalized pixel coordinates along the vertical and horizontal directions within the super-pixel. We then concatenate color features, because color is one of the most important cues in the human visual system, and certain colors tend to attract more attention than others. Color features consist of the average pixel values in the RGB, CIELab, and HSV color spaces. Texture features include the pixel count, gradient histograms, and singular value features. Gradient histograms use the gradient information of pixels to quickly provide visual features and have 31 dimensions. The singular value feature (SVF) is used to detect blurry sections of the test image, because such sections tend to belong to the background.
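The location and mean-color portions of this vector are straightforward to compute. The following is a minimal sketch in Python (our own, not the authors' code), using SLIC from scikit-image as a stand-in for the paper's unspecified super-pixel segmentation; the n_segments and compactness values are assumptions:

```python
import numpy as np
from skimage import color
from skimage.segmentation import slic

def color_location_features(img_rgb, n_segments=300):
    """Per-super-pixel location and mean-color features (RGB, CIELab, HSV).

    Simplified sketch: the paper's full feature vector also includes the
    texture, histogram, and contrast features described in the text."""
    sp = slic(img_rgb, n_segments=n_segments, compactness=10)
    lab = color.rgb2lab(img_rgb)
    hsv = color.rgb2hsv(img_rgb)
    h, w = sp.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = []
    for s in np.unique(sp):
        m = sp == s                                     # pixels of this super-pixel
        loc = [yy[m].mean() / h, xx[m].mean() / w]      # normalized centroid
        feats.append(np.concatenate([
            loc,
            img_rgb[m].mean(axis=0) / 255.0,            # mean RGB
            lab[m].mean(axis=0),                        # mean CIELab
            hsv[m].mean(axis=0),                        # mean HSV
        ]))
    return sp, np.array(feats)
```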

The histogram feature is one of the most effective measures of saliency, so we also use it as part of the feature vector. The histogram features are measured by the chi-square distance between the histogram of the current super-pixel and those of the other super-pixels in the RGB, CIELab, and HSV color spaces, as shown in the following equation:

$$H_i = \sum_{j=1}^{N}\sum_{k=1}^{B}\frac{(h_{ik}-h_{jk})^2}{(h_{ik}+h_{jk})},$$
where $H_i$ is the RGB/CIELab/HSV histogram feature value for the $i$-th super-pixel, $N$ is the number of super-pixels, and $B$ is the number of histogram bins. We also use the global contrast, local contrast, and element distribution of each component of the color feature as the color contrast feature. We utilize RGB, CIELab, hue, and saturation, i.e., eight color channels, to compute the color contrast feature, which therefore has eight dimensions. The global and local contrasts are calculated from the distances between the current super-pixel and the other super-pixels, and the element distribution is obtained by measuring the compactness of the spatial variance of the color values. They are defined as follows:
$$G_i = \sum_{j=1}^{N}(c_i-c_j)^2,$$
$$w_{ij} = \frac{1}{Z_i}exp(-\frac{1}{2\sigma_p^2}||p_i-p_j||_2^2),$$
$$L_i = \sum_{j=1}^{N}w_{ij}(c_i-c_j)^2,$$
where $G_i$ and $L_i$ are the global and local contrast values of each color for the $i$-th super-pixel, respectively, $c_i$ is the color value of the $i$-th super-pixel, $p_i$ is its normalized position, $w_{ij}$ is a Gaussian spatial weight, and $Z_i$ is a normalization factor.
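As an illustration, Eqs. (1)–(4) can be computed directly from per-super-pixel histograms, mean colors, and centroid positions. The sketch below is ours; the small epsilon and the sigma_p default are assumptions, and in the full method the contrasts are evaluated per color channel:

```python
import numpy as np

def chi_square_hist_feature(hists):
    """Eq. (1): hists has shape (N, B); returns H_i for each super-pixel."""
    n = hists.shape[0]
    H = np.zeros(n)
    for i in range(n):
        num = (hists[i] - hists) ** 2          # (h_ik - h_jk)^2, shape (N, B)
        den = hists[i] + hists + 1e-12         # h_ik + h_jk, epsilon avoids 0/0
        H[i] = (num / den).sum()
    return H

def contrast_features(c, p, sigma_p=0.25):
    """Eqs. (2)-(4): c is (N,) mean values of one color channel per
    super-pixel, p is (N, 2) normalized centroid positions."""
    diff2 = (c[:, None] - c[None, :]) ** 2                 # (c_i - c_j)^2
    G = diff2.sum(axis=1)                                  # global contrast, Eq. (2)
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)    # ||p_i - p_j||^2
    w = np.exp(-d2 / (2 * sigma_p ** 2))                   # Eq. (3), unnormalized
    w /= w.sum(axis=1, keepdims=True)                      # 1/Z_i normalization
    L = (w * diff2).sum(axis=1)                            # local contrast, Eq. (4)
    return G, L
```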

The super-pixel feature vector is obtained by concatenating the above feature values. After calculating the feature vector of each super-pixel, the feature vectors are fed into a machine learning algorithm to estimate the initial foreground/background labels; we use the random forest algorithm [26,27] for classification. Next, an HDCT space is constructed from 11 different color space representations [28]; Fig. 2 shows our constructed HDCT space. Finally, a high-dimensional matrix representing the colors of the image is generated using the HDCT space. From the initial labels and the high-dimensional matrix, we calculate the coefficient vector of the saliency map. This describes the entire computation of the HDCT-based saliency mapping.
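The paper uses a random forest [26,27] for the initial foreground/background estimate. A minimal sketch with scikit-learn follows; the training data (super-pixel features with ground-truth saliency labels) and the hyperparameter values are our assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def estimate_initial_labels(train_X, train_y, superpixel_feats, thresh=0.5):
    """Random-forest initial labeling: 1 = foreground, 0 = background.

    train_X / train_y are assumed to come from saliency datasets with ground
    truth; superpixel_feats are the concatenated vectors of Section 2.1."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_X, train_y)
    fg_prob = clf.predict_proba(superpixel_feats)[:, 1]   # P(foreground), classes {0, 1}
    return (fg_prob > thresh).astype(int), fg_prob
```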

Fig. 2. The HDCT space.

2.2 Embedding and extraction of the watermark

The proposed HDCT watermarking algorithm mainly consists of two steps: 1) the watermark embedding process and 2) the watermark extraction process. The scheme includes the following stages: generating the synthesized 3D elemental image array from the 2D watermark, embedding the 3D watermark, extracting the 3D watermark, and 3D visualization of the extracted watermark. Using the watermark embedding operation $Emb[\,]$ with the original image $O_{(i,j)}$, watermark image $W_{(i,j)}$, embedding parameter $e$, and embedding key $k_{emb}$, the watermarked image $O'_{(i,j)}$ can be expressed as:

$$O'_{(i,j)} = Emb[O_{(i,j)}, W_{(i,j)}, k_{emb}, e].$$

For the extraction process, we recover the extracted watermark $W'_{(i,j)}$ from the watermarked image $O'_{(i,j)}$ using the extraction key $k_{ext}$ and the watermark extraction operation $Ext[\,]$, as given in Eq. (6) below. Equations (5) and (6) describe only the overall embedding and extraction processes.

$$W'_{(i,j)} = Ext[O'_{(i,j)}, k_{ext}, e].$$

Firstly, the most important step is to process the original light-field 3D image and obtain the saliency map using the HDCT method proposed in Section 2.1. From the saliency map we derive the embedding parameters $e$, which determine the best positions and strengths for embedding the watermark. The embedding parameter $e_i$ for the $i$-th super-pixel is defined as follows:

$$e_i = e_g(e_{min}+\frac{255-S_i}{255}(e_{max}-e_{min})),$$
where $e_g$ is the global embedding parameter, $e_{min}$ and $e_{max}$ are the minimum and maximum embedding strengths, and $S_i$ is the saliency value of the $i$-th super-pixel. Using the embedding parameters $e$, the watermark $W$ is embedded by the following equation:
$$O'_{(i,j)} = O_{(i,j)} + eW_{(i,j)}.$$
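Taken together, Eqs. (7) and (8) amount to the following per-super-pixel procedure. This sketch is ours: it operates on whatever coefficient array is passed in (in the paper, the CA-transformed coefficients described next), and the default values of e_g, e_min, and e_max are assumed examples:

```python
import numpy as np

def embed_watermark(O, W, saliency, sp_labels, e_g=1.0, e_min=0.1, e_max=0.9):
    """Embed W into O with per-super-pixel strength from the saliency map.

    O, W      : coefficient arrays of equal shape (frequency domain in the paper)
    saliency  : S_i in [0, 255] for each super-pixel i
    sp_labels : super-pixel index of every pixel (same spatial shape as O)"""
    e = np.zeros(O.shape, dtype=float)
    for i, S_i in enumerate(saliency):
        e_i = e_g * (e_min + (255.0 - S_i) / 255.0 * (e_max - e_min))  # Eq. (7)
        e[sp_labels == i] = e_i              # lower saliency -> stronger embedding
    return O + e * W, e                      # Eq. (8); e is reused at extraction
```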

Subsequently, as in previous work [20], we transform the original light-field 3D image from the spatial domain to the frequency domain by the CA transform, and the watermark is embedded into the frequency domain of the target light-field image using Eq. (8). The watermark extraction process is essentially the reverse of embedding: the watermark data is extracted using the same embedding parameters $e$.
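Under Eq. (8), a minimal extraction sketch simply inverts the embedding. This assumes the extractor has access to the original coefficients and the embedding-strength map; the paper's full procedure additionally involves the CA keys, which we omit here:

```python
import numpy as np

def extract_watermark(O_marked, O, e):
    """Invert Eq. (8): W' = (O' - O) / e, elementwise.

    Sketch assuming the extractor holds the original coefficients O and the
    per-pixel strength map e (with e > 0 everywhere)."""
    return (np.asarray(O_marked, dtype=float) - np.asarray(O, dtype=float)) / e
```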

3. Experiment result and discussion

3.1 Experimental setup

In our experiment, to test the feasibility of the proposed method, a desktop 3D display device was used, as shown in Fig. 3, consisting of a 2D display and an optical diffusion screen with a resolution of 7680$\times$4320. We experimented on original light-field images of size 7680$\times$4320, as shown in Fig. 4(b). The light-field image records the intensity and direction information of the 3D scene using real-time backward ray-tracing rendering [29], where the 3D scene is the food shown in Fig. 4(a). The CA transform filter gateway security keys are: basis function type 2, initial configuration 01001101, and Wolfram rule 14. Figure 4(c) shows the 2D image used in the experiment, which consists of two crossed fingerprints forming a “heart fingerprint” with 1920$\times$1080 pixels. To improve robustness, the synthesized 3D elemental image array used as the embedded watermark is shown in Fig. 4(d); it is synthesized from the 2D “heart fingerprint” image by the computational integral imaging algorithm [30–32].

Fig. 3. Integral imaging desktop 3D display structure.

Fig. 4. Light-field image and watermark: (a) 3D scene “Food”, (b) light-field image, (c) 2D “heart fingerprint”, (d) synthesized 3D elemental image array from “heart fingerprint”.

3.2 HDCT watermarking

This part of the experiment analyzes the proposed HDCT-based light-field watermarking method and evaluates its performance. Firstly, we use the HDCT algorithm to extract the saliency map from the light-field image. Figures 5(a) and 5(c) show the original light-field images, and Figs. 5(b) and 5(d) show the saliency maps extracted from them, respectively. It can be seen from Fig. 5 that the saliency maps extracted by the HDCT saliency detection method preserve the structure of the original light-field 3D images and retain the local autocorrelation of the light-field image.

Fig. 5. Experiments on light-field images by HDCT method, (a) and (c) original light-field images, (b) and (d) extracted saliency maps from (a) and (c), respectively.

Then, we compare with two previous methods [20,23]. Figure 6 shows the saliency maps computed from the same original light-field images using these previously proposed methods. As shown in Fig. 6, the resulting saliency maps are not only ambiguous but also completely lose the original correlation structure of the light-field image. Comparing Fig. 5 and Fig. 6, we conclude that the proposed HDCT salient-region detection can preserve the unique properties of light-field images.

Fig. 6. Saliency maps of light-field images by methods [20] and [23]: (b) and (e) method [20], (c) and (f) method [23].

Based on the above analysis, it is clear that the proposed HDCT algorithm successfully identifies the salient regions, whereas the two previous methods process the light-field image as an ordinary 2D image as a whole: their saliency maps cannot highlight the salient region of each elemental image and do not preserve the properties of the individual elemental images in the original light-field image. The results show that our HDCT method detects the salient elemental images more successfully and extracts more accurate watermark embedding parameters.

3.3 Performance analysis

In this subsection, the imperceptibility and the robustness against attacks of the proposed method are analyzed in order to prove the validity of the scheme. Imperceptibility means that the embedded watermark causes no obvious image distortion at the objective level and is not easily perceived by the human eye at the subjective level. In our experiment, the original and watermarked light-field 3D images are reconstructed by the integral imaging desktop 3D display and observed from five views: top, middle, bottom, left, and right, as shown in Figs. 7–10. The complete views from different perspectives can be found in Visualization 1, Visualization 2, Visualization 3, and Visualization 4.

Fig. 7. Different perspectives of the reconstructed original light-field 3D scene “Dog” (complete video in Visualization 1).

Fig. 8. Different perspectives of the reconstructed watermarked light-field 3D scene “Dog” (complete video in Visualization 2).

Fig. 9. Different perspectives of the reconstructed original light-field 3D scene “Food” (complete video in Visualization 3).

Fig. 10. Different perspectives of the reconstructed watermarked light-field 3D scene “Food” (complete video in Visualization 4).

Figures 7–10 show different perspectives of the reconstructed original and watermarked light-field 3D scenes on the light-field display. As can be seen, the original and watermarked light-field 3D images are very similar and cannot be distinguished by the human visual system. To quantitatively evaluate the imperceptibility of our proposed method, we conducted experiments and compared the results with the traditional methods [20] and [23]. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are important metrics for evaluating the imperceptibility between the original and watermarked images; normally, the PSNR of a watermarked image should be as high as possible. Table 1 compares the PSNR and SSIM of our method with methods [20] and [23]. It is clear from the table that our method achieves higher PSNR and SSIM than the other two methods, revealing better imperceptibility of the embedded watermark.

Table 1. Comparison of imperceptibility using PSNR and SSIM.
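For reference, both imperceptibility metrics are available in scikit-image. A minimal sketch, assuming 8-bit color images and scikit-image >= 0.19 for the channel_axis argument:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def imperceptibility_metrics(original, watermarked):
    """PSNR and SSIM between 8-bit original and watermarked color images."""
    psnr = peak_signal_noise_ratio(original, watermarked, data_range=255)
    ssim = structural_similarity(original, watermarked,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```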

Figure 11 shows the extracted watermark plane images reconstructed at different depths. Figure 11(a) shows the extracted synthesized 3D elemental image array of the watermark. To evaluate the similarity between the original and extracted watermarks, the bit error ratio (BER) is calculated. The BER and SSIM between the original watermark in Fig. 4(c) and the extracted watermark in Fig. 11(d) are 0.0160 and 0.9903, respectively. The three plane images reconstructed at distances of $L$ = 30 mm, 45 mm, and 60 mm are shown in Figs. 11(b)-11(d). The focused plane image is reconstructed at $L$ = 60 mm, where the original watermark objects were positioned; the image reconstructed on the original pickup plane of the watermark is in focus, while images reconstructed away from the pickup plane become defocused and blurred. In this way, a meaningful set of depth-related watermarks can be reconstructed along the output plane to identify the copyright, so the reconstruction distance $L$ can be used as a security key in our watermarking method.
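The BER used here can be computed as the fraction of mismatched bits after binarization. A short sketch; the binarization threshold is our assumption:

```python
import numpy as np

def bit_error_ratio(w_ref, w_ext, threshold=128):
    """Fraction of differing bits between the binarized reference and
    extracted watermarks (threshold = 128 is an assumed binarization)."""
    b_ref = np.asarray(w_ref) >= threshold
    b_ext = np.asarray(w_ext) >= threshold
    return float(np.mean(b_ref != b_ext))
```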

Fig. 11. The 3D watermark “heart fingerprint” was reconstructed from synthetic elemental images located at different distances.

In addition, the developed watermarking technology must not only ensure that observers cannot perceive the embedded watermark but also be robust against attacks. Robustness refers to the probability of accurately extracting the embedded watermark when the watermarked image is exposed to an attack; a highly robust watermarking method can still extract the embedded watermark after a series of attacks. The following experiments analyze the robustness of the proposed HDCT watermarking scheme against Gaussian noise, salt & pepper noise, and cropping attacks. The BER and SSIM between the extracted and original watermarks are calculated to quantitatively measure the robustness of the embedded watermark.
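For reproducibility, the three attack families can be simulated as below. This is a sketch using scikit-image; the cropping model, which zeroes the top rows, is our simplification of the paper's cropping attack:

```python
import numpy as np
from skimage.util import random_noise

def attack(img, kind, strength):
    """Simulate the attacks tested in this section on an 8-bit image.

    kind: 'gaussian' (variance v), 'salt_pepper' (density d), or
    'crop' (fraction of rows zeroed; our simplified cropping model)."""
    x = img.astype(float) / 255.0
    if kind == "gaussian":
        x = random_noise(x, mode="gaussian", var=strength)
    elif kind == "salt_pepper":
        x = random_noise(x, mode="s&p", amount=strength)
    elif kind == "crop":
        x = x.copy()
        x[: int(strength * x.shape[0])] = 0
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)
```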

The results of the different attacks are shown in Fig. 12, which presents the watermark images extracted after Gaussian noise, salt & pepper, and cropping attacks, respectively. It is obvious from the figures that the contents of the watermarks extracted under additive noise as well as cropping attacks are successfully detected. Table 2 gives the BER and SSIM results for the different attacks. Even though the watermarked light-field 3D image is severely attacked by noise, the embedded watermark data can still be extracted accurately from the light-field 3D image using the HDCT watermarking method.

Fig. 12. Extracted the watermark “heart fingerprint” against noise attacks with proposed method: (a) Gaussian noise ($v$ = 0.05), (b) Gaussian noise ($v$ = 0.10), (c) Gaussian noise ($v$ = 0.30), (d) salt & pepper ($d$ = 0.05), (e) salt & pepper ($d$ = 0.10), (f) salt & pepper ($d$ = 0.30), (g) cropping 10%, (h) cropping 15%, (i) cropping 30%.

Table 2. BER and SSIM of the proposed method against attacks.

On the other hand, Figs. 13(a)-13(c) show the 2D watermark “heart fingerprint” extracted after adding Gaussian noise to the watermarked light-field 3D image, where the 2D watermark was embedded directly into the light-field image. As can be seen from Fig. 13, the extracted 2D watermark is distorted and corrupted by noise compared with Fig. 12; when the Gaussian noise variance is 0.3, the 2D watermark suffers serious visual quality distortion.

Fig. 13. Extracted 2D watermark “heart fingerprint” against Gaussian noise attack: (a) 0.05, (b) 0.1, (c) 0.3.

By comparison, the experimental results show that robustness can be improved by using the synthesized 3D elemental image array as the embedded watermark instead of the 2D watermark. Traditional watermarking methods have difficulty recognizing the extracted watermark data when the watermarked data is seriously damaged, making ownership protection of the content very difficult. The proposed HDCT-based 3D light-field watermarking scheme is robust to attacks, which demonstrates the feasibility of applying the proposed system in practice in the presence of attacks.

To test the security of the proposed watermarking method, we changed the initial configuration value ‘01001101’ to ‘01001100’. Figure 14 shows the watermark extracted at different depths using the wrong CA key; it is blurred and unidentifiable, which further proves the security of our scheme.

Fig. 14. Extracted watermark “heart fingerprint” located at a different distance with wrong CA key: initial configuration ‘01001100’.

4. Conclusion

In this paper, a method of ownership protection for the light-field image is proposed. Setting appropriate embedding parameters balances the robustness and imperceptibility of the watermark, so the saliency values of the light-field image are calculated first. Considering the high-dimensional characteristics of the light-field 3D image, we introduce an HDCT space that connects multiple color spaces to extract and use rich color features, so that the embedding parameters can be extracted accurately. The experimental results show that the proposed HDCT-based 3D watermarking scheme is robust against attacks and can be applied in systems subject to such attacks.

Funding

Sichuan University (2020SCUNG205); National Natural Science Foundation of China (61775151, 61975138).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. N. Li, J. Ye, Y. Ji, H. Ling, and J. Yu, “Saliency detection on light field,” IEEE Trans. Pattern Anal. Mach. Intell. 39(8), 1605–1616 (2017).

2. J. Kim, J. Jung, Y. Jeong, K. Hong, and B. Lee, “Real-time integral imaging system for light field microscopy,” Opt. Express 22(9), 10210–10220 (2014).

3. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017).

4. H. Zhang, B. Su, J. He, C. Zhang, Y. Wu, S. Zhang, and C. Zhang, “Light field imaging and application analysis in THz,” in International Conference on Optical Instruments and Technology: IRMMW-THz Technologies and Their Applications, vol. 10623 (2017).

5. C. U. S. Edussooriya, D. G. Dansereau, L. T. Bruton, and P. Agathoklis, “Five-dimensional depth-velocity filtering for enhancing moving objects in light field video,” IEEE Trans. Signal Process. 63(8), 2151–2163 (2015).

6. Y. L. Edmund, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).

7. M. A. B. Farah, R. Guesmi, A. Kachouri, and M. Samet, “A novel chaos based optical image encryption using fractional Fourier transform and DNA sequence operation,” Opt. Laser Technol. 121, 105777 (2020).

8. M. R. Abuturab, “Multiple color-image fusion and watermarking based on optical interference and wavelet transform,” Opt. Laser. Eng. 89, 47–58 (2017).

9. X. Li and I. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015).

10. I. J. Cox, M. L. Miller, and J. A. Bloom, Digital Watermarking (Academic Press, 2002).

11. S. Jiao, C. Zhou, Y. Shi, W. Zou, and X. Li, “Review on optical image hiding and watermarking techniques,” Opt. Laser Technol. 109, 370–380 (2019).

12. N. Nikolaidis and I. Pitas, “Robust image watermarking in the spatial domain,” Signal Process. 66(3), 385–403 (1998).

13. W. Chen, X. Chen, A. Stern, and B. Javidi, “Phase-modulated optical system with sparse representation for information encoding and authentication,” IEEE Photonics J. 5(2), 6900113 (2013).

14. S. Wang, X. Meng, Y. Yin, Y. Wang, X. Yang, X. Zhang, X. Peng, W. He, G. Dong, and H. Chen, “Optical image watermarking based on singular value decomposition ghost imaging and lifting wavelet transform,” Opt. Laser. Eng. 114, 76–82 (2019).

15. Z. Ye, P. Qiu, H. Wang, J. Xiong, and K. Wang, “Image watermarking and fusion based on Fourier single-pixel imaging with weighed light source,” Opt. Express 27(25), 36505–36523 (2019).

16. A. Ansari, S. Hong, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Ownership protection of plenoptic images by robust and reversible watermarking,” Opt. Laser. Eng. 107, 325–334 (2018).

17. A. Ansari, G. Saavedra, and M. Martinez-Corral, “Robust light field watermarking by 4D wavelet transform,” IEEE Access 8, 203117–203133 (2020).

18. X. Li, Y. Wang, Q. Wang, S. Kim, and X. Zhou, “Copyright protection for holographic video using spatiotemporal consistent embedding strategy,” IEEE Trans. Ind. Inf. 15(11), 6187–6197 (2019).

19. X. Li, M. Zhao, X. Zhou, and Q. Wang, “Ownership protection of holograms using quick-response encoded plenoptic watermark,” Opt. Express 26(23), 30492–30508 (2018).

20. X. Li, S. Kim, and Q. Wang, “Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm,” Opt. Express 25(15), 17076–17098 (2017).

21. C. Chen, J. Wei, C. Peng, W. Zhang, and H. Qin, “Improved saliency detection in RGB-D images using two-phase depth estimation and selective deep fusion,” IEEE Trans. Image Process. 29, 4296–4307 (2020).

22. M. Zhang, W. Ji, Y. Piao, J. Li, Y. Zhang, S. Xu, and H. Lu, “LFNet: light field fusion network for salient object detection,” IEEE Trans. Image Process. 29, 6276–6287 (2020).

23. J. Zhang, Y. Liu, S. Zhang, R. Poppe, and M. Wang, “Light field saliency detection with deep convolutional networks,” IEEE Trans. Image Process. 29, 4421–4434 (2020).

24. H. Yeung, J. Hou, X. Chen, J. Chen, Z. Chen, and Y. Y. Chung, “Light field spatial super-resolution using deep efficient spatial-angular separable convolution,” IEEE Trans. Image Process. 28(5), 2319–2330 (2019).

25. J. Zhang, M. Wang, L. Lin, X. Yang, J. Gao, and Y. Rui, “Saliency detection on light field: a multi-cue approach,” ACM Trans. Multimedia Comput. Commun. Appl. 13(3), 1–22 (2017).

26. L. Breiman, “Random forests,” Mach. Learn. 45(1), 5–32 (2001).

27. Y. Umeki, T. Yoshida, and M. Iwahashi, “Estimation method of initial labels for propagation-based saliency detection,” in Proc. APSIPA, pp. 1–4 (2016).

28. J. Kim, D. Han, Y. Tai, and J. Kim, “Salient region detection via high-dimensional color transform and local spatial support,” IEEE Trans. Image Process. 25(1), 9–23 (2016).

29. Z. Qin, W. Zhang, X. Jiang, X. Yan, and Z. Yan, “Real-time interactive computer-generated integral imaging method based on ray tracing,” Acta Photonica Sinica 48, 330–338 (2019).

30. S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25(1), 330–338 (2017).

31. D. H. Kim, M. U. Erdenebat, K. C. Kwon, J. S. Jeong, J. W. Lee, K. A. Kim, N. Kim, and K. H. Yoo, “Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array,” Appl. Opt. 52(34), 8411–8418 (2013).

32. X. Li, S. J. Cho, and S. T. Kim, “High security and robust optical image encryption approach based on computer-generated integral imaging pickup and iterative back-projection techniques,” Opt. Laser. Eng. 55, 162–182 (2014).

Supplementary Material (4)

Visualization 1: Different perspectives of the reconstructed original light-field 3D scene of the dog.
Visualization 2: Different perspectives of the reconstructed watermarked light-field 3D scene of the dog.
Visualization 3: Different perspectives of the reconstructed original light-field 3D scene of the food.
Visualization 4: Different perspectives of the reconstructed watermarked light-field 3D scene of the food.
