
Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm


Abstract

In practical applications of three-dimensional integral imaging, the captured elemental image array (EIA) needs to be stored and delivered through the Internet. Therefore, there is an urgent need to protect the copyright of the EIA against piracy and malicious manipulation. In this work, we propose a copyright protection algorithm for the EIA that combines a modified hypercomplex Fourier transform (HFT) with an adaptive texturized holographic algorithm. The modified HFT can accurately extract the features of each elemental image; according to these features, we embed the watermark into the visually less noticeable regions of the EIA to preserve visual quality. In addition, an adaptive texturized holographic algorithm is proposed to increase robustness. Finally, simulation results are presented in which the imperceptibility and robustness of the proposed method are evaluated against standard attacks.

© 2017 Optical Society of America

1. Introduction

Integral imaging is employed as part of a true three-dimensional (3D) imaging system, allowing the display of full color images with continuous parallax and continuous viewing points within a wide viewing zone [1–3]. Basically, it enables us to capture a 3D image as a two-dimensional (2D) elemental image array (EIA) obtained through a lenslet array [4–8]. In the pickup process, the EIA is recorded by a 2D light-sensitive device such as a charge coupled device (CCD). To reconstruct 3D images from the EIA, a 2D display panel together with another lenslet array is generally used to produce rays propagating opposite to the captured ones. The EIA thus plays a very important role in integral imaging. With the rapid development of 3D multimedia techniques, information security faces more and more challenges [9–13]. In practical applications, the recorded EIA needs to be stored and delivered through the Internet, which involves considerable storage capacity, data protection, and copyright protection, thus motivating the need for compression, encryption, and secure watermarking of the EIA.

Several approaches for the compression of elemental images have been presented recently [14–16]. A compression method for integral images using the discrete cosine transform (DCT) was reported in [14], and a method for enhancing the compression rate of integral images by using motion-compensated residual images in 3D integral imaging was proposed in [15]. Meanwhile, a large number of encryption methods for the protection of elemental images have been presented [17–19]. In [17], the authors employed a photon-counting double random phase encoding method for encrypting elemental images. In [18], the authors used a chaotic map to encrypt depth-converted elemental images. The authors of [19] proposed an elemental image encryption approach based on multispectral 3D photon-counted integral imaging and the Hartley transform.

However, copyright protection aimed specifically at elemental images has scarcely been reported. With the fast development of integral imaging based 3D display technologies, the captured elemental images need to be delivered frequently over networks. If these important data are not protected, they can easily be stolen or misappropriated without the permission of the copyright owners, inflicting serious losses on them. Copyright protection is therefore gaining importance for elemental images, given the expected widespread use of networks for 3D data distribution in the future.

Generally, a usual image has strong global self-correlation. An EIA, in contrast, possesses local self-correlation, because it is composed of a series of elemental images. If a watermark signal is embedded directly into an EIA without considering the characteristics of each elemental image, serious perceptual distortion of the watermarked elemental images results. Consequently, previous watermarking proposals, when borrowed directly to embed a watermark into an EIA, cause serious visual quality distortion.

A feature detection algorithm defines which areas of an image attract more attention from the human visual system. Such an algorithm can therefore be exploited to set the embedding strength in a watermarking framework. To improve the visual quality of the watermarked image, we need to extract more features from the EIA to determine the embedding strength. The input of the traditional Fourier transform is a real matrix: each pixel is a real number and is an element of the input matrix. However, if we consider more features, each element of the matrix becomes a vector, and the traditional Fourier transform is no longer suitable for computational purposes. Hypercomplex numbers can be employed to combine multiple features (color, intensity, and motion).

In this work, we focus on copyright protection for elemental images by using a perception detection framework to locate the inattentive areas of the EIA. A watermarking algorithm using the modified hypercomplex Fourier transform (HFT) together with a new adaptive texturized holographic algorithm is proposed. The modified HFT can accurately extract the features (salient regions) of each elemental image; according to these features, we embed the watermark into the visually less noticeable regions of the elemental images to preserve visual quality.

Meanwhile, our work adaptively transforms the hologram into a set of textures that optimally match the frequency domain to improve imperceptibility. Previous algorithms used the Fibonacci transform to produce a scrambled version of the watermark, in which the scrambling is performed only to increase security. Unlike those transform algorithms, our objective in scrambling the watermark is texturization: the hologram is transformed into a set of textures that best match the corresponding transform domain of the elemental images. Specifically, during embedding, we let each pixel of the hologram choose its optimal embedding position to improve imperceptibility, an assignment achieved by the proposed adaptive texturized holographic algorithm. The goal of this work is to preserve the visual quality of the watermarked elemental images while improving robustness against natural image processing attacks. In this case, we aim to increase the watermark strength in visually less noticeable regions while keeping a lower watermark strength within the attentive segments of the human behavioural model. The simulation results verify that the algorithm maintains imperceptibility while improving robustness.

2. Previous integral imaging based watermarking algorithms

Recently, many watermarking schemes based on the integral imaging technique have been presented. However, the common objective of these techniques is to utilize integral images as a watermark and to exploit the memory-distributed property of the elemental images to improve the robustness of the watermark. Meanwhile, in the course of our investigation, we found that the previous watermarking algorithms [20–24] directly embed the watermark data into host images for copyright protection, which degrades the quality of the watermarked image. Unlike a usual image, which has strong global self-correlation as shown in Fig. 1(a), an EIA has the special character of local self-correlation, as shown in Fig. 1(b).

Fig. 1 Analysis of self-correlation: (a) a usual image, (b) EIA generated by the usual image.

In [22], the authors presented a watermarking algorithm in the DWT domain for digital image copyright protection. In the watermark embedding process, they employed a 3D EIA as the watermark embedded into the DWT domain of the host image; the 3D property of the EIA watermark supports robust reconstruction. The watermark was reconstructed using computational integral imaging reconstruction (CIIR). However, because the DWT-based watermarking method provides only one transform plane for embedding, the security of watermark extraction is degraded.

In [23,24], the authors proposed watermarking designs based on the cellular automata (CA) transform, in which the security of the watermark is efficiently improved using the CA gateway values. A host image is first transformed into multiple-resolution frequency domains; the watermark is first recorded in the form of a 2D EIA, and the recorded EIA is then directly embedded as the watermark into the frequency domains. However, these schemes embed the watermark data directly into the frequency domain without considering the characteristics of the human visual system to which the host image is subject, which degrades the visual perception of the watermarked image.

The results of the above optical watermarking approaches confirmed that these algorithms are robust to natural image processing attacks. However, they share the following common problems.

  • 1) All of them employ the EIA as a watermark and embed it into a host image. Their common objective is to protect the host image, not to protect the copyright of the EIA itself.
  • 2) None of these approaches analyzes the characteristics of each elemental image. Although these algorithms contribute greatly to watermark robustness, the visual quality of the watermarked EIA they produce inevitably suffers.

To address the problem of the conventional transform-based method [22], which provides only one transform plane for embedding, we adopt the CA transform to resolve this shortcoming. With the help of the CA transform, our method provides multiple transform planes owing to the various gateway values of the CA. The details are discussed in subsection 3.2.

As mentioned above, in the watermarking methods of [23,24] the watermark data is embedded directly into the host image without considering the characteristics of each elemental image, which degrades the visual perception of the watermarked image. In our work, a watermarking method using the HFT with adaptive texturized holography is proposed. The HFT can accurately extract the features of each elemental image, and according to these features we embed the watermark into the visually less noticeable regions of the elemental images to preserve visual quality. Meanwhile, we adaptively transform the hologram into a set of textures that optimally match the frequency domain to improve imperceptibility.

3. Proposed watermarking algorithm for elemental image copyright protection

As mentioned in the previous section, watermarking of elemental images suffers from distorted visual quality. To resolve the weak visual quality of watermarked elemental images, a perception detection algorithm is employed to extract the features of each elemental image, and the embedding parameters of the watermark data are then determined by these extracted features.

3.1 Feature extraction by the modified hypercomplex Fourier transform

Generally, a feature detection space that corresponds more naturally to human perception is the HSV color model, which contains three components: hue, saturation, and value. In this work, different from [25,26], we consider only the static image case by employing the hue, saturation, and value information: each pixel of the raw image is represented by a hypercomplex number (quaternion) consisting of the three HSV color components, without considering color opponent components and intensity. The hypercomplex matrix s of our proposed HFT algorithm can be written as follows:

$$s = H\mu_1 + S\mu_2 + V\mu_3, \tag{1}$$
where H, S, and V are the three components of the HSV model, and the unit pure quaternions $\mu_1$, $\mu_2$, $\mu_3$ satisfy $\mu_1 \perp \mu_2$, $\mu_1 \perp \mu_3$, $\mu_2 \perp \mu_3$, $\mu_3 = \mu_1 \times \mu_2$, and $\mu_1^2 = \mu_2^2 = \mu_3^2 = \mu_1\mu_2\mu_3 = -1$. Based on the definition of Eq. (1), the symplectic decomposition of the quaternion in HSV is given by:

$$s = f_1 + f_2\mu_2, \tag{2}$$
$$f_1 = H\mu_1, \tag{3}$$
$$f_2 = S + V\mu_1. \tag{4}$$

Given an image, the HFT of the hypercomplex matrix s(x, y) can be calculated by two complex Fourier transforms of its symplectic parts:

$$S(u,v) = F_1(u,v) + F_2(u,v)\mu_2, \tag{5}$$
and
$$F_k(u,v) = \frac{1}{\sqrt{MN}}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} e^{-\mu_1 2\pi\left((xu/M) + (yv/N)\right)} f_k(x,y), \quad k = 1, 2, \tag{6}$$
where $\mu_1$ is a unit pure quaternion with $\mu_1^2 = -1$. The inverse HFT is given as

$$f_k(x,y) = \frac{1}{\sqrt{MN}}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} e^{\mu_1 2\pi\left((xu/M) + (yv/N)\right)} F_k(u,v). \tag{7}$$
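Because $f_1$ and $f_2$ take values in the complex subfield spanned by $\{1, \mu_1\}$, the HFT pair of Eqs. (6)-(7) reduces to two ordinary complex FFTs. The following minimal NumPy sketch (the function names are ours, and NumPy's orthonormal FFT matches the $1/\sqrt{MN}$ normalization assumed above) illustrates the forward and inverse transforms:

```python
import numpy as np

def hft(hsv):
    """Hypercomplex Fourier transform of an HSV image via the
    symplectic decomposition s = f1 + f2*mu2 of Eqs. (2)-(4),
    with f1 = H*mu1 and f2 = S + V*mu1; mu1 plays the role of
    the ordinary imaginary unit, so two complex FFTs suffice."""
    H, S, V = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    f1 = 1j * H                  # H*mu1  (mu1 -> 1j)
    f2 = S + 1j * V              # S + V*mu1
    # norm="ortho" applies the symmetric 1/sqrt(MN) factor of Eq. (6).
    return np.fft.fft2(f1, norm="ortho"), np.fft.fft2(f2, norm="ortho")

def ihft(F1, F2):
    """Inverse HFT, Eq. (7): recover the symplectic parts f1, f2."""
    return np.fft.ifft2(F1, norm="ortho"), np.fft.ifft2(F2, norm="ortho")

# Round trip on a random HSV patch.
hsv = np.random.rand(8, 8, 3)
F1, F2 = hft(hsv)
f1, f2 = ihft(F1, F2)
assert np.allclose(f1, 1j * hsv[..., 0])
assert np.allclose(f2, hsv[..., 1] + 1j * hsv[..., 2])
```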

Furthermore, S can also be written in polar form in the hypercomplex frequency domain:

$$S(u,v) = \|S(u,v)\|\, e^{\mu_1 \phi(u,v)}, \tag{8}$$
where $\|\cdot\|$ denotes the modulus of each element of a hypercomplex matrix, $\|S\|$ denotes the amplitude spectrum, and $\phi(u,v)$ is the phase spectrum. Finally, we perform the inverse transform of Eq. (8), and the saliency map is calculated by

$$sM = g * \left\| S^{-1}(u,v) \right\|^2, \tag{9}$$
where g denotes smoothing by a Gaussian kernel with fixed variance.

After the saliency map has been calculated, the watermark $w_{o,l}(x,y)$ is embedded by the following equation:

$$f'_{o,l}(x,y) = f_{o,l}(x,y) + \alpha_{o,l}\, sM\!\left(f_{o,l}(x,y)\right) w_{o,l}(x,y), \tag{10}$$
where $f_{o,l}(x,y)$ denotes the o-th frequency orientation at the l-th resolution level of the image f, and the optimized embedding parameter $\alpha_{o,l}$ is a positive number that determines the trade-off between robustness and imperceptibility.

The generation process of the optimized embedding parameter $\alpha_{o,l}$ (Alpha map) is shown in Fig. 2 and can be expressed mathematically as follows:

Fig. 2 Procedure for computing saliency using the proposed HFT.

$$\alpha_{o,l} = \frac{\max\limits_{\mathrm{over\ all}\,(o,l)}\left(sM(f_{o,l}(x,y))\right) - sM\!\left(f_{o,l}(x,y)\right)}{\max\limits_{\mathrm{over\ all}\,(o,l)}\left(sM(f_{o,l}(x,y))\right)}, \quad \forall (x,y). \tag{11}$$

The calculation of the embedding Alpha map α via the HFT is briefly summarized below:

Input

The color elemental images f of size x×y.

Output

The embedding parameter (Alpha map) α for the elemental images f.

  • 1: Calculate the HSV matrix from the RGB elemental images.
  • 2: Perform the HFT on the hypercomplex matrix s(x, y).
  • 3: Obtain the saliency map by performing the inverse transform of Eq. (8).
  • 4: Calculate the optimized embedding parameter (Alpha map) by Eq. (11).
  • 5: Return the Alpha map α.
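The five steps can be sketched end to end in Python as follows. This is an illustrative implementation under stated assumptions: matplotlib's rgb_to_hsv stands in for the HSV conversion of step 1, and the amplitude spectrum is Gaussian-smoothed before inversion, a detail borrowed from the traditional HFT saliency model [26] since the spectral filtering step is not spelled out above; the sigma values are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from matplotlib.colors import rgb_to_hsv

def hft_saliency(rgb, sigma_spec=3.0, sigma_out=2.0):
    """Steps 1-3: HSV conversion, HFT, and saliency map (Eqs. (8)-(9))."""
    hsv = rgb_to_hsv(rgb)                        # step 1 (RGB in [0, 1])
    f1 = 1j * hsv[..., 0]                        # H*mu1
    f2 = hsv[..., 1] + 1j * hsv[..., 2]          # S + V*mu1
    F1 = np.fft.fft2(f1, norm="ortho")           # step 2
    F2 = np.fft.fft2(f2, norm="ortho")
    amp = np.sqrt(np.abs(F1)**2 + np.abs(F2)**2)     # ||S(u,v)||, Eq. (8)
    scale = gaussian_filter(amp, sigma_spec) / (amp + 1e-12)
    g1 = np.fft.ifft2(F1 * scale, norm="ortho")  # step 3: invert the
    g2 = np.fft.ifft2(F2 * scale, norm="ortho")  # amplitude-smoothed spectrum
    sal = np.abs(g1)**2 + np.abs(g2)**2          # ||S^{-1}(u,v)||^2
    return gaussian_filter(sal, sigma_out)       # sM = g * ||.||^2, Eq. (9)

def alpha_map(sal):
    """Step 4, Eq. (11): low saliency -> large embedding weight."""
    return (sal.max() - sal) / sal.max()

rgb = np.random.rand(64, 64, 3)                  # stand-in elemental images
alpha = alpha_map(hft_saliency(rgb))             # step 5: returned Alpha map
```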

3.2 Subband-domain saliency maps obtained via CA transforms

The CA transform presents a direct way of linking a given phenomenon to the evolving CA field. CA transforms can be utilized in the same way as other transforms (e.g., Fourier, wavelet, etc.). Cellular automata are capable of generating billions of orthogonal, semi-orthogonal, bi-orthogonal, and non-orthogonal bases.

In a 2D square space consisting of N×N cells, with transform bases $A = A_{ijkl}$ (i, j, k, l = 0, 1, …, N−1), we can write:

$$f_{ij} = \sum_{k=0}^{N-1}\sum_{l=0}^{N-1} c_{kl} A_{ijkl}, \tag{12}$$
where $c_{kl}$ are the transform coefficients. The 2D CA transform bases can be derived from products of one-dimensional functions; the bases $A_{ijkl}$ (see Fig. 3) are derived from the one-dimensional type in the form:

Fig. 3 2D $A_{ijkl}$ basis functions, rule = 14, initial configuration = 11010100. $A_{00kl}$ is the block at the extreme upper left corner. The top row denotes $0 \le j < 8$, $i = 0$. The left column is $j = 0$, $0 \le i < 8$.

$$A_{ijkl} = A_{ik} A_{jl}. \tag{13}$$

In our work, the one-dimensional CA bases Aik can be generated from

$$A_{ik} = \alpha + \beta a_{ik} a_{ki}, \tag{14}$$
where $a_{ik}$ is the state of the CA at node i at time t = k, while α and β are constants.

A CA is a discrete dynamical system in which space, time, and the states of the system are all discrete. Because a one-dimensional CA provides a sufficient foundation for transforming data in any number of dimensions, we use the Wolfram rule nomenclature for simple 2-state, 3-site, one-dimensional neighbourhood CA. Figure 4 shows the evolution of one-dimensional CA under several Wolfram rules after 256 time steps along the x-axis and y-axis directions.
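For reference, such a 2-state, 3-site CA can be evolved in a few lines; the sketch below (periodic boundaries are our assumption) produces the kind of space-time diagram shown in Fig. 4:

```python
import numpy as np

def evolve_ca(rule, init, steps):
    """Evolve a 2-state, 3-site, 1D cellular automaton under a Wolfram
    rule: each 3-bit neighbourhood (left, centre, right) indexes one
    bit of the 8-bit rule number. Boundaries are periodic."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.asarray(init, dtype=np.uint8)
    rows = [state]
    for _ in range(steps - 1):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[(left << 2) | (state << 1) | right]
        rows.append(state)
    return np.array(rows)        # rows[k, i] = state of node i at time k

# 256 time steps of rule 30 from a single seed cell, as in Fig. 4.
init = np.zeros(256, dtype=np.uint8)
init[128] = 1
ca = evolve_ca(30, init, 256)
```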

Fig. 4 Evolution of CAs with rules 30, 45, 82, 146, 182, 195 and 210.

In applying CA transforms to large data (e.g., elemental images), redundancy is identified by transforming the data into the CA space. The principal strength of CA-based watermarking is the large number of available transform bases. We make use of CA bases that maximize the number of transform planes for data embedding. Unlike previous transform-based watermarking methods, which provide only one transform plane for watermarking, the CA transform can provide a series of transform planes under different CA rules; an 8-bit, 2-state CA provides 256 rules. Another principal strength is the parallel and integer-based character of the CA, which can translate into enormous computational speed in a well-designed, CA transform based image watermarking scheme. Given an image, we seek to represent the transform data in the form:

$$c_{kl} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} f_{ij} B_{ijkl}, \tag{15}$$
where $B_{ijkl}$ is the inverse of the basis $A_{ijkl}$. It can be generated by a one-dimensional CA according to the CA gateway values; more details are given in [23].
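Under this construction, the separable pair of Eqs. (12)-(15) takes the matrix form $f = A c A^{T}$ and $c = B f B^{T}$ with $B = A^{-1}$. The following sketch illustrates this on one 8×8 block using the rule and initial configuration of Fig. 3; it is illustrative only, since the constants α and β are our own choices and B is obtained here by matrix inversion rather than from a second CA as in [23]:

```python
import numpy as np

def ca_basis(rule, init, N, alpha=0.5, beta=1.0):
    """1D CA bases A_ik = alpha + beta * a_ik * a_ki (Eq. (14)),
    where a_ik is the CA state at node i and time k."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.asarray(init, dtype=np.uint8)
    rows = [state]
    for _ in range(N - 1):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[(left << 2) | (state << 1) | right]
        rows.append(state)
    a = np.array(rows).T                  # a[i, k]: node i, time k
    return alpha + beta * a * a.T         # A_ik

def ca_transform(f, A):
    """Forward coefficients, Eq. (15): c = B f B^T with B = A^{-1}."""
    B = np.linalg.inv(A)                  # assumes a non-singular basis
    return B @ f @ B.T

def ca_inverse(c, A):
    """Reconstruction, Eqs. (12)-(13): f_ij = sum_kl c_kl A_ik A_jl."""
    return A @ c @ A.T

A = ca_basis(rule=14, init=[1, 1, 0, 1, 0, 1, 0, 0], N=8)
f = np.random.rand(8, 8)
c = ca_transform(f, A)
assert np.allclose(ca_inverse(c, A), f)   # exact reconstruction
```

For the rule-14 basis of Fig. 3, this choice of α and β yields a non-singular A, so the round trip is exact; different gateway values select different transform planes.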

We investigate saliency map generation in the CA transform domain in order to adapt it to CA-based watermarking schemes. Figure 5 shows how the HFT map is created within the CA domain. Generally, watermark data is embedded into each of the RGB channels separately, which not only increases the computational load but also degrades the quality of the watermarked image. However, in the HSV color space, changes in brightness, contrast, and saturation do not change the hue component. Thus, in this work, we embed the watermark data into the hue component. After the embedding channel is selected, a 2-level CA transform is applied to the spatial domain of the EIA channel; the third column of Fig. 5 shows the 2-level CA transform domains. We then apply the HFT to each CA transform domain and obtain the HFT map. The generating process of the HFT map is shown in Fig. 5.

Fig. 5 Generating the HFT mask in the CA transform domains.

3.3 Watermark embedding

The adaptive watermark embedding mainly consists of three stages:

  • 1) Converting the elemental image data from the spatial domain to the frequency domain using the CA transform.
  • 2) Calculating feature maps (saliency maps) from the CA transform subband domains via the HFT algorithm.
  • 3) Embedding the adaptive texturized hologram into the CA transform domain via the optimal alpha map generated in subsection 3.1.

Figure 6 shows a diagram of the texturized hologram embedding procedure. The first two stages of the proposed embedding procedure have been discussed in subsections 3.1 and 3.2. Next, we describe the watermark embedding via the adaptive texturized holographic algorithm.

Fig. 6 Block diagram of the stages used in HFT to generate the watermarked image.

To maximize the robustness of the watermark, we use a holographic logo in place of a conventional binary or gray-scale watermark. This approach improves the ability of the watermark to tolerate data-loss attacks such as geometric attacks and noise. In this work, the digital Fresnel hologram [27] is used to record the watermark. The object surface, illuminated by a coherent beam of wavelength λ, produces an object wavefront denoted

$$A(x,y) = A_0(x,y)\exp[j\psi_0(x,y)], \tag{16}$$
where $A_0$ is the amplitude of the object, and the phase $\psi_0$ is random because of the roughness of the surface; it is considered to be uniform over $[-\pi, +\pi]$. At distance $d_0$, with the reference set of coordinates chosen as $(x, y, d_0)$, the diffracted field produced by the object is given in the Fresnel approximation by

$$O(x,y,d_0) = A(x,y) * \frac{i}{\lambda d_0}\exp\!\left[\frac{2i\pi d_0}{\lambda}\right] \times \exp\!\left[\frac{i\pi}{\lambda d_0}\left(x^2 + y^2\right)\right], \tag{17}$$
where the symbol '$*$' denotes convolution and the symbol '×' denotes multiplication.

In digital Fresnel holography, encoding of the object wave is performed through Fresnel diffraction and interference with the reference wave. The general relation for the reference wave is

$$r(x,y) = a_R \exp[2j\pi(\mu_R x + \nu_R y) + j\Delta\psi(x,y)], \tag{18}$$
where $\{\mu_R, \nu_R\}$ denotes the spatial frequencies of the reference wave. The term $\Delta\psi(x,y)$ added to the reference phase corresponds to aberrations of the reference wavefront. The phase aberration $\psi(x,y)$ can generally be represented by

$$\psi(x,y) = \exp\!\left[j\frac{2\pi}{\lambda}(k_x x + k_y y)\right]\exp(j\phi(t)), \tag{19}$$
where the factors $k_x$ and $k_y$ define the propagation direction and $\phi(t)$ denotes the phase difference between the reference and object waves at time t. In the interference plane, the hologram H mixes the two waves:

$$H(x,y,d_0) = |O(x,y,d_0)|^2 + |r(x,y)|^2 + r^*(x,y)O(x,y,d_0) + r(x,y)O^*(x,y,d_0). \tag{20}$$
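To make the recording step concrete, the following sketch simulates a digital Fresnel hologram of a logo according to Eqs. (16)-(20), evaluating the Fresnel convolution of Eq. (17) by FFT. The wavelength, distance, pixel pitch, and reference-wave frequencies are illustrative values of our own, and the reference wave is taken as aberration-free (Δψ = 0, a_R = 1):

```python
import numpy as np

def fresnel_hologram(logo, wl=632.8e-9, d0=0.2, pitch=10e-6,
                     mu_r=2.0e4, nu_r=2.0e4, seed=0):
    """Digital Fresnel hologram of an amplitude logo, Eqs. (16)-(20)."""
    M, N = logo.shape
    rng = np.random.default_rng(seed)
    psi0 = rng.uniform(-np.pi, np.pi, logo.shape)      # rough surface
    A = logo * np.exp(1j * psi0)                       # Eq. (16)
    x = (np.arange(N) - N // 2) * pitch
    y = (np.arange(M) - M // 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Fresnel chirp; O = A convolved with it, Eq. (17).
    h = (1j / (wl * d0)) * np.exp(2j * np.pi * d0 / wl) \
        * np.exp(1j * np.pi * (X**2 + Y**2) / (wl * d0))
    O = np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(np.fft.ifftshift(h)))
    r = np.exp(2j * np.pi * (mu_r * X + nu_r * Y))     # Eq. (18), a_R = 1
    # Recorded intensity, Eq. (20); the sum of conjugate terms is real.
    return (np.abs(O)**2 + np.abs(r)**2
            + np.conj(r) * O + r * np.conj(O)).real

logo = np.zeros((256, 256))
logo[96:160, 96:160] = 1.0        # toy watermark logo
H = fresnel_hologram(logo)
```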

In our work, we propose a new adaptive holographic algorithm that builds on the strengths of previous approaches but takes a fundamentally different path. The main idea of our approach is to adaptively transform the hologram into a set of textures that optimally match the frequency domain of the host image. Specifically, during embedding, we let each pixel of the hologram choose its optimal embedding position to improve imperceptibility; the adaptive texturized hologram achieves this assignment. The hologram is scrambled by iterated pixel scrambling, and the optimal texturized watermark is obtained after a number of iterations. The Fibonacci transform for image scrambling can be written as:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \pmod{N}. \tag{21}$$
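A sketch of this scrambling and of its inverse is given below; since the Fibonacci matrix has determinant −1, its inverse modulo N is [[0, 1], [1, −1]], so descrambling is exact for any iteration count. The iteration count 15 is the value found best in Section 4.2:

```python
import numpy as np

def fibonacci_scramble(img, iterations):
    """Iterate Eq. (21): (x', y') = ((x + y) mod N, x) on a square image."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(x + y) % N, x] = out[x, y]    # move pixel (x, y) -> (x+y, x)
        out = nxt
    return out

def fibonacci_descramble(img, iterations):
    """Inverse map, using [[0, 1], [1, -1]] mod N: (x, y) -> (y, x - y)."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[y, (x - y) % N] = out[x, y]
        out = nxt
    return out

hologram = np.random.rand(450, 450)        # hologram-sized stand-in
textured = fibonacci_scramble(hologram, 15)
assert np.allclose(fibonacci_descramble(textured, 15), hologram)
```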

3.4 Watermark extraction

The watermark extracting process, similar to the watermark embedding, consists of three stages:

  • 1) Re-decomposing the watermarked EIA into CA frequency domains with level-2 CA transforms;
  • 2) Extracting the adaptive hologram with the precomputed alpha map;
  • 3) Reconstructing the watermark logo with the inverse Fibonacci transform and digital Fresnel hologram reconstruction algorithm.

Figure 7 shows a diagram of the watermark extraction procedure. First, we apply the level-2 CA transform to the watermarked image via Eq. (15) to obtain the CA transform domains. Second, we calculate the saliency maps from the CA domains (Stage 1) by the HFT model, and the Alpha map is calculated by Eq. (11). Third, the adaptive watermark data is extracted from the CA domains by comparing the difference between the original and watermarked CA domains. Fourth, the hologram is descrambled by the inverse Fibonacci transform. The watermark is then numerically reconstructed from the extracted hologram by computing the diffracted field at the distance $d_0$ according to Eqs. (17) and (18).
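Because the original CA-domain coefficients are available at extraction time (a non-blind scheme), Stage 2 amounts to inverting the additive embedding rule of Eq. (10). A minimal sketch, with random data standing in for one CA subband:

```python
import numpy as np

def extract_watermark(marked, original, alpha, saliency, eps=1e-8):
    """Invert Eq. (10): w = (f' - f) / (alpha * sM(f)), comparing the
    watermarked and original CA-domain coefficients. Positions with a
    near-zero embedding weight are left at zero."""
    weight = alpha * saliency
    w = np.zeros_like(marked)
    mask = np.abs(weight) > eps
    w[mask] = (marked[mask] - original[mask]) / weight[mask]
    return w

# Toy round trip in a single transform subband.
f = np.random.rand(64, 64)                      # original coefficients
sal = np.random.rand(64, 64)                    # saliency map sM
alpha = (sal.max() - sal) / sal.max()           # Alpha map, Eq. (11)
w = np.sign(np.random.rand(64, 64) - 0.5)       # toy bipolar watermark
marked = f + alpha * sal * w                    # embedding, Eq. (10)
w_hat = extract_watermark(marked, f, alpha, sal)
```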

Fig. 7 Watermark extraction process.

4. Results and discussion

To evaluate the performance of the proposed method, we conduct experiments on two standard EIAs (3D scenes 'Doll' and 'Cars') of size 900×900. The EIA is captured by a pinhole array or by the computational integral imaging (CII) system; the CII optical pickup device is shown in Fig. 8. A lenslet array composed of 30×30 lenslets is located at z = 0 mm. The interval between lenslets is 1.08 mm, and the gap g between the EIA and the lenslet array is 3 mm. The adaptive hologram watermark, generated as described in subsection 3.3, is of size 450×450. Common test parameters are the CA gateway values (8-bit cells, Wolfram rule = 143, initial values = 01011101) and CA decomposition to 2 levels.

Fig. 8 The EIA capture device used in this experiment.

To measure the similarity of the original EIA to the watermarked EIA, the peak signal-to-noise ratio (PSNR) and mean structural similarity (SSIM) are employed. To quantitatively analyze the robustness of the reconstructed watermark, the bit correct ratio (BCR) is used to measure the similarity between the original and extracted watermarks. The definitions of SSIM, PSNR, and BCR are given in the Appendix.

4.1 Visual perception measurement of elemental images by our proposed HFT

We compare the performance of the proposed modified HFT algorithm against two state-of-the-art algorithms, Itti's model [28] and the traditional HFT model [26], on the two standard EIAs. Each EIA is composed of elemental images, as shown in Figs. 9(a) and 10(a), respectively. In Fig. 9, the first row shows the original EIA and the saliency maps calculated with the Itti, traditional HFT, and proposed HFT models. Figure 9(e) shows the correlations of the elemental images of 3D scene 'Doll'. The correlations of the saliency maps calculated with the Itti, traditional HFT, and proposed HFT models are shown in Figs. 9(f)-9(h), respectively. Both the Itti and traditional HFT models hardly detect the salient regions of each elemental image, while the proposed HFT obtains an acceptable result; the correlations in Fig. 9 provide a more intuitive comparison. Figure 10 shows another test on the 3D scene 'Cars'. Figures 10(b)-10(d) show the saliency maps of the EIA of 'Cars' calculated with the Itti, traditional HFT, and proposed HFT models, respectively. Figure 10(e) shows the correlations of the EIA of 'Cars', and Figs. 10(f)-10(h) show the correlations of the saliency maps calculated with the Itti, traditional HFT, and proposed HFT models, respectively.

Fig. 9 Responses to the elemental images of 3D scene 'Doll'. The first row shows the original elemental images and the corresponding saliency maps with three algorithms: (a) EIA of 3D scene 'Doll', (b) saliency map with Itti, (c) saliency map with HFT, (d) saliency map with our proposed HFT model; (e) correlation of the EIA of 3D scene 'Doll', (f) correlation of the saliency map with Itti, (g) correlation of the saliency map with HFT, (h) correlation of the saliency map with our proposed HFT model.

Fig. 10 Responses to the elemental images of 3D scene 'Cars'. The first row shows the original elemental images and the corresponding saliency maps with three algorithms: (a) EIA of 3D scene 'Cars', (b) saliency map with Itti, (c) saliency map with HFT, (d) saliency map with our proposed HFT model; (e)-(h) correlations of the EIA of 3D scene 'Cars' and of the saliency maps with the Itti, HFT, and our proposed HFT models.

Based on the above analysis, it is clear that the proposed modified HFT algorithm succeeds in highlighting the salient regions, whereas the saliency maps calculated by the Itti and traditional HFT models are unable to highlight the salient region of each elemental image; in other words, the Itti and traditional HFT algorithms fail to extract the features of the individual elemental images. Figure 10 shows another simulation result with the elemental images captured from 'Cars', which confirms that our saliency map detects the salient elemental images more successfully. We note from Figs. 9 and 10 that our proposed HFT algorithm achieves a high performance level and outperforms both the Itti and traditional HFT models.

4.2 Imperceptibility of the adaptive watermarking algorithm

In this subsection, we first analyze the proposed adaptive watermarking algorithm with the Fibonacci transform iterations, which yield a series of textures. Here, the original and texturized holograms have been embedded into the CA frequency domains, with the same Alpha map, calculated as in subsection 3.2, used to control the embedding weights. In this work, to decrease the complexity of color image watermarking, the RGB color image is transformed into the HSV color space and the watermark data is embedded only into the hue component, since changes in the luminance, contrast, and saturation of a color image have little effect on the hue component. Figure 11 illustrates the advantage of texturizing the holograms based on Fibonacci transform iterations. The first row shows the original and texturized holograms with different numbers of iterations. The second and third rows show the original and watermarked EIAs. The enlarged watermarked EIAs are shown in the fourth row; the outlines of the hologram logo without texturization are visible after embedding, and selecting an inappropriate texturization also leads to visible distortions. The PSNR values of one channel of the EIA (R channel) are recorded in Fig. 11. It is clear that the adaptive watermark with 15 iterations provides the best performance.

Fig. 11 Demonstration of adaptively texturizing a hologram to visually match the host EIA. First column: sample texturized holograms before embedding. Second column: original EIA. Third column: watermarked EIA, all embedded with the same alpha map. Fourth column: enlarged watermarked EIA.

To further evaluate the performance of the proposed method, we compare the proposed adaptive watermarking method with the most similar recent method [22], in which the authors directly embedded the hologram-like EIA into the low-frequency domain of the DWT. The SSIM comparison results for the watermarked EIA are shown in Figs. 12 and 13. Our proposed method achieves uniform imperceptibility across the watermarked EIA because it works with the adaptive texturized holography. The results demonstrate that our proposed method, with higher SSIM values, outperforms the integral imaging based watermarking method of [22]. Moreover, our method preserves high visual quality in the reconstructed 3D scenes.

Fig. 12 (Upper row) Imperceptibility test of the method in [22]: watermarked image, SSIM map of the watermarked image, reconstructed 3D scene 'Doll', and SSIM map of the 3D scene. (Lower row) Imperceptibility test of the proposed watermarking method.

Fig. 13 (Upper row) Imperceptibility test of the method in [22]: watermarked image, SSIM map of the watermarked image, reconstructed 3D scene 'Cars', and SSIM map of the 3D scene. (Lower row) Imperceptibility test of the proposed watermarking method.

4.3 Robustness to attacks

Finally, we present results on the robustness of the proposed watermarking system against typical attacks, namely Gaussian noise, salt & pepper noise, speckle noise, median filtering, JPEG compression, and cropping. After each attack, the watermark is extracted and compared with the original watermark in terms of BCR. The attacked watermarked image under uniform noise is

$$f'(x,y) = f(x,y)\times\left(1+\beta \times n(x,y)\right), \tag{22}$$
where $f(x,y)$ is the pixel value of the watermarked image at $(x,y)$, β is a parameter that controls the strength of the noise, $n(x,y)$ is noise with uniform distribution and zero mean, and $f'(x,y)$ is the pixel value of the attacked image.
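As a sketch, the attack of Eq. (22) can be applied as follows, assuming pixel values normalized to [0, 1] and n(x, y) uniform on [−1, 1]; the clipping back to the valid range is our addition:

```python
import numpy as np

def uniform_noise_attack(img, beta, seed=0):
    """Eq. (22): f'(x, y) = f(x, y) * (1 + beta * n(x, y)), with n
    zero-mean and uniformly distributed on [-1, 1]."""
    n = np.random.default_rng(seed).uniform(-1.0, 1.0, img.shape)
    return np.clip(img * (1.0 + beta * n), 0.0, 1.0)

attacked = uniform_noise_attack(np.random.rand(900, 900), beta=0.12)
```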

The attacked watermarked EIAs and the reconstructed watermark logos, in the same attack order as above, are shown in Fig. 14, together with the BCR values calculated from the extracted watermarks. As can be seen, the proposed watermarking method demonstrates high robustness to Gaussian noise, salt & pepper noise, speckle noise, median filtering, JPEG compression, and cropping attacks. Moreover, because the hologram watermark possesses the property of data redundancy, the watermark can be reconstructed successfully from the remaining data even when most of the watermark data is destroyed. Thus, the proposed method shows significant robustness to both geometrical and noise attacks.

Fig. 14 Effects of attack type and reconstructed watermarks on robustness measured by BCR: (a) Gaussian noise with standard deviation of 0.12. (b) Salt & pepper noise with noise density 0.12. (c) Speckle noise with variance of 0.12. (d) Median filter with size of 3×3. (e) JPEG compression with compression ratio of 40%. (f) Cropping attack with cropping size of 500×500 pixels.

4.3.1 Robustness test for digital watermarking aspect

Next, to emphasize the advantages of the proposed watermarking method over digital watermarking methods, we compare it with previous digital transform based watermarking methods. First, we compare our proposed method with the typical digital watermarking methods of [20] and [21]. In [20], the 'Haar' wavelet basis function is used, and the authors embed a visually recognizable image into the DWT coefficients of an image at selected frequency bands. In [21], a 2D CA transform is used, and the watermark is embedded into the CA domain. We compare the robustness under the aforementioned attack types at different intensities. The comparative results with the digital watermarking methods of [20] and [21] are shown in Figs. 15 and 16, respectively.

Fig. 15 Reconstructed watermarks on robustness: (a) our proposed method under Gaussian noise with standard deviation of 0.12, (b) the method of [21] with standard deviation of 0.12, (c) and (d) the method of [20] with standard deviations of 0.12 and 0.06, respectively; (e) our proposed method under salt & pepper noise with noise density of 0.12, (f) the method of [21] under salt & pepper noise with noise density of 0.12, (g) and (h) the method of [20] with noise densities of 0.12 and 0.06, respectively; (i) our proposed method under speckle noise with variance of 0.12, (j) the method of [21] under speckle noise with variance of 0.12, (k) and (l) the method of [20] with variances of 0.12 and 0.06, respectively.

Fig. 16 Reconstructed watermarks on robustness: (a) our proposed method under median filtering with size of 3×3, (b) the method of [21] under median filtering with size of 3×3, (c) and (d) the method of [20] with filter sizes of 3×3 and 2×2, respectively; (e) our proposed method with a compression ratio of 10%, (f) the method of [21] with a compression ratio of 10%, (g) and (h) the method of [20] with compression ratios of 10% and 40%, respectively; (i) our proposed method with cropping size of 500×500, (j) the method of [21] with cropping size of 500×500, (k) and (l) the method of [20] with cropping sizes of 500×500 and 300×300, respectively.

The BCR values calculated from Figs. 15 and 16 at the different noise densities are recorded in Tables 1 and 2, respectively.

Table 1. BCR values with Gaussian noise, salt & pepper, and speckle noise attacks.

Table 2. BCR values with median filter, JPEG compression, and cropping attacks.

From the simulation results of Figs. 15 and 16 and Tables 1 and 2, we can see that the conventional digital watermarking methods provide better robustness when the noise density is small. However, as the noise density increases, the quality of the reconstructed watermarks seriously degrades, and the reconstructed watermarks become very hard to recognize. In contrast, our proposed method provides high robustness even when the noise density becomes very large. The reason is the memory-distributed property of the adaptive texturized holographic watermark: even when most of the watermark information is lost, the watermark can be successfully reconstructed from the remaining information.

4.3.2 Robustness test for optical watermarking aspect

Finally, we compare our proposed method with the optical watermarking algorithm of [22], which embeds the elemental images into the DWT domain of the host image. To emphasize the superiority and distinctiveness of the proposed method with respect to our previous works, we also compare with our previous work [24], which is the only proposal that satisfies similar requirements on the integral imaging parameters and CA gateway values. We selected and implemented these two optical algorithms because they are similarly based on integral imaging. To facilitate a fair comparison, we only replace the host images used in the methods of [22] and [24] with the EIA. In the comparison experiments, the EIA is generated by the CII system. It is important to mention that, unlike the methods above, in our simulation experiments the texturized hologram is embedded instead of the EIA. The comparative results with the optical watermarking methods of [22] and [24] are shown in Figs. 17 and 18, and the BCR values calculated from the reconstructed watermarks of Figs. 17 and 18 are shown in Figs. 19-21. Results for Gaussian noise and salt & pepper noise attacks under different intensities are shown in Fig. 19, where we compare our proposed method with the two methods proposed in [22] and [24]; as can be seen, our method provides better robustness than the other two methods. Figure 20 shows the comparison of the three methods under the speckle noise and median filter attacks. Robustness tests for JPEG compression and cropping attacks are also demonstrated (see the results depicted in Fig. 21). As shown in Figs. 19-21, the proposed method yields the best performance for Gaussian noise, salt & pepper, speckle noise, JPEG compression, and cropping attacks; the exception is the median filter attack, where our method yields the second-best performance and the EIA watermark used in [24] provides the best performance.

Fig. 17 Reconstructed watermarks with Gaussian noise, salt & pepper, and speckle noise attacks: the first column shows results of our proposed method; the second column shows results of the method of [22]; the third column shows results of the method of [24].

Fig. 18 Reconstructed watermarks with median filter, JPEG compression, and cropping attacks: the first column shows results of our proposed method; the second column shows results of the method of [22]; the third column shows results of the method of [24].

Fig. 19 Effects of Gaussian noise and salt & pepper noise attacks on robustness measured by BCR: (a) Gaussian noise with standard deviations from 0.02 to 0.14. (b) Salt & pepper noise with noise densities from 0.02 to 0.14.

Fig. 20 Effects of speckle noise and median filter attacks on robustness measured by BCR: (a) speckle noise with variances from 0.02 to 0.14. (b) Median filter with window sizes from 2×2 to 5×5.

Fig. 21 Effects of JPEG compression and cropping attacks on robustness measured by BCR: (a) JPEG compression with compression ratios from 10% to 90%. (b) Cropping with sizes from 100×100 to 600×600.

Since the watermark embedded by the proposed method is a texturized holographic watermark, the embedded data is resistant to geometric attacks such as cropping. In addition, the proposed watermarking method also yields better performance against non-geometric attacks because of the texturized property of the watermark.

4.4 Watermarking time-consuming analysis

Apart from the security analysis, the computation speed of the watermark embedding and extraction processes is very important in a watermarking system. The experiments are carried out on a personal computer equipped with a 3.6 GHz Intel Core i7 CPU and 8 GB of RAM. Table 3 lists the watermark embedding and extraction speeds of our proposed method and of the DWT-based watermarking method with the same EIAs 'Doll' and 'Cars'.

Table 3. Average embedding/extraction speed of EIAs.

5. Conclusions

In this paper, we have presented an adaptive texturized watermarking method for EIA using the modified HFT. The modified HFT exploits the features of the EIA to create a visual-attention saliency map. The weight parameters are generated from the saliency map, and the watermark data is embedded into the CA domains according to the visual attentiveness of each elemental image. The major contributions of the proposed watermarking method are the following: 1) it successfully extracts the feature map from the elemental images, which provides the weight parameters to control data embedding, and 2) it introduces the adaptive texturized hologram that best matches the corresponding CA domain. As we have demonstrated, the use of all available texturized regions in the hologram with varying embedding weights, in addition to the use of matching CA domains and affine parameter compensation, affords significant robustness to a variety of attacks. A comparison demonstrates that the proposed method generally outperforms similar methods in terms of imperceptibility and robustness.

Appendix

The parameters computed in this paper as quantitative evaluators are the SSIM and the PSNR. The SSIM is considered to be correlated with the quality perception of the human visual system (HVS) and is designed by modelling image distortion. The SSIM is defined as

$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}, \tag{23}$$
$$c_1 = (k_1 L)^2, \quad c_2 = (k_2 L)^2, \tag{24}$$
where $\mu_x$ and $\mu_y$ are the averages of x and y, $\sigma_x^2$ and $\sigma_y^2$ are the variances, $\sigma_{xy}$ is the covariance of x and y, L is the dynamic range of the pixel values, and $k_1 = 0.01$, $k_2 = 0.03$ by default. The SSIM takes positive values in the range from 0 to 1, where SSIM = 1 means no image-quality distortion.

The PSNR is used to quantitatively evaluate the image quality in our scheme. It can be calculated by

$$\mathrm{PSNR}(O, O') = 10\log_{10}\frac{255^2}{\mathrm{MSE}(x,y)}, \tag{25}$$
$$\mathrm{MSE}(x,y) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[O(x,y) - O'(x,y)\right]^2, \tag{26}$$
where O and O′ are the original and reconstructed plane images, respectively, and x and y denote the pixel coordinates of images of size M×N.

The BCR is used to judge the difference between the extracted watermark and the original watermark. It can be calculated by

$$\mathrm{BCR} = \left(1 - \frac{\sum_{i=1}^{L_M} w_i \oplus w_i'}{L_M}\right), \tag{27}$$
where $w_i$ and $w_i'$ denote the bits of the original and the extracted watermarks, respectively, ⊕ denotes the exclusive-OR operation, and $L_M$ is the size of the watermark.
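For reference, the three evaluators can be computed as below; Eqs. (23)-(24) are implemented here in their global-statistics form rather than as the windowed mean SSIM:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM, Eqs. (23)-(24)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def psnr(o, o_rec):
    """PSNR for 8-bit images, Eqs. (25)-(26)."""
    mse = np.mean((o.astype(float) - o_rec.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def bcr(w, w_ext):
    """Bit correct ratio, Eq. (27): fraction of matching watermark bits."""
    w, w_ext = np.asarray(w).ravel(), np.asarray(w_ext).ravel()
    return 1.0 - np.count_nonzero(w != w_ext) / w.size

o = (np.random.rand(64, 64) * 255).astype(np.uint8)
o_rec = np.clip(o + np.random.default_rng(0).integers(-5, 6, o.shape),
                0, 255).astype(np.uint8)
print(ssim(o.astype(float), o_rec.astype(float)), psnr(o, o_rec),
      bcr(o > 128, o_rec > 128))
```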

Acknowledgments

National Natural Science Foundation of China (NSFC) (61535007, 61320106015); the Equipment Research Program in Advance of China (JZX2016-0606/Y267).

References and links

1. A. Stern and B. Javidi, “3-D computational synthetic aperture integral imaging (COMPSAII),” Opt. Express 11(19), 2446–2451 (2003). [CrossRef]   [PubMed]  

2. R. Horisaki, X. Xiao, J. Tanida, and B. Javidi, “Feasibility study for compressive multi-dimensional integral imaging,” Opt. Express 21(4), 4263–4279 (2013). [CrossRef]   [PubMed]  

3. B. Javidi, R. Ponce-Díaz, and S. H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31(8), 1106–1108 (2006). [CrossRef]   [PubMed]  

4. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]   [PubMed]  

5. B. Lee, H. Kang, and E. Kim, “Occlusion removal method of partially occluded object using variance in computational integral imaging,” 3D Research 1(2), 6–10 (2010). [CrossRef]  

6. S. H. Hong, J. S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004). [CrossRef]   [PubMed]  

7. H. Yoo, “Axially moving a lenslet array for high-resolution 3D images in computational integral imaging,” Opt. Express 21(7), 8873–8878 (2013). [CrossRef]   [PubMed]  

8. D. H. Shin and H. Yoo, “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” Opt. Express 16(12), 8855–8867 (2008). [CrossRef]   [PubMed]  

9. Z. Liu and S. Liu, “Random fractional Fourier transform,” Opt. Lett. 32(15), 2088–2090 (2007). [CrossRef]   [PubMed]  

10. G. Situ and J. Zhang, “Multiple-image encryption by wavelength multiplexing,” Opt. Lett. 30(11), 1306–1308 (2005). [CrossRef]   [PubMed]  

11. Y. Frauel, A. Castro, T. J. Naughton, and B. Javidi, “Resistance of the double random phase encryption against various attacks,” Opt. Express 15(16), 10253–10265 (2007). [CrossRef]   [PubMed]  

12. S. Liu, Y. Li, and B. Zhu, “Optical image encryption by cascaded fractional Fourier transforms with random phase filtering,” Opt. Commun. 187(3), 57–63 (2001). [CrossRef]  

13. N. Zhou, S. Pan, S. Cheng, and Z. Zhou, “Image compression–encryption scheme based on hyper-chaotic system and 2D compressive sensing,” Opt. Laser Technol. 82, 121–133 (2016). [CrossRef]  

14. S. Yeom, A. Stern, and B. Javidi, “Compression of 3D color integral images,” Opt. Express 12(8), 1632–1642 (2004). [CrossRef]   [PubMed]  

15. H. H. Kang, J. H. Lee, and E. S. Kim, “Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging,” Opt. Express 20(5), 5440–5459 (2012). [CrossRef]   [PubMed]  

16. A. Aggoun, “Compression of 3D integral images using 3D wavelet transform,” J. Disp. Technol. 7(11), 586–592 (2011). [CrossRef]  

17. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]   [PubMed]  

18. X. Li, C. Li, and I.-K. Lee, “Chaotic image encryption using pseudo-random masks and pixel mapping,” Signal Process. 125, 48–63 (2016). [CrossRef]  

19. I. Muniraj, B. Kim, and B. G. Lee, “Encryption and volumetric 3D object reconstruction using multispectral computational integral imaging,” Appl. Opt. 53(27), G25–G32 (2014). [CrossRef]   [PubMed]  

20. M. Hsieh, D. Tseng, and Y. Huang, “Hiding digital watermarks using multiresolution wavelet transform,” IEEE Trans. Ind. Electron. 48(5), 875–882 (2001). [CrossRef]  

21. H. Wu, J. Zhou, and X. Gong, “A novel image watermarking algorithm based on two-dimensional cellular automata transform,” in Proceedings of Conference of Information Technology and Artificial Intelligence (IEEE, 2011), pp. 206–210. [CrossRef]  

22. D. Hwang, D. Shin, and E. Kim, “A novel three-dimensional digital watermarking scheme basing on integral imaging,” Opt. Commun. 277(1), 40–49 (2007). [CrossRef]  

23. X. Li and S. Kim, “Optical 3D watermark based digital image watermarking for telemedicine,” Opt. Lasers Eng. 51(12), 1310–1320 (2013). [CrossRef]  

24. X. W. Li and I. K. Lee, “Robust copyright protection using multiple ownership watermarks,” Opt. Express 23(3), 3035–3046 (2015). [CrossRef]   [PubMed]  

25. J. Li, M. D. Levine, X. An, X. Xu, and H. He, “Visual saliency based on scale-space analysis in the frequency domain,” IEEE Trans. Pattern Anal. Mach. Intell. 35(4), 996–1010 (2013). [CrossRef]   [PubMed]  

26. C. Li and Y. Hu, “Salient traffic sign detection based on multiscale hypercomplex Fourier transform,” in Proceedings of the IEEE International Congress on Image and Signal Processing (IEEE, 2011), pp. 1963–1966. [CrossRef]  

27. P. Picart and J. Leval, “General theoretical formulation of image formation in digital Fresnel holography,” J. Opt. Soc. Am. A 25(7), 1744–1761 (2008). [CrossRef]   [PubMed]  

28. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998). [CrossRef]  
