
Exploiting data compression to improve reliability of phase-modulated holographic data storage

Open Access

Abstract

Due to the interference of complex noise in holographic channels and the limitation of phase retrieval algorithms, the reliability of phase-modulated holographic data storage (PHDS) is seriously threatened, especially for multi-level phase modulation. A method for improving the data reliability of PHDS is proposed by applying lossless data compression and low-density parity-check (LDPC) codes, which can eliminate data redundancy and correct errors effectively. We allocate the space saved by compression to store more LDPC parity bits and develop a method to determine the LDPC code rate and a method to manage the free space. Our method does not require the characteristics of the reconstructed phase distribution, which simplifies the statistical analysis and calculation. Simulation and experimental results demonstrate that our method greatly decreases the bit error rate (BER) and the number of decoding iterations, and boosts the decoding success probability. For instance, when the phase error rate is 0.029 and the compression rate is 0.6, our method reduces the BER by $87.8{\%}$, the decoding iterations by $84.3{\%}$, and improves the decoding success probability by $93{\%}$. Our method enhances both data reliability and storage efficiency in PHDS.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

For data storage, the demand for huge storage density and fast data transfer rates is growing rapidly. Holographic data storage is considered the most promising candidate for the next generation of optical data storage systems due to the feasibility of high-density recording [1–4]. As a three-dimensional volume storage technology, holographic data storage employs various multiplexing techniques such as shift multiplexing [5], angular multiplexing [6], wavelength multiplexing [7], and phase code multiplexing [8], as well as the two-dimensional data page as the reading and writing unit, significantly boosting storage capacity and data conversion rate. The other two benefits of holographic storage are ultra-long-term preservation and low energy consumption, which are highly consistent with the idea of green development in the modern period [9]. Holographic storage is classified into three types based on modulation mode: phase-modulated, amplitude-modulated, and complex amplitude-modulated holographic data storage [10]. PHDS is regarded as a special case of complex amplitude-modulated holographic storage with a uniform amplitude of 1 [11]. PHDS provides a vast storage capacity and a high signal-to-noise ratio because phase modulation encoding has a higher code rate [12]. The challenge with phase-type holographic data storage is that detectors such as complementary metal oxide semiconductor (CMOS) sensors cannot directly detect phase information. Several phase retrieval techniques, including interferometric, non-interferometric, and combinations of both, have been presented [13–21]. Interferometric systems are more complex and sensitive to environmental disturbances due to the multi-step capture operations, resulting in a low data transfer speed. The iterative Fourier transform (IFT) based interference-free system is simpler, but the speed of data conversion is limited by multiple algorithm iterations. When reading data from PHDS, both the data transfer rate and data reliability must be considered, and there are trade-offs between the two.

Several studies have focused on improving data reliability or transmission speed. Error correction code (ECC) is an important technique for ensuring data reliability by lowering the bit error rate (BER) to an acceptable level [22]. A high raw bit error rate (RBER) can be tolerated by ECCs with strong error-correcting performance [23,24]. The LDPC code has strong error-correcting performance and is very suitable for holographic data storage systems. Firstly, LDPC codes offer performance close to the Shannon limit and a fully parallel decoding mode, which is well matched to holographic storage with its high storage density and fast data conversion rate. Secondly, prior information about the noise distribution in the channel/data page can be used not only to design the coding but also to optimize decoding. In a previous work, we used the reconstructed phase distribution to increase the accuracy of the initial log-likelihood ratio (LLR) information, which enhanced the LDPC decoding performance and data reliability [25]. Based on prior information about the reference beam, the reliability of PHDS was enhanced by assigning larger LLR weights to the reference bits than to the information bits [26]. To improve the data reliability of PHDS, a reliable-bit-aware LDPC optimization approach was also presented that analyzes the phase demodulation features to identify reliable bits and uses them to optimize the LDPC decoding performance [27]. In the abovementioned studies, the characteristics of the retrieved phase distributions or demodulated phases need to be known a priori to increase the accuracy of the initial LLR information. To extract these characteristics, a large number of reconstructed phase data pages have to be evaluated, which imposes a massive computing burden. Although some statistical analysis can be done offline, the initial LLR information of each bit still requires computational work and storage expense. We believe that there is a more effective way to improve LDPC codes for PHDS without using additional assistance.

This study enhances the error correction performance of LDPC codes by exploiting data compression, without relying on the retrieved phase or demodulated bits. This is a novel perspective in the field of PHDS. The space saved by compressing the information bits can be used to store more check bits. Moreover, the remaining space is fully utilized by filling it with all-0 bits, which improves the decoding performance by improving the LLR information of the bits in the remaining space. LDPC codewords stored in holographic media are susceptible to errors due to noise disturbance; an unknown bit is one for which we cannot determine whether the read information is correct or incorrect. The all-0 bits stored in the remaining space are known information that forms part of the bit information of the LDPC codeword, and they can aid the decoding of the unknown bits in the codeword. Because known and unknown bits participate in the same check equations under the LDPC decoding rules, the known bits provide useful LLR information that speeds up the decoding updates of the unknown bits. As a result, the LDPC decoding performance can be improved. Data compression has important applications in data transmission and storage. It can save energy and space by reducing the amount of data transmitted and/or decreasing transfer time, because the size of the data is reduced [28]. There are several well-known data compression algorithms. In this paper, we do not investigate specific data compression algorithms, but rather explore the effectiveness of our proposed approach by setting a series of compression rates. A simulation system with the same parameters as the experimental system and a real-world experiment are conducted to confirm the viability and feasibility of our proposed method.

2. Theory and methods

2.1 Collinear phase-modulated holographic data storage system

According to whether the optical paths of the reference beam and information beam are coaxial or not, PHDS can be divided into collinear PHDS and off-axis PHDS. Figure 1 shows a simple illustration of the collinear PHDS. In data recording, both the signal part pattern and the reference part pattern of the four-level modulated phase data page are uploaded onto a spatial light modulator (SLM) to generate the signal beam and reference beam. The signal beam and the reference beam interfere in the recording medium to form interference fringes, which are recorded as a hologram in the medium. In data reading, only the reference part pattern is loaded on the SLM to form a reference beam identical to the one used in recording to illuminate the hologram. The reconstructed beam produced by diffraction then appears at the back focal plane of lens2. Since the reconstructed beam is diffracted, its intensity is much lower than that of the reference beam. Therefore, an attenuator is placed at the back focal plane of lens2 to reduce the intensity of the reference beam and ensure intensity uniformity between the reconstructed beam and the reference beam. Finally, the Fourier domain intensity of the reconstructed beam is captured by the CMOS camera as the input for phase retrieval.

Fig. 1. The illustration of the collinear phase-modulated holographic data storage system.

The phase information page can be reconstructed using the following IFT algorithm [15,16,26].

1) A two-dimensional phase page $[P]$ is randomly guessed as the initial input of the IFT algorithm. $[P]$ contains two parts of phase information: the left half is the signal phase $[P_s]$, and the right half is the reference phase $[P_r]$, which is known.

2) The complex amplitude distribution $[C_k]$ of the reconstructed light in the object domain is obtained using Formula (1).

$${[C_k]} = {e^{i \cdot [P_s^k]}} + A_r \cdot {e^{i \cdot [{P_r}]}}.$$

Here, $A_r$ represents the reference light amplitude weight, and $k$ denotes the number of IFT iterations. Note that $[P_r]$ is known information and does not change with $k$.

3) The complex amplitude distribution $[F_k]$ of the Fourier domain is obtained by Fourier transform of Formula (1).

$$[F_k] = f\{[C_k]\} = |[\sqrt{Int_0}]| \cdot e^{i \cdot [P_k]}.$$

Here, $f$ is the Fourier transform operator, and $[Int_0]$ is the intensity distribution captured by the CMOS camera.

4) Inverse Fourier transform is applied to Formula (2) to obtain the complex amplitude distribution ${C_k^{'}}$ of the object domain.

$${[C_k^{'}]}={f^{ - 1}}\{ [{F_k}]\} = |{W_s}| \cdot {e^{i \cdot [P_s^{'k}]}} + |{W_r}| \cdot {e^{i \cdot [{P_r}]}}.$$

Here, $W_s$ and $W_r$ denote the new amplitude weights of the signal beam and reference beam, respectively. $[P_s^{'k}]$ is the new signal light phase distribution.

5) $[C_k^{'}]$ is processed using the reference phase as a constraint. Specifically, the information phase on the left side of the two-dimensional phase page remains unchanged, and the right part is replaced with a known reference phase. Thus, we can get a new distribution $[C_k^{''}]$.

$${[C_k^{\prime\prime}]} = 1 \cdot {e^{i \cdot [P_s^{'k}]}} + |{A_r}| \cdot {e^{i \cdot [{P_r}]}}.$$
6) The Fourier intensity distribution $[Int_k]$ is obtained by Fourier transform of Formula (4).
$$[{Int_k}] = f\{{C_k^{\prime\prime}}\} \cdot {(f\{ {C_k^{\prime\prime}}\} )^ * }.$$

Here, $*$ represents the conjugation.

7) The intensity error rate is computed by using Formula (6).

$$[I_k] = \frac{\sum \left( \left| [Int_k] - [Int_0] \right| \right)}{\sum [Int_0]}.$$
8) The difference $\Delta I$ between two adjacent intensity error rates is calculated by using Formula (7).
$$\Delta I = {I_k} - {I_{k - 1}}.$$

If $\Delta I$ is less than the set threshold or reaches the maximum number of iterations, the IFT iteration terminates. Otherwise, the obtained phase distribution is used as the new input to continue the iteration.
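The loop structure of steps 1)–8) is compact enough to sketch in code. The following is a minimal, illustrative NumPy implementation of the iteration described above; the function and variable names are ours, and details such as array layout and Fourier scaling may differ from the authors' actual implementation.

```python
import numpy as np

def ift_phase_retrieval(int0, p_r, a_r=1.0, max_iter=100, tol=5e-3):
    """Minimal sketch of the IFT loop described above.
    int0 : measured Fourier intensity (2-D array from the camera)
    p_r  : known reference phase (right half of the page)
    a_r  : reference amplitude weight"""
    h, w = p_r.shape
    p_s = np.random.uniform(0, 2 * np.pi, (h, w))    # step 1: random signal-phase guess
    amp0 = np.sqrt(int0)
    i_prev = np.inf
    for k in range(max_iter):
        # step 2: object-domain complex amplitude, Formula (1)
        c = np.hstack([np.exp(1j * p_s), a_r * np.exp(1j * p_r)])
        # step 3: impose the measured Fourier amplitude, Formula (2)
        f = np.fft.fft2(c)
        f = amp0 * np.exp(1j * np.angle(f))
        # step 4: back to the object domain, Formula (3)
        c2 = np.fft.ifft2(f)
        # step 5: keep the new signal phase, restore the known reference, Formula (4)
        p_s = np.angle(c2[:, :w])
        c3 = np.hstack([np.exp(1j * p_s), a_r * np.exp(1j * p_r)])
        # step 6: recomputed Fourier intensity, Formula (5)
        int_k = np.abs(np.fft.fft2(c3)) ** 2
        # steps 7-8: intensity error rate and stopping test, Formulas (6)-(7)
        i_k = np.sum(np.abs(int_k - int0)) / np.sum(int0)
        if abs(i_k - i_prev) < tol:
            break
        i_prev = i_k
    return p_s
```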

2.2 Data reliability and LDPC codes

Any storage system is affected by various complex noises that are always difficult to eliminate completely. PHDS is degraded by noise from optical systems, materials, and electrical components, especially for multilevel modulation. In our previous study, we found that when the phase error rate (PER) rose to a high level, the LDPC code could not correct the errors completely, even if the number of iterations increased substantially, reducing the data conversion rate. Moreover, when the phase error was severe, the decoding failure rate was close to $50{\%}$ [25], meaning that half of the data frames could not be fully corrected. This is because the interference of various noises in the system and the limitations of the phase retrieval algorithm resulted in a PER that exceeded the error-correcting ability of the LDPC code. Even when the BER could be reduced to some extent, it still required a great deal of iterative decoding computation and time, decreasing the data transmission rate. Therefore, it is important to optimize LDPC codes for phase-modulated holographic data storage. After the phase page has been reconstructed using the IFT algorithm, PER is defined as the ratio of the number of phase errors to the size of the phase page. PER reflects the quality of the reconstructed phase page.

The LDPC code is a kind of linear block code, which consists of information bits and redundant parity bits. Redundant parity bits are also called supervision bits and are responsible for checking and correcting errors [29]. Each parity bit can be represented as the modulo-2 sum of some pre-specified information bits [30]. The construction rule of the check matrix can be expressed by check equations: the parity check matrix represents a set of homogeneous linear modulo-2 equations that form the check equations. The relationship between the parity check matrix and the check equations is depicted in Fig. 2. Each variable node (VN) represents a column in the check matrix, and each check node (CN) represents a row. The mutual constraint between information bits and check bits is reflected in the check equations, and the solution set of the check equations is the set of codewords. When the length of the information bits is fixed, the more check equations there are, the stronger the constraint relation between information bits and check bits. Therefore, increasing the number of check bits undoubtedly helps to improve the error correction ability of the LDPC code. However, when the length of the LDPC codeword remains unchanged, increasing the check bits occupies more storage space, which reduces the storage space the system can allocate to information bits. In traditional LDPC codes, there are two ways to increase the number of check bits: one is to increase the length of the codeword when the number of information bits is fixed; the other is to reduce the number of information bits when the code length is fixed. Although these methods can improve data reliability, they also reduce the LDPC coding rate and the data storage capacity, failing to fully utilize the high density and large capacity of phase-modulated holographic storage. This is an irreconcilable contradiction in traditional methods.

 figure: Fig. 2.

Fig. 2. The parity check matrix and its corresponding parity check equations. Each column in the parity check matrix containing only 0 and 1 is called the variable node and each row is called the check node corresponding the check equation. After encoding, the bit sequences of $X{\rm {\ =\ (}}{{\rm {x}}_1}{\rm {,\ \ }}{{\rm {x}}_2}{\rm {,\ \ }}{{\rm {x}}_3}{\rm {,\ \ }}{{\rm {x}}_4}{\rm {,\ \ }}{{\rm {x}}_5}{\rm {,\ \ }}{{\rm {x}}_6}{\rm {,\ \ }}{{\rm {x}}_7}{\rm {,\ \ }}{{\rm {x}}_8}{\rm {)}}$ satisfy ${H_{3 \times 8}} \cdot {X^T}{\rm {\ =\ }}{0^T}$. Here, ${H_{3 \times 8}}$ denotes the parity check matrix. It is also one of the decoding termination conditions. The addition operation is the exclusive OR operation. Expanding the above equation, we can get the following check equations: ${{\rm {x}}_1} \oplus {{\rm {x}}_3} \oplus {{\rm {x}}_5} \oplus {{\rm {x}}_7} = 0$, ${{\rm {x}}_2} \oplus {{\rm {x}}_4} \oplus {{\rm {x}}_6} = 0$, and ${{\rm {x}}_1} \oplus {{\rm {x}}_2} \oplus {{\rm {x}}_3} \oplus {{\rm {x}}_8} = 0$. After processing, we can get ${{\rm {x}}_6}{\rm {\ =\ }}{{\rm {x}}_2} \oplus {{\rm {x}}_4}$, ${{\rm {x}}_7}{\rm {\ =\ }}{{\rm {x}}_1} \oplus {{\rm {x}}_3} \oplus {{\rm {x}}_5}$, and ${{\rm {x}}_8} = {{\rm {x}}_1} \oplus {{\rm {x}}_2} \oplus {{\rm {x}}_3}$, where ${{\rm {x}}_6}$, ${{\rm {x}}_7}$, and ${{\rm {x}}_8}$ denote the parity check bits. ${{\rm {x}}_1}$, ${{\rm {x}}_2}$, ${{\rm {x}}_3}$, ${{\rm {x}}_4}$, and ${{\rm {x}}_5}$ represent the information bits.

Download Full Size | PDF
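To make the encode-and-check relationship concrete, the short NumPy sketch below builds the $H_{3 \times 8}$ of Fig. 2 from its three check equations, encodes an arbitrary example set of information bits, and verifies that the resulting codeword has a zero syndrome; the chosen information bits are illustrative only.

```python
import numpy as np

# Parity check matrix H (3x8) from the check equations in Fig. 2:
# x1^x3^x5^x7 = 0, x2^x4^x6 = 0, x1^x2^x3^x8 = 0
H = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 0, 0, 1]])

info = np.array([1, 0, 1, 1, 0])          # information bits x1..x5 (arbitrary example)
x6 = info[1] ^ info[3]                    # x6 = x2 ^ x4
x7 = info[0] ^ info[2] ^ info[4]          # x7 = x1 ^ x3 ^ x5
x8 = info[0] ^ info[1] ^ info[2]          # x8 = x1 ^ x2 ^ x3
x = np.concatenate([info, [x6, x7, x8]])  # full codeword X

syndrome = H @ x % 2                      # H . X^T over GF(2)
print(syndrome)                           # [0 0 0] -> X is a valid codeword
```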

2.3 Data compression

Data compression is a technology that transforms a string of characters into a shorter one that conveys the same information [28]. It reduces the amount of data to be sent or stored by eliminating some of the redundancy in the data. Data compression improves the efficiency of data transmission and storage, and protects data integrity. It has many benefits, such as lowering storage and communication costs and easing the load of data input/output channels on computer systems. Compression rate is the ratio of the size of the compressed data to the size of the original data, as shown in Eq. (1).

$$\text{compression rate} = \frac{\text{size of compressed data}}{\text{size of original data}}.$$

In practice, compression rate affects data transmission speed, storage space utilization, and data reliability and security. A lower compression rate means a higher compression efficiency, but also more time for compression and decompression.
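As a concrete illustration of Eq. (1), the snippet below measures the compression rate of a block of user data with zlib, a standard lossless compressor from the Lempel-Ziv family. The file name is hypothetical, and the paper itself does not prescribe a particular compressor.

```python
import zlib

original = open("subblock.bin", "rb").read()   # hypothetical sub-block of user data
compressed = zlib.compress(original, level=9)

rate = len(compressed) / len(original)         # compression rate r; r < 1 for redundant data
print(f"compression rate r = {rate:.2f}")

# Lossless: decompression restores the data bit for bit.
assert zlib.decompress(compressed) == original
```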

At present, several compression strategies for holographic data storage have been proposed. Darakis et al. [31] proposed a phase-shifting digital holographic data compression approach that compresses the interference patterns to decrease the amount of data recorded. Naughton et al. [32,33] studied lossless and lossy data compression in phase-shifting digital holography and compressed holograms to speed up the transmission of three-dimensional images. Arrifano et al. [34] presented multiple description coding including scalar quantization for digital hologram compression. Liu et al. [35] exploited compressed sensing technology to improve the anti-noise ability and reduce the bit error rate of the holographic data storage system. Hachani et al. [36] proposed a JPEG-based holographic data compression scheme, which integrates a region-of-interest-based bit allocation method to encode phase-shifting digital holograms. In contrast, our proposed method compresses the bit information to conserve storage space: more redundant bits can be stored in the freed-up space, which helps LDPC codes conduct error correction more effectively. The proposed method, which has great application potential in holographic data storage systems, can significantly increase data reliability and capacity when the phase reconstructed by the IFT algorithm has a high error rate.

2.4 Our method

Data compression can remove the redundancy of user data, so it is theoretically possible to use the space saved by compressing data to store more check data. This inspired us to explore using data compression to enhance phase reliability of PHDS. Figure 3 shows the channel model of our proposed scheme for holographic data storage system. The stage of our method in the system is marked by the orange dashed block, which is closely linked to the ECC module. Figure 4 illustrates the data flow of our method. The detailed steps are as follows:

Fig. 3. Channel model of holographic data storage system.

Fig. 4. The data flow diagram of the proposed method in phase-modulated holographic data storage system.

Step 1: The gray image is divided into $m \times l$ pixel sub-blocks of the same size, in order from left to right and from top to bottom. We randomly select one of them as an example for recording and reading. Suppose this sub-block can be represented by $k$ bits. If LDPC coding were performed directly on these $k$ information bits, $(n-k)$ parity check bits would be generated. However, we only reserve $(n-k)$ bits of storage space and do not perform LDPC coding at this point.

Step 2: This step consists of three sub-steps, which are data compression, calculation of code rate and disposal of free space. The purpose of this step is to reduce the storage space requirements while ensuring information integrity.

(1) Data compression. Data compression is performed on each sub-block; assume that the compression rate of the selected sub-block in Fig. 4 is $r$, $0<r<1$. The sub-block of $k$ bits can then be compressed into $k \cdot r$ bits. The compressed data is smaller but carries the same information, so compression saves $k \cdot (1-r)$ bits of storage space. We use these $k \cdot (1-r)$ bits together with the reserved $(n-k)$ bits to store the parity check bits of the compressed data, so at most $k \cdot (1-r)$ additional parity bits are stored in the saved space. The original data can be compressed using lossless compression algorithms such as Lempel-Ziv, Lempel-Ziv-Welch, and Burrows-Wheeler. These algorithms offer high compression efficiency and meet the compression requirements of the holographic data storage system [32]. To meet the compression requirements, we aim to select an algorithm at the design stage that has higher compression efficiency, such as the Burrows-Wheeler algorithm. An improved LDPC code can be constructed after the data has been compressed. The proposed approach divides the original data into several sub-blocks, and each sub-block is subsequently compressed and encoded using LDPC codes. Due to the randomness of the data, each sub-block has a different compression rate. We set the length of the LDPC codeword with data compression to be equal to the length of the LDPC codeword without data compression, even though each sub-block's compression rate varies. Since the codeword length is fixed, we can calculate the amount of remaining space, the LDPC code rate, and the length of the 0 regions once each sub-block has finished the data compression operation with a compression rate $r$. Image data in particular is redundant and therefore compresses well, which ensures that space is freed after compression. Data compression improves data reliability by freeing up redundant space to store more parity bits, and the proposed approach remains effective when the data compression rate is not constant.

(2) Code rate. The LDPC code rate is determined as follows. Firstly, an identity matrix of size 1024$\times$1024 is selected as the base matrix of the check matrix. When the number of columns in the check matrix is $n$, the column weight is $n/1024$. Since the parity check data has at most $(n - k \cdot r)$ bits, the row weight of the check matrix is $\left \lfloor {(n - k \cdot r)/1024} \right \rfloor$, where $\left \lfloor {\cdot } \right \rfloor$ is the floor function. The number of rows of the check matrix (i.e., the number of parity check bits) is $1024 \times \left \lfloor {(n - k \cdot r)/1024} \right \rfloor$. Thus, the LDPC code rate is $1 - \frac {{1024 \times \left \lfloor {(n - k \cdot r)/1024} \right \rfloor }}{n}$. A code sketch following sub-step (3) below works through these quantities. Figure 5 illustrates the encoding differences between traditional LDPC codes and the proposed method. In Fig. 5, number 1 denotes the traditional method without data compression and number 2 represents the proposed method with data compression to improve the error correction performance of LDPC codes.

Fig. 5. The traditional method (denoted by number 1) versus the proposed method (denoted by number 2).

(3) Free space. The compressed $k \cdot r$ bits are LDPC encoded according to the code rate determined in step 2 (2), generating $1024 \times \left \lfloor {(n - k \cdot r)/1024} \right \rfloor$ bits of parity check data. The length of the LDPC codeword is fixed at $n$ bits. By calculation, $(n - k \cdot r) - 1024 \times \left \lfloor {(n - k \cdot r)/1024} \right \rfloor$ bits of storage space remain vacant, which we call free space. The free space is fully utilized by filling it with all 0 bits. We call the parity data stored in the original redundant space (i.e., the $(n-k)$ bits) parity data 2, and the parity data stored in the compressed space parity data 1. Parity data 1 and parity data 2 together form all the parity data generated by the LDPC code. The free space can be placed in three ways: 1) on the left side of parity data 1; 2) between parity data 1 and parity data 2; 3) on the right of parity data 2, as shown in Fig. 6. All three placements are compatible with our method; we choose one of them (e.g., the one represented by number 1) to verify the effectiveness of our approach. Figure 6 illustrates the possible layouts of free space and parity check data after applying data compression.

Fig. 6. Three ways of placing free space.
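As referenced in sub-step (2), the bookkeeping of sub-steps (2) and (3) reduces to a few integer computations. The sketch below, with names of our own choosing and assuming the compressed length is rounded down to a whole number of bits, computes the parity length, code rate, free space, and the parity bits gained over the uncompressed code:

```python
import math

def layout(n, k, r, p=1024):
    """Bookkeeping of sub-steps (2) and (3): codeword length n, sub-block
    size k, compression rate r, base-matrix dimension p."""
    compressed = math.floor(k * r)        # information bits after compression
    parity = p * ((n - compressed) // p)  # 1024 * floor((n - k*r)/1024) parity bits
    code_rate = 1 - parity / n
    free = (n - compressed) - parity      # leftover bits, filled with 0s
    added = parity - (n - k)              # parity bits gained over the uncompressed code
    return compressed, parity, code_rate, free, added

# Example with the simulation parameters of Section 3 (n = 18432, k = 16384):
print(layout(18432, 16384, 0.7))
# -> (11468, 6144, 0.6666..., 820, 4096)
```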

Step 3: Following the modulation rule $00\to 0$, $01\to \frac {\pi }{2}$, $11\to \pi$, $10\to \frac {{3\pi }}{2}$, the LDPC codeword is 4-level phase-modulated into $n/1024$ signal data pages with a size of $32 \times 16$. A reference data page with a size of $32 \times 16$ is randomly generated and combined with the signal data pages to form $n/1024$ phase data pages with a size of $32 \times 32$. The $n/1024$ phase data pages from one LDPC codeword are called a set of phase data pages.
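A minimal sketch of this modulation step is given below. It maps bit pairs to the four phase levels with the stated rule and slices one codeword into $32 \times 16$ signal pages; the codeword here is a random placeholder, and the length 18432 is the value used in Section 3.

```python
import numpy as np

# Step 3 mapping: 00->0, 01->pi/2, 11->pi, 10->3*pi/2
PHASE = {(0, 0): 0.0, (0, 1): np.pi / 2, (1, 1): np.pi, (1, 0): 3 * np.pi / 2}

def modulate(bits, rows=32, cols=16):
    """Turn a 1024-bit slice of the LDPC codeword into one 32x16 signal page."""
    pairs = bits.reshape(-1, 2)
    phases = np.array([PHASE[tuple(p)] for p in pairs])
    return phases.reshape(rows, cols)

codeword = np.random.randint(0, 2, 18432)   # placeholder for an encoded codeword
pages = [modulate(codeword[i:i + 1024]) for i in range(0, 18432, 1024)]
print(len(pages), pages[0].shape)           # 18 signal pages of 32x16 phases
```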

Step 4: These phase data pages are sent to holographic channels with complex noise for transmission. When reading data, the Fourier intensity information of the reconstructed beam is captured by the CMOS camera. The IFT algorithm is then utilized to retrieve the signal phase data.

Step 5: According to the demodulation rules $0\to 00$, $\frac {\pi }{2}\to 01$, $\pi \to 11$, $\frac {{3\pi }}{2}\to 10$, a set of reconstructed phase data pages is demodulated into an $n$-bit stream. Before launching LDPC decoding, it is critical to obtain the initial LLR information for each bit. Usually, the logical value of each bit must be determined before obtaining its LLR value. For bits in a non-free area, the LLR is set to $+1$ if the bit is 1, and to $-1$ if it is 0, as shown in Fig. 7. Based on prior knowledge of the bit information in the free space, we assign the initial LLR value of each bit in the free space to $-1$ directly, without judging the logical value of the free bits. Then, LDPC decoding is performed using the layered LDPC decoding algorithm, which uses the CN and VN information updated in a given layer for timely decoding decisions, with high decoding efficiency. The overall layered decoding algorithm is as follows [37] (a brief sketch of the initialization and check-node update follows step (d)).

Fig. 7. Obtaining the initial LLR information for each bit to launch LDPC decoding.

(a) Update VN information by utilizing equation (2)

$$G_j = S_j - K_{ij}^{(t-1)}.$$

Here, $S_j$ denotes the LLR of the $j$th bit in the sequence to be decoded (initialized as described above), $K_{ij}^{(t-1)}$ is the message passed from check node $i$ to variable node $j$ in the previous iteration, and $t$ denotes the number of decoding iterations.

(b) Update CN information of $p$ check nodes by calculating equation (3)

$$K_{ij}^{(t)} = \left(\prod_{k \in M(i)\backslash j} \mathrm{sign}(G_k) \right) \cdot \left(\min_{k \in M(i)\backslash j} \{ |G_k|\} \cdot \alpha \right),$$
where $\alpha \in (0,1)$ is a normalized factor and ${M(i)\backslash j}$ denotes the set $\{ j:{H_{ij}} \ne 0\}$ excluding the $j$th bit. $(lay - 1) \cdot p < i \le lay \cdot p$, $1 \le j \le N$, and ${H_{ij}} \ne 0$. Here, $p$ is the dimension of the identity matrix in the check matrix $H$; $lay$ and $N$ represent the layer number and the codeword length, respectively.

(c) Calculate the maximum likelihood information of all bits through equation (4)

$$S_j = G_j + K_{ij}^{(t)}.$$
(d) Decoding decision. If $S_j<0$, $v_j=1$; otherwise $v_j=0$. If $H \cdot \vec{V} = \vec{0}$ or the maximum number of layers and iterations is reached, the decoding result is output. Otherwise, the algorithm proceeds to the next decoding round.
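As referenced above, the sketch below illustrates the Step 5 LLR initialization (with free-space bits forced to $-1$) and the normalized min-sum check-node update of equation (3) for a single check node. It is a simplified, illustrative fragment rather than the full layered decoder, and the value of $\alpha$ is an arbitrary example.

```python
import numpy as np

def init_llr(bits, free_mask):
    """LLR initialization of Step 5: +1 for a read bit of 1, -1 for 0,
    and -1 forced for every free-space bit, which is known to be 0."""
    llr = np.where(bits == 1, 1.0, -1.0)
    llr[free_mask] = -1.0
    return llr

def cn_update(g, alpha=0.75):
    """Normalized min-sum CN update of equation (3) for one check node.
    g : float array of incoming messages G_k from the bits in this check
    equation; returns the outgoing message K_ij toward each of those bits."""
    out = np.empty_like(g)
    for j in range(len(g)):
        others = np.delete(g, j)            # the set M(i)\j
        sign = np.prod(np.sign(others))
        out[j] = alpha * sign * np.min(np.abs(others))
    return out
```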

Step 6: Since the result of LDPC decoding is the compressed information bits, decompression must be performed to obtain the original information bits. Finally, the gray image can be restored from the decompressed bit streams.

In comparison to conventional LDPC codes, our proposed method first performs data compression. Our goal is to increase the number of check bits by utilizing the space saved by compressing the information data. Compared with the conventional LDPC code, our method adds $1024 \times \left \lfloor {(n - k \cdot r)/1024} \right \rfloor - (n - k)$ parity bits. For example, with $n = 18432$, $k = 16384$, and $r = 0.7$, this amounts to $1024 \times 6 - 2048 = 4096$ additional parity bits.

3. Simulation results and discussion

A simulation study with the same parameters as the experiment is performed to validate our method. A gray image with $256 \times 256$ pixels is first divided into 32 sub-blocks of $64 \times 32$ pixels. Since a grayscale (0–255) pixel can be represented by 8 bits, a sub-block can be represented by 16384 ($64 \times 32 \times 8$) bits. If these 16384 information bits were LDPC encoded at a rate of 0.89, an LDPC codeword of 18432 bits would be formed, containing 2048 parity check bits. However, we only reserve storage space for 2048 bits and do not perform LDPC encoding at this point. Data compression is carried out on the 16384 information bits with compression rate $r$, $0<r<1$, so the length of the compressed data is $16384 \cdot r$ bits. We set up simulations with a series of compression rates from 0.6 to 0.9 to explore the effect of the compression rate on LDPC decoding performance. LDPC encoding is then performed on the compressed data at a code rate of $1 - \frac {{\left \lfloor {(18432 - 16384 \cdot r)/1024} \right \rfloor }}{{18}}$ to form an LDPC codeword of 18432 bits. The LDPC codeword consists of $16384 \cdot r$ information bits, $1024 \times \left \lfloor {(18432 - 16384 \cdot r)/1024} \right \rfloor$ parity check bits, and a free space of $(18432 - 16384 \cdot r) - 1024 \times \left \lfloor {(18432 - 16384 \cdot r)/1024} \right \rfloor$ bits. Compared with the LDPC code without data compression, $1024 \times \left \lfloor {(18432 - 16384 \cdot r)/1024} \right \rfloor - 2048$ parity check bits are added. The free space is filled with all 0 bits. Each LDPC codeword is then 4-level phase (0, $\frac {\pi }{2}$, $\pi$, $\frac {3\pi }{2}$) modulated into signal data pages with a size of $32 \times 16$. Since each phase value represents 2 bits, each signal data page contains 1024 ($32 \times 16 \times 2$) bits. Thus, each LDPC codeword can be modulated into 18 signal data pages ($18432 / 1024 = 18$). A reference data page with a size of $32 \times 16$ is randomly generated, and it is combined with each signal data page to form 18 phase data pages with a size of $32 \times 32$.

When recording data, every phase data page is uploaded on the SLM, and every phase value is displayed by a block of 4$\times$4 pixels on the SLM, where the pixel pitch is 20 $\mu m$. When reading data, only the reference part of the data page is uploaded on the SLM. After the Fourier transform, the Fourier intensity of the reconstructed beam is captured by the CMOS camera, where the pixel pitch is 5.86 $\mu m$, and only a region of two Nyquist sizes in the Fourier intensity is retained for phase retrieval. From the Fourier intensity information, the phase information can be retrieved using the IFT algorithm. The maximum iteration number of the IFT is 100. Then, a set of reconstructed signal phase data pages is demodulated into an 18432-bit stream, and LDPC decoding is executed to correct the error bits of the obtained data frames. Since the bits obtained after decoding are compressed, decompression is required after LDPC decoding to obtain the recorded data. The following parameters are also set. The maximum decoding iteration number of the LDPC code is 30. The laser has a 532 nm wavelength. The focal length of the lens is 150 mm. The CMOS's dynamic range is 8 bits. The IFT has an iteration threshold of $5.0\times 10^{-3}$. In the simulation, we add Gaussian noise to the optical channel from the SLM to the camera. In previous studies, electrical noise has been modeled as Gaussian noise [35,38]; we therefore add additive white Gaussian noise (AWGN) to each intensity value that the camera records in an intensity image. Different PERs can be obtained by applying different signal-to-noise ratios to the AWGN channel. The phase page, which contains a certain number of phase errors, is then reconstructed using the IFT algorithm, and the PER is computed by comparing the original phase and the reconstructed phase. Finally, we investigate the variations of the BER and UBER, the number of decoding iterations, and the probability of successful decoding under different PERs, as shown in Figs. 8–11.
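Before turning to the results, the noise injection described above can be sketched as follows. The SNR definition here (signal power over noise power in dB) is our assumption, as the paper does not spell out its exact convention:

```python
import numpy as np

def add_awgn(intensity, snr_db):
    """Add AWGN to every intensity value captured by the camera, at a
    chosen signal-to-noise ratio in dB (convention assumed)."""
    signal_power = np.mean(intensity ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noisy = intensity + np.random.normal(0.0, np.sqrt(noise_power), intensity.shape)
    return np.clip(noisy, 0.0, None)   # physical intensities stay non-negative

# Sweeping snr_db varies the PER of the phases retrieved by the IFT algorithm.
```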

Fig. 8. The UBER of the proposed method and conventional method for different PER.

The validity of the proposed method is verified along multiple dimensions. Our proposed method is compared with the situation without error correction and with the conventional method, respectively. The conventional method directly uses LDPC codes without compression. The comparison with the case without ECC illustrates how the proposed method promotes the system's reliability, while the comparison with the traditional LDPC code without data compression quantifies the performance improvement over the conventional LDPC code. First, we evaluate the variation of the uncorrectable bit error rate (UBER) under different compression rates. As shown in Fig. 8, the UBER of the proposed method is lower than that of the traditional method by up to four orders of magnitude. In general, for a fixed PER, the lower the compression rate, the lower the UBER. When the PER exceeds 0.022 and a high compression rate (such as 0.9) is used, there is no obvious difference in UBER between the proposed method and the conventional method. This is because a high PER requires an LDPC code with strong error correction ability, whereas a high compression rate means that the number of check bits is small, resulting in poor decoding error correction ability. This problem is well ameliorated when a low compression rate is applied.

BER is the most intuitive and basic indicator for evaluating reliability, and the degree of its reduction directly reflects the error correction capability of LDPC codes. The BERs of the proposed method and the conventional method are counted for different PERs. As shown in Fig. 9, when the PER and the compression rate are both low, the BER obtained using our proposed method is lower than the BER without ECC, while the BER obtained using the conventional method is always higher than the BER without ECC. This confirms the effectiveness of our method.

Fig. 9. The BER of the proposed method and conventional method for different PER.

Fig. 10. Average decoding iterations curves of the proposed method and conventional method for different PER.

As the PER increases, the BER after using our method with a high compression rate may become higher than the BER without ECC; this phenomenon also occurs in the conventional method. The reason is that when the PER is too high, it exceeds the error correction capability of the LDPC code, which then cannot completely correct the erroneous bits and may even introduce new errors. Using our method with a low compression rate, such as 0.6, addresses this problem well. This suggests that in implementations of the scheme, a lower compression rate should be considered when facing a larger PER. When the PER is 0.037, compared with the conventional method, our method can reduce the BER by $33.3{\%}$; compared with the method without ECC, our method can reduce the BER by $40{\%}$. Different compression rates yield different degrees of BER reduction. When the PER is 0.029, compared with the conventional method, our method can reduce the BER by $30.6{\%}$, $38.9{\%}$, and $89.7{\%}$ when using compression rates of 0.8, 0.7, and 0.6, respectively.

The benefits of the proposed method are further demonstrated by measuring the average number of decoding iterations over all LDPC codewords under different PERs, as shown in Fig. 10. The average number of decoding iterations increases with the PER, indicating higher decoding complexity and delay. However, the decoding iterations are significantly reduced by our method compared to the conventional method, especially at lower compression rates. For example, at a PER of 0.017, the conventional method reaches the maximum of 30 decoding iterations, while our method only requires 19.778, 5.562, 3.384, and 1.604 iterations on average for compression rates of 0.9, 0.8, 0.7, and 0.6, respectively. This shows that our method improves both the error correction capability and the read performance of the system. Even at a higher PER of 0.029, reductions of $43.3{\%}$ and $84{\%}$ in decoding iterations are achieved by our method compared to the conventional method for compression rates of 0.7 and 0.6, respectively.

Furthermore, the probability of successful decoding for different phase error rates is calculated, as shown in Fig. 11. The probability of successful decoding is defined as the ratio of the number of data frames in which all error bits are completely corrected to the total number of data frames. As the PER increases, the probability of successful decoding shows a downward trend, and the higher the compression rate, the lower the decoding success rate. The probability of successful decoding using our method is significantly higher than that of the conventional method. For the conventional method, once the PER exceeds 0.010, the probability of successful decoding is always 0. However, the situation is greatly improved by our approach. When the PER is 0.01, our technique achieves a successful decoding probability of $100{\%}$ for compression rates of 0.8, 0.7, and 0.6. When the PER reaches 0.029, compared with the conventional method, the proposed method improves the probability of successful decoding by $42.4{\%}$, $53.8{\%}$, and $93{\%}$ at compression rates of 0.8, 0.7, and 0.6, respectively.

Fig. 11. The probability of successful decoding curves of the proposed method and conventional method for different PER.

Finally, we demonstrate the effectiveness of our proposed approach visually in Figs. 12–14. We analyze the phase error distribution of retrieved data pages under different conditions: Fig. 12(a) shows the distribution after using the IFT algorithm without error correction; Fig. 12(b) shows the distribution after using LDPC codes without data compression and 30 decoding iterations; Figs. 13 and 14 show the distributions after using our proposed method. The white squares in the reconstructed phase pages represent wrong phases. In Fig. 12(a), the PER is 0.023; the phase error distribution without error correction reflects the reliability of the holographic channel. Figure 12(b) depicts the phase error distribution after 30 decoding iterations using the conventional method; the PER is 0.02. The overall phase error for the conventional method does not improve compared to that without ECC, even when the number of decoding rounds reaches the maximum value of 30.

Fig. 12. Phase error distribution after (a) using the IFT algorithm without error correction, and (b) using the LDPC codes without data compression under the number of decoding iterations of 30.

Fig. 13. Phase error distribution after using the LDPC codes (a) with data compression rate of 0.9 under the number of decoding iterations of 25, and (b) with data compression rate of 0.8 under the number of decoding iterations of 5.

Fig. 14. Phase error distribution after using the LDPC codes (a) with data compression rate of 0.7 under the number of decoding iterations of 3, and (b) with data compression rate of 0.6 under the number of decoding iterations of 3.

Compared with Fig. 12, the PER of Fig. 13(a) shows no significant decrease; however, the number of error phases is dramatically reduced when using our method with a data compression rate of 0.8 and 5 decoding rounds, as shown in Fig. 13(b). Figure 14 shows the phase error distribution of our proposed method using a compression rate of 0.7 or 0.6 and 3 decoding iterations. As shown in Fig. 14(a), when the compression rate is 0.7 and the number of decoding iterations is 3, there are still a few wrong phases in the data page; however, as shown in Fig. 14(b), when the compression rate is 0.6 and the iteration number is also 3, the PER is zero. Therefore, reducing the compression rate not only significantly reduces the PER, but also reduces the number of decoding iterations.

4. Experiment

Real-world experiments are also conducted to validate the proposed method. The experimental setup of the collinear phase-modulated holographic data storage system is shown in Fig. 15.

Fig. 15. The experimental setup of the collinear phase-modulated holographic data storage system.

The green beam from the laser is expanded with a beam expander and adjusted to linearly polarized light by a linear polarizer. The aperture, which is shaped with two identical rectangular windows, is illuminated by a parallel beam with a spherical spot. The signal portion is in the left window, and the reference portion is in the right window. Each rectangular window measures $2.56mm\times 1.28mm$. Both of the aperture's rectangular windows are open for recording. The aperture and the phase-only SLM are in the same plane, ensuring that the phase pattern uploaded on the SLM is displayed precisely. During reading, a black screen is used to block the window of the signal component so that only the reference beam passes through. Because the reconstructed signal beam is a diffracted beam with a lower intensity, we use an aperture and an attenuator to ensure that the entire reconstructed beam passes through and to balance the intensity difference between the signal and reference parts during reading. The aperture size has an impact on data reliability: when the aperture size is decreased, data reliability degrades and the error rate rises. The aperture can be reduced to a certain extent with our proposed method, because the method has a stronger error correction capability and can withstand a wider range of error rates; the proposed approach is thus robust to reductions in aperture size.

A half-wave plate is used to adjust the polarization state of the beam to meet the SLM requirements. A beam splitter divides a beam into two or more beams. A 4-level phase data pattern is uploaded to the SLM to generate the signal beam and reference beam. Then, after lens5, the reference and signal beams converge on the recording medium and interfere with each other to form interference fringes that are recorded in the medium as a hologram. The CMOS camera captures the Fourier intensity of the retrieved beam at the back focal plane of lens6, which is a Fourier lens.

The laser has a wavelength of 532 nm and a power of 300 mW. The recording medium is an Irgacure 784-doped PMMA photopolymer with a thickness of 1.5 mm. The SLM is a HAMAMATSU X10468-04 with a resolution of $792 \times 600$ and a pixel pitch of 20 $\mathrm{\mu}$m. The CMOS camera is a Thorlabs DCC3260M with a resolution of $1936 \times 1216$ and a pixel pitch of 5.86 $\mathrm{\mu}$m. The focal lengths of lens2 through lens5 are 150 mm, and the Fourier lens6 has a focal length of 300 mm.

9000 phase data pages (i.e., 500 data frames) are recorded in the phase-modulated collinear holographic data storage system. These data pages are recorded at different locations using the displacement multiplexing technique, and the camera captures intensity images of the data pages recorded at the various locations. The IFT technique described in Section 2.1 reconstructs the phase information of each data page from the acquired intensity image. The reconstructed two-dimensional data page is converted into a one-dimensional bit stream after phase demodulation. The BER, defined as the number of error bits in a codeword as a percentage of its length, is then determined by comparing each bit in the codeword to the original bit, both before and after applying the proposed method. We examine Figs. 16 and 17 together in order to grasp the BER more thoroughly and intuitively. Figures 16 and 17 show that the BERs of the proposed approach with compression rates of 0.6 to 0.9 are concentrated below 0.02, whereas those of the standard LDPC code without compression and of the case without ECC are concentrated above this threshold. This clearly reflects the superiority of our method. It is also notable that the proposed method's BERs for the compression rates of 0.6, 0.7, and 0.8 are all equal to 0. Therefore, the proposed method can significantly reduce the BER and improve the reliability of phase-modulated holographic storage.

Fig. 16. The BERs of 500 data frames without ECC, with the conventional LDPC code without compression, and with the proposed method at compression rates of 0.7 and 0.9.

Fig. 17. The BERs of 500 data frames without ECC, with the conventional LDPC code without compression, and with the proposed method at compression rates of 0.6 and 0.8.

The numbers of decoding iterations of the 500 data frames are counted for the case without ECC, the conventional LDPC code without compression, and the proposed method, respectively. We set the maximum number of LDPC decoding iterations to 30. The results are shown in Fig. 18. The number of decoding iterations of the conventional LDPC code without compression and of the proposed method with a compression rate of 0.9 reaches the set maximum of 30. However, the number of decoding iterations of the proposed method with compression rates of 0.8, 0.7, and 0.6 is significantly decreased, decoding successfully within 10 iterations. In particular, the number of decoding iterations can be kept within 3 when our method adopts a compression rate of 0.6. The significant reduction in the number of decoding iterations not only means an improvement in the data conversion rate, but also an improvement in data reliability, because most of the data frames that reach the maximum number of decoding iterations have not been successfully decoded.

Fig. 18. The number of decoding iterations of 500 data frames without ECC, with the conventional LDPC code without compression, and with the proposed method at compression rates of 0.6 to 0.9, respectively.

5. Conclusion

A method using data compression and LDPC codes to improve the data reliability of PHDS is proposed. The space saved by data compression is used to store additional parity check bits of the LDPC code. This strengthens the constraint relationship between check bits and information bits, which improves the decoding performance of LDPC codes. Compared with conventional LDPC code optimization methods, the proposed method does not rely on the characteristics of the reconstructed phase distribution, reducing the burden of statistical analysis and calculation. A method to calculate the code rate and a method to manage the free space are presented. The simulation and experimental results show that the proposed method significantly reduces the BER and the number of decoding iterations, and improves the probability of successful decoding. In particular, the lower the compression rate, the better the decoding performance. When the phase error rate is 0.029 and the compression rate is 0.6, our method can reduce the BER by $87.8{\%}$, lower the number of decoding iterations by $84.3{\%}$, and improve the probability of successful decoding by $93{\%}$. Our method improves both the data reliability and storage efficiency of PHDS. It is not limited to holographic storage and can be extended to improve other storage systems.

Funding

National Key Research and Development Program of China (2018YFA0701800, 2022YFB2804300); National Natural Science Foundation of China (62102156).

Acknowledgments

This work was supported by Key Laboratory of Information Storage System, Ministry of Education of China, and the Engineering Research Center of Data Storage Systems and Technology. The corresponding author is Fei Wu.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Horimai and X. Tan, “Holographic information storage system: today and future,” IEEE Trans. Magn. 43(2), 943–947 (2007). [CrossRef]  

2. L. Hesselink, S. S. Orlov, and M. C. Bashaw, “Holographic data storage systems,” Proc. IEEE 92(8), 1231–1280 (2004). [CrossRef]  

3. H. Horimai and X. Tan, “Advanced collinear holography,” Opt. Rev. 12(2), 90–92 (2005). [CrossRef]  

4. A. Vijayakumar, T. Katkus, S. Lundgaard, D. P. Linklater, E. P. Ivanova, S. H. Ng, and S. Juodkazis, “Fresnel incoherent correlation holography with single camera shot,” Opto-Electron. Adv. 3(8), 200004 (2020). [CrossRef]  

5. D. Psaltis, M. Levene, A. Pu, G. Barbastathis, and K. Curtis, “Holographic storage using shift multiplexing,” Opt. Lett. 20(7), 782 (1995). [CrossRef]  

6. O. Matoba and B. Javidi, “Encrypted optical storage with angular multiplexing,” Appl. Opt. 38(35), 7288 (1999). [CrossRef]  

7. S. Yin, H. Zhou, F. Zhao, M. Wen, Z. Yang, J. Zhang, and F. T. S. Yu, “Wavelength multiplexed holographic storage in a sensitive photorefractive crystal using a visible-light tunable diode laser,” Opt. Commun. 101(5-6), 317–321 (1993). [CrossRef]  

8. C. Denz, G. Pauliat, G. Roosen, and T. Tschudi, “Volume hologram multiplexing using a deterministic phase encoding method,” Opt. Commun. 85(2-3), 171–176 (1991). [CrossRef]  

9. X. Tan, “Optical data storage technologies for big data era,” Infrared and Laser Engineering 45(9), 0935001 (2016). [CrossRef]  

10. X. Lin, J. Liu, J. Hao, K. Wang, Y. Zhang, H. Li, H. Horimai, and X. Tan, “Collinear holographic data storage technologies,” Opto-Electron. Adv. 3(3), 190004 (2020). [CrossRef]  

11. J. Hao, X. Lin, Y. Lin, M. Chen, R. Chen, S. Guohai, H. Horimai, and X. Tan, “Lensless complex amplitude demodulation based on deep learning in holographic data storage,” Opto-Electron. Adv. 6(3), 220157 (2023). [CrossRef]  

12. J. Liu, K. Xu, J. Liu, J. Cai, Y. He, and X. Tan, “Phase modulated collinear holographic storage,” Opt. Express 26(4), 3828 (2018). [CrossRef]  

13. M. He, L. Cao, Q. Tan, Q. He, and G. Jin, “Novel phase detection method for a holographic data storage system using two interferograms,” J. Opt. A: Pure Appl. Opt. 11(6), 065705 (2009). [CrossRef]  

14. X. Lin, Y. Huang, Y. Y. Li, J. Liu, J. Liu, R. Kang, and X. Tan, “Four-level phase pair encoding and decoding with single interferometric phase retrieval for holographic data storage,” Chin. Opt. Lett. 16(25), 032101 (2018).

15. J. Hao, K. Wang, Y. Zhang, H. Li, X. Lin, Z. Huang, and X. Tan, “Collinear non-interferometric phase retrieval for holographic data storage,” Opt. Express 28(18), 25795–25805 (2020). [CrossRef]  

16. X. Lin, Y. Huang, T. Shimura, R. Fujimura, Y. Tanaka, M. Endo, H. Nishimoto, J. Liu, Y. Li, Y. Liu, and X. Tan, “Fast non-interferometric iterative phase retrieval for holographic data storage,” Opt. Express 25(25), 30905–30915 (2017). [CrossRef]  

17. X. Lin, J. Hao, K. Wang, Y. Zhang, H. Li, and X. Tan, “Frequency expanded non-interferometric phase retrieval for holographic data storage,” Opt. Express 28(1), 511–518 (2020). [CrossRef]  

18. J. Hao, X. Lin, Y. Li, Y. Ren, K. Wang, Y. Zhang, H. Li, and X. Tan, “Fast phase retrieval with a combined method between interferometry and non-interferometry in the holographic data storage,” Opt. Eng. 59(10), 1 (2020). [CrossRef]  

19. P. Koppa, “Phase-to-amplitude data page conversion for holographic storage and optical encryption,” Appl. Opt. 46(17), 3561 (2007). [CrossRef]  

20. T. Nobukawa and T. Nomura, “Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography,” Opt. Express 24(18), 21001 (2016). [CrossRef]  

21. N. Yoneda, Y. Saita, and T. Nomura, “Computer-generated-hologram-based holographic data storage using common-path off-axis digital holography,” Opt. Lett. 45(10), 2796 (2020). [CrossRef]  

22. E. Hwang, P. Yoon, J. Park, and H. Nam, “Three-Dimensional Error Correction Schemes for Holographic Data Storage,” Jpn. J. Appl. Phys. 44(5S), 3529–3533 (2005). [CrossRef]  

23. H. Pishro-Nik, N. Rahnavard, J. Ha, F. Fekri, and A. Adibi, “Low-density parity-check codes for volume holographic memory systems,” Appl. Opt. 42(5), 861–870 (2003). [CrossRef]  

24. H. Hayashi and K. Kimura, “Low-density parity-check coding for holographic data storage,” Jpn. J. Appl. Phys. 44(5S), 3495 (2005). [CrossRef]  

25. Q. Yu, F. Wu, M. Zhang, Y. Zhao, and C. Xie, “Improving reliability using phase distribution aware LDPC code for holographic data storage,” Appl. Opt. 61(21), 6119 (2022). [CrossRef]  

26. Q. Yu, F. Wu, M. Zhang, and C. Xie, “Fast phase error correction with reference beam-assisted LDPC coding for collinear holographic data storage,” Opt. Express 31(12), 20345 (2023). [CrossRef]  

27. Y. Zhao, F. Wu, X. Lin, J. Zhou, M. Zhang, Q. Yu, X. Tan, and C. Xie, “Improving the data reliability of phase modulated holographic storage using a reliable bit aware low-density parity-check code,” Opt. Express 30(21), 37579 (2022). [CrossRef]  

28. D. A. Lelewer and D. S. Hirschberg, “Data compression,” ACM Comput. Surv. 19(3), 261–296 (1987). [CrossRef]  

29. W. E. Ryan, “An introduction to LDPC codes,” CRC Handbook for Coding and Signal Processing for Recording Systems 5(2), 1–23 (2004).

30. R. Gallager, “Low-density parity-check codes,” IEEE Trans. Inform. Theory 8(1), 21–28 (1962). [CrossRef]  

31. E. Darakis and J. Soraghan, “Compression of interference patterns with application to phase-shifting digital holography,” Appl. Opt. 45(11), 2437 (2006). [CrossRef]  

32. T. J. Naughton, Y. Frauel, B. Javidi, and E. Tajahuerce, “Compression of digital holograms for three-dimensional object reconstruction and recognition,” Appl. Opt. 41(20), 4124 (2002). [CrossRef]  

33. T. J. Naughton, J. B. McDonald, and B. Javidi, “Efficient compression of Fresnel fields for internet transmission of three-dimensional images,” Appl. Opt. 42(23), 4758 (2003). [CrossRef]  

34. A. Arrifano, M. Antonini, and M. Pereira, “Multiple description coding of digital holograms using Maximum-a-Posteriori,” in IEEE 4th European Workshop on Visual Information Processing (EUVIP) (France, Paris, 2013).

35. J. Liu, L. Zhang, A. Wu, Y. Tanaka, S. Shigaki, T. Shimura, X. Lin, and X. Tan, “High noise margin decoding of holographic data page based on compressed sensing,” Opt. Express 28(5), 7139–7151 (2020). [CrossRef]  

36. M. Hachani, A. Ouled Zaid, and F. Dufaux, “Phase-shifting digital holographic data compression,” J. Opt. 48(3), 412–428 (2019). [CrossRef]  

37. J. Kim, J. Cho, and W. Sung, “A high-speed layered min-sum LDPC decoder for error correction of NAND flash memories,” in IEEE 54th International Midwest Symposium on Circuits and Systems, 1–4 (2011).

38. C. Gu, G. Sornat, and J. Hong, “Bit-error rate and statistics of complex amplitude noise in holographic data storage,” Opt. Lett. 21(14), 1070–1702 (1996). [CrossRef]  


