
Phase retrieval based on deep learning with bandpass filtering in holographic data storage

Open Access

Abstract

A phase retrieval method based on deep learning with bandpass filtering in holographic data storage is proposed. The relationship between known encoded data pages and their near-field diffraction intensity patterns is established by an end-to-end convolutional neural network, which is then used to predict unknown phase data pages. We found that the training efficiency of phase retrieval by deep learning is mainly determined by the edge details of adjacent phase codes, which are the high-frequency components of the phase code. Therefore, we can attenuate the low-frequency components to reduce material consumption. Besides, we also filter out the high-order frequencies above twice the Nyquist size, which are redundant information with poor anti-noise performance. Compared with full-frequency recording, the consumption of storage media is reduced by a factor of 2.94, thus improving the storage density.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the advent of the information era, the demand for data storage capacity and data transfer rate is growing rapidly [1,2]. Holographic data storage (HDS) is a form of three-dimensional data storage technology with high storage density and long storage life, so it is considered one of the most promising storage technologies [3–7]. Phase-modulated HDS has become a research hotspot in recent years because of its higher code rate and higher signal-to-noise ratio (SNR) [8–10]. However, efficient phase decoding remains a tough problem.

Since an image detector cannot detect phase information directly, interferometric methods are commonly used to transform phase information into detectable intensity information, which is captured by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor [11,12]. Interferometric methods are unsuitable for HDS because the system is complex, unstable, and easily affected by environmental disturbance [13–15]. Conversely, non-interferometric methods are more attractive for HDS because of their stability. There are many non-interferometric methods, such as PIE, TIE, and IFTA [16–19]. Considering the demand of HDS for simple operation and a stable system, single-shot phase retrieval methods are more suitable. Lin et al. proposed a non-interferometric method combining embedded phase data with an iterative Fourier transform algorithm, which realized accurate and fast phase retrieval in a holographic data storage system [20]. Chen et al. proposed a dynamic sampling iterative phase retrieval method, which shortens the iteration number and reduces the phase error rate [21]. Even so, traditional iterative methods still hinder the improvement of the data transfer rate in holographic storage systems. In addition, iterative methods retrieve the phase from the captured Fourier intensity spectrum, but noise can introduce additional spectral components that affect the retrieval results. Therefore, a phase reconstruction method with stronger resistance to sampling noise is needed. Extensive research has demonstrated that deep learning can attain favorable retrieval results for both phase and complex amplitude images [22–30].
In previous studies, we proposed a lensless phase retrieval method for HDS based on deep learning and embedded data, which can not only improve the data transfer rate and noise resistance but also improve the storage density by reducing the proportion of embedded phase data [31]. The relationship between the known encoded data page and its near-field diffraction intensity image can be established by an end-to-end convolutional neural network, which is then used to predict unknown phase data pages.

In the phase pattern, the low-frequency components represent smooth regions with high intensity, and the high-frequency components represent edge details with low intensity. We found that the training efficiency of the network model is mainly determined by the edges of the phase pattern, which means that we can selectively record only those frequencies that benefit model training. In this paper, we propose a phase retrieval method for holographic data storage that combines deep learning with bandpass filtering, which realizes phase retrieval by recording and using only partial frequency components of the phase pattern for deep learning training. We attenuate the low-frequency components of the phase pattern to explore the frequency range that can be filtered out. Besides, to eliminate noise caused by high-order frequencies, we filter out the frequencies above twice the Nyquist interval, achieving bandpass filtering of the phase pattern. The experimental results show that the proposed method reduces the consumption of media and improves the storage density.

2. Theory and methods

The illustration of the data storage and phase retrieval system is shown in Fig. 1. In the recording process, the encoded phase data page is loaded on the spatial light modulator (SLM) as the information beam, which interferes with a reference beam to form a hologram in the storage media. In the reading process, only the reference beam illuminates the holographic media, generating a diffracted beam that reproduces the information beam, called the reconstructed beam. An aperture is used as a spatial filter to limit the size of the frequency distribution and regulate the information beam. The Nyquist size w is calculated as shown in Eq. (1).

$$w = \frac{{\lambda f}}{l}.$$
where $\lambda$ is the laser wavelength, f is the focal length of the lens, and l is the spatial size of one phase datum. In the experiment, each phase datum is displayed as a 4 × 4 pixel block on the SLM, whose pixel pitch is 12.5 µm, so $l$ is 50 µm. The size parameters of the aperture are calculated as shown in Eq. (2) and Eq. (3).
$${D_1} = 2w.$$
$${D_2} = \tau w.$$
where $\tau$ is the high pass filter factor. The setting of $\tau$ determines the size of the attenuation region (lower stopband) at the center of the aperture. In this bandpass filtering method, the frequencies above ${D_1}$ (upper stopband) and below ${D_2}$ (lower stopband) are filtered out, and the frequencies between ${D_2}$ and ${D_1}$ constitute the passband.
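Using the experimental parameters given later (532 nm laser, 150 mm lenses, 4 × 4 SLM pixel blocks at 12.5 µm pitch), Eqs. (1)-(3) can be evaluated directly. A minimal sketch, where the value τ = 0.9 is only an example:

```python
wavelength = 532e-9   # laser wavelength (m)
f = 0.15              # focal length of the lens (m); L1-L5 are 150 mm
l = 50e-6             # spatial size of one phase datum: 4 x 4 pixels at 12.5 um pitch

w = wavelength * f / l    # Eq. (1): Nyquist size
D1 = 2 * w                # Eq. (2): upper stopband boundary
tau = 0.9                 # example high pass filter factor (assumed value)
D2 = tau * w              # Eq. (3): lower stopband boundary
print(f"w = {w * 1e3:.3f} mm, D1 = {D1 * 1e3:.3f} mm, D2 = {D2 * 1e3:.3f} mm")
```

For these parameters the Nyquist size works out to about 1.6 mm, which sets the physical scale of the aperture.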

Fig. 1. The illustration of the data storage and phase retrieval system.

First, considering the pure phase modulation on the SLM, the optical field of the phase data page and its Fourier transform are shown in Eq. (4) and Eq. (5), respectively.

$$s(x,y,z = 0) = \exp[i\varphi (x,y,z = 0)].$$
$$S(u,v) = \mathrm{{\cal F}}\{ s(x,y,z = 0)\} .$$
where $\varphi$ is the phase data page, $\mathrm{{\cal F}}\{ \} $ denotes the Fourier transform operator, and the position of the reconstructed beam plane (the back focal plane of L4) is defined as $z = 0$.

Second, we can regulate the information beam by designing different aperture transmission functions $T(u,v)$ to change the spatial frequency content of the information beam. Therefore, the actually recorded Fourier spectrum of the phase-only input is given by Eq. (6).

$${S_{filtering}}(u,v) = S(u,v) \cdot T(u,v)$$

Then, if the influence of system and material noise is ignored, the light field of the reconstructed beam is given by Eq. (7).

$$U(x,y,z = 0) = {\mathrm{{\cal F}}^{ - 1}}\{ {S_{filtering}}(u,v)\}$$

Last, the CMOS is placed at a distance d from the reconstructed beam plane to capture the diffraction intensity $I({x,y,z = d} )$. The propagating and imaging process is modeled as shown in Eq. (8).

$$I(x,y,z = d) = {|{{U_d}(x,y,z = d)} |^2} = H(\varphi (x,y,z = 0))$$
where $\varphi $ is the phase data page, ${U_d}(x,y,z = d)$ is the complex amplitude distribution of the reconstructed beam propagated from the L4 back focal plane over the near-field diffraction distance d toward the detector plane, and $H({\cdot} )$ is the function of the propagating and imaging system, denoting the mapping between the phase data page and the diffraction intensity captured by the detector.
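The forward model of Eqs. (4)-(8) can be sketched numerically. The angular spectrum propagator used below for the distance d is an assumed implementation detail, not specified in the text:

```python
import numpy as np

def forward_model(phase, T, d, wavelength=532e-9, pitch=12.5e-6):
    """Sketch of Eqs. (4)-(8): phase page -> filtered spectrum -> intensity at distance d."""
    s = np.exp(1j * phase)                          # Eq. (4): phase-only field
    S = np.fft.fftshift(np.fft.fft2(s))             # Eq. (5): centered Fourier spectrum
    U0 = np.fft.ifft2(np.fft.ifftshift(S * T))      # Eqs. (6)-(7): filtered reconstruction
    # Near-field propagation over d via the angular spectrum method (assumed model).
    n = phase.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0)
    H = np.exp(1j * 2 * np.pi / wavelength * d * np.sqrt(arg))
    Ud = np.fft.ifft2(np.fft.fft2(U0) * H)
    return np.abs(Ud) ** 2                          # Eq. (8): detected intensity
```

As a sanity check, a flat phase page with an all-pass aperture should propagate as a plane wave and give uniform intensity.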

The phase retrieval task is to solve the inverse function ${H^{ - 1}}({\cdot} )$. In the experiment, the transmission and imaging system is usually nonlinear, so the estimate of the phase data page is given by Eq. (9).

$$\hat{\varphi }(x,y,z = 0) = {H^{ - 1}}(I(x,y,z = d)).$$

Convolutional neural networks are good at solving inverse problems in imaging [32]. In this study, we trained the end-to-end convolutional neural network U-net to learn the relationship between the diffraction intensity pattern and the phase data page, that is, to let the network approximate ${H^{ - 1}}({\cdot} )$ [33,34]. Once training is completed, the phase data page can be reconstructed directly from the diffraction intensity pattern.

The architecture of the U-net convolutional neural network used in this paper is shown in Fig. 2. Its structure is symmetrical. The left part is the contracting path, where the diffraction intensity image is down-sampled and features are extracted through 2 × 2 max pooling operations; it can be regarded as an encoder. The right part is the expanding path, the up-sampling process, which stitches and retrieves the features of the intensity image and maps them to the phase data page. The activation function following each convolutional layer is the Rectified Linear Unit (ReLU), and Sigmoid is the activation function of the output layer [35]. We use the diffraction intensity image captured by the CMOS and the phase data page of the same size as the input and output of the neural network, respectively.

Fig. 2. The architecture of the U-net convolutional neural network.

The loss function of the neural network is the mean square error (MSE) as shown in Eq. (10).

$$MSE = \frac{1}{{NWH}}\sum\limits_{n = 1}^N {\sum\limits_{u = 1}^W {\sum\limits_{v = 1}^H {{{({{\hat{\varphi }}_n}(u,v) - {\varphi _n}(u,v))}^2}} } }.$$
where W and H are the width and height of the image, respectively, and N = 4 is the batch size. ${\hat{\varphi }_n}(u,v)$ is the phase data page reconstructed from the nth diffraction intensity pattern and ${\varphi _n}(u,v)$ is the corresponding ground truth. Using stochastic gradient descent (SGD) [36], four image pairs, each consisting of a reconstructed phase data page and its ground truth, were randomly selected from the training data set at each step, and the MSE was calculated according to Eq. (10).
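Eq. (10) amounts to a per-pixel squared error averaged over the batch. A minimal NumPy sketch (the network itself is omitted):

```python
import numpy as np

def mse_loss(pred_batch, gt_batch):
    """Eq. (10): mean squared error over a batch of N phase data page pairs."""
    N, W, H = pred_batch.shape
    return np.sum((pred_batch - gt_batch) ** 2) / (N * W * H)
```

In the PyTorch implementation described here, the equivalent built-in loss would be the framework's standard MSE criterion.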

The hyperparameters for training the neural network are listed in Table 1. The program is implemented in Python 3.6 with the PyTorch framework, and a GeForce RTX 3090 is used for accelerated computation. In all simulations, we used 6000 image pairs as the training data set, 1000 image pairs as the test data set, and 10 intensity images as the validation data set to calculate the average phase data error rate.

Table 1. Super Parameters for the Neural Network Training

3. Simulation and experimental results

In the simulation, we set the parameters according to the experimental devices. To verify the feasibility of the bandpass filtering method, no noise was added in the simulation. Although higher-level encodings can improve the code rate, they are more sensitive to noise, so four-level phase encoding is preferred as a balance of these factors. For conventional four-level equal-interval phase encoding (0, π/2, π, 3π/2), different code pairs produce the same phase differences, which is not suitable for deep learning methods [38]. Therefore, all the phase data in this paper were randomly encoded with four unequally spaced phase levels (π/6, 2π/3, π, 3π/2).
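A random data page with the unequally spaced four-level code can be generated as follows (the fixed seed is only for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)                 # fixed seed, reproducibility only
levels = np.array([np.pi / 6, 2 * np.pi / 3, np.pi, 3 * np.pi / 2])  # unequal intervals
page = rng.choice(levels, size=(32, 32))       # 32 x 32 encoded data matrix
slm_page = np.kron(page, np.ones((4, 4)))      # each datum -> 4 x 4 SLM pixel block
```

This reproduces the page layout used in the experiment: a 32 × 32 data matrix expanded to a 128 × 128 pixel pattern on the SLM.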

Considering the propagating and imaging process, we set the diffraction distance d = 2 mm. We first analyzed the influence of the aperture function $T(u,v)$ on phase retrieval according to Eq. (6). In the phase-modulated HDS system, the evaluation criterion of phase retrieval is the bit error rate (BER), the ratio of erroneously retrieved phase data pixels to total phase data pixels [31].
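The BER as defined here is simply the fraction of mismatched phase data pixels; a minimal sketch:

```python
import numpy as np

def bit_error_rate(retrieved, truth):
    """BER: ratio of erroneously retrieved phase data pixels to total pixels."""
    retrieved = np.asarray(retrieved)
    truth = np.asarray(truth)
    return np.count_nonzero(retrieved != truth) / truth.size
```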

In order to evaluate the shape and size of the filter, we designed circular and square bandpass filters, whose aperture functions are given in Eq. (11) and Eq. (12), respectively.

$${T_c}(u,v) = \left\{ {\begin{array}{{ll}} 0&{\textrm{if }\sqrt {{u^2} + {v^2}} \le \frac{{\tau w}}{2}\textrm{ }}\\ 1&{\textrm{if }\frac{{\tau w}}{2} < \sqrt {{u^2} + {v^2}} ,|u |\le w,|v |\le w}\\ 0&{\textrm{if }|u |> w,|v |> w} \end{array}} \right.$$
$${T_s}(u,v) = \left\{ {\begin{array}{{ll}} 0&{\textrm{if }|u |\le \frac{{\tau w}}{2},\textrm{ }|v |\le \frac{{\tau w}}{2}}\\ 1&{\textrm{if }\frac{{\tau w}}{2}\mathrm{\ < }|u |\le w,\frac{{\tau w}}{2}\mathrm{\ < }|v |\le w}\\ 0&{\textrm{if }|u |> w,|v |> w} \end{array}} \right.$$
where ${T_c}(u,v)$ and ${T_s}(u,v)$ are the transmission functions of the circular and square bandpass filters, respectively, and the value of $\tau$ ranges from 0 to 2.
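Eqs. (11) and (12) can be realized as binary masks on a sampled frequency grid. This sketch reads the square filter as blocking the central square of half-width τw/2; the grid size n and spacing du are assumed sampling parameters:

```python
import numpy as np

def circular_bandpass(n, w, tau, du):
    """Tc(u,v), Eq. (11): pass where r > tau*w/2 inside the square |u|,|v| <= w."""
    u = (np.arange(n) - n // 2) * du
    U, V = np.meshgrid(u, u)
    r = np.sqrt(U ** 2 + V ** 2)
    return ((r > tau * w / 2) & (np.abs(U) <= w) & (np.abs(V) <= w)).astype(float)

def square_bandpass(n, w, tau, du):
    """Ts(u,v), Eq. (12): block the central square of half-width tau*w/2, pass up to w."""
    u = (np.arange(n) - n // 2) * du
    U, V = np.meshgrid(u, u)
    inner = (np.abs(U) <= tau * w / 2) & (np.abs(V) <= tau * w / 2)
    outer = (np.abs(U) <= w) & (np.abs(V) <= w)
    return (outer & ~inner).astype(float)
```

Either mask can be passed as $T(u,v)$ to the filtered-spectrum step of Eq. (6).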

The intensity distribution of the Fourier spectrum of the phase pattern is shown in Fig. 3(a). The schematic diagram of the circular filter is shown in Fig. 3(b). The schematic diagram of the square filter is shown in Fig. 3(d). The filtered intensity distribution of the Fourier spectrum of the phase pattern is shown in Fig. 3(c) and Fig. 3(e).

Fig. 3. (a) The intensity distribution of the Fourier spectrum of the phase pattern, (b) the schematic diagram of the circular bandpass filter, (c) intensity distribution of the Fourier spectrum after the circular bandpass filter, (d) the schematic diagram of the square bandpass filter, (e) intensity distribution of the Fourier spectrum after the square bandpass filter.

The high pass filter factor $\tau$ is varied from 0.1 to 2 (when the high pass filter factor of the square filter is set to 2, all frequencies are cut off, which is meaningless). The phase data error rate of the two filtering functions at different high pass filter factors is shown in Fig. 4. When the high pass filter factor is greater than 1.1, the circular bandpass filter retrieves the phase better than the square one because its frequency response is smoother, which better retains edge information in the image and avoids ringing and edge blurring. Therefore, only the circular bandpass filter is used in subsequent simulations.

Fig. 4. The phase data error rate of the two filtering functions at different high pass filter factors.

Due to the ultraviolet-curing printing process, it is difficult to attenuate the spectrum intensity in the lower stopband completely. Therefore, we set the lower stopband transmittance from 0.01 to 0.05 and obtained the relationship between the high pass filter factor and the phase error rate shown in Fig. 5(a). Because the low-frequency information has high intensity, a higher lower-stopband transmittance has a more significant effect on phase retrieval when the high pass filter factor is large.

Fig. 5. BER with (a) high pass filter factor under different lower stopband transmittance and (b) passband transmittance under different high pass filtering factors.

In addition, since the transmittance of common transparent optical materials is about 0.8–1, we set $\tau $ = 0.5, 1, 1.5, and 2, and changed the passband transmittance of the filter from 0.81 to 0.99. The result is shown in Fig. 5(b). Because the passband carries high-frequency information with relatively low intensity, the passband transmittance has little influence on phase retrieval. This means that the base material of the filter can be selected with a high degree of freedom in the experiment and need not be restricted to materials with high transmittance.

In previous work, we found that different diffraction distances d produce different intensity images, as shown in Fig. 6, which affects the results of phase retrieval. Therefore, we investigated the influence of the diffraction distance on phase retrieval with different high pass filter factors. We compared the BER with $\tau $ = 0.5, 1, 1.5, and 2 at diffraction distances d from 0 mm to 3 mm. The result is shown in Fig. 7. As the diffraction distance increases within the given range, the edge details of the intensity image become progressively blurred while the diffraction features are gradually enriched. Because the high-frequency information above twice the Nyquist interval is filtered out, some image details are lost. Within a certain range of diffraction distances, the phase retrieval performance of this method is almost unaffected.

Fig. 6. Intensity images captured by the CMOS at different diffraction distances.

Fig. 7. BER curves with diffraction distance under different high pass filter factors.

The intensity images captured by the CMOS before and after bandpass filtering are shown in Fig. 8(b) and Fig. 8(d), respectively, and the three-dimensional intensity distributions of their Fourier spectra in Fig. 8(c) and Fig. 8(e). After filtering, the light intensity recorded in the media is significantly reduced. According to the parameters measured in the experiment, we set the lower stopband transmittance to 0.02 and the passband transmittance to 0.91. Compared with full-frequency recording, we calculated the factor by which the bandpass filtering method saves storage material. The result is shown in Fig. 9.

Fig. 8. (a) The phase pattern uploaded on SLM, (b) intensity image captured by the CMOS before filtering, (c) three-dimensional intensity distributions of the Fourier spectrum of image (b), (d) intensity image captured by the CMOS after filtering and (e) three-dimensional intensity distributions of the Fourier spectrum of image (d).

Fig. 9. The curve relationship between multiple of media saving and high pass filter factor.

We also considered the case of filter position shift. In the experiment, it is difficult to ensure that the aperture is exactly centered on the spectrum, but the shift can usually be kept below $0.1w$. Taking the high pass filter factor $\tau = 1$ as an example, the aperture was shifted by 1-6 pixels toward the upper left, lower left, upper right, and lower right, respectively (6 pixels corresponded to $0.1w$ in the experiment). The relationship between BER and the number of shifted pixels in the simulation is shown in Fig. 10. Due to the symmetry of the spectrum, the direction of the shift has little impact on the phase retrieval results. In addition, within a shift of 6 pixels, the fluctuation of BER stays below 0.1%, which proves that the bandpass filtering method has a large tolerance to aperture position shift.

Fig. 10. The BER curve with the number of aperture shift pixels.

The experimental setup of the off-axis holographic data storage system is shown in Fig. 11. In this study, considering that the media would cause uneven data samples, we did not use media for data capture. The laser wavelength is 532 nm, at which the holographic storage media is more sensitive and has good diffraction efficiency. The SLM is a HAMAMATSU X15213-16 with a resolution of 1272 × 1024 and a pixel pitch of 12.5 µm. The aperture is placed on the back focal plane of L1 to spatially filter the information beam. We used polymethyl methacrylate (transmittance about 0.91) as the substrate material of the filter and coated its attenuation area (transmittance about 0.02). CMOS1 is a Thorlabs DCC3260M with a resolution of 1936 × 1216 and a pixel pitch of 5.86 µm; it was placed 1 mm from the back focal plane of L4 to capture the diffraction intensity of the reconstructed beam. CMOS2 is a Thorlabs DCC1545M-GL with a resolution of 1280 × 1024 and a pixel pitch of 5.2 µm; it was used to capture the Fourier spectrum intensity of the reconstructed beam. By observing the images captured by CMOS2, we could determine whether the aperture was located at the center of the spectrum.

Fig. 11. The experimental setup of the off-axis holographic data storage system. BE: beam expander, BS: beam splitter, HWP: half wave plate, SLM: spatial light modulator, L1-L5: lens (150 mm).

The phase data page uploaded on the SLM and its diffraction intensity image are shown in Fig. 12. The encoded phase pattern is a 32 × 32 data matrix, and each phase datum is displayed by a block of 4 × 4 pixels on the SLM, so the pixel matrix of the phase data page uploaded to the SLM is 128 × 128. According to the parameters of the SLM and the CMOS, the pixel size of the diffraction intensity pattern of a data page captured by CMOS1 is approximately 273 × 273. To capture all the diffraction features, we took a 288 × 288 crop of the intensity pattern, downsampled it to 256 × 256 by bicubic interpolation, and zero-padded it to 288 × 288. Meanwhile, the phase data page was expanded to 256 × 256 by 2× upsampling and also zero-padded to 288 × 288 to match the size of the intensity image. The diffraction intensity patterns captured by the CMOS for each high pass filter factor in the experiment are shown in Fig. 13.
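The label-side preprocessing described above (2× upsampling of the 128 × 128 page, then zero-padding to 288 × 288) can be sketched as follows; the paper does not specify the label interpolation, so nearest-neighbor replication is assumed here, and the input-side bicubic resize is only noted in comments:

```python
import numpy as np

def pad_to(img, size):
    """Center a square image in a size x size zero array."""
    out = np.zeros((size, size), dtype=img.dtype)
    off = (size - img.shape[0]) // 2
    out[off:off + img.shape[0], off:off + img.shape[1]] = img
    return out

# Label side: 128 x 128 phase page -> 2x nearest-neighbor upsample -> 256 x 256
# -> zero-pad to 288 x 288.
phase_page = np.random.rand(128, 128)
label = pad_to(np.kron(phase_page, np.ones((2, 2))), 288)

# Input side (not executed here): take a 288 x 288 crop of the captured intensity,
# resize it to 256 x 256 with any image library's bicubic interpolation, then pad_to(288).
```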

Fig. 12. (a) The encoded phase data page uploaded on the SLM, (b) the phase data page expanded to 256 × 256 by 2× upsampling and then zero-padded to 288 × 288, (c) the diffraction intensity image captured by the CMOS with a high pass filter factor of 0.5 at a diffraction distance of 1 mm.

Fig. 13. Diffraction intensity captured by the CMOS with different high pass filter factors in the experiment.

For each high pass filter factor, 7000 randomly generated phase data pages were uploaded to the SLM and the corresponding intensity images captured by the CMOS as the dataset; the dataset proportions were consistent with the simulation. To suppress the impact of dynamic noise, we shuffled the experimental images. When $\tau $ = 1.9 or 2.0, the system noise is almost as strong as the intensity of the reconstructed beam, so no usable intensity images could be captured by the CMOS. The relationship between the high pass filter factor and the phase error rate in the experiment and simulation is shown in Fig. 14. As the high pass filter factor increases, the diffraction intensity captured by the CMOS decreases in the experiment, which amplifies the impact of the dynamic noise of the CMOS on the retrieval results. In contrast, since no noise is added in the simulation, the diffraction intensity pattern is normalized to enhance its darker high-frequency details, which keeps phase retrieval relatively stable. When no check code is added, we consider a BER of around 0.1% acceptable in the phase-modulated HDS system. Therefore, the optimal high pass filter factor is determined to be 0.9, at which the BER is 0.1%. After bandpass filtering, the recorded light intensity is only about 33.97% of the unfiltered case, which saves about 2.94 times the media.
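The media saving factor quoted here is just the reciprocal of the recorded intensity fraction:

```python
recorded_fraction = 0.3397      # measured: recorded intensity after filtering / unfiltered
saving_factor = 1.0 / recorded_fraction
print(f"media saving factor: {saving_factor:.2f}")  # about 2.94x
```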

Fig. 14. The curve relationship between high pass filter factor and phase data error rate in the experiment and simulation.

4. Conclusions

In this paper, we proposed a phase retrieval method for holographic data storage that combines bandpass filtering with deep learning. The parameter settings of this method were discussed through simulation, and its feasibility was verified by experiments. By bandpass filtering the information beam, this method reduces the consumption of storage media by a factor of 2.94, increasing the storage density. Besides, by filtering out frequencies above twice the Nyquist interval, the method improves the noise resistance of the system and is insensitive to the diffraction distance. Meanwhile, the method has a high tolerance for the transmittance and placement of the aperture in the experimental system, which makes it quite suitable for the phase-modulated HDS system. In the future, we will use storage media for data capture to further investigate the influence of media-induced sample inhomogeneity on the bandpass filtering method.

Funding

National Natural Science Foundation of China (62005048, U22A2080); National Key Research and Development Program of China (2018YFA0701800); Project of Fujian Province Major Science and Technology (2020HZ01012).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Haw, “Holographic data storage: The light fantastic,” Nature 422(6932), 556–558 (2003). [CrossRef]  

2. X. Lin, J. Hao, M. Zheng, et al., “Optical holographic data storage-The time for new development,” Opto-Electron. Eng. 46(3), 180642 (2019).

3. E. Leith, A. Kozma, J. Upatnieks, et al., “Holographic data storage in three-dimensional media,” Appl. Opt. 5(8), 1303–1311 (1966). [CrossRef]  

4. L. Hesselink, S. S. Orlov, and M. C. Bashaw, “Holographic data storage systems,” Proc. IEEE 92(8), 1231–1280 (2004). [CrossRef]  

5. H. Horimai, X. Tan, and J. Li, “Collinear holography,” Appl. Opt. 44(13), 2575–2579 (2005). [CrossRef]  

6. H. Horimai and X. Tan, “Holographic information storage system: Today and future,” IEEE Trans. Magn. 43(2), 943–947 (2007). [CrossRef]  

7. X. Lin, J. Liu, J. Hao, et al., “Collinear holographic data storage technologies,” Opto-Electron. Adv. 3(3), 190004 (2020). [CrossRef]  

8. M. Takabayashi, A. Okamoto, A. Tomita, et al., “Symbol error characteristics of hybrid-modulated holographic data storage by intensity and multi phase modulation,” Jpn. J. Appl. Phys. 50(9S1), 09ME05 (2011). [CrossRef]  

9. X. Lin, J. Ke, X. Xiao, et al., “An effective phase modulation in collinear holographic storage,” Proc. SPIE 9006, 900607 (2014). [CrossRef]  

10. X. Lee, Y. Yu, K. Lee, et al., “Random phase encoding in holographic optical storage with energy-effective phase modulation by a phase plate of micro-lens array,” Opt. Commun. 287, 40–44 (2013). [CrossRef]  

11. X. Lin, J. Hao, K. Wang, et al., “Frequency expanded non-interferometric phase retrieval for holographic data storage,” Opt. Express 28(1), 511–518 (2020). [CrossRef]  

12. Y. Zhao, F. Wu, X. Lin, et al., “Phase-distribution-aware adaptive decision scheme to improve the reliability of holographic data storage,” Opt. Express 30(10), 16655–16668 (2022). [CrossRef]  

13. S. H. Jeon and S. K. Gil, “2-step Phase-shifting digital holographic optical encryption and error analysis,” J. Opt. Soc. Korea 15(3), 244–251 (2011). [CrossRef]  

14. K. Xu, Y. Huang, X. Lin, et al., “Unequally spaced four levels phase encoding in holographic data storage,” Opt. Rev. 23(6), 1004–1009 (2016). [CrossRef]  

15. J. Liu, H. Horimai, X. Lin, et al., “Phase modulated high density collinear holographic data storage system with phase-retrieval reference beam locking and orthogonal reference encoding,” Opt. Express 26(4), 3828–3838 (2018). [CrossRef]  

16. A. Maiden and J. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

17. X. Pan, C. Liu, Q. Lin, et al., “Ptycholographic iterative engine with self-positioned scanning illumination,” Opt. Express 21(5), 6162–6168 (2013). [CrossRef]  

18. V. Volkov, Y. Zhu, and M. Graef, “A new symmetrized solution for phase retrieval using the transport of intensity equation,” Micron 33(5), 411–416 (2002). [CrossRef]  

19. J. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

20. X. Lin, Y. Huang, T. Shimura, et al., “Fast non-interferometric iterative phase retrieval for holographic data storage,” Opt. Express 25(25), 30905–30915 (2017). [CrossRef]  

21. R. Chen, J. Hao, C. Yu, et al., “Dynamic sampling iterative phase retrieval for holographic data storage,” Opt. Express 29(5), 6726–6736 (2021). [CrossRef]  

22. K. Wang, J. Dou, Q. Kemao, et al., “Y-Net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765–4768 (2019). [CrossRef]  

23. K. Wang, Q. Kemao, J. Di, et al., “Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction,” Opt. Lett. 45(15), 4220–4223 (2020). [CrossRef]  

24. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]  

25. J. Wu, K. Liu, X. Sui, et al., “High-speed computer-generated holography using an autoencoder-based deep neural network,” Opt. Lett. 46(12), 2908–2911 (2021). [CrossRef]  

26. M. Qiao, X. Liu, and X. Yuan, “Snapshot temporal compressive microscopy using an iterative algorithm with untrained neural networks,” Opt. Lett. 46(8), 1888–1891 (2021). [CrossRef]  

27. K. Liao, Y. Chen, Z. Yu, et al., “All-optical computing based on convolutional neural networks,” Opto-Electron. Adv. 4(11), 200060 (2021). [CrossRef]  

28. M. Liao, S. Zheng, S. Pan, et al., “Deep-learning-based ciphertext-only attack on optical double random phase encryption,” Opto-Electron. Adv. 4(5), 200016 (2021). [CrossRef]  

29. T. Ma, M. Tobah, H. Wang, et al., “Benchmarking deep learning-based models on nanophotonic inverse design problems,” Opto-Electron. Sci. 1(1), 210012 (2022). [CrossRef]  

30. Z. Zheng, S. Zhu, Y. Chen, et al., “Towards integrated mode-division demultiplexing spectrometer by deep learning,” Opto-Electron. Sci. 1(11), 220012 (2022). [CrossRef]  

31. J. Hao, X. Lin, Y. Lin, et al., “Lensless phase retrieval based on deep learning used in holographic data storage,” Opt. Lett. 46(17), 4168–4171 (2021). [CrossRef]  

32. M. T. McCann, K. H. Jin, and M. Unser, “Convolutional neural networks for inverse problems in imaging: A review,” IEEE Signal Process. Mag. 34(6), 85–95 (2017). [CrossRef]  

33. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (Springer, 2015).

34. A. Sinha, J. Lee, S. Li, et al., “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1225 (2017). [CrossRef]  

35. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

36. T. S. Ferguson, “An inconsistent maximum likelihood estimate,” J. Am. Stat. Assoc. 77(380), 831–834 (1982). [CrossRef]  

37. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” In Proceedings of the 3rd International Conference on Learning Representations (2015).

38. J. Hao, X. Lin, Y. Lin, et al., “Lensless complex amplitude demodulation based on deep learning in holographic data storage,” Opto-Electron. Adv. 6(3), 220157 (2023). [CrossRef]  

