Color image information transmission based on elliptic optical vortex array encoding/decoding

Open Access

Abstract

A multichannel high-dimensional data encoding/decoding scheme based on composite elliptic optical vortex (EOV) arrays is proposed. By exploiting the rotation angle of the EOV, a 4 × 4 composite EOV array is used for high-dimensional data encoding. The conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus is used to assign a different reconstruction focus to the data of each of the three channels (R, G, and B) of the color image. The data of the three channels are then transmitted simultaneously by a single hologram, further improving the transmission efficiency. At the receiver, the initial information sequence is decoded by directly identifying the captured intensity patterns with a deep-learning-based convolutional neural network. In the experiment, a 128 × 128-pixel color image is successfully transmitted, which confirms the feasibility of the proposed encoding/decoding scheme. This method has great potential for future high-capacity optical communications.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical vortex beams are distinguished by their helical wavefronts. Allen et al. showed in 1992 that helically phased vortex beams containing an azimuthal phase term exp(ilθ) possess an orbital angular momentum (OAM) of lℏ per photon, where l is the topological charge and θ is the azimuthal angle [1]. The unique characteristics of vortex beams have led to their wide application in different areas, such as optical manipulation [2], optical microscopy [3], optical sensing [4], and optical imaging [5]. Furthermore, optical vortex beams with different topological charges are orthogonal to each other. This orthogonality provides an additional degree of freedom for encoding, which can be used to increase the capacity of optical communications [6].

For vortex-beam-based optical communications, OAM multiplexing [7] and OAM encoding/decoding [8] are two important approaches for carrying and delivering data. OAM multiplexing utilizes the orthogonality of OAM to transmit information through multiple coaxial optical vortex beams [9,10] and is a subset of mode-division multiplexing. OAM multiplexing can be combined with wavelength-division multiplexing [11] and polarization-division multiplexing [12] to increase the channel capacity. In addition to OAM multiplexing, the OAM states carried by optical vortices can be regarded as symbols for data encoding/decoding, which is also called OAM shift keying [13,14]. The key advantage of vortex-beam encoding/decoding is its security feature, which does not depend on mathematical or quantum-mechanical encryption methods. Furthermore, 2^N OAM modes are required to realize N-bit OAM-mode encoding/decoding transmission [15]. As the topological charge is theoretically unbounded, OAM symbols can in principle provide an unlimited number of encoding patterns.

The traditional OAM encoding/decoding method has technical limitations. Decoding at the receiver is accomplished by coherent detection methods, such as digital holograms [16], phase holograms [17], fork-shaped gratings [18], and Dammann vortex gratings [19]. These detection methods have stringent alignment requirements and require the input beam to be incident on the geometric center of the grating or hologram. Moreover, during transmission, the distortion of the wavefront phase induced by turbulence increases the bit error rate (BER). In our previous work [20], a composite optical vortex array was used to encode the grayscale values of image pixels. Depending on whether the composite optical vortex forms a bright-ring or dark-ring lattice, the data can be conveniently read from the intensity patterns captured by a camera at the receiver. However, with the rapid development of the big-data industry, the existing physical dimensions can no longer satisfy the growing demand for data traffic. In recent years, combining multiple optical dimensions for encoding/decoding has become a research focus for improving communication capacity [21,22].

Elliptic optical vortices (EOVs) have attracted widespread attention in the last few years [23–27]. EOVs have helical phase characteristics similar to those of circular optical vortices, together with controllable rotation angles. Yang et al. experimentally demonstrated the optical field distributions of composite elliptic Bessel vortex and composite higher-order elliptic Laguerre–Gaussian (LG) vortex beams [28]. Y. K. Wang et al. generated 1D and 2D EOV arrays (EOVAs) with orientational selection by multicoordinate transformation [29]. These studies have opened up new possibilities for applying EOVs. The rotation angle of the elliptic optical vortex provides an additional encoding degree of freedom, which can improve the transmission capacity and the stability of the system. In this work, a multichannel high-dimensional data encoding/decoding scheme based on composite elliptic optical vortex arrays is proposed. The topological charge combination and rotation angle of each element in the array can take any value. Simultaneous transmission of data from multiple channels is realized using the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus. Using a trained convolutional neural network (CNN) for decoding, the information can be recovered directly from the received intensity distribution. In our scheme, the grayscale values of the pixels of the three channels (R, G, and B) are encoded as three groups of 4 × 4 composite EOV arrays, with each composite EOV array carrying the grayscale information of 8 pixels (64 bits). Then, the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus is used to assign different distance information to the hologram corresponding to each channel. In this manner, a single hologram can simultaneously transmit the pixel-value information of all three channels.

2. Principles and methods

2.1 EOV

The optical vortex has a helical phase structure with a phase singularity at its center, which is characterized by exp(ilθ). In Cartesian coordinates, a vortex beam carrying an OAM is a solution of the Helmholtz equation that can be expressed as follows:

$$\begin{aligned} E_0^l(x,y,z) &= \frac{1}{{w(z)}}{\left( {\frac{{\sqrt {{x^2} + {y^2}} \sqrt 2 }}{{w(z)}}} \right)^{|l |}} \cdot \textrm{exp} \left( {\frac{{ - {x^2} - {y^2}}}{{w{{(z)}^2}}}} \right) \cdot \\ &\textrm{exp} \left[ {i\left( {kz - \frac{{k({{x^2} + {y^2}} )}}{{2R}} + \varphi } \right)} \right] \cdot \textrm{exp} \left( { - il\arctan \left( {\frac{y}{x}} \right)} \right)\\ \end{aligned}$$
$$w(z) = w_0\sqrt{1 + \left(\frac{z}{z_R}\right)^2},\quad z_R = \frac{\pi w_0^2}{\lambda},\quad \varphi = (|l| + 1)\tan^{-1}\left(\frac{z}{z_R}\right)$$
where w(z) is the spot size parameter, w0 is the waist radius, zR is the Rayleigh length, φ is the Gouy phase, and λ is the wavelength.

Let the ellipticity be embedded into an LG vortex beam so that the complex amplitude of the optical field of the EOV in the initial plane can take the following form [30]:

$$E(x,y,z = 0) = {(\frac{{2{x^2} + 2{\alpha ^2}{y^2}}}{{w_0^2}})^{\frac{l}{2}}}\textrm{exp} ( - \frac{{{x^2} + {\alpha ^2}{y^2}}}{{w_0^2}})\textrm{exp} \left[ {il\arctan (\frac{{\alpha y}}{x})} \right]$$
where α is a dimensionless parameter that defines the ellipticity of an elliptic optical field. The optical vortex is extended along the x-axis, and the intensity shows an elliptical distribution.

In cylindrical coordinates, the EOV expression can be rewritten as follows:

$$\begin{aligned} E(r,\theta ,z = 0) &= {(\frac{{2{{(r\cos \theta )}^2} + 2{\alpha ^2}{{(r\sin \theta )}^2}}}{{w_0^2}})^{\frac{l}{2}}} \cdot \\ &\textrm{exp} ( - \frac{{{{(r\cos \theta )}^2} + {\alpha ^2}{{(r\sin \theta )}^2}}}{{w_0^2}})\textrm{exp} \left[ {il\arctan (\frac{{\alpha \cdot r\sin \theta }}{{r\cos \theta }})} \right] \end{aligned}$$
where r, θ, and z are cylindrical coordinate parameters.

An azimuthal offset θ0 is now introduced. Let θ′=θ−θ0. Then, Eq. (4) can be written as follows:

$$E(r,\theta ^{\prime},z = 0) = E(r,\theta - {\theta _0},z = 0)$$

Compared with Eq. (4), in this formulation, the elliptic vortex rotates counterclockwise by θ0. Figure 1 shows the variations in the intensity distribution for different θ0 values when l = 10 and α=3. The first column shows the phase distribution (Fig. 1(a)), followed by the simulated intensity distribution (Fig. 1(e)) and the experimental intensity distribution (Fig. 1(i)) for the initial state of the elliptical vortex (θ0 = 0). The second, third, and fourth columns show the phase and intensity distributions for θ0=π/4, π/2, and 3π/4, respectively. The orientation selection of the EOV can be achieved by taking different θ0 values.
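For readers who wish to reproduce the patterns in Fig. 1 numerically, the following Python/NumPy sketch evaluates Eqs. (3)–(5) on a square grid. The grid size, window, and waist w0 are illustrative assumptions, not the experimental parameters.

```python
# Minimal sketch of a rotated EOV field at z = 0 (Eqs. (3)-(5)).
# Grid size, window and waist are illustrative assumptions.
import numpy as np

def eov_field(l, alpha, theta0, w0=1.0, N=512, extent=3.0):
    """Complex amplitude of an elliptic optical vortex rotated by theta0."""
    x = np.linspace(-extent, extent, N)
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2)
    theta = np.arctan2(Y, X) - theta0              # azimuthal offset theta' = theta - theta0
    xp, yp = r * np.cos(theta), r * np.sin(theta)
    amp = ((2 * xp**2 + 2 * alpha**2 * yp**2) / w0**2) ** (l / 2) \
          * np.exp(-(xp**2 + alpha**2 * yp**2) / w0**2)
    phase = l * np.arctan2(alpha * yp, xp)         # elliptic helical phase term
    return amp * np.exp(1j * phase)

E = eov_field(l=10, alpha=3, theta0=np.pi / 4)     # corresponds to the second column of Fig. 1
intensity, phase = np.abs(E)**2, np.angle(E)
```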


Fig. 1. EOVs with different rotation angles at l = 10 and α=3. θ0 = 0 for (a), (e), (i); θ0=π/4 for (b), (f), (j); θ0=π/2 for (c), (g), (k); θ0 = 3π/4 for (d), (h), (l). (a)-(d) Phase distribution. (e)-(h) Simulated intensity distributions. (i)-(l) Experimental intensity distributions.


2.2 Composite EOV

The optical ring lattice is generated using two EOVs superimposed coaxially in free space. The complex amplitude expression of the composite EOV, which is the superposition of two collinear EOVs of topological charges l1 and l2, is expressed as follows:

$$E = {E^{{l_1}}} + {E^{{l_2}}}$$

For the same ellipticity α and rotation angle θ0, the composite elliptical vortex lattice exhibits two forms, namely, a bright-ring lattice or a dark-ring lattice. When l1 and l2 have opposite signs, the transverse intensity profile shows a distribution of |l1 − l2| bright petals (Figs. 2(b) and (d)). When l1 and l2 have the same sign, the transverse intensity profile shows a distribution of |l1 − l2| dark petals (Fig. 2(c)). In Fig. 2, the rotation angle of all vortices is 3π/4. The first row shows the simulated intensity distributions, whereas the second row shows those generated in the experiment. According to the characteristics of the intensity profile of the composite elliptic optical vortices, the topological charges of the two constituent vortices can be identified directly from the intensity distribution within the given set of topological charge combinations. Compared with a vortex beam carrying a single topological charge, this method needs no further coherent detection to identify the topological charges, which improves the detection efficiency.
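A minimal sketch of the coaxial superposition in Eq. (6), reusing eov_field() from the sketch above; the charge combinations are examples consistent with the petal rule stated here, and all parameters remain illustrative.

```python
# Sketch of the composite EOV of Eq. (6): two coaxial EOVs with equal alpha and theta0.
import numpy as np

def composite_eov(l1, l2, alpha=3, theta0=3 * np.pi / 4):
    return eov_field(l1, alpha, theta0) + eov_field(l2, alpha, theta0)

# Opposite-sign charges -> |l1 - l2| bright petals; same-sign charges -> |l1 - l2| dark petals.
I_bright = np.abs(composite_eov(l1=-2, l2=1))**2   # 3 bright petals
I_dark   = np.abs(composite_eov(l1=2,  l2=1))**2   # 1 dark petal
```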


Fig. 2. Optical intensity distribution with different topological charge combinations. (a)-(d) Simulation results. (e)-(h) Experimental results.


2.3 Encoding/decoding rules

A novel encoding/decoding scheme based on composite elliptic optical vortex arrays composed of different rotation angles and two groups of topological charges is proposed in this paper. In the encoding/decoding rules shown in Fig. 3, four easily distinguishable rotation angles (0, π/4, π/2, and 3π/4) and four topological charge combinations formed by the coaxial superposition of two groups of topological charges (l1 = 1, −1, 2, −2; l2 = 1) are selected. Binary encoding of the four rotation angles and four topological charge combinations produces 16 encoding patterns corresponding to the binary data in the range 0000−1111. With this encoding method, both the rotation angle and the topological charge information can be detected directly from the patterns during decoding, without the need for additional detection devices. Finally, by arranging 16 of these patterns into a 4 × 4 composite elliptic optical vortex array, 64-bit data can be encoded.
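As an illustration only, the sketch below realizes such a 4-bit alphabet in Python. The paper fixes l2 = 1 and uses the four rotation angles and four values of l1 listed here, but the specific assignment of bit patterns to (θ0, l1) pairs is an assumption; Fig. 3 defines the actual rule.

```python
# Hypothetical bit-to-symbol mapping for the 16-pattern alphabet (4 bits per symbol).
import numpy as np

ROTATIONS = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # two bits select theta0
CHARGES_L1 = [1, -1, 2, -2]                            # two bits select l1 (l2 = 1 is fixed)

def encode_nibble(nibble):
    """Map a 4-bit integer (0-15) to one composite-EOV symbol (theta0, l1, l2)."""
    theta0 = ROTATIONS[(nibble >> 2) & 0b11]
    l1 = CHARGES_L1[nibble & 0b11]
    return theta0, l1, 1

def encode_byte(gray_value):
    """One 8-bit grayscale value (0-255) -> two composite-EOV symbols."""
    return encode_nibble(gray_value >> 4), encode_nibble(gray_value & 0x0F)

print(encode_byte(0b10110100))   # two (theta0, l1, l2) tuples, i.e. two array elements
```

Sixteen such symbols arranged as a 4 × 4 array therefore carry 64 bits, matching the scheme described above.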


Fig. 3. Encoding/decoding rules.


Figure 4 shows the process of information transmission. The entire system is divided into two parts: the transmitter and the receiver. Different reconstruction distances are assigned to the data of the three channels by means of the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus. By continuously switching holograms loaded on a spatial light modulator (SLM), a coded time-varying composite EOVA sequence is generated. After free-space transmission, data information can be conveniently obtained from the intensity patterns at the receiver.


Fig. 4. Schematic of the color image data encoding/decoding process.


At the receiver, cameras are used to record the diffraction patterns. The received optical arrays are identified according to the encoding/decoding rules in Fig. 3, and the data information contained in the arrays is recovered. As the encoding patterns in our scheme are highly recognizable and the intensity patterns corresponding to different data symbols differ significantly, good decoding results can be maintained even if the elliptic vortex arrays are affected by atmospheric turbulence during free-space propagation. Furthermore, as shown in Fig. 4, the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus allows the data of the different channels (R, G, and B) of the color image to be transmitted to specified positions (reconstruction distances d1, d2, and d3, respectively) for reconstruction. Consequently, data crosstalk between the three channels is avoided and the transmission rate is improved.

3. Generation of holograms

Traditional vortex array generation methods focus on generating arrays of individual vortices at different spatial locations [31,32], but these methods impose limitations on the topological charges of each element in the vortex array. The conjugate symmetric extension Fourier computer-generated holography algorithm [33] generates holograms based on the property that the forward or inverse Fourier transform of a conjugate symmetric complex function is a real function. The complex amplitude of the light wave to be recorded is extended to produce a conjugate symmetric function. This function is then Fourier transformed (forward or inverse) to yield a real-valued distribution containing both the amplitude and phase information of the light wave. The resulting real distribution is encoded into a grayscale hologram that can be used to reconstruct the original light wave. This method can generate holograms of vortex arrays with complicated structures [34]. Recently, we further improved this algorithm by extending the imaging method from the Fourier domain to the Fresnel domain, allowing us to generate holograms with controllable reconstruction focus [35]. With this method, the topological charge combination and rotation angle of each element in the EOVA can be tailored on demand.

The complex amplitude of the EOVA containing the data information obtained after encoding can be written as follows:

$$f_0(x,y) = A(x,y)\exp[j\varphi(x,y)],\quad x = 1,2,\cdots,\frac{X}{2}-1;\; y = 1,2,\cdots,Y-1$$
where A(x, y) and φ(x, y) represent the amplitude and phase of the composite elliptic optical vortex array, respectively. In this method, the complex object light f0(x, y) is conjugate-symmetrically extended into the form below, with X and Y both assumed to be even:
$$f(x,y) = \left\{ \begin{array}{ll} f_0(x,y) & x = 1,2,\cdots,\frac{X}{2}-1;\; y = 1,2,\cdots,Y-1\\ f_0^\ast(X-x,\,Y-y) & x = \frac{X}{2}+1,\cdots,X-1;\; y = 1,2,\cdots,Y-1\\ 0 & x = 0\ \textrm{or}\ y = 0\ \textrm{or}\ x = \frac{X}{2} \end{array} \right.$$
where the superscript “*” denotes the complex conjugate. Setting f(0, y) = f(x, 0) = f(X/2, y) = 0 makes (X/2, Y/2) the center of symmetry.
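A compact sketch of the extension in Eq. (8), assuming even X and Y and a complex object field f0 of shape (X/2 − 1, Y − 1) indexed from (1, 1) as in Eq. (7):

```python
# Sketch of the conjugate-symmetric extension of Eq. (8).
import numpy as np

def conjugate_symmetric_extend(f0, X, Y):
    """f0: complex array of shape (X//2 - 1, Y - 1); returns the extended (X, Y) array."""
    f = np.zeros((X, Y), dtype=complex)
    f[1:X // 2, 1:Y] = f0                           # original half-plane, Eq. (7)
    f[X // 2 + 1:X, 1:Y] = np.conj(f0[::-1, ::-1])  # f0*(X - x, Y - y)
    # The row x = 0, the column y = 0 and the row x = X/2 remain zero, so f is
    # conjugate symmetric about (X/2, Y/2) and its inverse DFT is real-valued.
    return f
```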

Performing an inverse Fourier transform on the complex amplitude f(x, y) then gives:

$$\mathrm{IFT}[f(x,y)] = F(\mu,\nu) = \frac{1}{XY}\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1} f(x,y)\exp\left[-j2\pi\left(\frac{\mu x}{X}+\frac{\nu y}{Y}\right)\right],\quad \mu = 0,1,\cdots,X-1;\; \nu = 0,1,\cdots,Y-1$$
where F(μ, ν) is the inverse Fourier transform of f(x, y), and μ and ν are the sample indices in the horizontal and vertical directions, respectively. Substituting Eqs. (7) and (8) into Eq. (9) yields:
$$F(\mu,\nu) = \frac{2}{XY}\sum_{x=1}^{X/2-1}\sum_{y=1}^{Y-1} A(x,y)\cos\left[2\pi\left(\frac{x\mu}{X}+\frac{y\nu}{Y}\right)+\varphi(x,y)\right],\quad \mu = 0,1,\ldots,X-1;\; \nu = 0,1,\ldots,Y-1$$

Equation (10) shows that F(μ, ν) is a real function containing information about the amplitude and phase of the object wave. F(μ, ν) is linearly mapped to 0−2π to obtain a phase-only hologram as follows:

$${F_{holo}}(\mu ,\nu ) = 2\pi \times \frac{{F(\mu ,\nu ) - \min [F(\mu ,\nu )]}}{{\max [F(\mu ,\nu )] - \min [F(\mu ,\nu )]}}$$

According to the Fresnel diffraction formula, the quadratic phase $h(\mu,\nu;d) = \frac{k}{2d}(\mu^2 + \nu^2)$ is introduced to set the reconstruction focus of the hologram, where d is the reconstruction distance. Taking the result modulo 2π gives:

$${G_{holo}}(\mu ,\nu ;d) = [{F_{holo}} - \frac{k}{{2d}}({\mu ^2} + {\nu ^2})]\bmod 2\pi $$
where k = 2π/λ is the wavenumber. The real function G_holo contains information on the amplitude, phase, and reconstruction focus of the target light field. With this method, holograms can be generated without iteration, which improves the efficiency of holographic coding and makes the approach particularly suitable for generating large-scale holograms.
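The sketch below strings Eqs. (8)–(12) together, reusing conjugate_symmetric_extend() from above. The pixel pitch, wavelength, and reconstruction distance are placeholder values, and the centred physical coordinate grid used for the quadratic phase is an assumption about the sampling convention, not taken from the paper.

```python
# Sketch of focus-controllable hologram generation: Eq. (8) extension, real-valued inverse
# DFT (Eqs. (9)-(10)), linear mapping to [0, 2*pi] (Eq. (11)) and Fresnel quadratic phase
# with modulo 2*pi (Eq. (12)). Parameter values are illustrative.
import numpy as np

def focus_controllable_hologram(f0, X, Y, wavelength=532e-9, d=0.3, dp=8e-6):
    f = conjugate_symmetric_extend(f0, X, Y)
    F = np.real(np.fft.ifft2(f))                               # real up to round-off error
    F_holo = 2 * np.pi * (F - F.min()) / (F.max() - F.min())   # Eq. (11)
    k = 2 * np.pi / wavelength
    mu = (np.arange(X) - X / 2) * dp                           # assumed centred coordinates
    nu = (np.arange(Y) - Y / 2) * dp
    MU, NU = np.meshgrid(mu, nu, indexing="ij")
    return np.mod(F_holo - k / (2 * d) * (MU**2 + NU**2), 2 * np.pi)   # Eq. (12)
```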

The hologram generation method is shown in Fig. 5. Three groups of data are used to simulate the data of the three channels of the color image. Each group contains eight random numbers in the range 0−255, which simulate pixel grayscale values. Each number corresponds to an 8-bit binary sequence and can therefore be represented by two patterns according to the encoding rules in Fig. 3. The 8 numbers in the first column are converted into the 4 × 4 arrays in the second column. F1, F2, and F3 are obtained by inverse Fourier transform after conjugate symmetric extension of the complex amplitudes of the 4 × 4 composite elliptic optical vortex arrays in the second column. Holograms G1, G2, and G3 are then generated according to Eq. (12); their quadratic phase terms h1(μ,ν; d1), h2(μ,ν; d2), and h3(μ,ν; d3) carry the reconstruction focus information d1, d2, and d3, respectively. Finally, the three holograms are superimposed to obtain the phase-only composite hologram G loaded onto the SLM:

$$G = \arg \left[ {\sum\limits_{i = 1}^3 {\textrm{exp} (j \cdot {G_i})} } \right]$$
where arg[·] denotes the direct phase-extraction operation.
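Equation (13) amounts to a single operation; the sketch below assumes G1, G2, and G3 are phase arrays of equal size produced as above. Note that np.angle returns phases in (−π, π], which is equivalent modulo 2π to the 0−2π convention used here.

```python
# Sketch of Eq. (13): combine three focus-encoded holograms into one phase-only hologram.
import numpy as np

def multiplex_holograms(G1, G2, G3):
    return np.angle(np.exp(1j * G1) + np.exp(1j * G2) + np.exp(1j * G3))
```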


Fig. 5. Multichannel hologram generation.


4. Experimental results and discussion

The experimental setup for encoding/decoding is shown in Fig. 6. At the transmitter, a solid-state laser (MGL-F-532-2 W) operating at 532 nm is used as the light source, generating a fundamental Gaussian beam. The beam is expanded and collimated by sequentially passing through a beam expander (BE) and an aperture (A1). The expanded Gaussian beam then illuminates the SLM (PLUTO-VIS-001, 1920 × 1080 pixels, 8 µm pixel pitch), which is loaded with a hologram generated by the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus. The hologram contains the information of the composite elliptic optical vortex array, which is composed of 4 × 4 patterns, and after diffraction, the 4 × 4 patterns are generated. By switching the holograms loaded on the SLM, a time-varying sequence of EOVAs containing the information can be obtained. At the focus of the lens, an aperture (A2) allows better coupling of the EOVAs into the cameras (CS165CU/M, 3.45 × 3.45 µm pixel size) for detection. Two beam splitters (BSs) are used so that the light-field distribution can be recorded simultaneously at different locations, and the data sequence is obtained by analyzing the intensity distributions of the images captured by the cameras. The reconstruction distances of the three channels are set to d1 = Δd1 + Δd, d2 = Δd2 + 2 × Δd, and d3 = 3 × Δd, corresponding to the locations of cameras 1, 2, and 3, respectively.


Fig. 6. Experimental setup. BE: beam expander; A1–A2: aperture; BS: beam splitter; SLM: spatial light modulator.


After the experimental setup is arranged, the generated hologram shown in Fig. 5 is loaded onto the SLM for testing. Given the reconstruction distance information of each channel, the cameras can record the intensity distribution of the vortex array containing the data information at the preset positions. Figures 7(a)–(c) show the images captured by the cameras, which match closely with the simulation results (Figs. 7(d)–(f)). These findings indicate that the encoding/decoding rules are feasible.


Fig. 7. Results of the data transfer test. (a)–(c) Experimental intensity distribution. (d)−(f) Simulated intensity distribution.


For the data received by the cameras, a deep-learning-based CNN is used for recognition; the network structure is shown in Fig. 8(a). In a constant working environment, only a small training set is needed to reach good recognition accuracy. Fifty different 4 × 4 arrays are formed by randomly selecting composite elliptic optical vortices from the 16 patterns in Fig. 3, and the hologram of each array is generated. After optical reconstruction, the intensity distribution of each reconstructed array is divided into 16 images, giving a total of 800 intensity images. The trained network is evaluated on a test set containing 800 intensity images of the 16 states, and the recognition accuracy reaches 99.5%, as shown in Fig. 8(b). The recognition of misidentified states can be further improved by increasing the number of training samples. The elements of the arrays in Fig. 7 are then fed into the network in turn and are all recognized correctly (Fig. 8(c)); the data information contained in the arrays is decoded by sequencing the recognition results.
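The actual architecture is the one shown in Fig. 8(a); the PyTorch sketch below is only a generic 16-class classifier of the same kind, with assumed layer sizes and an assumed 64 × 64 grayscale crop for each array element.

```python
# Hypothetical 16-class CNN for recognizing cropped composite-EOV intensity patterns.
import torch
import torch.nn as nn

class SymbolCNN(nn.Module):
    def __init__(self, num_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                          # x: (batch, 1, 64, 64) intensity crops
        return self.classifier(self.features(x))

model = SymbolCNN()
logits = model(torch.randn(4, 1, 64, 64))          # four cropped sub-images -> 4 x 16 logits
symbols = logits.argmax(dim=1)                     # decoded 4-bit symbol indices
```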


Fig. 8. Decoding with a CNN. (a) CNN structures. (b) Confusion matrix of the testing set. (c) Results of decoding the data received by the cameras.


Figure 9(a) shows a 128 × 128-pixel color image transmitted using the proposed encoding/decoding scheme. The first step is to divide the color image into three grayscale images corresponding to the R, G, and B channels. The grayscale value (0–255) of each pixel is then extracted from each image in left-to-right, top-to-bottom order. Converting each decimal grayscale value into binary represents it as an 8-bit sequence. According to the encoding rules proposed in Section 2, every eight pixels, taken in the order of extraction, are encoded as one 4 × 4 composite elliptic optical vortex array. The pixel information of each channel is thus encoded as 2,048 4 × 4 vortex arrays, giving three groups of such sequences. When the holograms of these three groups of vortex arrays are generated, the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus is used to add the different reconstruction distance information. The holograms of the three channels are superimposed into one group of holograms according to the corresponding pixel coordinates, so the entire color image is encoded into 2,048 holograms. This greatly shortens the time spent on data transmission. By switching the corresponding holograms loaded on the SLM, the image is converted into a series of time-varying composite elliptic vortex arrays. After free-space propagation, the light intensity distributions are recorded in real time by cameras at the preset locations; each of the three cameras receives a group of 2,048 vortex arrays. Using the convolutional network shown in Fig. 8(a) to identify the received vortex arrays, the data contained in each array are decoded to recover the transmitted image (Fig. 9(b)). The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and bit error rate (BER) of the reconstructed color image are 37.49 dB, 0.993, and 6.734 × 10−4, respectively. The experimental results show that the proposed free-space optical communication scheme based on multichannel high-dimensional data encoding/decoding with composite elliptic optical vortex arrays is effective and feasible in practice.
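The transmitter-side bookkeeping described above can be summarized as follows. The sketch assumes an 8-bit (128, 128) channel array and reuses the hypothetical encode_byte() from the encoding-rule sketch; 8 pixels (16 symbols) fill one 4 × 4 array, giving 128 × 128 / 8 = 2,048 arrays per channel.

```python
# Sketch of one color channel -> 2,048 symbol arrays (4 x 4 each).
import numpy as np

def channel_to_symbol_arrays(channel):
    """channel: (128, 128) uint8 array -> list of 2,048 lists of 16 (theta0, l1, l2) symbols."""
    pixels = channel.reshape(-1)                   # left-to-right, top-to-bottom order
    arrays = []
    for i in range(0, pixels.size, 8):             # 8 pixels -> 16 symbols -> one 4 x 4 array
        symbols = [s for p in pixels[i:i + 8] for s in encode_byte(int(p))]
        arrays.append(symbols)
    return arrays

# The R, G and B channels are processed identically and assigned the reconstruction
# distances d1, d2 and d3 when their holograms are generated and superimposed.
```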


Fig. 9. Color image encoding/decoding. (a) Experimental process. (b) Decoding results.


Conventional methods that transmit only one grayscale value at a time achieve a data transfer rate of 480 bit/s [36]. In our experimental setup, the maximum refresh rates of the SLM and camera are 60 Hz and 90 fps, respectively. If a 128 × 128-pixel color image were transmitted using the traditional method, the transmission time would be 819.2 s. With the encoding/decoding scheme proposed in this paper, the data transfer rate reaches 11,520 bit/s, and the time to transfer a 128 × 128-pixel color image is reduced to 34.1 s. The experiments therefore show that the proposed method offers a substantially higher transmission rate. Notably, only four rotation angles and four combinations of OAM states are selected for data encoding in this experiment; more combinations of OAM states and rotation angles can be added to realize even higher-dimensional data encoding/decoding and further increase the transmission speed. Much more complicated vortex arrays can also be transmitted, owing to the flexible imaging capability of the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus, while good transmission performance of the system is maintained.
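The quoted figures follow directly from the frame rate and the payload per hologram, as the short check below shows (each hologram carries 3 channels × 64 bits and the SLM refreshes at 60 Hz):

```python
# Back-of-the-envelope check of the transmission-rate figures quoted above.
bits_per_hologram = 3 * 64                    # three channels, 8 pixels x 8 bits each
rate = 60 * bits_per_hologram                 # 11,520 bit/s at a 60 Hz SLM refresh rate
image_bits = 128 * 128 * 3 * 8                # 393,216 bits in a 128 x 128 RGB image
print(rate, image_bits / rate, image_bits / 480)   # 11520, ~34.1 s, 819.2 s
```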

5. Conclusion

A multichannel high-dimensional data encoding/decoding scheme for free-space optical communication is proposed in this study. The rotation angle and the composite OAM states of the EOVs, which have distinctive intensity distributions, are used for encoding. Holograms carrying the imaging position information are generated by the conjugate symmetric extension Fourier computer-generated holography algorithm with controllable reconstruction focus. After the pixel information of the R, G, and B channels of the color image is loaded onto the SLM, it can be transmitted simultaneously to different designated positions. At the receiver, the cameras capture the vortex arrays containing the pixel information of the different channels at the set positions, and the data are recovered by analyzing the intensity distribution of each element. This study experimentally demonstrates a 64-bit (8 pixels per channel) three-channel high-dimensional encoding/decoding scheme for transmitting color images over a free-space link, including the successful transmission of a 128 × 128-pixel color image. The results confirm the feasibility of the proposed encoding/decoding scheme. Although the response speeds of the SLM and cameras are limited, the scheme transmits the information of three channels with a single hologram and receives it with three cameras simultaneously, demonstrating promising application prospects for future large-capacity optical communications.

Funding

National Natural Science Foundation of China (62075125); 111 Project (D20031); Science and Technology Commission of Shanghai Municipality (20DZ2204900).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992).

2. B. Gu, Y. Hu, X. Zhang, M. Li, Z. Zhu, G. Rui, J. He, and Y. P. Cui, “Angular momentum separation in focused fractional vector beams for optical manipulation,” Opt. Express 29(10), 14705–14719 (2021).

3. J. Liu, Z. J. Hua, and C. Liu, “Compact dark-field confocal microscopy based on an annular beam with orbital angular momentum,” Opt. Lett. 46(22), 5591–5594 (2021).

4. Y. Ren, S. Qiu, T. Liu, and Z. Liu, “Compound motion detection based on OAM interferometry,” Nanophotonics 11(6), 1127–1135 (2022).

5. J. Zeng, Y. Dong, Y. Wang, J. Zhang, and J. Wang, “Optical Imaging Using Orbital Angular Momentum: Interferometry, Holography and Microscopy,” J. Lightwave Technol. 41(7), 2025–2040 (2023).

6. S. Fu, Y. Zhai, H. Zhou, J. Zhang, T. Wang, X. Liu, and C. Gao, “Experimental demonstration of free-space multi-state orbital angular momentum shift keying,” Opt. Express 27(23), 33111–33119 (2019).

7. J. Wang, J. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, M. Tur, and A. E. Willner, “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012).

8. G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas’ko, S. M. Barnett, and S. Franke-Arnold, “Free-space information transfer using light beams carrying orbital angular momentum,” Opt. Express 12(22), 5448–5456 (2004).

9. L. Zhu, M. Deng, B. Lu, X. Guo, and A. Wang, “Turbulence-resistant high-capacity free-space optical communications using OAM mode group multiplexing,” Opt. Express 31(9), 14454–14463 (2023).

10. B. Paroli, L. Cremonesi, M. Siano, and M. Potenza, “Hybrid OAM-Amplitude multiplexing and demultiplexing of incoherent optical states,” Opt. Commun. 524, 128808 (2022).

11. K. Zou, K. Pang, H. Song, J. Fan, Z. Zhao, H. Song, R. Zhang, H. Zhou, A. Minoofar, C. Liu, X. Su, N. Hu, A. McClung, M. Torfeh, A. Arbabi, M. Tur, and A. E. Willner, “High-capacity free-space optical communications using wavelength- and mode-division-multiplexing in the mid-infrared region,” Nat. Commun. 13(1), 7662 (2022).

12. M. Singh, M. H. Aly, and S. A. A. El-Mottaleb, “6 G enabling FSO communication system employing integrated PDM-OAM-OCDMA transmission: impact of weather conditions in India,” Appl. Opt. 62(1), 142–152 (2023).

13. Y. Zhao, Y. Yao, K. Xu, Y. Yang, and J. Tian, “Circular polarization shift-keying modulation based on orbital angular momentum division multiplexing in free space optical communication,” Opt. Commun. 475, 126165 (2020).

14. Z. Shang, S. Fu, L. Hai, Z. Zhang, L. Li, and C. Gao, “Multiplexed vortex state array toward high-dimensional data multicasting,” Opt. Express 30(19), 34053–34063 (2022).

15. S. Li and J. Wang, “Experimental demonstration of optical interconnects exploiting orbital angular momentum array,” Opt. Express 25(18), 21537–21547 (2017).

16. X. Liu, S. Huang, W. Xie, and Z. Pei, “Topological charge parallel measurement method for optical vortices based on computer-generated holography,” J. Opt. Technol. 89(2), 94–100 (2022).

17. X. Fang, H. Ren, and M. Gu, “Orbital angular momentum holography for high-security encryption,” Nat. Photonics 14(2), 102–108 (2020).

18. J. Ma, Z. Li, S. Zhao, and L. Wang, “Encrypting orbital angular momentum holography with ghost imaging,” Opt. Express 31(7), 11717–11728 (2023).

19. X. Wang, Y. Song, F. Pang, Y. Li, Q. Zhang, and L. Zhuang, “Coding and decoding data by multiplexing vortex beams in free space,” Opt. Commun. 472, 125909 (2020).

20. W. Xie, S. Huang, W. Shao, F. Zhu, and M. Chen, “Free-space optical communication based on hybrid optical mode array encoding,” Acta Phys. Sin. 66(14), 1 (2017).

21. Z. Wan, Y. Shen, Z. Wang, Z. Shi, Q. Liu, and X. Fu, “Divergence-degenerate spatial multiplexing towards future ultrahigh capacity, low error-rate optical communications,” Light: Sci. Appl. 11(1), 144 (2022).

22. L. J. Kong, W. Zhang, P. Li, X. Guo, J. Zhang, F. Zhang, and X. Zhang, “High capacity topological coding based on nested vortex knots and links,” Nat. Commun. 13(1), 2705 (2022).

23. V. V. Kotlyar, S. N. Khonina, A. A. Almazov, V. A. Soifer, K. Jefimovs, and J. Turunen, “Elliptic Laguerre-Gaussian beams,” J. Opt. Soc. Am. A 23(1), 43–56 (2006).

24. V. V. Kotlyar, A. A. Kovalev, and A. P. Porfirev, “Elliptic Gaussian optical vortices,” Phys. Rev. A 95(5), 053805 (2017).

25. A. A. Kovalev and V. V. Kotlyar, “Orbital angular momentum of an elliptic beam after an elliptic spiral phase plate,” J. Opt. Soc. Am. A 36(1), 142–148 (2019).

26. G. Fan and D. Deng, “Control of Imbert-Fedorov shifts by the optical properties of rotating elliptical Gaussian vortex beams,” Opt. Express 29(22), 35182–35190 (2021).

27. X. Ji, M. Chen, P. Wu, S. Lin, Y. Zeng, and Y. Yu, “Study on the propagation characteristics of elliptical Airy vortex beam,” Opt. Commun. 519, 128389 (2022).

28. D. Yang, Y. Li, D. Deng, J. Ye, Y. Liu, and J. Lin, “Controllable rotation of multiplexing elliptic optical vortices,” J. Phys. D: Appl. Phys. 52(49), 495103 (2019).

29. Y. K. Wang, H. X. Ma, L. H. Zhu, Y. P. Tai, and X. Z. Li, “Orientation-selective elliptic optical vortex array,” Appl. Phys. Lett. 116(1), 011101 (2020).

30. P. Cheng, S. Huang, and C. Yan, “Ellipticity-encrypted orbital angular momentum multiplexed holography,” J. Opt. Soc. Am. A 38(12), 1875–1883 (2021).

31. H. Wang, S. Fu, and C. Gao, “Tailoring a complex perfect optical vortex array with multiple selective degrees of freedom,” Opt. Express 29(7), 10811–10824 (2021).

32. P. Kumar and N. K. Nishchal, “Array formation of optical vortices using in-line phase modulation,” Opt. Commun. 493, 127020 (2021).

33. S. Huang, S. Wang, and Y. Yu, “Computer generated holography based on Fourier transform using conjugate symmetric extension,” Acta Phys. Sin. 58(2), 952–958 (2009).

34. D. Wang, L. Jin, C. Rosales-Guzmán, and W. Gao, “Generating arbitrary arrays of circular Airy Gaussian vortex beams with a single digital hologram,” Appl. Phys. B 127(2), 22 (2021).

35. C. Li, S. Huang, and X. Liu, “Conjugate symmetric extension Fourier computer-generated holography with controllable reconstruction focus,” Appl. Phys. B 129(3), 44 (2023).

36. Y. Li and Z. Zhang, “Image information transfer with petal-like beam lattices encoding/decoding,” Opt. Commun. 510, 127931 (2022).
