Optica Publishing Group

Spatial pilot-aided fast-adapted framework for stable image transmission over long multi-mode fiber

Open Access

Abstract

Multi-mode fiber (MMF) has emerged as a promising platform for spatial information transmission owing to its high capacity. However, the scattering characteristic and time-varying nature of MMF pose challenges for long-term stable transmission. In this study, we propose a spatial pilot-aided learning framework for MMF image transmission that effectively addresses these challenges and maintains accurate performance in practical applications. By inserting a few reference image frames into the transmitted image sequence and leveraging a fast-adapt network training scheme, our framework adapts to physical channel variations and enables online model updates for continuous transmission. In experiments on unstable 100 m MMFs, we demonstrate transmission accuracy exceeding 92% over hours, with a pilot frame overhead of around 2%. Our fast-adapt learning scheme requires training less than 2% of the network parameters and reduces computation time by 70% compared to conventional tuning approaches. Additionally, we propose two pilot-insertion strategies and compare their applicability across a wide range of scenarios, including continuous transmission, burst transmission, and transmission after fiber re-plugging. The proposed spatial pilot-aided fast-adapt framework opens up the possibility of MMF spatial transmission in practical, complex applications.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical fiber has long been a common medium in communication, among which multi-mode fiber (MMF) is of particular interest in the field of image transmission for its ability to transmit multiple spatial modes in parallel [1,2]. However, due to modal dispersion and mode coupling, MMF transmission produces spatially scattered images, which raises an urgent demand for image reconstruction methods to restore the effective information. A variety of methods for information transmission through MMF have been developed, either for light focusing or for image projection, including phase conjugation [3–7] using knowledge of the phase recorded at the distal end, and calibration of the transmission matrix (TM) [8–11] of the MMF channel to restore the image in reverse. Iterative optimization algorithms can also generate desired images by optimizing the phase of the input field in the MMF [12–15]. Apart from the ever-present scattering characteristic, research has shown that MMF is a highly variable and random channel subject to environmental changes such as temperature fluctuation and geometric deformation, thus displaying a time-varying characteristic. These disturbances in the MMF system make it difficult to apply the methods above in unstable conditions. For example, in commonly used TM methods, recalibration is necessary before each acquisition after a long time interval [16].

The last decade has witnessed rapid advancements of deep learning in ill-posed image reconstruction problems [17]. Recently, researchers have developed neural networks to restore images from scattering MMF media [18–22]. It has been reported that a neural network can generalize over different channel states when a 35 cm MMF is subject to constantly changing characteristics [23]. Long-term transmission over one week through a 1 m MMF has been demonstrated with two different neural networks [24]. However, most MMFs used in the image transmission systems reported above are shorter than 10 m, and increasing the fiber length aggravates the dispersion and time-varying characteristics of the MMF channel, which complicates image reconstruction. To address this problem, Fan et al. [25] proposed a semi-supervised learning approach for MMFs up to 1 km, achieving nearly errorless image transmission over a 200-second period. However, that approach assumes the model is valid only within a short time interval, so it requires continuous data acquisition to accommodate gradual system drift. In addition, the corresponding models need to be trained from scratch each time, which is time-consuming and restricts the possibility of online transmission.

In digital signal transmission through single-mode fibers (SMFs), the phase of each subcarrier shifts over time, so the channel frequency offset must be estimated with periodic phase references. The most common method is to calibrate the transmission characteristics of the system with a large number of data pairs (i.e., a preamble) at the initial stage, and then insert reference signals with known labels (i.e., pilots) at intervals during the actual transmission to assist dynamic channel correction. This pilot-aided approach has been widely used in coherent passive optical networks (CPONs) to improve the transmission stability of digital signals [26–30].

Here we introduce the concept of the spatial pilot to address spatial information transmission over dynamic long MMF channels. Distributed spatial pilot frames are inserted into the transmitted data frame sequence, serving as references for regular recalibration of the transmission channel. To facilitate online transmission, we propose a fast-adapt learning strategy for the pilot-aided framework in MMF, which freezes 98% of the total parameters and quickly updates the network model using a small number of pilot frames. Additionally, two different pilot frame insertion schemes are designed for different transmission conditions. To demonstrate the framework, we pre-train a neural network on a large dataset collected through a 100 m MMF over a 15-hour span, and achieve >92% accuracy in spatial transmission with a pilot frame portion of less than 1.8%. The fast-adapt strategy yields a 70% convergence speed-up compared with the traditional fine-tuning approach, supporting online network updates. Transmissions under various scenarios, including long-term continuous transmission, burst transmission, and transmission after fiber re-plugging, are demonstrated with the two pilot-insertion strategies, implying good practical applicability.

2. Experimental setup

The experimental setup of MMF spatial transmission is shown in Fig. 1. The continuous-wave laser beam at wavelength 561 nm is expanded by a pair of achromatic lenses L1 and L2 (${f_1}$=10 mm, ${f_2}$=100 mm) and projected onto a Digital Micromirror Device (DMD, ViALUX V-7001). The DMD modulates the amplitude distribution of the light beam by switching the micromirrors “ON” or “OFF”, corresponding to input binary values “1” or “0” at each pixel. We select the MNIST dataset for the image transmission experiment. After resizing to 32 ${\times} $ 32 pixels, the images are binarized and loaded onto the DMD. Each transmission channel (corresponding to one pixel in the patterns) occupies 12 ${\times} $ 12 DMD micromirror pixels, resulting in an effective DMD area of 384 ${\times} $ 384 pixels. The incident laser beam carrying spatially encoded information is then coupled into an MMF (SI, 0.22 NA, ∅︀=200µm) using an objective lens (Obj1, Nikon, 20X, 0.25 NA).
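The per-pixel micromirror mapping described above can be sketched in a few lines of NumPy. This is a hypothetical helper (the actual DMD driver code is not given in the paper): each pixel of a binarized 32 × 32 image is expanded to a 12 × 12 block of micromirrors, yielding the 384 × 384 effective DMD frame.

```python
import numpy as np

def to_dmd_frame(img32, block=12, thresh=0.5):
    """Binarize a 32x32 grayscale image (values in [0, 1]) and expand each
    pixel to a block x block patch of DMD micromirrors ("ON"=1, "OFF"=0)."""
    binary = (img32 >= thresh).astype(np.uint8)                # 32x32 binary pattern
    return np.kron(binary, np.ones((block, block), np.uint8))  # 384x384 mirror map

rng = np.random.default_rng(0)
frame = to_dmd_frame(rng.random((32, 32)))
print(frame.shape)  # (384, 384)
```

`np.kron` replicates each binary pixel over its micromirror block, so every 12 × 12 patch of the output is constant.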

Fig. 1. The experimental system of spatial transmission through unstable MMF.

The incident light field propagates through the scattering MMF channel and suffers strong spatial-polarization mode distortion and coupling. The output speckle images at the distal end of the fiber are imaged by another objective lens (Obj2, Nikon, 20X, 0.25 NA) onto a monochromatic CMOS camera (MER2-230-167U3M). The changing speckles shown in the top right corner of Fig. 1 are output images recorded at different time points using the same input pattern, illustrating the time-varying characteristic of the MMF transmission system.

To quantify the time-varying characteristic of the MMF system for further experiments, we conducted a stability test. Stability over time is measured as the SSIM between the current speckle pattern and an initial speckle pattern recorded with the same DMD-modulated input, as shown in the lower right corner of Fig. 1. After about 100 s, the SSIM has dropped to 0.6, showing a non-negligible degree of system drift in the 100 m fiber.
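As a rough illustration of such a stability metric, a single-window SSIM can be computed in NumPy using the standard constants $C_1=(0.01L)^2$ and $C_2=(0.03L)^2$ with dynamic range $L=1$. The paper does not specify its SSIM implementation, which is likely a windowed variant (e.g., scikit-image's `structural_similarity`); the names below are illustrative.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two images scaled to [0, 1]."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # initial reference speckle
drifted = 0.7 * ref + 0.3 * rng.random((64, 64))    # later, drifted frame
print(global_ssim(ref, ref))      # 1.0 for identical frames
print(global_ssim(ref, drifted))  # < 1.0 as the channel drifts
```

Tracking this value against the initial reference frame over time reproduces the kind of decay curve shown in Fig. 1.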

3. Principle

3.1 Spatial pilot-aided fast-adapt framework

As shown in Fig. 2, inspired by the concept of the pilot in SMF data transmission, we propose a spatial pilot-aided fast-adapt framework that introduces spatial pilots into MMF spatial information transmission. A neural network is leveraged to model the inverse process of scattering spatial transmission. We denote the pre-train neural network model as ${M_0}$, which is initialized by training on the preamble data. The preamble is composed of a large number of known data pairs, aiming to cover as many states of the MMF channel as possible so as to calibrate the general MMF channel.

Fig. 2. Schematic illustration of image reconstruction with the spatial pilot-aided fast-adapt framework.

In follow-up data transmission, the image data stream is divided into consecutive sequences of data frames of fixed length, at the beginning of which paired pilot images ${P_i}$ are inserted. To characterize the high-dimensional transmission model of the long MMF channel, the spatial pilots are a sequence of known images of the same category and pixel size as the transmitted data. The paired pilot images in each data sequence attach knowledge of the current channel states to the pre-train model ${M_0}$, generating a series of tuned models ${M_1},\; {M_2},\; {M_3},\; \ldots ,\; {M_N}$ that track the transient transmission characteristics.

During the generation of tuned models, a fast-adapt strategy is applied in which only a few parameters of ${M_0}$ are trainable while the remaining parameters are frozen at their original values. In this case, ${M_t},\; t \ge 1$ can be written as ${M_t}({{u_t},{v_0}} )$, in which ${v_0}$ represents the frozen parameters and ${u_t}$ represents the remaining parameters that vary with the channel states. All tuned models are derived from ${M_0}$ and do not depend on the transmission model of the previous state, which enables fast and direct tracking of the channel model without constant observation of state changes. Benefiting from the fast-adapt strategy, the tuning time of ${M_0}$ is greatly reduced to the order of tens of seconds, which makes it possible to tune ${M_0}$ online while transmitting data.
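The freezing idea can be sketched as a toy parameter update in NumPy (hypothetical names; in a TensorFlow/Keras implementation the same effect is obtained by setting `layer.trainable = False` before compiling). Only the small trainable subset ${u_t}$ is moved by the gradient step, while ${v_0}$ stays fixed.

```python
import numpy as np

def fast_adapt_step(params, grads, trainable=("u",), lr=6e-4):
    """One gradient step that updates only the small trainable subset u_t,
    leaving the frozen parameters v_0 untouched (~98% of the model)."""
    return {k: (v - lr * grads[k]) if k in trainable else v
            for k, v in params.items()}

# Toy illustration: "v" plays the role of the frozen first dense layer.
M0 = {"u": np.ones(4), "v": np.ones(1000)}
grads = {"u": np.full(4, 0.5), "v": np.full(1000, 0.5)}
M1 = fast_adapt_step(M0, grads)   # M1 derived directly from M0, not from M_{t-1}
```

Because each ${M_t}$ is derived from ${M_0}$ rather than from ${M_{t-1}}$, a stale tuned model never propagates its errors forward.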

3.2 Network architecture and implementation details

We illustrate the architecture of the neural network and its structural parameters in Fig. 3. The input layer feeds a speckle of 150 × 150 pixels into the network, followed by three convolutional blocks, each consisting of one convolutional layer, batch normalization (BN), and rectified linear unit (ReLU) activation. The three convolutional layers share the same 3 × 3 kernel size, which helps abstract the relevance implied in the input data by extracting features. A max pooling layer is applied after the first convolutional block to decrease computation and balance the parameter distribution. Two dense layers combine all the image features learned by the previous layers to make a general prediction of the object.

Fig. 3. Architecture diagram of the CNN model for image reconstruction. The specific values are: h = w = 150, c = 32, f1 = 90 × 90, f2 = 32 × 32.

All parameters join the pre-training process. In the fine-tuning process, we freeze the first fully-connected layer of the network, which holds 98% of the total parameters, thus decreasing the training time to facilitate online transmission. Binary cross entropy is selected as the loss function. In addition, AdamW optimizers with learning rates of $5 \times \textrm{1}{\textrm{0}^{\textrm{ - 4}}}$ and $6 \times \textrm{1}{\textrm{0}^{\textrm{ - 4}}}$ are used for pre-training and fine-tuning with paired pilot frames, respectively. The weight decay is set to $\textrm{1}{\textrm{0}^{\textrm{ - 4}}}$. Based on the convergence of the validation losses, training was stopped at 50 epochs. All models were implemented in TensorFlow on a workstation equipped with an NVIDIA RTX 3080 graphics processing unit (GPU).

4. Results

4.1 Performance test of the pre-train model

To begin with, we train the pre-train model ${M_0}$ on long-term measured data and verify its reconstruction performance. To obtain a general channel model covering as many channel states as possible, we apply rotation augmentation to the MNIST test set to expand the data size from 10000 to 50000, and project the augmented dataset onto the DMD at 3 frames per second for 15 hours. Before pre-training, 500 data pairs are randomly set aside from the dataset for performance testing. After training the pre-train model, we measure the reconstruction accuracy on the 500 test images. We choose the pixel-wise prediction accuracy as the image quality evaluation metric, defined as the percentage of correctly predicted pixels within one 32 ${\times} $ 32-pixel input image. Accuracies of the 500 reconstructed images are plotted in Fig. 4(a) with a mean value of 93.6%. Examples of reconstructed images are presented in Fig. 4(b), indicating that the pre-train model performs well on unseen data acquired within the period over which the training data were collected.
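The pixel-wise accuracy metric defined above is straightforward to express in NumPy (a hypothetical helper, assuming binary predictions and targets):

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of correctly predicted pixels in a binary 32x32 image."""
    return float(np.mean(pred.astype(bool) == target.astype(bool)))

target = np.zeros((32, 32), np.uint8)
pred = target.copy()
pred[0, :8] = 1                      # 8 wrong pixels out of 1024
print(pixel_accuracy(pred, target))  # 0.9921875
```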

Fig. 4. Evaluation on the pre-train model. (a) Accuracies of 500 reconstructed images obtained from the pre-train model test. (b) Example reconstructed images in (a). (c) Evaluation on the pre-train model without model fine-tuning. (d) Reconstructed images at five timepoints showing deteriorating performance.

We further applied the pre-train model to recover newly collected data starting 30 seconds after the pre-training data collection. As shown in Fig. 4(c), the reconstruction accuracy using the fixed pre-train model ${M_0}$ slowly declines within 30 min after the pre-training data collection. After about 15 minutes, the reconstruction accuracy has dropped to 60%, at which point the reconstructed images in Fig. 4(d) are already indistinguishable as handwritten digits. This suggests that a single fixed model does not work well on long optical fibers and that regular model updates are necessary for long-term transmission.

4.2 Stable transmission with spatial pilot-aided strategy

Our proposed spatial-pilot framework provides references for supervised network tuning, allowing robust and accurate image transmission through varying MMFs over a long time. Here, we conduct a comparison experiment of transmission with and without spatial pilots to validate the feasibility of the pilot-aided strategy in spatial information transmission. We chose the dataset acquired immediately after the pre-train dataset collection in Fig. 4(c) for the stable transmission test; it is divided into several 400 s test datasets, in each of which 40000 patterns from the transmission set are loaded onto the DMD at a frame rate of 100 Hz. In each 400 s dataset, 2% of the 40000 image frames at the beginning are selected as labeled pilot frames for fine-tuning the pre-train model. Figure 5(a) illustrates the transmission accuracy using the pre-train model ${M_0}$ and the pilot-aided fine-tuned models ${M_1}$ to ${M_5}$ over the whole time period. The curve is smoothed with a window size of 50. With spatial pilots, the transmission accuracy curve eliminates the original downward trend and maintains a mean accuracy of 92.3%. The reconstructed images of ${M_0}$ and the tuned models show an obvious quality gap, as shown in Fig. 5(b): images transmitted with pilots have only a few error pixels, while images transmitted without pilots become indistinguishable over time. Although ${M_0}$ has generalized over a variety of channel states, it cannot work well for transmission long after the pre-train dataset collection. With the help of spatial pilots, the transmission model adapts to the varying transmission characteristics, thus maintaining high accuracy throughout the transmission period.

Fig. 5. (a) Comparison of transmission accuracy using the pre-train and pilot-aided fine-tuned models. (b) Example reconstructed images in (a) after 800 s.

4.3 Evaluation on accuracy and time cost of the fast-adapt strategy

When regular model updates are unavoidable, the fast-adapt strategy shows its advantages, mainly in reconstruction accuracy and greatly shortened training time. To quantify the advantages of our proposed fast-adapt approach, we measure the reconstruction accuracy and time cost of network training with different update approaches using the same pilot dataset. Two baseline training approaches are compared: the first retrains the network model from scratch with the pilot data; the other fine-tunes all the network parameters.

We experiment on three training datasets with various pilot ratios ${\textrm{N}_\textrm{t}}\textrm{/N}$, where N is the total number of testing images and ${\textrm{N}_\textrm{t}}$ is the number of data pairs inserted as pilots. After 50 epochs, we evaluate their performance on the test dataset. Figure 6(a) shows the reconstruction accuracy of the three models with different ${\textrm{N}_\textrm{t}}\textrm{/N}$ ratios over time. The accuracy results over the total 400 s are divided into five groups corresponding to five 80 s periods. For all ${\textrm{N}_\textrm{t}}\textrm{/N}$, our fast-adapt strategy has the highest transmission accuracy, slightly higher than the model trained from scratch, while the accuracy of the model fine-tuned without frozen layers is 0.5%-1.7% lower than the two approaches above. For example, in the 400 s test with ${\textrm{N}_\textrm{t}}{/\textrm{N}\; = \; 2\%}$, the mean reconstruction accuracies of the three models are 92.8%, 92.7%, and 92.2%, respectively. Figures 6(b)-(c) display images predicted at 150 s and 300 s by the three different models when ${\textrm{N}_\textrm{t}}{/\textrm{N}\; = \; 2\%}$, from which we can see that our fast-adapt strategy better preserves pixel-level details (e.g., the digit ‘5’ reconstructed at T = 300 s). When the proportion of training data increases, the reconstruction accuracy of each model improves to a similar degree.

Fig. 6. (a) Reconstruction accuracy columns of different tuning strategies in a single cycle of 400 s with ${\textrm{N}_\textrm{t}}\textrm{/N}$ = 1.8%, 2.0%, 2.5%. (b) The reconstructed images using the three update models at T1 = 150 s. (c) The reconstructed images using the three update models at T2 = 300 s.

Our strategy uses only ∼8.5 × $\textrm{1}{\textrm{0}^\textrm{6}}$ trainable parameters, about 2% of the total parameters of the neural network, bringing the further advantage of reduced model adaptation time. As presented in Fig. 7, the training times required for one time cycle with 1.8%-2.5% spatial pilots are recorded and compared. Our freezing strategy achieves more than 70% computation time reduction under the same configuration, compared to the training-from-scratch and no-frozen-layer approaches. This fast adaptation guarantees online network tuning to support long-term continuous transmission.

Fig. 7. Comparison of training time using different tuning strategies at three pilot ratios.

4.4 Analysis on different pilot-insertion strategies

In the experiments above, pilot frames are inserted only at the beginning of the transmitted data sequence. For practical long-term transmission, multiple groups of spatial pilots will be inserted at intervals. Next, we design and analyze two strategies for pilot distribution, whose implementation details are shown in Fig. 8(a). The naive one inserts all spatial pilots within a time cycle at the beginning of the data frame, termed the head-first pilot insertion strategy (P_HF). The other strategy inserts discrete single pilot frames equally spaced within the time cycle, termed interleaved pilot insertion (P_IL). Both strategies fine-tune only once within a single cycle and share the same tuning time given the same type and amount of data. Because P_HF inserts pilots at the beginning of the cycle, its tuning process begins immediately after paired data acquisition. In contrast, the tuning process of P_IL has to wait until all data throughout the whole cycle are acquired.
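Under the stated assumptions (one fine-tune per cycle, pilots either bunched at the head or evenly spaced), the two insertion patterns can be sketched as index generators. This is a hypothetical helper; the frame counts are illustrative:

```python
def pilot_indices(n_frames, n_pilots, mode):
    """Positions of pilot frames within one transmission cycle.
    mode 'P_HF': all pilots at the head; 'P_IL': evenly interleaved."""
    if mode == "P_HF":
        return list(range(n_pilots))
    if mode == "P_IL":
        spacing = n_frames // n_pilots
        return [i * spacing for i in range(n_pilots)]
    raise ValueError(f"unknown mode: {mode}")

print(pilot_indices(1000, 4, "P_HF"))  # [0, 1, 2, 3]
print(pilot_indices(1000, 4, "P_IL"))  # [0, 250, 500, 750]
```

The index lists make the latency trade-off concrete: P_HF's pilots are all available at the start of the cycle, while P_IL's last pilot arrives near the end, delaying the tuning step.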

Fig. 8. (a) Schematic diagram of the head-first and interleaved pilot insertion methods. (b) Reconstruction accuracy of the two insertion methods in a single cycle of 400 s.

We use the same dataset as in Section 4.2, with 2% of image pairs selected as pilots to fine-tune the pre-train model and the remaining data serving as the test portion. Figure 8(b) illustrates the image reconstruction accuracy of P_HF and P_IL over 400 s. Their mean accuracies and mean absolute deviations (MADs) are also plotted. P_IL and P_HF have similar average accuracies but display different curve trends. Because P_HF obtains its pilot frames for fine-tuning at the beginning of the cycle, it tends to perform better at the beginning of the test set; however, its subsequent results fluctuate, as is clear from the trend of the curve and its MAD. In contrast, the training data of P_IL cover the whole test period, which makes the reconstruction results more stable over the whole transmission period. Based on these results, P_HF is applicable to scenarios with more frequent updates, while P_IL suits scenarios with high requirements for transmission stability.

4.5 Long-term continuous transmission

To further prove that our pilot-aided fast-adapt framework is applicable in a variety of situations, we conducted a series of long-term transmission tests under various scenarios with ${\textrm{N}_\textrm{t}}\textrm{/N}$=2%. First, we demonstrated long-term continuous transmission lasting over half an hour in the unstable 100 m MMF. The 40000 digit patterns in the test set are loaded onto the DMD at a frame rate of 100 Hz and repeated 5 times. Both insertion methods, P_HF and P_IL, are tested. Figure 9(a) shows the mean transmission accuracies and MADs of 5 update cycles over 2000 s. Since the experiment is conducted several days after the pre-training data collection, the pre-train model yields poor reconstruction accuracy over the period. In comparison, with spatial-pilot-aided network adaptation, the long-term continuous transmission test performs as well as a single update cycle, with a mean accuracy up to 92.6% over long time periods. Figure 9(b) shows the reconstructed images of P_HF and P_IL at five time points, where all the handwritten digits have visually high reconstruction quality.

Fig. 9. Experiment of continuous image transmission over 30 min. (a) Illustration of pilot-inserted continuous transmission and the reconstruction accuracy. (b) Reconstructed images at five time points.

4.6 Long-term burst mode transmission

Burst transmission is a common scenario in practical applications, such as time-division multiple access (TDMA) systems in the uplink of coherent access networks. Next, we validate the feasibility of our spatial pilot approach in burst transmission mode. Different from the continuous test above, we deliberately set a blank interval of 400 s without transmitting data between every two transmission cycles, as shown in the top inset of Fig. 10(a), forming a transmission dataset of over 1 h. The mean transmission accuracies and MADs of 5 transmission cycles are also illustrated in Fig. 10(a), showing performance similar to the continuous condition. A high average accuracy of 92.7% is preserved throughout the transmission. Figure 10(b) displays the reconstructed images of P_HF and P_IL at five time points with most pixel-level details restored.

Fig. 10. Experiment of burst image transmission over 1 h. (a) Illustration of pilot-inserted burst transmission and the reconstruction accuracy. (b) Reconstructed images at five time points.

4.7 Transmission after dramatic configuration change

Another common scenario in optical transmission is that MMF channels are dramatically changed under different configurations, for example when the fiber is removed and re-plugged, which is often involved in system construction and testing; in actual communication scenarios, optical fibers must also be periodically removed from the system for cleaning and other maintenance. To assess the effectiveness of our spatial pilot-aided fast-adapt strategy in such scenarios, we tested transmission after fiber re-plugging. Figure 11(a) illustrates the dataset composition over 30 min, where the removal and re-plugging operation takes about 5 s between every two update cycles, and the spatial-pilot-aided network tuning is performed after each round of fiber re-plugging. The mean accuracies and MADs of the 5 update cycles are plotted in Fig. 11(b), and Fig. 11(c) displays the reconstructed images of P_HF and P_IL at five time points with most pixel-level details restored.

Fig. 11. Experiment of robust image transmission with fiber re-plugging over 30 min. (a) Illustration of pilot-inserted resumed transmission. (b) Reconstruction accuracy over time. (c) Reconstructed images at five time points.

5. Conclusion

This work provides a deep learning-based general framework for stable transmission of spatial information over long MMFs. The proof-of-concept experiment demonstrated no obvious performance degradation of the proposed framework in overcoming the high instability of a specific MMF channel (∅ = 200 µm, 100 m) over 1 h. In principle, our approach has no upper limit on application time. Moreover, when a longer MMF is used, a higher pilot ratio for tuning or slightly more frequent updates may be required, but the overall spatial pilot-aided fast-adapt framework remains valid.

In summary, we propose a pilot-aided learning framework for robust MMF image transmission. A fast-adapt strategy that freezes 98% of the total parameters is developed to reduce the network-tuning time by 70%, hence facilitating online transmission. We experimentally validate the fast-adapt strategy on handwritten digit transmission through a 100 m MMF, showing that only ∼1.8% pilot frames are adequate for model updating to maintain stable transmission over hours. We then explore two pilot insertion methods, head-first insertion and interleaved insertion, and discuss their applicable transmission conditions. Finally, a series of long-term transmission tests under different scenarios, including continuous transmission, burst transmission, and transmission after configuration change, are conducted to validate the universality and applicability of our strategy.

Funding

Shanghai Science and Technology Development Foundation (2021SHZDZX0103); National Natural Science Foundation of China (62231018).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper can be obtained from the authors upon reasonable request.

References

1. Y. Liu and L. Wei, “Low-cost high-sensitivity strain and temperature sensing using graded-index multimode fibers,” Appl. Opt. 46(13), 2516–2519 (2007). [CrossRef]  

2. Y. Choi, “Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber,” Phys. Rev. Lett. 109(20), 203901 (2012). [CrossRef]  

3. M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express 18(4), 3444–3455 (2010). [CrossRef]  

4. C. L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18(20), 20723–20731 (2010). [CrossRef]  

5. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3(1), 928 (2012). [CrossRef]  

6. T. R. Hillman, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. 3(1), 1909 (2013). [CrossRef]  

7. D. Wang, E. H. Zhou, J. Brake, H. Ruan, M. Jang, and C. Yang, “Focusing through dynamic tissue with millisecond digital optical phase conjugation,” Optica 2(8), 728–735 (2015). [CrossRef]  

8. A. M. Caravaca-Aguirre, E. Niv, D. B. Conkey, and R. Piestun, “Real-time resilient focusing through a bending multimode fiber,” Opt. Express 21(10), 12881–12887 (2013). [CrossRef]  

9. R. Y. Gu, R. N. Mahalati, and J. M. Kahn, “Design of flexible multi-mode fiber endoscope,” Opt. Express 23(21), 26905–26918 (2015). [CrossRef]  

10. L. Deng, J. D. Yan, D. S. Elson, and L. Su, “Characterization of an imaging multimode optical fiber using a digital micro-mirror device based single-beam system,” Opt. Express 26(14), 18436–18447 (2018). [CrossRef]  

11. T. R. Zhao, S. Ourselin, T. Vercauteren, and W. F. Xia, “Seeing through multimode fibers with real-valued intensity transmission matrices,” Opt. Express 28(14), 20978–20991 (2020). [CrossRef]  

12. T. Čižmár and K. Dholakia, “Shaping the light transmission through a multimode optical fibre: complex transformation analysis and applications in biophotonics,” Opt. Express 19(20), 18871–18884 (2011). [CrossRef]  

13. R. D. Leonardo and S. Bianchi, “Hologram transmission through multi-mode optical fibers,” Opt. Express 19(1), 247–254 (2011). [CrossRef]  

14. T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3(1), 1027 (2012). [CrossRef]  

15. E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Toward endoscopes with no distal optics: video-rate scanning microscopy through a fiber bundle,” Opt. Lett. 38(5), 609–611 (2013). [CrossRef]  

16. I. T. Leite, S. Turtaev, X. Jiang, M. Siler, A. Cuschieri, P. S. Russell, and T. Cizmar, “Three-dimensional holographic optical manipulation through a high-numerical-aperture soft-glass multimode fibre,” Nat. Photonics 12(1), 33–39 (2018). [CrossRef]  

17. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

18. R. Takagi, R. Horisaki, and J. Tanida, “Object recognition through a multi-mode fiber,” Opt. Rev. 24(2), 117–120 (2017). [CrossRef]  

19. N. Borhani, E. Kakkava, C. Moser, et al., “Learning to see through multimode fibers,” Optica 5(8), 960–966 (2018). [CrossRef]  

20. B. Rahmani, D. Loterie, G. Konstantinou, D. Psaltis, and C. Moser, “Multimode optical fiber transmission with a deep learning network,” Light: Sci. Appl. 7(1), 69 (2018). [CrossRef]  

21. M. Yang, Z. H. Liu, Z. D. Cheng, et al., “Deep hybrid scattering image learning,” J. Phys. D: Appl. Phys. 52(11), 115105 (2019). [CrossRef]  

22. P. Caramazza, O. Moran, R. Murray-Smith, and D. Faccio, “Transmission of natural scene images through a multimode fiber,” Nat. Commun. 10(1), 2029 (2019). [CrossRef]  

23. P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibers,” Opt. Express 27(15), 20241–20258 (2019). [CrossRef]  

24. C. Y. Zhu, E. A. Chan, Y. Wang, et al., “Image reconstruction through a multimode fiber with a simple neural network architecture,” Sci. Rep. 11(1), 896 (2021). [CrossRef]  

25. P. F. Fan, M. Ruddlesden, Y. F. Wang, et al., “Learning enabled continuous transmission of spatially distributed information through multimode fibers,” Laser Photonics Rev. 15(4), 2000348 (2021). [CrossRef]  

26. M. Luise and R. Reggiannini, “Carrier frequency recovery in all-digital modems for burst-mode transmissions,” IEEE Trans. Commun. 43(2/3/4), 1169–1178 (1995). [CrossRef]  

27. U. Mengali and M. Morelli, “Data-aided frequency estimation for burst digital transmission,” IEEE Trans. Commun. 45(1), 23–25 (1997). [CrossRef]  

28. B. C. Thomsen, “Burst mode receiver for 112 Gb/s DP-QPSK with parallel DSP,” Opt. Express 19(26), B770 (2011). [CrossRef]  

29. C. Zhu and N. Kaneda, “Discrete cosine transform based pilot-aided phase noise estimation for high-order QAM coherent optical systems,” in Optical Fiber Communication Conference (OFC) (2017), paper Th4C.1.

30. G. Li, A. Yan, S. Xing, Z. Li, W. Shen, J. Wang, J. Zhang, and N. Chi, “Pilot-aided continuous digital signal processing for multiformat flexible coherent TDM-PON in downstream,” in Optical Fiber Communication Conference (OFC) (2023), paper W1I.3.

Data availability

Data underlying the results presented in this paper can be obtained from the authors upon reasonable request.



Figures (11)

Fig. 1.
Fig. 1. The experimental system of spatial transmission through unstable MMF.
Fig. 2.
Fig. 2. Schematic illustration of image reconstruction with the spatial pilot-aided fast-adapt framework.
Fig. 3.
Fig. 3. Architecture diagram of the CNN model for image reconstruction. The specific values are: h = w = 150, c = 32, f1 = 90 × 90, f2 = 32 × 32.
Fig. 4.
Fig. 4. Evaluation of the pre-train model. (a) Accuracies of 500 reconstructed images obtained from the pre-train model test. (b) Example reconstructed images from (a). (c) Evaluation of the pre-train model without fine-tuning. (d) Reconstructed images at five time points showing deteriorating performance.
Fig. 5.
Fig. 5. (a) Comparison of transmission accuracy between the pre-train model and the pilot-aided fine-tuned model. (b) Example reconstructed images from (a) after 800 s.
Fig. 6.
Fig. 6. (a) Reconstruction accuracy bars of different tuning strategies in a single cycle of 400 s with $N_t/N$ = 1.8%, 2.0%, and 2.5%. (b) Reconstructed images using the three updated models at T1 = 150 s. (c) Reconstructed images using the three updated models at T2 = 300 s.
Fig. 7.
Fig. 7. Comparison of training time using different tuning strategies at three pilot ratios.
Fig. 8.
Fig. 8. (a) Schematic diagram of the headfirst and interleaved pilot-insertion methods. (b) Reconstruction accuracy of the two insertion methods in a single cycle of 400 s.
Fig. 9.
Fig. 9. Experiment of continuous image transmission over 30 min. (a) Illustration of pilot-inserted continuous transmission and the reconstruction accuracy. (b) Reconstructed images at five time points.
Fig. 10.
Fig. 10. Experiment of burst image transmission over 1 h. (a) Illustration of pilot-inserted burst transmission and the reconstruction accuracy. (b) Reconstructed images at five time points.
Fig. 11.
Fig. 11. Experiment of robust image transmission with fiber re-plugging over 30 min. (a) Illustration of pilot-inserted resumed transmission. (b) Reconstruction accuracy over time. (c) Reconstructed images at five time points.