
An experimental demonstration of a soft-failure approach to PMD mitigation in an installed optical link

Open Access

Abstract

We present a field-trial implementation of the soft-failure approach to polarization-mode dispersion (PMD) impairment mitigation, in which information about the PMD of the installed link is utilized by our modified control plane software to make decisions on data routing over available links. This allows us to maintain loss-free end-to-end data service, even at high PMD levels.

©2007 Optical Society of America

1. Introduction

Traditionally, optical networks have treated the physical and control layers as two separate and distinct entities. The fiber links were designed to meet specifications on various physical impairments, which were for the most part independent of the nature of the signals being routed over them. The control plane, on the other hand, could treat the fiber links as static, and made routing decisions without the need for up-to-date information on the physical link. However, modern optical networks require the added flexibility that can be obtained when the control plane makes use of physical-layer information. Reconfigurable optical networks, for example, must be able to quickly adjust the chromatic dispersion as different links are switched in and out of a signal path, in order to minimize impairments [1, 2]. A move to higher bit rates per channel will make the network more sensitive to the time- and wavelength-varying nature of impairments such as polarization-mode dispersion (PMD) [3, 4]. Much effort has been focused on compensation at the physical layer, which would allow one to maintain the separation of the control plane and the physical layer. While significant progress continues to be made on PMD compensators [5, 6, 7, 8, 9, 10], such a solution remains inefficient, in that the compensators must be in place and powered on at all times, although their services may be required for only a small fraction of the time. In addition, the cost of compensating every channel or link can be prohibitive for many applications.

An alternative approach, sometimes dubbed “soft-failure,” is to instead monitor the impairments in the physical layer and to use that information to re-route signals around problem links or wavelengths before errors occur [11, 12]. This approach effectively bridges the gap between the physical layer and the control plane. Rather than relying on dedicated compensators, network channels or links can be removed from or returned to service based on their current impairment level. Such an approach allows for more efficient use of network resources and can generally be made more cost-effective than the use of compensators, as it requires only the monitor subsystem, which would typically be a small part of such a compensator.

In this work, we experimentally demonstrate a field-trial implementation in which such a soft-failure approach is taken to deal with changing levels of PMD in a physical link. We utilize an RF tone monitor to collect data on the PMD level of the fiber link. This information is then passed to the modified control plane software and used to decide whether to route packets over a 10 Gb/s channel. When the PMD becomes too high to ensure reliable service on the 10 Gb/s channel, traffic is instead routed over a back-up 1 Gb/s channel. As a result, we are able to maintain end-to-end data service over the link even in the presence of variable and severe PMD.

2. Network testbed

We performed our demonstration of the integration of a PMD sensor with network control software on an installed fiber network connecting College Park, MD and Baltimore, MD. Our implementation uses standard Generalized Multiprotocol Label Switching (GMPLS) Open Shortest Path First with Traffic Engineering (OSPF-TE) mechanisms [13], through a modified version of the DRAGON (Dynamic Resource Allocation via GMPLS Optical Networks) control plane software [14]. As a result, only those network nodes that have sensors installed need to have their software extended in order for the entire existing network to be able to use the sensor information.

Our demonstration used two optical channels, operating at data rates of 1 and 10 Gb/s, which traversed the same fiber connecting two network switching nodes. Switching nodes in the DRAGON network are called Virtual Link State Routers (VLSRs), because a PC-based protocol engine running the DRAGON software controls a data switch via the Simple Network Management Protocol (SNMP) in order to manage the data-plane connection. We made one of these VLSRs “PMD-aware” by adding an RF tone PMD sensor, described below, to the 10 Gb/s link. Software components were added to monitor the sensor output, and the control plane software was extended so that the traffic-engineering metrics on the monitored link could be adjusted when a chosen threshold was crossed. When the sensor initiates a metric update, the OSPF-TE instance connected to the sensor issues OSPF-TE link state advertisements (LSAs) throughout the network. As a result, a circuit set up from any source to any destination can take the health of the physical link into account.
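
This mechanism can be summarized with the minimal sketch below (illustrative only; the field names and the flood_lsa stand-in do not correspond to the DRAGON source code or to actual OSPF-TE TLV names):

    from dataclasses import dataclass

    @dataclass
    class TELink:
        link_id: str               # identifier of the monitored link
        cost: int                  # administrative link cost metric
        unreserved_bw_gbps: float  # advertised unreserved bandwidth

    def flood_lsa(link: TELink) -> None:
        # Stand-in for originating an OSPF-TE link state advertisement;
        # here we simply log the newly advertised value.
        print(f"LSA update: {link.link_id} unreserved BW = {link.unreserved_bw_gbps} Gb/s")

    def on_sensor_update(link: TELink, link_usable: bool, nominal_bw_gbps: float) -> None:
        """Re-advertise the monitored link when the sensor crosses a threshold."""
        link.unreserved_bw_gbps = nominal_bw_gbps if link_usable else 0.0
        flood_lsa(link)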

In order to allow unmodified DRAGON nodes to participate in this test, we chose to adjust the existing metric termed “unreserved bandwidth.” Under normal, low-PMD conditions, both the 1 and 10 Gb/s links are set to have unreserved bandwidth available for use. In this case, the control plane routing software compares the link cost metrics of the two paths and chooses the one with the lowest total cost. We set the link cost metrics of our 1 and 10 Gb/s links to 20 and 10, respectively, so that the 10 Gb/s link is preferentially chosen under these conditions. However, when the sensor output drops below a set threshold level, indicating that the 10 Gb/s link is unsuitable, the unreserved bandwidth of that link is set to 0 (no bandwidth available for use). The DRAGON path computation component will discover the changed link metric and no longer include the 10 Gb/s link in its path computation. In our case, traffic is then forced onto the 1 Gb/s link until the 10 Gb/s link is usable again.
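
The resulting selection rule amounts to the following (a sketch of the behavior described above, not the actual DRAGON path computation code):

    def choose_link(links):
        """Pick the usable link with the lowest cost; links whose unreserved
        bandwidth has been set to 0 are excluded from path computation."""
        candidates = [l for l in links if l["unreserved_bw_gbps"] > 0]
        return min(candidates, key=lambda l: l["cost"]) if candidates else None

    links = [
        {"name": "10G", "cost": 10, "unreserved_bw_gbps": 10.0},
        {"name": "1G",  "cost": 20, "unreserved_bw_gbps": 1.0},
    ]
    assert choose_link(links)["name"] == "10G"   # normal, low-PMD conditions
    links[0]["unreserved_bw_gbps"] = 0.0         # sensor drops below the lower threshold
    assert choose_link(links)["name"] == "1G"    # traffic falls back to the 1 Gb/s link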

A simplified schematic of our experimental testbed is shown in Fig. 1. The VLSR switching nodes were located at our laboratories in College Park, MD and in Baltimore, MD. In our experiment, a VLSR was composed of a computer, a Raptor ER-1010 Ethernet switch, and a Movaz RayExpress optical add-drop multiplexer (OADM), each handling its respective network layer. The OADMs included the optical transponders (XPDR), the requisite MUX/DEMUX filters, and an erbium-doped fiber amplifier (EDFA) on the receiver side to compensate for the fiber loss. Two optical channels on the ITU 100 GHz grid [15] were utilized for our tests: Ch. 31 (1552.52 nm), which operated at 10 Gb/s, and Ch. 35 (1549.32 nm), which operated at 1 Gb/s. Both channels utilized the non-return-to-zero (NRZ), on-off keyed (OOK) format.
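
For reference, the channel numbers map onto wavelengths as follows (a small sketch assuming the common C-band convention of 190.0 THz + 0.1 THz × channel number; ITU-T G.694.1 itself defines the grid relative to 193.1 THz):

    C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

    def channel_to_wavelength_nm(channel: int) -> float:
        """Convert a 100 GHz-grid C-band channel number to its vacuum wavelength."""
        freq_hz = (190.0 + 0.1 * channel) * 1e12
        return C_VACUUM / freq_hz * 1e9

    print(round(channel_to_wavelength_nm(31), 2))  # 1552.52 nm (10 Gb/s channel)
    print(round(channel_to_wavelength_nm(35), 2))  # 1549.32 nm (1 Gb/s channel)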

The two nodes were connected by an installed 52 km single-mode fiber pair. We performed our measurements on the southbound (from Baltimore to College Park) fiber path, which had a total loss of approximately 20 dB, and total accumulated dispersion of ~875 ps/nm near 1550 nm. Because the accumulated dispersion was well within the limits of the transponders, no dispersion compensation was used. In order to have more control over the level of system impairment due to PMD, we placed a commercial DGD emulator (General Photonics DynaDelay 90) in the southbound fiber path at the output of the Baltimore node OADM, as indicated in Fig. 1. A programmable polarization controller was also inserted before the emulator, in order to vary the signal state of polarization (SOP) at the emulator input. We note that both the 1 Gb/s and 10 Gb/s signal channels pass through the emulator setup, so that both experience a similar physical fiber link. To compensate for the loss of these components, we added an additional EDFA at the emulator output.
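
As a rough consistency check (assuming a typical standard single-mode fiber dispersion coefficient of about 17 ps/(nm·km) near 1550 nm, a value supplied here and not stated above), the quoted accumulated dispersion is in line with the link length:

    D_acc ≈ 17 ps/(nm·km) × 52 km ≈ 880 ps/nm,

consistent with the measured value of ~875 ps/nm.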


Fig. 1. Schematic of the experimental network testbed, consisting of an installed fiber link between Baltimore and College Park, Maryland. Signal routing was controlled using the DRAGON User Interface (UI), while end-to-end packet loss was measured using the NUTTCP test program. ES: End Station; VLSR: Virtual Link State Router; SNMP: Simple Network Management Protocol; OADM: Optical Add-Drop Multiplexer; XPDR: Optical Transponder; EDFA: Erbium doped fiber amplifier; SMF: single mode fiber



Fig. 2. (a) Schematic of the PMD sensor, based on detection of the half-bit-rate RF tone. PD: Photodiode; MPD: Microwave power detector. (b) The measured back-to-back sensor response as a function of the DGD emulator setting. For each setting, 100 random input SOPs were used; the worst case output is highlighted as the blue dots.


For the PMD monitor, we utilized a simple scheme based on detection of the half-bit-rate (~5 GHz) RF tone from the signal spectrum [16]. The PMD levels used here are kept low enough that they do not impair the 1 Gb/s link, so for this proof-of-principle experiment we monitored only the 10 Gb/s link. A schematic of our sensor is shown in Fig. 2(a). The 10 Gb/s optical signal was detected using a standard photodetector (Agilent 83440C) and then amplified using a wide-bandwidth (2–18 GHz) electrical amplifier, which was followed by a bandpass filter (BPF) with a 250 MHz bandwidth, centered near 5 GHz, to isolate the desired RF tone from the signal spectrum. In order to ensure that no other spectral regions contributed to the sensor output, the BPF had a stopband that extended past 20 GHz. An additional narrow-bandwidth electrical amplifier (4–8 GHz) was used to boost the RF tone at the input to the microwave power detector (Narda 4503A-03). In Fig. 2(b) we show the relation between the sensor output voltage and the DGD level. For this result, we utilized a back-to-back geometry, with the sensor located immediately after the DGD emulator. In this way, we could avoid any effects resulting from signal transmission over the fiber link. For each DGD setting, the sensor output was recorded for 100 random settings of the polarization controller. The worst-case SOP results are indicated by the connected blue points. As expected for the 5 GHz tone, these decrease towards zero output as the DGD increases towards the bit-slot time of 100 ps [16].
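
The expected tone fading can be illustrated with the commonly used first-order PMD model sketched below (for illustration only, not taken from this work; gamma denotes the power split between the two principal states of polarization):

    import math

    def rf_tone_fading(dgd_ps: float, f_ghz: float = 5.0, gamma: float = 0.5) -> float:
        """First-order PMD fading factor of an RF tone at frequency f:
        1 - 4*gamma*(1-gamma)*sin^2(pi*f*DGD). gamma = 0.5 is the worst case."""
        return 1.0 - 4.0 * gamma * (1.0 - gamma) * math.sin(math.pi * f_ghz * dgd_ps * 1e-3) ** 2

    for dgd in (0, 25, 50, 75, 100):
        print(dgd, round(rf_tone_fading(dgd), 3))
    # Worst-case 5 GHz tone power: 1.0, 0.854, 0.5, 0.146, 0.0 -- i.e. it vanishes
    # as the DGD approaches the 100 ps bit slot, consistent with Fig. 2(b).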

We placed the sensor at the College Park VLSR, making it the PMD-aware node on the network. After wavelength demultiplexing, but before detection by the XPDR receiver, the 10 Gb/s channel was split and approximately -15 dBm was tapped off to the input of the sensor subsystem. We connected the output voltage from the sensor to a commercial data acquisition card (Measurement Computing PCI-DAS1002), which we installed in the VLSR computer. The sensor output was polled every 2 s to monitor the PMD level on the link. In principle, the analog voltage output of the sensor could be polled more often, allowing one to respond more rapidly to changing link conditions. However, the current implementation of the control plane software exhibits a latency of roughly 3 s, so polling the sensor more often than once every 2 s did not yield any additional benefit for our experimental setup. The sensor output voltage was compared to a pair of pre-programmed threshold voltages. When the sensor voltage falls below the lower threshold, the control plane takes the 10 Gb/s channel out of service and begins routing traffic over the 1 Gb/s channel. The software continues to monitor the sensor output, and when the level crosses the upper threshold value, the 10 Gb/s channel can be brought back into service.
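
A minimal sketch of this polling and hysteresis logic is given below (read_sensor_volts and set_link_usable are hypothetical stand-ins for the data-acquisition read and the control-plane metric update; the threshold values are those chosen in Section 3):

    import time

    LOWER_V, UPPER_V = 0.198, 0.242   # lower/upper thresholds (volts), from Section 3

    def monitor_loop(read_sensor_volts, set_link_usable, poll_s: float = 2.0) -> None:
        """Poll the sensor every poll_s seconds; remove the 10 Gb/s link from
        service below the lower threshold and restore it only after the reading
        recovers above the upper threshold (two-threshold hysteresis)."""
        usable = True
        while True:
            v = read_sensor_volts()
            if usable and v < LOWER_V:
                usable = False
                set_link_usable(False)   # control plane sets unreserved bandwidth to 0
            elif not usable and v > UPPER_V:
                usable = True
                set_link_usable(True)    # 10 Gb/s link returned to service
            time.sleep(poll_s)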

In order to quantify the system performance as the DGD is varied, we measured the packet loss experienced between the two end stations ES1 and ES2. We note that, in order to simplify the interpretation of the observed system performance, forward error correction (FEC) was disabled in all cases. For each of the data points, we use the DRAGON user interface (DRAGON-UI) to create an end-to-end circuit (referred to as a label switched path, or LSP) and then send a single Internet Control Message Protocol (ICMP) “echo request,” with a 1 s timeout, to verify connectivity. In the event that no reply is received within the specified time limit (which can occur for high DGD levels on the 10 Gb/s path), connectivity between the end stations has been lost and we therefore record 100% packet loss. Otherwise, we obtain the packet loss using the freely available “NUTTCP” test program [17], a client/server tool that utilizes a Transmission Control Protocol (TCP) control connection. The client (located at ES2) issues a known number of User Datagram Protocol (UDP) packets, and the server (located at ES1) communicates the reception statistics back to the client. In addition, we record a number of other diagnostic metrics, including the sensor output voltage, the available bandwidth for each of the channels, the explicit route object (ERO) used for the established LSP, and the power levels at various points in the OADM architecture. The LSP is then deleted before the next measurement scenario is initiated.
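
The per-point measurement logic can be sketched as follows (illustrative only: the ping syntax is that of the common Linux iputils tool, and the nuttcp option and output handling are assumptions about the installed version rather than a documented recipe):

    import subprocess

    def measure_point(server_ip: str):
        """Return None to indicate 100% packet loss (no ICMP reply within 1 s);
        otherwise return the raw NUTTCP output, from whose summary the UDP
        packet-loss percentage is read."""
        # Step 1: single ICMP echo request with a 1 s timeout.
        ping = subprocess.run(["ping", "-c", "1", "-W", "1", server_ip],
                              stdout=subprocess.DEVNULL)
        if ping.returncode != 0:
            return None  # connectivity lost: record 100% packet loss

        # Step 2: UDP test against a nuttcp server at the far end
        # ("-u" selects UDP mode on the nuttcp versions we are aware of).
        result = subprocess.run(["nuttcp", "-u", server_ip],
                                capture_output=True, text=True)
        return result.stdout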

3. Results and discussion

We first investigated the relation between the sensor reading and the system performance on the 10 Gb/s link, in order to determine the threshold level that should be used for judging the usability of the link. For this measurement, we disabled the software component that allows the sensor output to modify the link metrics, so that the 10 Gb/s path would be used regardless of the sensor voltage level. We measured both the sensor output voltage and the end-to-end packet loss for 9600 different cases, obtained by using 400 random settings of the polarization controller for each of 24 different settings of the DGD emulator between 0 ps and 88 ps. We show the results of this test in Fig. 3. We observe no packet loss for sensor readings above ~200 mV, except for a periodic 1% loss event, an artifact arising from the combination of the NUTTCP UDP implementation used and the end-station operating systems. As this artifact correlates with neither the DGD level nor the sensor reading, it did not impact the results of this demonstration.


Fig. 3. Measured packet loss for the 10 Gb/s path as a function of the PMD sensor voltage. A packet loss of 100% indicates that connectivity between end stations could not be established, as determined by an ICMP echo request test. The dashed vertical lines indicate the upper and lower threshold voltage settings of 242 mV and 198 mV, used in subsequent measurements.


As the DGD increases and the sensor output voltage decreases below 200 mV, the system performance begins to deteriorate, with an increasing rate of packet loss. Sensor output voltages below ~100 mV correspond to DGD emulator settings above 75 ps combined with the worst-case input SOPs to the emulator. In these cases, we cannot establish connectivity between the end stations (as determined by the ICMP echo test described in Section 2) and therefore record 100% packet loss. We note that the received power at the College Park OADM EDFA input fluctuated by at most 0.7 dB over the entire data set, so signal power fluctuations do not account for the observed variation in sensor values and system performance.

The selection of the optimum threshold level for these systems will in general require a trade-off between two competing goals. A higher threshold level reduces the likelihood of a system outage, but increases the probability that a link will be taken out of service unnecessarily [11], leading to an underutilization of some network resources. More complex networks, combining multiple fiber paths and many channels, will require a more complete and long-term characterization, including link PMD statistics, in order to achieve optimal performance. It might also be advantageous for a system to use the sensor reading history to modify the threshold voltage as the network ages. For the basic demonstration network utilized here, however, we found the characterization in Fig. 3 to be sufficient, and selected 198 mV and 242 mV for the lower and upper threshold voltages, respectively; these are indicated by the dashed lines in Fig. 3. These settings were found to be adequate for meeting both of our performance goals: first, the 10 Gb/s channel could be taken out of service soon enough to maintain a packet loss level of less than 1%; and second, unnecessary switching to the back-up 1 Gb/s channel (i.e., switching when the 10 Gb/s channel would still function normally) was prevented.
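
One simple way to derive such a threshold pair from a characterization data set like that of Fig. 3 is sketched below (an illustration of the trade-off, not necessarily the procedure used here): place the lower threshold just above the highest sensor voltage at which unacceptable loss was observed, and the upper threshold a fixed hysteresis margin higher.

    def pick_thresholds(samples, loss_limit_pct=1.0, margin_v=0.044):
        """samples: iterable of (sensor_voltage_V, packet_loss_percent) pairs.
        Returns (lower, upper) threshold voltages."""
        bad_voltages = [v for v, loss in samples if loss >= loss_limit_pct]
        lower = max(bad_voltages) if bad_voltages else 0.0
        return lower, round(lower + margin_v, 3)

    # With data resembling Fig. 3, the highest "bad" reading sits near 0.198 V,
    # so a 44 mV margin reproduces the 198 mV / 242 mV pair used in this work.
    print(pick_thresholds([(0.30, 0.0), (0.25, 0.0), (0.198, 2.5), (0.10, 100.0)]))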

Finally, in order to demonstrate that the overall system performance is improved when the PMD sensor information is utilized, we compare the results for DGD levels of 0 ps and 80 ps, with the switching turned off or on. For each of these four cases, the polarization controller was programmed to follow the same set of adjustments during the elapsed measurement time. In Fig. 4 we plot the sensor voltage (upper trace), packet loss (center trace), and unreserved bandwidths for each channel (lower trace) recorded for emulator settings of (a) 0 ps and (b) 80 ps of DGD. The unreserved bandwidth is normalized such that, for either link, “1” corresponds to the link being available, “0.5” indicates that an LSP has been created using that link, and “0” indicates that the link has been taken out of service. For the measurements in Fig. 4, the sensor’s ability to update the link metrics was disabled. As a result, the 10 Gb/s link is always used, even when the threshold level is crossed. When the emulator is set to 0 ps, as in Fig. 4(a), the system performs well, as expected. The sensor voltage fluctuates, but shows no systematic variation, and the packet loss is 0%, excluding the periodic ~1% loss artifacts described previously. When the DGD is increased to 80 ps in case (b), however, we see a great deal of variation in performance as the polarization is adjusted, and this variation is tracked by the sensor. In the worst cases, the link connectivity fails, resulting in complete packet loss between the end stations.


Fig. 4. Sensor voltages (upper trace), packet losses (center trace), and unreserved bandwidth (BW) for each channel (lower trace), recorded while the polarization controller settings were varied over time. For all measurements, the sensor’s ability to modify the link metrics was disabled. The emulator DGD settings were (a) 0 ps and (b) 80 ps. The polarization controller was adjusted through the same series of points for both cases.


We next enabled the sensor to update the link metrics, and performed the test for the same two cases. These results are shown in Fig. 5. For the case of 0 ps DGD, in Fig. 5(a), the system behaves similarly to that in Fig. 4(a). Because the sensor voltage stays well above the threshold level, traffic is only routed over the 10 Gb/s link, as indicated by the unreserved bandwidths (lower trace). The packet loss is again measured to be 1% or less. However, for the 80 ps DGD emulator setting, shown in Fig. 5(b), the results are quite different. Now, when the sensor crosses the threshold level, the 10 Gb/s link is taken out of service, and the traffic switches to the 1 Gb/s link, as indicated by the changes in the unreserved bandwidths (lower trace). Note that as the sensor output voltage recovers, the system is able to bring the 10 Gb/s link back into service. As a result, although the sensor output follows an almost identical path to that in Fig. 4(b), the end-to-end packet loss is now as good as in the 0 ps DGD cases shown in Figs. 4(a) and 5(a). These results demonstrate that by monitoring the health of the physical-layer link in real time, and utilizing this data in the control plane software, we have been able to improve the end-to-end system performance by eliminating the outages that would have resulted from the severe PMD levels. Finally, we note that for the configuration used here, the response time of our network to changes in the physical layer was dominated by the control plane software: it took from 3 to 5 seconds to change the ERO and complete the transition between the two links after a threshold crossing had occurred. We anticipate that future revisions will allow for a more rapid response to impairment changes, if necessary.


Fig. 5. Sensor voltages (upper trace), packet losses (center trace), and unreserved bandwidth (BW) for each channel (lower trace), recorded while the polarization controller settings were varied over time. For all measurements, the sensor’s ability to modify the link metrics was enabled. The emulator DGD settings were (a) 0 ps and (b) 80 ps. The polarization controller was adjusted through the same series of points for both cases.


While the simple routing scheme used here was adequate for a single-link demonstration, a number of refinements can be made in the future in order to meet more demanding application requirements. Rather than simply setting the unreserved bandwidth to zero, the sensor could instead be used to modify the link weight metric, so that links with smaller PMD levels are preferred during path computation. Future protocols, such as next-generation SONET [18], could be used to groom traffic from an existing link to a backup path based on the reports from the sensor. In addition, our scheme could be extended to multiple links within a network, allowing one to reflect the current impairment levels when computing the best path through a chain of links [19].
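
As an illustration of the first refinement, one possible (purely illustrative, not standardized) mapping from the sensor reading to an advertised link cost is sketched below, where v_ref plays the role of a "clean link" reference voltage:

    def pmd_weighted_cost(base_cost: int, sensor_v: float,
                          v_ref: float = 0.242, penalty: float = 50.0) -> int:
        """Scale the advertised cost by how far the sensor reading has fallen
        below a reference level, so lower-PMD links win during path computation."""
        degradation = max(0.0, (v_ref - sensor_v) / v_ref)  # 0 (clean) .. 1 (severe)
        return int(round(base_cost + penalty * degradation))

    print(pmd_weighted_cost(10, 0.30))  # 10: a clean 10 Gb/s link keeps its base cost
    print(pmd_weighted_cost(10, 0.15))  # 29: a degraded link now costs more than the 1 Gb/s link (20)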

4. Conclusion

We have experimentally demonstrated a soft-failure approach to dealing with PMD impairments over an installed fiber link. Using a simple PMD sensor based on RF tone detection, and extending the control plane software, we were able to make the network aware of the physical health of the link. In this way, we were able to avoid packet loss in end-to-end transmission by taking the failing 10 Gb/s link out of service and switching traffic to a lower data rate channel, maintaining loss-free service even in the presence of severe levels of PMD on the link.

Acknowledgments

The authors would like to thank the following people for their assistance and encouragement: C. Tracy and D. Magorian of Mid-Atlantic Crossroads; J. Zweck, J. Wen, and C. Menyuk of the University of Maryland Baltimore County; X. Yang of the USC Information Sciences Institute; W. Chimiak of the Laboratory for Telecommunication Sciences; and P. Lang and B. Fink of the NASA Goddard Space Flight Center. This work was financially supported by the Laboratory for Telecommunication Sciences, and by the National Science Foundation under grant numbers 0400535 and 0335266.

References and links

1. M. Yagi, S. Tanaka, S. Satomi, S. Ryu, K. Okamura, M. Aoyagi, and S. Asano, “Field Trial of GMPLS triple plane integration for 40 Gbit/s dynamically reconfigurable wavelength path network,” Electron. Lett. 41,492–494 (2005). [CrossRef]  

2. A. S. Lenihan, O. V. Sinkin, B. S. Marks, G. E. Tudury, R. J. Runser, A. Goldman, C. R. Menyuk, and G. M. Carter, “Nonlinear Timing Jitter in an Installed Fiber Network With Balanced Dispersion Compensation,” IEEE Photon. Technol. Lett. 17,1558–1560 (2005). [CrossRef]  

3. M. Karlsson, J. Brentel, and P. A. Andrekson, “Long-Term Measurement of PMD and Polarization Drift in Installed Fibers,” J. Lightwave Technol. 18,941–951 (2000). [CrossRef]  

4. H. Kogelnik, R. M. Jopson, and L. E. Nelson, “Polarization-Mode Dispersion,” in Optical Fiber Telecommunications, Vol. IVB, I. Kaminow and T. Li, Eds., pp. 725–861 (Academic Press, San Diego, CA, 2002).

5. M. Akbulut, A. M. Weiner, and P. J. Miller, “Broadband All-Order Polarization Mode Dispersion Compensation Using Liquid-Crystal Modulator Arrays,” J. Lightwave Technol. 24,251–261 (2006). [CrossRef]  

6. H. Miao and C. Yang, “Feed-Forward Polarization-Mode Dispersion Compensation With Four Fixed Differential Group Delay Elements,” IEEE Photon. Technol. Lett. 16,1056–1058 (2004). [CrossRef]  

7. P. Oswald, C. K. Madsen, and R. L. Konsbruck, “Analysis of Scalable PMD Compensators Using FIR Filters and Wavelength-Dependent Optical Power Measurements,” J. Lightwave Technol. 22,647–657 (2004). [CrossRef]  

8. P. B. Phua, H. A. Haus, and E. P. Ippen, “All-Frequency PMD Compensator in Feedforward Scheme,” J. Lightwave Technol. 22,1280–1289 (2004). [CrossRef]  

9. D. Peterson, B. Ward, K. Rochford, P. Leo, and G. Simer, “Polarization mode dispersion compensator field trial and field fiber characterization,” Opt. Express 10,614–621 (2002). [PubMed]  

10. H. Sunnerud, C. Xie, M. Karlsson, R. Samuelsson, and P. J. Andrekson, “A Comparison Between Different PMD Compensation Techniques,” J. Lightwave Technol. 20,368–378 (2002). [CrossRef]  

11. J. Zweck and C. R. Menyuk, “Detection and Mitigation of Soft Failure due to Polarization-Mode Dispersion in Optical Networks,” In Proc. Opt. Fiber Commun. Conf. (OFC2006), Anaheim, CA, Paper OFG5.

12. H. Kogelnik, P. J. Winzer, L. E. Nelson, R. M. Jopson, M. Boroditsky, and M. Brodsky, “First-Order PMD Outage for the Hinge Model,” IEEE Photon. Technol. Lett. 17,1208–1210 (2005). [CrossRef]  

13. A. Farrel and I. Bryskin, GMPLS: Architecture and Applications, The Morgan Kaufmann Series in Networking (Elsevier Inc., San Francisco, CA, 2006).

14. Information on the DRAGON project is available at http://dragon.maxgigapop.net.

15. International Telecommunication Union, Telecommunication Standardization Sector of ITU, ITU-T Standard G.694.1, Spectral grids for WDM applications: DWDM frequency grid (2002).

16. G. Ishikawa and H. Ooi, “Polarization-mode dispersion sensitivity and monitoring in 40-Gbit/s OTDM and 10-Gbit/s NRZ transmission experiments,” In Proc. Opt. Fiber Commun. Conf. (OFC1998), San Jose, CA, Paper WC5.

17. The NUTTCP software is the product of B. Fink and is available at ftp://ftp.lcp.nrl.navy.mil/pub/nuttcp/latest/nuttcp.html.

18. H. van Helvoort, Next Generation SONET: Evolution or Revolution (John Wiley and Sons, Ltd., 2005). [CrossRef]  

19. X. Yang and B. Ramamurthy, “Dynamic Routing in Translucent WDM Optical Networks: The Intradomain Case,” J. Lightwave Technol. 23,955–971 (2005). [CrossRef]  
