4×4 optical packet switching of asynchronous burst optical packets with a prototype, 4×4 label processing and switching sub-system

Open Access

Abstract

We report a prototype, 4×4 (4 input/4 output) label processing and switching sub-system for 10-Gb/s asynchronous burst variable-length optical packets. With the prototype, we perform a 4×4 optical packet switching demonstration, achieving error-free (BER < 10⁻¹²) label processing and switching operation for all possible input/output combinations (16 switching paths) simultaneously. Power consumption and latency of the entire, self-contained sub-system are 83 W (including fan power) and 300 ns, respectively.

©2010 Optical Society of America

1. Introduction

With the ever-constant demand for more bandwidth as well as the need to support a variety of services, major challenges are expected in building future photonic networks. In particular, the rapid increase in the power consumed by the network has become a growing concern, compounded by the requirement to decrease, or at a minimum maintain, the power of the network at its current level. Electrical routers/packet switches are seen as a major contributor to this issue, as their power consumption is increasing dramatically, to the point of constraining scalability [1]. Conversely, implementation of reconfigurable optical add/drop multiplexer (ROADM) technologies has increased the transparency of the network, eliminating processing of the bits and reducing power. However, the lack of packet-level data granularity ultimately limits the scalability, flexibility, and intelligence of the network. With the concept of performing more of the functions inside the router with optical and optoelectronic technologies, optical packet switching (OPS) presents a potential solution for reducing the power, size, and latency of the node while maintaining the ideal characteristics of a packet-switched network [2–6].

However, the realization of an OPS node requires forwarding functions of label processing, switching, and buffering to be achieved, ideally for high-speed asynchronous burst variable-length optical packets. In the past, we have developed key optical and optoelectronic technologies for realizing each of these functions. We have demonstrated a label processing approach based on an optically clocked transistor array (OCTA) optoelectronic integrated circuit (OEIC) consisting of metal-semiconductor-metal (MSM) photodetectors (PDs) and 0.1-μm-gate-length high-electron-mobility transistors (HEMTs) [7]. The OCTA creates a single-chip, low-power interface between the input/output high-speed asynchronous baseband labels and a CMOS processor, enabling a highly functional, low-power label processor. For the switching function, we have developed a double-ring-resonator-coupled tunable laser diode (DRR TLD) capable of fast (less than 11 ns) and stable (less than 5 GHz wavelength drift) wavelength tuning which, combined with a cyclic arrayed waveguide grating (AWG), provides fast, low-power N×N wavelength-based switching [8]. For buffering, we have developed three key interface devices: an all-optical serial-to-parallel (SP) converter, a parallel-to-serial (PS) converter, and an optical clock-pulse-train generator (OCPTG) [2,9]. These devices create the input/output interfaces needed for storing/retrieving the asynchronous optical packets to/from CMOS memory. Finally, we have proposed a node architecture which combines these functional blocks, shown in Fig. 1 [2,3]. The packet data is organized into wavelength layers (four vertically overlayed planes in figure), separated and combined with AWGs at the inputs and outputs, respectively. Each incoming packet passes through a label processor, followed by the N×N optical switch. If there is no contention caused by other incoming packets, the packet is routed through the switch to the desired output port and passes through the node transparently (i.e., no buffering). If there is contention, the packet is forwarded through the switch to the shared buffer. The buffer has input/output interfaces (SP/PS converters, OCPTGs) for each wavelength layer and a shared electronic (CMOS) buffer at the core. Buffering/processing of the entire packet is thus done only in specified instances, when there is the need to resolve contention or implement various services (multicast, Quality-of-Service (QoS), etc.), reducing the total required buffering capacity of the node and hence power and size [2,3].
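
To make the contention-resolution policy concrete, the following minimal Python sketch (our illustration, not the authors' scheduler; the fixed input-priority order and function names are assumptions) routes each packet straight to its requested output port when that port is free and diverts it to the shared buffer otherwise:

```python
from typing import Dict, List, Set, Tuple

def schedule_slot(requests: Dict[int, int]) -> Tuple[Dict[int, int], List[int]]:
    """requests maps input port -> requested output port for packets
    contending in the same scheduling window (illustrative model only)."""
    granted: Dict[int, int] = {}   # packets routed transparently through the switch
    buffered: List[int] = []       # packets diverted to the shared buffer
    taken: Set[int] = set()        # output ports already claimed this window
    for in_port in sorted(requests):   # fixed priority order (an assumption)
        out_port = requests[in_port]
        if out_port not in taken:
            taken.add(out_port)
            granted[in_port] = out_port
        else:
            buffered.append(in_port)
    return granted, buffered

# Example: inputs 1 and 3 both request output 2, so one of them is buffered.
print(schedule_slot({1: 2, 2: 4, 3: 2, 4: 1}))  # ({1: 2, 2: 4, 4: 1}, [3])
```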

Fig. 1 Diagram of proposed OPS node. AWG: arrayed waveguide grating. SPC: serial-to-parallel converter. PSC: parallel-to-serial converter. Each of the overlayed planes (containing four label processors and an optical switch in the figure) represents a wavelength layer of the node.

In this paper, we describe a prototype 4×4 sub-system (one wavelength layer of the node in Fig. 1, enclosed by the dashed line) which includes four label processors, a 4×4 switch, scheduling, and all other peripheral components required for a fully functional 4×4 OPS demonstration. With the prototype, error-free label processing and switching operation (BER < 10⁻¹²) is achieved for all input/output (I/O) combinations (16 switching paths) simultaneously.

2. Sub-system description

Figure 2(a) illustrates the constructed sub-system. Input optical packets are amplified and first enter a label processor (LP). Within the LP (Fig. 2(b)), an electrical clock-pulse generator (ECG) device [2] drives a 1×2 switch (SW) to separate the payload of the input packet from the label, which is fed into a PD. The PD output electrical signal is then sampled in parallel (SP conversion or demux) by triggering the OCTA OEIC with the input label itself. The input label is thus written into CMOS, where a processor subsequently checks the address of the label against a forwarding table to determine the desired output port. To monitor congestion of the switch, a payload envelope signal is also detected with a slow PD. Based on the requests from all four label processors and the state of the switch, a scheduler then determines the output port for the packet and in turn, the appropriate output label and control signals to configure the switch accordingly. Next, the output label is entered in parallel into the OCTA, with optical triggering generating the new label as a serial voltage signal (PS conversion or mux). Electrical-to-optical conversion with a monolithic electro-absorption modulator-distributed feedback laser device (EA-DFB) creates the output optical label, which is passively coupled to the payload (delayed in a fixed length of fiber) to produce the label swapped optical packet.
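
As a rough illustration of the lookup step performed in CMOS, the sketch below (hypothetical table entries and names; the actual table format is not described here) maps the address field of a received label to the requested output port:

```python
from typing import Dict, Optional

# Hypothetical forwarding table: 10-bit address -> output port (1-4).
FORWARDING_TABLE: Dict[int, int] = {
    0b0000000001: 1,
    0b0000000010: 2,
    0b0000000011: 3,
    0b0000000100: 4,
}

def lookup_output_port(address: int) -> Optional[int]:
    """Return the requested output port, or None for an unrecognized address
    (in the prototype such packets are sent to a discard port of the switch)."""
    return FORWARDING_TABLE.get(address)
```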

Fig. 2 Diagrams of (a) the entire 4 × 4 label processing and switching sub-system, (b) a single label processor, and (c) the 4 × 4 switch. The optical packet and control signal output from the label processor are entered into the APD-TIA and DRR TLD of the switch, respectively.

Within the switch (Fig. 2(c)), the label swapped packet first enters a tunable wavelength converter (TWC), consisting of a burst-mode receiver front-end (Avalanche PD (APD) and transimpedance amplifier (TIA)), a drive amp, a Lithium Niobate (LN) modulator, and the DRR TLD. Control signals from the scheduler modulate the phase and filter sections of the TLD [8] to tune the wavelength for the desired path across the AWG. The packet data is then encoded onto the TLD output with the other components of the TWC (wavelength conversion by optical-electrical-optical (OEO) conversion). The resulting wavelength converted signal is received at the output of the AWG by a fixed wavelength converter (FWC) consisting of the same APD-TIA front-end, drive amp, and an EA-DFB to convert the signal wavelength back to the original input wavelength (to λ1, Fig. 1, wavelength conversion by OEO conversion) before exiting the node.
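
The wavelength-to-path relation of a cyclic AWG can be sketched as follows (an assumed, illustrative modulo mapping; the actual port/wavelength assignment depends on the device and is not taken from the paper):

```python
def awg_output_port(input_port: int, wavelength_index: int, n_ports: int = 8) -> int:
    """Output port reached from input_port when the packet is carried on the
    wavelength with index wavelength_index (0-indexed, cyclic convention assumed)."""
    return (input_port + wavelength_index) % n_ports

def wavelength_index_for_path(input_port: int, output_port: int, n_ports: int = 8) -> int:
    """Wavelength index the DRR TLD would be tuned to for the desired path."""
    return (output_port - input_port) % n_ports
```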

3. Experimental results

Four OCTA devices, the scheduler (field-programmable gate array (FPGA)), and other discrete components required for implementing the label processing function were integrated onto a printed circuit board (PCB). A PCB containing a four-array ECG device was fabricated. All other components (TLDs, APD-TIAs, EA-DFBs, thermo-electric cooler drivers, voltage supplies, etc.) were integrated onto boards to create the prototype sub-system. An 8×8 cyclic AWG (silica planar lightwave circuit) with 400-GHz channel spacing was used for the switch. The entire, self-contained sub-system was housed in a 2U shelf (43×76×8.5 cm³), powered by a single 24 V power supply connection on the back panel. A photograph of the 19-inch-rack-mountable prototype is shown in Fig. 3.

Fig. 3 Photograph of the prototype 4 × 4 sub-system, housed in a 2U shelf (43 × 76 × 8.5 cm³) and powered by a single 24 V power supply connection on the back panel. Two fans were mounted on the back panel for cooling.

The input optical packet stream consisted of four repeating packets (Packets A-D, Fig. 2(a)) with various lengths (100-300 ns) and guard bands between packets (50-85 ns). Packets were 10-Gb/s, non-return-to-zero (NRZ) format, with a 2⁷−1 pseudo-random bit stream pattern for the payload, four different 16-bit labels (La-Ld) in front of the payloads, and a three-bit-long guard band between the label and payload. The label consisted of, in order, a leading "1" bit, two QoS bits, three time-to-live (TTL) bits, and ten address bits.
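
For illustration, this label layout can be written as bit fields (a minimal sketch; the MSB-first ordering and helper names are assumptions):

```python
def pack_label(qos: int, ttl: int, address: int) -> int:
    """Assemble a 16-bit label: leading '1' bit, 2 QoS bits, 3 TTL bits, 10 address bits."""
    assert 0 <= qos < 4 and 0 <= ttl < 8 and 0 <= address < 1024
    return (1 << 15) | (qos << 13) | (ttl << 10) | address

def parse_label(label: int) -> tuple:
    """Split a 16-bit label back into its (qos, ttl, address) fields."""
    return (label >> 13) & 0b11, (label >> 10) & 0b111, label & 0x3FF
```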

For the experiment, the above packet stream was fed into each input (IN1-4), with packets cyclically switched through the AWG (i.e., for IN1, packet A1 goes to OUT1, packet B1 goes to OUT2, etc., as shown in Fig. 2(a)) to test all 16 switching paths simultaneously. Figures 4(a) and 4(c) show the input and output packet stream waveforms, respectively (for IN1, OUT1), and Figs. 4(b), 4(d), and 4(e) show the head of each packet. Figure 4(d) shows the label during the BER measurement, when the label processor sets the output label to the input label as recognized by the processor, including any errors. Figure 4(e) shows results for standard forwarding operation, when the TTL field is decremented by one and the address is swapped to its inverse pattern as set by the forwarding table (La’-Ld’). Similar waveforms were obtained for the other inputs/outputs (IN2-4/OUT2-4).
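
The standard forwarding operation (TTL decremented by one, address replaced by its inverse pattern) then corresponds to a transform like the following sketch, reusing pack_label/parse_label from the example above:

```python
def swap_label(label: int) -> int:
    """Illustrative label swap: decrement the TTL, invert the 10-bit address (La -> La')."""
    qos, ttl, address = parse_label(label)
    return pack_label(qos, max(ttl - 1, 0), (~address) & 0x3FF)
```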

Fig. 4 (a) Input and (c) output packet stream waveforms for IN1, OUT1. (b), (d), and (e) show the waveform enlarged at the head of each packet, and the corresponding 16-bit label. Similar waveforms were obtained for the other inputs/outputs of the sub-system (IN2-4/OUT2-4).

Error measurements were performed for all 16 output packets individually, as bit-level synchronization is lost between packets after passing through the sub-system, due to its asynchronous nature. Results are shown in Fig. 5, including a back-to-back (B-to-B) measurement. Error-free operation (BER < 10⁻¹²) was achieved for all output packets, and hence, all possible switching paths. Bit errors due to packets lost as a result of label recognition errors are included within the measurement results (i.e., the packet is switched to a discard port of the optical switch if a label recognition error occurs). The same burst-mode receiver (APD-TIA, drive amp) was employed as the receiver for the measurement, as this would be the applicable receiver at the next node of the network. The setting of this DC-coupled receiver (i.e., the fixed decision level of the discrimination stage) was optimized to increase the input dynamic range, resulting in the slightly reduced sensitivity. The power penalty variation (0.25-1.7 dB) is likely due to the TLD output power and LN modulator transfer function (Vπ, offset) varying as a function of the TLD wavelength. Power consumption of the entire 4 × 4 sub-system was 83 W, including power for the fans placed on the back panel of the shelf. Although the power consumption increases for higher packet arrival rates, only a slight increase (less than a few watts) is expected beyond this value, because the components whose power is independent of the rate are dominant. The distribution of power was 66 W, 12 W, and 5 W for the label processor sub-system, switch sub-system (as shown in Figs. 2(b), 2(c)), and fans, respectively. Total latency for label processing and switching was 300 ns, with CMOS label processing and scheduling accounting for 90 ns of this delay.
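
As a side note (our estimate, not stated in the paper), claiming BER < 10⁻¹² from an error-free measurement implies observing roughly 3×10¹² bits without error at ~95% confidence, assuming Poisson error statistics, i.e. about 300 s of payload per path at 10 Gb/s:

```python
import math

TARGET_BER = 1e-12
CONFIDENCE = 0.95  # assumed confidence level, not from the paper
bits_needed = -math.log(1.0 - CONFIDENCE) / TARGET_BER  # ~3.0e12 bits
print(f"{bits_needed:.2e} error-free bits, about {bits_needed / 10e9:.0f} s at 10 Gb/s")
```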

Fig. 5 BER measurement results for all 16 output packets shown in Fig. 2(a). Each line indicates results for a particular output packet and the switching path it traversed. Power penalty varied from 0.25 to 1.7 dB.

4. Conclusion

We have described a demonstration of 4 × 4 optical packet switching with a 4 × 4 label processing and switching sub-system prototype. Error-free label processing and switching operation is achieved simultaneously for all possible switching paths.

By leveraging the advantages of optical clocking and integrating the functions of SP/PS conversion and clock generation, the OCTA forms a compact, low-power interface between the high-speed asynchronous labels and a CMOS processor with a simple configuration of conventional OEIC components, enabling a low-power, highly functional (large address table, multiple label fields, etc.) label processor for preamble-free burst optical packets [7]. For the TLD, the superior characteristics of the ring-resonator filters (compact, high Q factor) employed within the laser cavity lead to low tuning current operation which is critical for reducing wavelength drift due to thermal transients [8]. This results in a fast, stable, low-power switch. The label processing and switching sub-systems are thus enabled by these key devices, leading to the realization of a low power (83 W), low latency (300 ns), 4×4 optical packet switch.

Larger scale integration of the OCTAs, TLDs, and other components (one chip for multiple ports) will further reduce power and size. With the present approach, we expect the energy consumed per bit routed will significantly decrease as the line rate increases. Finally, the extra ports of the optical switch (8×8 cyclic AWG) allow application of the current sub-system to a node which includes the shared buffer sub-system for realization of the entire OPS node.
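
A back-of-the-envelope estimate (ours, not a measured result) illustrates this scaling: if the 83-W node power stays roughly constant while the per-port line rate increases, the energy per routed bit falls inversely with the aggregate throughput.

```python
POWER_W = 83.0  # measured sub-system power, assumed roughly rate-independent
for line_rate_gbps in (10, 40, 100):  # per-port rates; 40 and 100 are hypothetical
    aggregate_bps = 4 * line_rate_gbps * 1e9
    print(f"{line_rate_gbps} Gb/s per port: {POWER_W / aggregate_bps * 1e9:.2f} nJ per routed bit")
```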

Acknowledgments

This work is partially supported by the National Institute of Information and Communications Technology (NICT), and is done within the framework of the OPS collaboration between Alcatel-Lucent Bell Laboratories and NTT Photonics Laboratories.

References and links

1. D. T. Neilson, “Photonics for switching and routing,” IEEE J. Sel. Top. Quantum Electron. 12(4), 669–678 (2006). [CrossRef]  

2. R. Takahashi, T. Nakahara, K. Takahata, H. Takenouchi, T. Yasui, N. Kondo, and H. Suzuki, “Ultrafast optoelectronic packet processing for asynchronous, optical-packet-switched networks,” J. Opt. Netw. 3(12), 914–930 (2004). [CrossRef]  

3. R. Takahashi, R. Urata, H. Takenouchi, and T. Nakahara, “Hybrid optoelectronic router for asynchronous optical packets,” in Photonics in Switching Conference, Technical Digest (CD) (IEEE, 2009), paper WeII2–2.

4. S. J. B. Yoo, “Optical packet and burst switching technologies for the future photonic internet,” J. Lightwave Technol. 24(12), 4468–4492 (2006). [CrossRef]  

5. D. Wolfson, V. Lal, M. Masanovic, H. N. Poulsen, C. Coldren, G. Epps, D. Civello, P. Donner, and D. J. Blumenthal, “All-optical asynchronous variable-length optically labeled 40 Gbps packet switch,” in 31st European Conference on Optical Communication (ECOC 2005), Technical Digest (CD) (IET, 2005), paper Th4.5.1.

6. D. Chiaroni, “Optical packet add/drop multiplexers for packet ring networks,” in 34th European Conference on Optical Communication (ECOC 2008), Technical Digest (CD) (IEEE, 2008), paper Th.2.E.1.

7. R. Urata, R. Takahashi, T. Suemitsu, T. Nakahara, and H. Suzuki, “An optically clocked transistor array for high-speed asynchronous label swapping: 40 Gb/s and beyond,” J. Lightwave Technol. 26(6), 692–703 (2008). [CrossRef]  

8. T. Segawa, S. Matsuo, T. Kakitsuka, T. Sato, Y. Kondo, and R. Takahashi, “Semiconductor double-ring-resonator-coupled tunable laser for wavelength routing,” IEEE J. Quantum Electron. 45(7), 892–899 (2009). [CrossRef]  

9. T. Nakahara, R. Takahashi, T. Yasui, and H. Suzuki, “Optical clock-pulse-train generator for processing preamble-free asynchronous optical packets,” IEEE Photon. Technol. Lett. 18(17), 1849–1851 (2006). [CrossRef]  
