Optica Publishing Group

A multi-ring optical packet and circuit integrated network with optical buffering

Open Access

Abstract

We have developed a 3 × 3 integrated optical packet and circuit switch node. The node incorporates optical buffers and burst-mode erbium-doped fiber amplifiers with flattened gain. The optical buffers prevent packet collisions and reduce packet loss. Using the 3 × 3 node to connect two single-ring networks and a client network, we constructed a multi-ring optical packet and circuit integrated network testbed. For the first time, we demonstrated 244 km fiber transmission and 5-node hopping of multiplexed 14-wavelength 10 Gbps optical paths and 100 Gbps optical packets encapsulating 10 Gigabit Ethernet frames on the testbed. Error-free operation (frame error rate < 1 × 10−4) was achieved for optical packets of various lengths. In addition, successful avoidance of packet collisions by the optical buffers was confirmed.

©2012 Optical Society of America

1. Introduction

As communication networks become essential infrastructure, both the volume and the variety of traffic are increasing. In the near future, content ranging from small, low-bandwidth items (e.g., e-mails, sensor-data collection) to large, high-quality items (e.g., high-definition video distribution, remote surgery) will be transported over networks. To carry such diverse content efficiently, transport schemes should be matched to the properties of each content type. Recently, the convergence of packet-switching and circuit-switching architectures on the control plane or data plane has received much attention, because best-effort and quality-of-service (QoS) guaranteed services can then be provided on the same infrastructure. We proposed an optical packet and circuit integrated (OPCI) network in a new-generation network design project [1,2], and related convergence approaches have been investigated in other research projects [3,4]. We introduced optical packet switching (OPS) and optical circuit switching (OCS) technologies into OPCI networks to reduce the power consumption of node equipment [5,6]. In OPCI networks, OPS links provide bandwidth-sharing, best-effort data transfer, while OCS links provide dedicated bandwidth and end-to-end QoS-guaranteed data transport. Moreover, wavelength resources can be dynamically allocated to OPS or OCS links according to traffic conditions.

Figure 1 shows the development roadmap of OPCI networks. Previously, we developed a 2 × 2 OPCI node and constructed a single-ring OPCI network testbed with two such nodes [7]. On that testbed we confirmed basic operations, such as add/drop and pass-through, for 14-wavelength 10 Gbps optical paths and 100 Gbps optical packets. Here, we focus on a multi-ring topology, upgraded from the single-ring topology, as an intermediate step toward our final target of mesh-topology OPCI networks. An optical multi-ring network for OPS links only has been demonstrated previously [8]. Recently, we developed a 3 × 3 OPCI node as a central node connecting two single-ring OPCI networks and a client network, and constructed a multi-ring OPCI network testbed for both OPS and OCS links [9]. The testbed consists of one 3 × 3 OPCI node and two 2 × 2 OPCI nodes. Because the probability of packet collisions increases in more complex topologies such as multi-ring or mesh networks, optical buffers are required to avoid packet collisions on OPS links. Various optical buffer architectures and implementations have been demonstrated [10–12]. In this work, we newly developed optical buffers with low polarization dependence for handling wide-band colored optical packets, and implemented them in the 3 × 3 OPCI node. We also improved a burst-mode erbium-doped fiber amplifier (EDFA) [13] to maintain gain flatness over the target wavelength range. In this paper, for the first time, we demonstrate error-free (frame error rate < 1 × 10−4) 5-node hopping and 244 km fiber transmission of multiplexed 14-wavelength 10 Gbps optical paths and 100 Gbps optical packets in the multi-ring network. In addition, we present the successful operation of optical buffering in the 3 × 3 OPCI node.

Fig. 1 Development roadmap of optical packet and circuit integrated networks.

2. 3 × 3 central OPCI node with optical buffers

A 2 × 2 OPCI node for ring networks mainly consists of seven 10 Gbps optical transport network (10G-OTN) transponders, a 100 Gbps optical packet (100G-OP) transponder, two wavelength-selective switches (WSSs) for add/drop functions, an OPS system, and several optical amplifiers [7]. The OPS system comprises an electronic switch controller (SW-CONT) and a broadcast-and-select 4 × 4 semiconductor optical amplifier (SOA) switch subsystem [14,15]. Wavelength resources are divided by waveband and allocated to OPS and OCS links. The WSSs combine or separate the OPS and OCS wavebands, and two WSSs also serve as the OCS system. On OCS links, a 10G-OTN transponder encapsulates 10 Gigabit Ethernet (10GbE) frames from a client network into the OTN format for transmission on optical paths. Because optical paths are established in advance by control packets, there is no need to read the IP destination addresses of incoming 10GbE frames. On OPS links, a 100G-OP transponder encapsulates an incoming 10GbE frame from the client side into a 100 Gbps colored optical packet, which consists of ten 10 Gbps optical payloads at different wavelengths and a destination optical label [16]. The destination label is determined from a mapping table between destination labels and the IP destination addresses of incoming 10GbE frames. The SW-CONT reads the destination label and controls the SOA switch subsystem to forward each optical packet to the correct output port according to a per-input-port switching table. Control optical packets for path signaling and wavelength resource control are also exchanged via OPS links.
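The label-driven forwarding described above can be sketched as two table lookups: one at the 100G-OP transponder (IP destination to label) and one at the SW-CONT (label to output port, per input port). The table contents, label values, and function names below are hypothetical placeholders for illustration, not the actual node configuration:

```python
# Sketch of label assignment and label-based forwarding (hypothetical tables).

# Mapping table at the 100G-OP transponder: IP destination prefix -> optical
# label attached to the 100 Gbps colored optical packet.
LABEL_TABLE = {
    "192.168.1.0/24": 0x01,   # hypothetical label values
    "192.168.2.0/24": 0x02,
}

# Per-input-port switching table at the SW-CONT: optical label -> output port.
SWITCHING_TABLE = {
    1: {0x01: 2, 0x02: 3},    # packets arriving on input port 1
    2: {0x01: 1, 0x02: 3},    # packets arriving on input port 2
}

def forward(input_port: int, label: int) -> int:
    """Return the output port for a labeled packet, mimicking the SW-CONT.

    Note that only the short label is read; the encapsulated 10GbE frame's
    IP header is never inspected inside the OPS system.
    """
    return SWITCHING_TABLE[input_port][label]
```

The point of the two-stage design is that the expensive IP lookup happens once, at encapsulation, while transit nodes only match the short label.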

For a multi-ring network, the central OPCI node has 3 × 3 input/output ports to connect two single-ring networks and a client network. Figures 2(a) and 2(b) show a photograph and a configuration diagram of the 3 × 3 OPCI node, respectively. The 3 × 3 OPCI node is based on the 2 × 2 node with an extended OPS system. The extended OPS system has 3 × 3 input/output ports and three optical buffers, one attached to each output port. Each optical buffer consists of a 4 × 4 SOA switch subsystem and four fiber delay lines (FDLs) of different lengths (denoted Delay 0, 1, 2, 3 in ascending order of length), and provides both switching and buffering functions. The buffer size is 3 packets. The SOA switch subsystem has a switching speed of several nanoseconds, low polarization dependence within the C-band, and loss compensation. Previously, the switch subsystem had a minimum channel-spacing limitation of 400 GHz to avoid crosstalk caused by four-wave mixing. In the present design, the subsystem separates 100 GHz-spaced colored optical packets into four wavelength groups using 100/400 GHz interleavers and switches each group independently [14,15]. The optical buffer based on this subsystem can therefore handle 100 GHz-spaced colored optical packets without crosstalk. Note that, owing to the limited number of SOA components, a 2 × 2 SOA switch subsystem with two FDLs is attached to output port 3, so the buffer size at that port is 1 packet. The OPS system also has a spare input/output port for a future upgrade to a 4 × 4 OPCI node.

Fig. 2 (a) Photograph and (b) configuration diagram of central 3 × 3 OPCI node with optical buffers.

Optical packets enter the 3 × 3 OPS system from the two OPCI ring networks and the client network. If packets from different input ports are switched to the same output port at almost the same time, they may collide at that port. The SW-CONT of the extended OPS system therefore has a buffer management function that supports asynchronously arriving, variable-length optical packets [17]. To avoid collisions, the SW-CONT receives the labels of optical packets from all input ports before the packets reach the SOA switch subsystems, thereby acquiring the destination, arrival timing, and length of every packet. Using this information, the SW-CONT determines the appropriate output port for each packet and calculates the delay to be applied to each packet from every port so that collisions are avoided. Control signals are then sent from the SW-CONT to all SOA switch subsystems. Each optical packet is broadcast to all SOA switch subsystems by couplers; because each SOA gate opens or closes according to the control signals, each packet is switched both to the output port for its destination and to the FDL providing the calculated delay. Packets that would otherwise collide are switched to different FDLs, so collisions are avoided. If the number of buffered packets exceeds the buffer size, the excess packets are discarded by keeping the corresponding gates closed.
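The delay calculation above can be sketched as a first-fit assignment over the discrete FDL delays. This is an assumed simplification of the buffer management in [17]: it models a single shared output port, a fixed delay granularity of about 100 ns per FDL step, and greedy smallest-delay-first assignment; the real controller handles per-port timing details not modeled here:

```python
# Minimal sketch of SW-CONT collision-avoidance scheduling (assumed logic).
FDL_STEP_NS = 100.0   # delay of one FDL step (~100 ns, i.e. ~20 m of fiber)
NUM_DELAYS = 4        # Delay 0..3, so up to 3 packets can wait (buffer size 3)

def schedule(packets):
    """Assign FDL delays to packets contending for one output port.

    packets: list of (arrival_ns, length_ns) sorted by arrival time.
    Returns one entry per packet: the chosen delay index, or None if the
    packet must be discarded because no FDL clears the busy period.
    """
    output_free_at = 0.0   # time at which the output port becomes free
    decisions = []
    for arrival, length in packets:
        for k in range(NUM_DELAYS):
            start = arrival + k * FDL_STEP_NS
            if start >= output_free_at:
                decisions.append(k)            # route via Delay k
                output_free_at = start + length
                break
        else:
            decisions.append(None)             # buffer exhausted: discard
    return decisions
```

For example, two 57.6 ns packets arriving 10 ns apart get Delay 0 and Delay 1, while five simultaneous 140.8 ns packets exhaust the four delays and the excess packets are discarded, mirroring the switch-closing behavior described above.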

In OPS links, a transient-suppressed EDFA (TS-EDFA) was used to eliminate optical surges and gain transients for short optical packets (~100 ns) [13]. Here, we improved the TS-EDFA by optimizing the EDF doping profile to further increase the saturation power and improve the transient performance. We then installed a custom gain-flattening filter (GFF) in the TS-EDFA to maintain gain flatness across the C-band. The improved TS-EDFAs were placed before and after each SOA switch subsystem.

In the central OPCI node, OCS links between the two ring networks are established through transponders for each ring network and a layer-2 switch (L2-SW), because the OCS links of the two rings cannot be connected all-optically owing to the limited number of WSS ports. With WSSs of higher port count, however, the two ring networks could be connected all-optically. Transmission of fourteen 10 Gbps channels on OCS links in a single-ring OPCI network has already been demonstrated [7].

3. Multi-ring optical packet and circuit integrated network demonstration

Figure 3 shows the multi-ring OPCI network testbed, which comprises two 2 × 2 OPCI nodes (Nodes 1 and 3) and one 3 × 3 OPCI node (Node 2). The 3 × 3 OPCI node connects the two single-ring networks (OPCI ring networks 1 and 2). Each inter-node connection consists of a 61 km single-mode fiber (SMF) and a 28 km dispersion-compensating fiber (DCF). Each node can handle 40 wavelength channels from 1531.90 to 1563.05 nm (λ1–λ40) with 100 GHz channel spacing. The wavelength resource for OPS links was 1547.72–1554.94 nm (λ21–λ30). The wavelength resource for OCS links from Node 1 to Node 2 and from Node 2 to Node 3 was 1538.98–1543.73 nm (λ10–λ16), and that for OCS links from Node 2 to Node 1 and from Node 3 to Node 2 was 1558.17–1563.05 nm (λ34–λ40). In OPCI ring network 1, multiplexed 100 Gbps colored optical packets and 14-wavelength 10 Gbps optical paths are transmitted. In OPCI ring network 2, however, owing to the limited number of 10G-OTN transponders, 1 Gbps small form-factor pluggable (1G-SFP) modules or distributed-feedback laser diodes (DFB-LDs) are installed at Nodes 2 and 3 for the OCS links between them, so the data rate on an optical path between Node 2 and Node 3 is below 1 Gbps. A network tester (Tester) serving as a client with several IP source addresses (IP-SA) was connected to each node via a 10GbE interface or the L2-SW. The IP addresses of each client are given in Fig. 3.
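The 40-channel grid quoted above follows the standard 100 GHz ITU frequency grid; anchoring channel λ1 at 195.70 THz reproduces every wavelength boundary listed in this section (the anchor frequency is inferred from the quoted wavelengths, not stated in the text):

```python
# Reproducing the 100 GHz channel grid from the ITU frequency grid.
# Channel lambda-1 (1531.90 nm) corresponds to 195.70 THz; each subsequent
# channel sits 100 GHz lower in frequency.
C = 299_792_458.0  # speed of light in m/s; dividing by a frequency in GHz
                   # directly yields a wavelength in nm

def channel_wavelength_nm(n: int) -> float:
    """Wavelength of channel n (1-based) on a 100 GHz grid at 195.70 THz."""
    f_ghz = 195_700.0 - (n - 1) * 100.0
    return C / f_ghz

# channel 1  -> 1531.90 nm (lambda-1,  grid edge)
# channel 21 -> 1547.72 nm (lambda-21, OPS band start)
# channel 30 -> 1554.94 nm (lambda-30, OPS band end)
# channel 40 -> 1563.05 nm (lambda-40, grid edge)
```

This check also confirms that the OPS band (λ21–λ30) spans exactly the ten payload wavelengths of one 100 Gbps colored optical packet.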

Fig. 3 Multi-ring optical packet and circuit integrated network testbed and transmission routes.

Figure 3 also shows tables of the 2-, 3-, 4-, and 5-node hopping routes on OPS and OCS links. At a sender node, an incoming 10GbE frame from a client network is converted by a 100G-OP transponder into a 100 Gbps colored optical packet with a label. The 100G-OP transponder can also replicate optical packets, which makes it easy to raise the optical packet rate for experiments. At a transit node, each optical packet is forwarded along the correct route according to the switching table. At a receiver node, the 10GbE frame is recovered from the received optical packet and sent to a client. On OCS links, a sender node and a receiver node are directly connected and data are transmitted between them. Each node can send data on optical paths and in optical packets not only to other nodes but also to itself via the multi-ring network, enabling an optical loopback test.

We transmitted 10GbE frames in optical packets and on an optical path through the 2-, 3-, 4-, and 5-node hopping routes shown in Fig. 3, and measured the error rate of the transmitted frames. The length of the 10GbE frames transmitted in optical packets was set to 64, 1518, or 9000 bytes. For all three frame lengths, the optical packet rate was fixed at 20% by adjusting the frame gap of the 10GbE frames and by using the copy function of the 100G-OP transponders. The average throughputs were 5.4, 8.6, and 9.6 Gbps for 64, 1518, and 9000 byte frames, respectively. The length of the 10GbE frames transmitted on the optical path was fixed at 1518 bytes, giving an average throughput of 984 Mbps. Figures 4(a)–4(d) show the eye diagrams of one optical packet payload at 1550.92 nm measured along the 5-node hopping route from Node 2; the measurement points are marked "(a)–(d)" in Fig. 3. For these measurements the frame length was 1518 bytes. The Q-factors in Figs. 4(a)–4(d) were 10.16, 9.20, 7.77, and 6.76, respectively, indicating that the high signal quality of the optical packets was maintained after 5-node hopping. Figure 5(a) shows the spectrum of the multiplexed optical packets and optical paths at the output of Node 2, and Fig. 5(b) shows the temporal waveform of the optical packets alone, extracted by a band-pass filter. Figure 6 shows the measured frame error rates of the transmitted 10GbE frames of various lengths, in optical packets and on an optical path, over the 2-, 3-, 4-, and 5-node hopping routes. Although the error rates on OPS links degraded with each hop, they remained below 1 × 10−4 under all conditions. Because a frame error rate below 1 × 10−4 is regarded as high quality [18], we confirmed error-free operation on both OPS and OCS links even for 5-node hopping over 244 km.
Note that data on optical paths are not transmitted all-optically for more than 3-node hopping, because optical paths are switched by the L2-SW with optical-electrical conversion at Node 2.

Fig. 4 (a)–(d) Eye diagrams and Q-factors of one optical payload of 100 Gbps optical packets in 5-node hopping route from Node 2, measured at points (a)–(d) shown in Fig. 3.

Fig. 5 (a) Spectrum waveform of multiplexed optical packets and optical paths measured at output of Node 2. (b) Temporal waveform of extracted 100 Gbps optical packets.

Fig. 6 Error rates of 10GbE frames with various frame lengths transmitted by OPS and OCS links in 2, 3, 4, or 5 node hopping routes.

Next, we examined the operation of an optical buffer in the central 3 × 3 OPCI node. We sent optical packets from all three input ports to output port 1 simultaneously, so that packet collisions could occur. From OPCI ring networks 1 and 2, optical packets carrying 64 byte and 9000 byte 10GbE frames, with packet durations fixed at 19.2 ns and 140.8 ns, were launched into input ports 1 and 2 of the 3 × 3 OPCI node, respectively. From the client network, optical packets carrying 1518 byte 10GbE frames, with a packet duration fixed at 57.6 ns, were launched into input port 3. During operation, the optical packet rate at each input port was varied randomly to generate bursty traffic. The FDL lengths increase in steps of 20 m, corresponding to a delay step of about 100 ns. In the optical buffer at output port 1, optical packets were switched to Delay 0, 1, 2, or 3, or discarded, according to traffic conditions, and the switched packets were merged in front of output port 1. Figure 7 shows the input packet sequences at input ports 1, 2, and 3 and the merged packet sequence at output port 1, measured at different times. We measured the packet loss rate in the optical buffer with the optical packet rate at each input port varied randomly between 1% and 10% (Case 1) and between 5% and 10% (Case 2). In Case 1, the average throughputs were 0.48, 1.57, and 1.25 Gbps at input ports 1, 2, and 3, respectively; in Case 2, they were 1.73, 5.74, and 4.59 Gbps. The packet loss rate is defined as the ratio of the number of discarded packets to the total number of packets input from the three ports. The SW-CONT's buffer management function includes a packet counter for monitoring the buffering operation.
Figure 8(a) shows the number of optical packets switched to Delays 0, 1, 2, and 3 and the number discarded, per input port, in Case 1. Figure 8(b) shows the average throughput at each input port, the total average throughput over the three ports, and the packet loss rates. The packet loss rates were 3.8 × 10−5 in Case 1 and 2.38 × 10−4 in Case 2. These results show that the optical buffer successfully avoided packet collisions.
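The qualitative trend between Case 1 and Case 2, a higher offered load giving a higher loss rate for a small buffer, can be illustrated with a toy slotted-time simulation. This is not the experimental traffic model: it assumes fixed-length packets, one Bernoulli arrival per port per time slot, and a 3-packet buffer plus one packet in service, as at output port 1:

```python
import random

# Toy Monte Carlo of three ports contending for one buffered output port
# (simplified model of the Case 1 / Case 2 experiment, for illustration only).
def simulate(load_per_port, slots=100_000, buffer_size=3, ports=3, seed=1):
    random.seed(seed)
    queued = 0        # packets currently held (in service + in FDL buffer)
    sent = lost = 0
    for _ in range(slots):
        arrivals = sum(random.random() < load_per_port for _ in range(ports))
        for _ in range(arrivals):
            if queued < buffer_size + 1:   # one in service + 3 buffered
                queued += 1
                sent += 1
            else:
                lost += 1                  # buffer full: packet discarded
        if queued:
            queued -= 1                    # one packet departs per slot
    return lost / max(sent + lost, 1)      # packet loss rate
```

Running `simulate` at a low and a high per-port load reproduces the observed ordering: the loss rate rises with offered load, because simultaneous bursts from the three ports overflow the small buffer more often.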

Fig. 7 Optical packet sequences at input ports 1, 2, 3 and output port 1 of OPS system in buffering operation.

Fig. 8 (a) Packet count of switched optical packets to each delay line and discarded one by each input port in Case 1. (b) Average throughput at each input port, the total of the average throughput at three input ports and the packet loss rates in Case 1 and Case 2.

4. Conclusion

We developed a novel 3 × 3 integrated OPS/OCS node with optical buffers and built a multi-ring optical packet and circuit integrated network testbed. We demonstrated 5-node hopping, 244 km transmission of 100 Gbps colored optical packets together with 14-wavelength 10 Gbps optical paths, and achieved successful optical buffering of 100 Gbps optical packets in the 3 × 3 OPCI node. However, as the total offered load at the 3 × 3 OPCI node increases, the packet loss rate in the optical buffers rises, owing to the higher probability of packet collisions and the small buffer size of 3 packets. Our future work is therefore to install larger-scale optical buffers in the 3 × 3 OPCI node and to operate them stably. It has been reported that the buffer size in core routers could be reduced to 10–20 packets at the expense of a small loss of bandwidth utilization [19]. To realize a buffer size of 10–20 packets, we need large-scale optical switches in the optical buffers, for example 4 × 16 or 4 × 32 switches. However, a broadcast-and-select switching architecture such as our SOA switch subsystem incurs high coupling loss from the many couplers as the switch scale grows. Large-scale optical switches with low insertion loss are therefore indispensable for solving the scalability issue of optical buffers; low polarization dependence over a wide band is also required for handling wide-band colored optical packets. We previously demonstrated an optical buffer with a buffer size of 31 packets using cascaded 1 × 8 lead lanthanum zirconate titanate (PLZT) optical switches with low polarization dependence in a bench-top setup [20]. However, the insertion loss of the cascaded 1 × 8 PLZT switches exceeded 28 dB. In the near future, we intend to reduce the insertion loss of PLZT switches and demonstrate the feasibility of large-scale optical buffers.
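The coupling-loss concern can be quantified for the ideal case: the broadcast stage of a broadcast-and-select switch splits each input power N ways, so the unavoidable splitting loss grows as 10 log10(N) dB. The figures below ignore excess coupler loss and SOA gain, so they are lower bounds, not measured values:

```python
import math

# Ideal splitting loss of the broadcast stage in a 1-to-N broadcast-and-select
# fabric: power is divided N ways before the SOA gates select one path.
def broadcast_loss_db(n_outputs: int) -> float:
    return 10 * math.log10(n_outputs)

# 4x4  fabric (current):  ~6 dB per input
# 4x16 fabric (proposed): ~12 dB per input
# 4x32 fabric (proposed): ~15 dB per input
```

This roughly doubling of splitting loss when moving from 4 × 4 to 4 × 16 or 4 × 32 is why low-insertion-loss switch technologies become essential as the buffer scales.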

Acknowledgments

The authors would like to thank Takeshi Makino, Wei Ping Ren, Ryo Mikami and Tomoji Tomuro of the National Institute of Information and Communications Technology for their support in the experiments.

References and links

1. “AKARI architecture conceptual design ver1.0 (2007),” http://akari-project.nict.go.jp/eng/index2.htm.

2. H. Harai, “Optical packet & circuit integrated network for future networks,” IEICE Trans. Commun. E95-B(3), 714–722 (2012).

3. S. Das, G. Parulkar, N. McKeown, P. Singh, D. Getachew, and L. Ong, “Packet and circuit network convergence with OpenFlow,” in Proc. Optical Fiber Communications Conference (2010), no. OTuG1.

4. H. Wang, A. S. Garg, K. Bergman, and M. Glick, “Design and demonstration of an all-optical hybrid packet and circuit switched network platform for next generation data centers,” in Proc. Optical Fiber Communications Conference (2010), no. OTuP3.

5. S. Shinada, H. Furukawa, and N. Wada, "Huge capacity optical packet switching and buffering," Opt. Express 19(26), B406–B414 (2011).

6. T. Miyazawa, H. Furukawa, K. Fujikawa, N. Wada, and H. Harai, "Development of an autonomous distributed control system for optical packet and circuit integrated networks," J. Opt. Commun. Netw. 4(1), 25–37 (2012).

7. H. Furukawa, H. Harai, T. Miyazawa, S. Shinada, W. Kawasaki, and N. Wada, "Development of optical packet and circuit integrated ring network testbed," Opt. Express 19(26), B242–B250 (2011).

8. D. Chiaroni, “Optical packet add/drop multiplexers for packet ring networks,” in Proc. 34th European Conference and Exhibition on Optical Communication (2008), no. Th.2.E.1.

9. H. Furukawa, S. Shinada, T. Miyazawa, H. Harai, W. Kawasaki, T. Saito, K. Matsunaga, T. Toyozumi, and N. Wada, “A multi-ring optical packet and circuit integrated network with optical buffering,” in Proc. 38th European Conference and Exhibition on Optical Communication (2012), no. We.2.D.2.

10. H. Yang and S. J. B. Yoo, "All-optical variable buffering strategies and switch fabric architectures for future all-optical data routers," J. Lightwave Technol. 23(10), 3321–3330 (2005).

11. T. Zhang, K. Lu, and J. P. Jue, "Shared fiber delay line buffers in asynchronous optical packet switches," IEEE J. Sel. Areas Commun. 24(4), 118–127 (2006).

12. T. Tanemura, I. M. Soganci, T. Oyama, T. Ohyama, S. Mino, K. A. Williams, N. Calabretta, H. J. S. Dorren, and Y. Nakano, "Large-capacity compact optical buffer based on InP integrated phased-array switch and coiled fiber delay lines," J. Lightwave Technol. 29(4), 396–402 (2011).

13. Y. Awaji, H. Furukawa, N. Wada, P. Chan, and R. Man, “Mitigation of transient response of Erbium-doped fiber amplifier for traffic of high speed optical packets,” in Proc. Conf. on Lasers and Electro-Optics (2007), no. JTuA133.

14. K. Sone, S. Yoshida, Y. Kai, G. Nakagawa, G. Ishikawa, and S. Kinoshita, "High-speed 4×4 SOA switch subsystem for DWDM systems," in Proc. 16th OptoElectronics and Communications Conference (2011), no. 8A2_2.

15. G. Nakagawa, Y. Kai, K. Sone, S. Yoshida, S. Tanaka, K. Morito, and S. Kinoshita, “Ultra-high extinction ratio and low cross talk characteristics of 4-array integrated SOA module with compact-packaging technologies,” in Proc. 37th European Conference and Exhibition on Optical Communication (2011), no. Mo.2.LeSaleve.4.

16. H. Harai and N. Wada, "More than 10 Gbps photonic packet-switched networks using WDM-based packet compression," in Proc. 8th OptoElectronics and Communications Conference (2003), pp. 703–704.

17. H. Furukawa, H. Harai, M. Ohta, and N. Wada, “Implementation of high-speed buffer management for asynchronous variable-length optical packet switch,” in Proc. Optical Fiber Communications Conference (2010), no. OWM4.

18. ITU-T Recommendation Y.1541, "Network performance objectives for IP-based services."

19. D. Wischik and N. McKeown, "Part I: Buffer sizes for core routers," ACM SIGCOMM Comput. Commun. Rev. 35(3), 75–78 (2005).

20. H. Furukawa, H. Harai, N. Wada, T. Miyazaki, N. Takezawa, and K. Nashimoto, "A 31-FDL buffer based on trees of 1×8 PLZT optical switches," in Proc. 32nd European Conference and Exhibition on Optical Communication (2006), no. Tu4.6.5.
