
Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks

Open Access

Abstract

Software defined networking (SDN) has become a focus of the information and communication technology field because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we build a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, are demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a reference for future network deployment.

©2014 Optical Society of America

1. Introduction

Scalability and flexibility requirements are challenging current optical networks, especially in datacenter scenarios with highly bursty traffic loads. An elastic transport plane and a dynamic control plane are important trends for future optical networks. Flexi-grid optical networks have attracted much attention recently because of their flexible bandwidth provisioning and high spectrum efficiency [1, 2]. In contrast to conventional, fixed-grid, rigid-bandwidth wavelength division multiplexing (WDM) networks, a flexi-grid optical network can provide flexible sub- or super-wavelength granularity by elastic allocation of low rate subcarriers. Enabling technologies, such as bandwidth-variable transponders and bandwidth-variable wavelength cross connects (WXC), have been designed and demonstrated [3]. The question then becomes how to improve the control efficiency of flexi-grid optical networks, especially large scale flexi-grid optical networks. Software defined networking (SDN) enabled by the OpenFlow protocol may become a technology candidate for this problem because of its programmable feature [4–8].

OpenFlow-based control architectures can provide high flexibility for operators and can help to build unified control over various resources by abstracting them as unified interfaces for joint optimization [9]. A centralized control plane based on a stateful path computation element (PCE), acting as an OpenFlow controller, has been designed and experimentally demonstrated in flexi-grid optical networks, owing to its powerful computation capability [10]. Several experiments have also been conducted to evaluate the performance of OpenFlow based control planes in dynamic optical networks [11–14]. However, there are no reports on the implementation and performance evaluation of an OpenFlow based control plane for large scale flexi-grid optical networks, which is very important for its future deployment.

A large scale networking testbed has been built with the GMPLS protocol suite [15], and an SDN based control architecture for flexi-grid optical networks has also been demonstrated [11]. Different from our previous works, in this paper we build the first large scale flexi-grid optical network testbed with 1000 nodes to evaluate the performance of an OpenFlow based control plane for large scale flexi-grid optical networks. The rest of this paper is organized as follows. Section 2 introduces the SDN based control architecture for flexi-grid optical networks and describes the functional modules of the NOX (network operating system) based controller. The experimental scenario and testbed configuration are described in Section 3. Section 4 shows the experimental results and performance evaluations. Section 5 concludes the paper and discusses future work.

2. SDN based flexi-grid optical networks

As shown in Fig. 1, the transport plane of SDN based flexi-grid optical networks consists of many software defined optical transport nodes (S-OTN), which are implemented based on flexi-grid wavelength selective switches (WSS) and flexi-grid transponders [11]. The modulation format of the signal can be changed dynamically among BPSK (binary phase shift keying), QPSK (quadrature phase shift keying), 8PSK, 16QAM (quadrature amplitude modulation), and 64QAM, and the bandwidth of the WSS can also be changed dynamically. A software agent is embedded in each S-OTN to maintain communication between the NOX based controller and the S-OTN via the OpenFlow (OF) protocol. Through the software agent, the S-OTN can maintain the optical flow table, model node information (the abstracted information including port numbers, adjacent nodes and links, and so on), and control the physical hardware. On the other hand, the NOX based controller maintains the transport plane information abstracted from the physical network and is responsible for lightpath provisioning in flexi-grid optical networks. The NOX based controller includes two control modules, i.e., physical network control and abstraction network control. The physical network control module is responsible for discovering physical layer network elements and controlling the spectrum bandwidth and modulation format in the underlying network. The abstraction network control module abstracts and manages the network topology through the computation of the path computation element (PCE+), which, unlike the PCE standardized in IETF RFC 4655, is capable of path computation, spectrum resource assignment, and abstract topology maintenance.
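As an illustration of this split, the following Python sketch shows the kind of state a software agent might keep on behalf of an S-OTN: an optical flow-table entry describing a spectrum slice and its modulation format, plus the abstracted node model reported to the controller. The field names are assumptions made for illustration, not the actual data structures used in the testbed.

```python
# Hypothetical sketch of S-OTN agent state: an optical flow-table entry
# (spectrum slice + modulation) and the abstracted node model reported
# to the NOX based controller. All field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OpticalFlowEntry:
    in_port: int
    out_port: int
    center_freq_thz: float   # center frequency of the flexi-grid slice
    slot_width_ghz: float    # e.g. n x 12.5 GHz
    modulation: str          # "BPSK", "QPSK", "8PSK", "16QAM", "64QAM"

@dataclass
class NodeModel:
    node_id: int
    ports: List[int] = field(default_factory=list)
    adjacent_nodes: List[int] = field(default_factory=list)
    flow_table: List[OpticalFlowEntry] = field(default_factory=list)
```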

Fig. 1 SDN based flexi-grid optical networks architecture.

NOX based controller for flexi-grid optical networks

Replacing the traditional control method, the NOX based controller can not only communicate with the application controller through the application and network interface (ANI), but also control the optical transport nodes through the OpenFlow protocol. Five modules are contained in the NOX based controller architecture, as shown in Fig. 2: the NOX gateway (NOXGW), the network resource abstraction (NRA) module, the network controller (NC), the path computation element (PCE+), and the database management (DBM) module. The functions of these modules are described as follows.

Fig. 2 NOX based controller architecture.

NOX gateway (NOXGW)

NOXGW is responsible for the communication with the application controller (AC) through the ANI interface, and for providing service requests and decision strategies to PCE+. NOXGW is mainly responsible for four tasks, summarized below; a minimal sketch of this workflow follows the list.

  • a) Listen for messages. When a message is sent from AC to NOX, NOXGW first listens for the message, then receives and analyzes it. After the message is processed, NOXGW passes the results on to the relevant NOX modules.
  • b) Parse the configuration files. During network initialization, NOXGW reads the configuration information, including service ports and the addresses and configuration information of the other modules.
  • c) Drive the other NOX functional modules. NOXGW drives the other NOX functional modules to complete the corresponding request collaboratively through the multi-NOX control protocol (MNCP, e.g., PCEP) and returns the results to AC.
  • d) Map service parameters to network parameters. Different user requests are mapped to different bandwidth, delay, jitter, and other parameters, and appropriate constraints are provided to PCE+.
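A minimal, self-contained sketch of these four tasks is shown below, under assumed message formats and a hypothetical JSON configuration file; it is not the authors' implementation, only an illustration of the listen/parse/drive/map workflow.

```python
# Illustrative NOXGW workflow sketch (assumed message and config formats).
import json
import socket

def load_config(path="noxgw.conf"):
    # b) parse the configuration file: service ports and module addresses
    with open(path) as f:
        return json.load(f)

def map_service_to_network(request):
    # d) map a user-level request to bandwidth/delay/jitter constraints for PCE+
    profiles = {"video": {"bandwidth_gbps": 400, "max_delay_ms": 50},
                "backup": {"bandwidth_gbps": 1000, "max_delay_ms": 500}}
    return profiles.get(request.get("type"), {"bandwidth_gbps": 100})

def serve(config):
    # a) listen for messages from the application controller over ANI
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", config["ani_port"]))
    sock.listen(1)
    while True:
        conn, _ = sock.accept()
        request = json.loads(conn.recv(4096).decode())
        constraints = map_service_to_network(request)
        # c) drive PCE+ (and other modules) through MNCP, then answer AC
        result = {"status": "accepted", "constraints": constraints}
        conn.sendall(json.dumps(result).encode())
        conn.close()
```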

Network resource abstraction (NRA) module

NRA is responsible for abstracting the network resources in NOX; the abstraction is provided to AC for cross stratum optimization decisions. NRA can get the link layer information from the database, call PCE+ to assess paths, and perform resource abstraction based on the resource information provided by the database module. Finally, the abstraction results are reported to AC. The information submitted to AC is extracted by NRA and includes the IP addresses of all nodes within the NOX controlled domain, the corresponding optical node numbers, the logical topology and resource occupancy of the datacenter (DC) and user nodes, the remaining bandwidth, delay, jitter, and other information of the DC and user nodes, and so on.
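The following sketch illustrates, under assumed field names, the kind of abstracted per-node report NRA might assemble for AC from the database information.

```python
# Illustrative NRA abstraction sketch; the dictionary fields are assumptions,
# not the paper's actual schema.
def build_abstraction(nodes, links, slot_gbps=12.5):
    report = []
    for node in nodes:
        free_slots = sum(l["free_slots"] for l in links if l["src"] == node["id"])
        report.append({
            "ip_address": node["ip"],
            "optical_node_id": node["id"],
            "is_datacenter": node.get("dc", False),
            "remaining_bandwidth_gbps": free_slots * slot_gbps,
            "delay_ms": node.get("delay_ms"),
            "jitter_ms": node.get("jitter_ms"),
        })
    return report  # reported to AC for cross stratum optimization
```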

Network controller (NC)

NC is responsible for the communication with the transport layer. It is the only bridge between NOX and the transport plane, reached via the extended OpenFlow protocol. The functions of NC are listed as follows, with a brief sketch after the list.

  • a) Receive, via the OpenFlow protocol, the link resource information reported by the transport plane and forward it to the database management module.
  • b) Get the ultimate service route from PCE + , and assign setup instructions to the optical switching nodes within the domain.
  • c) In the initialization stage, each optical switching node reads the configuration information and transmission resource information, and reports it to the NC. NC is responsible for maintaining and reporting topology information of the domain.
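The sketch below illustrates these NC roles with hypothetical interfaces (the agent objects and their send_flow_mod method are assumptions): storing reported link state via DBM and pushing setup instructions along a route returned by PCE+.

```python
# Minimal NC sketch under assumed interfaces for DBM and the S-OTN agents.
def on_link_report(report, dbm):
    # a)/c) store resource and topology information reported by a node
    dbm.update_link(report["src"], report["dst"], report["free_slots"])

def setup_lightpath(route, spectrum, agents):
    # b) assign setup instructions hop by hop along the computed route
    for node_id in route:
        agents[node_id].send_flow_mod(center_freq=spectrum["center_freq_thz"],
                                      slot_width=spectrum["slot_width_ghz"])
```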

Path computation element (PCE + )

PCE+ is the core module in NOX, and it differs from a normal PCE. It is responsible for routing and spectrum (wavelength) assignment (RSA), and is designed to be plug-in editable, which means that the RSA algorithm can be implemented through different plug-ins. This makes the NOX control function more flexible for the transport plane and more adaptable to various network environments and service demands.
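As an example of such a plug-in, the following sketch shows a simple first-fit spectrum assignment over 12.5 Gb/s slots with the spectrum continuity constraint; this is an illustrative plug-in, not the RSA algorithm actually used in the testbed.

```python
# First-fit spectrum assignment sketch for a PCE+ style plug-in interface.
import math

def first_fit_rsa(path_links, demand_gbps, slots_per_link=640, slot_gbps=12.5):
    """path_links: per-link boolean occupancy arrays (True = slot in use)."""
    needed = math.ceil(demand_gbps / slot_gbps)
    for start in range(slots_per_link - needed + 1):
        window = range(start, start + needed)
        # spectrum continuity: the same slots must be free on every link
        if all(not link[s] for link in path_links for s in window):
            return list(window)
    return None  # blocked: no contiguous free slots on this path

# Example: a 400 Gb/s request needs 32 contiguous 12.5 Gb/s slots per link.
links = [[False] * 640 for _ in range(3)]
assert first_fit_rsa(links, 400) == list(range(32))
```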

Database management (DBM) module

DBM stores and manages all the information, which includes the network resource database (NRDB), the traffic engineering database (TED), the management information base (MIB), and the configuration information. Because PCE+ operations require high frequency and accuracy, the OSPF-TE link-state advertisements (LSAs), which communicate each router's local routing topology to all other local routers, would have to be checked before each path computation to decide whether the database needs updating. However, this interworking process would increase the response time of network resource computation and decision. Therefore, an automatic update mechanism is added to DBM: if some information needs to be updated, the new information is pushed into PCE+ automatically, with less control traffic and a higher computation success rate.
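The following sketch illustrates the push-based idea with an assumed subscriber interface: DBM notifies PCE+ whenever a TED entry changes, so PCE+ never has to poll the database before a computation.

```python
# Push-based update sketch; class and method names are assumptions.
class DatabaseManager:
    def __init__(self):
        self._ted = {}           # traffic engineering database: link -> free slots
        self._subscribers = []   # e.g. the PCE+ instance

    def subscribe(self, consumer):
        self._subscribers.append(consumer)

    def update_link(self, link, free_slots):
        if self._ted.get(link) != free_slots:
            self._ted[link] = free_slots
            for consumer in self._subscribers:
                consumer.on_ted_update(link, free_slots)  # push, no polling

class PCEPlus:
    def __init__(self):
        self.ted_view = {}

    def on_ted_update(self, link, free_slots):
        self.ted_view[link] = free_slots  # local copy stays current for RSA
```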

3. Experimental scenario and testbed configuration

To verify the performance of the SDN based control architecture, a large scale flexi-grid optical network testbed is set up as shown in Fig. 3. In the transport plane, four OpenFlow-enabled flexi-grid optical nodes are equipped, each of which comprises flexi-grid reconfigurable optical add-drop multiplexer (ROADM) and optical data unit (ODU) cards. The other nodes are realized on an array of virtual machines created with VMware software running on X3650 servers. Since each virtual machine has its own operating system and its own independent IP address, CPU, and memory resources, it can be considered a real node. The virtual machine technology makes it easy to set up an experimental topology with 1000 nodes. In the OpenFlow-based control plane, the NOX based controller supports flexible spectrum control, physical layer parameter adjustment, path computation, and resource abstraction, while the database server is responsible for maintaining the traffic engineering database (TED), the management information base, and the connection status. The OpenFlow protocol extension solution can be found in reference [7]. The application plane is deployed on a server and requests the required application resources.

Fig. 3 Experimental scenario.

Five network topologies have been adopted for the experiment, with 200 (624 links), 400 (1260 links), 600 (1890 links), 800 (2546 links), and 1000 (3188 links) virtualized S-OTN nodes, respectively, in a single optical domain, as shown in Fig. 4. It is assumed that there are 640 spectrum slots on each link, and the bit rate of each slot is 12.5 Gbps. The traffic load is varied from 200 erlangs to 1100 erlangs. There are five kinds of services, with requested bandwidths of 100, 200, 400, 500, and 1000 Gbps, respectively. These five kinds of services are distributed uniformly over the testbed, and their arrivals follow a Poisson distribution.
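A sketch of this traffic model, with assumed generator parameters, is given below: exponential inter-arrival times (a Poisson process), uniform choice among the five request sizes, and a mean holding time chosen so that arrival rate times holding time equals the target offered load in erlangs.

```python
# Illustrative traffic generator; the holding-time model is an assumption.
import random

SERVICE_GBPS = [100, 200, 400, 500, 1000]

def generate_requests(n_requests, arrival_rate=0.8, load_erlang=600):
    holding_mean = load_erlang / arrival_rate  # erlangs = arrival rate x holding time
    t = 0.0
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)  # Poisson arrival process
        yield {"arrival": t,
               "holding": random.expovariate(1.0 / holding_mean),
               "bandwidth_gbps": random.choice(SERVICE_GBPS)}
```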

Fig. 4 Network topology with various nodes.

4. Experimental results and analysis

We have conducted a series of experiments on the testbed. Three aspects of network performance have been evaluated for the SDN based control architecture: network scalability, DCN bandwidth limitation, and failure restoration time. 10,000 services, generated according to a Poisson distribution, have been run on the testbed to evaluate the network performance. The arrival rate (the reciprocal of the average time between events, where the time between events follows an exponential distribution) has been set to 0.8 for all experiments. Various parameters, including blocking probability, resource utilization, lightpath provisioning time, and failure restoration time, are obtained under different experimental scenarios.
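For clarity, the sketch below shows how such metrics could be derived from per-request logs; the record fields are illustrative, not the testbed's actual logging format.

```python
# Illustrative metric summary over assumed per-request records and periodic
# slot-occupancy samples.
def summarize(records, occupancy_samples, total_slots):
    blocked = sum(1 for r in records if r["blocked"])
    setup_times = [r["setup_ms"] for r in records if not r["blocked"]]
    return {
        "blocking_probability": blocked / len(records),
        "avg_provisioning_ms": sum(setup_times) / len(setup_times),
        "resource_utilization": sum(occupancy_samples)
                                / (len(occupancy_samples) * total_slots),
    }
```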

A. Network scalability

In order to evaluate the network scalability under heavy traffic load, the node number in the experimental topology is varied among 200, 400, 600, 800, and 1000, and the traffic load is varied from 200 to 1100 erlangs. Blocking probability, resource utilization, and lightpath provisioning time are obtained under different network sizes and traffic loads.

As shown in Fig. 5, the blocking probability increases as the traffic load grows from 200 to 1100 erlangs. However, the blocking probability decreases as the node number grows from 200 to 1000. The reason is that the traffic load is fixed for the different network topologies, so the average traffic load per node in a larger topology is smaller. Moreover, the differences in blocking probability among the different network topologies become smaller and smaller as the node number increases, because the differences in average per-node traffic load also become smaller and smaller. For the same reason, the resource utilization increases as the traffic load grows from 200 to 1100 erlangs and decreases as the node number grows from 200 to 1000, as shown in Fig. 6. From Fig. 7, however, we can see that the lightpath provisioning time increases as the node number grows from 200 to 1000, because as the network size increases, the average hop count for each service also increases. Even in the network topology with 1000 nodes, the average lightpath provisioning time is lower than 25 ms, which is a good performance for a large scale network.

Fig. 5 Blocking probability under different network sizes.

Fig. 6 Resource utilization under different network sizes.

Fig. 7 Lightpath provisioning time under different network sizes.

B. Impact of DCN bandwidth

In contrast to a GMPLS based DCN, an SDN based DCN mainly depends on the centralized control model. Therefore the bandwidth of the DCN, especially the southbound interface bandwidth of the NOX based controller, has the greatest effect on network performance. In order to evaluate the bandwidth impact of an Ethernet based DCN in a lab environment, we conduct the experiment on the network topology with 1000 nodes under different southbound interface bandwidths of the NOX based controller. Blocking probability and lightpath provisioning time are obtained from the testbed.

From Figs. 8 and 9, we can see that the lightpath provisioning time is heavily affected by the DCN bandwidth. The reason is that many communication packets are queued at the egress of the controller, which increases the processing time of the controller. The lightpath provisioning time therefore increases, and the increase becomes larger and larger as the DCN bandwidth decreases. Meanwhile, the experimental results also show that no communication packets are lost even with a 400 kbps DCN bandwidth, so the blocking probability does not change with the DCN bandwidth.
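A back-of-envelope sketch of this queueing effect, with an assumed control-message size, is given below: each queued message adds roughly its own serialization delay to the messages behind it.

```python
# Rough queueing-delay estimate; the 500-byte message size is an assumption.
def added_queueing_delay_ms(queued_messages, msg_bytes=500, bandwidth_kbps=400):
    serialization_ms = msg_bytes * 8 / bandwidth_kbps  # bits / kbps = ms
    return queued_messages * serialization_ms

# e.g. 10 queued 500-byte messages on a 400 kb/s southbound link add ~100 ms
print(round(added_queueing_delay_ms(10), 1))
```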

Fig. 8 Blocking probability under different DCN bandwidths.

Fig. 9 Delay time under different DCN bandwidths.

Then, in order to find how the DCN bandwidth affects the number of connectable nodes, a series of experiments was conducted. Because the exact number of nodes that each DCN bandwidth can support cannot be obtained directly, the node number is increased in steps of 50 from 50 to 1000, and the convergence delay time for these nodes to connect to the controller is measured under different DCN bandwidths. The average convergence delay is obtained by running the same experiment 20 times. From Fig. 10, we can see that the number of nodes that each DCN bandwidth can support becomes smaller as the DCN bandwidth decreases. If 100 ms is assumed to be an acceptable convergence delay, the bandwidths of 50 kbps and 60 kbps can support about 400 nodes, and a bandwidth of 70 kbps can support 700 nodes. To ensure that 1000 nodes can be connected to the controller, the DCN bandwidth must be larger than 100 kbps. Of course, the result depends on the size of the communication packets, but this does not affect the trend.

Fig. 10 Node number under different DCN bandwidths.

C. Failure restoration time

Finally, in order to evaluate the failure handling capability, a node failure is introduced after the network has been running for a period of time and some network resources have been occupied. Then the convergence time of the failed topology and the restoration time for the failure are obtained, as shown in Fig. 11. Both the convergence time and the restoration time remain at about 240 ms and do not change significantly as the node number increases, and the restoration time is slightly larger than the convergence time. The reason is that the convergence time mainly consists of the failure discovery time and the topology updating time, while the restoration time consists of the failure discovery time, the topology updating time, the path computation time, and the resource allocation time. The first part is due to the interaction of the OpenFlow protocol, and the other parts are caused by the NOX processing, which is small as shown in Fig. 11. So both the convergence time and the restoration time are dominated by the failure discovery process. Because there is only one node failure, the convergence time and the restoration time do not change significantly in the centralized control model, which is different from the OSPF working mechanism.
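The decomposition can be summarized by the following small sketch, with placeholder arguments rather than measured values.

```python
# Illustrative timing decomposition; arguments are placeholders, not measurements.
def convergence_ms(discovery_ms, topo_update_ms):
    # convergence = failure discovery + topology update
    return discovery_ms + topo_update_ms

def restoration_ms(discovery_ms, topo_update_ms, path_comp_ms, alloc_ms):
    # restoration adds NOX path computation and resource allocation on top
    return convergence_ms(discovery_ms, topo_update_ms) + path_comp_ms + alloc_ms
```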

Fig. 11 Convergence and restoration time.

5. Conclusion

An SDN enabled large scale flexi-grid optical network testbed has been experimentally demonstrated for the first time, with the OpenFlow protocol deployed. A number of experiments have been conducted on the testbed, and various performance aspects of the SDN based DCN control architecture have been evaluated. Experimental results, including blocking probability, resource utilization, lightpath setup delay, and failure restoration time, are given to validate the performance of the SDN based DCN control architecture. Further experiments on multi-domain optical networks, in comparison with a GMPLS based control architecture, will be carried out in the future.

Acknowledgment

This work has been supported in part by 863 program (2012AA011301), 973 program (2010CB328204), NSFC project (61201154, 60932004), Ministry of Education-China Mobile Research Foundation (MCM20130132), RFDP Project (20120005120019), the Beijing Youth Elite Project for Universities, the Fundamental Research Funds for the Central Universities (2013RC1201), and Fund of State Key Laboratory of Information Photonics and Optical Communications (BUPT).

References and links

1. M. Jinno, T. Ohara, Y. Sone, A. Hirano, O. Ishida, and M. Tomizawa, “Elastic and adaptive optical networks: possible adoption scenarios and future standardization aspects,” IEEE Commun. Mag. 49(10), 164–172 (2011). [CrossRef]  

2. K. Sato, “Recent developments in and challenges of elastic optical path networking,” in Proceedings of ECOC2011, Geneva, Switzerland, Mo.2.K (2011).

3. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, “Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies,” IEEE Commun. Mag. 47(11), 66–73 (2009). [CrossRef]  

4. D. Simeonidou, R. Nejabati, and M. Channegowda, “Software defined optical networks technology and infrastructure: enabling software-defined optical network operations,” in Proceedings of OFC/NFOEC 2013, Anaheim, CA, USA, paper OTh1H.3 (2013).

5. L. Liu, R. Muñoz, R. Casellas, T. Tsuritani, R. Martínez, and I. Morita, “OpenSlice: an OpenFlow-based control plane for spectrum sliced elastic optical path networks,” Opt. Express 21(4), 4194–4204 (2013). [CrossRef]   [PubMed]  

6. M. Channegowda, R. Nejabati, M. Rashidi Fard, S. Peng, N. Amaya, G. Zervas, D. Simeonidou, R. Vilalta, R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, A. Autenrieth, J. P. Elbers, P. Kostecki, and P. Kaczmarek, “Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed,” Opt. Express 21(5), 5487–5498 (2013). [CrossRef]   [PubMed]  

7. J. Zhang, H. Yang, Y. Zhao, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, “Experimental demonstration of elastic optical networks based on enhanced software defined networking (eSDN) for data center application,” Opt. Express 21(22), 26990–27002 (2013). [CrossRef]   [PubMed]  

8. Y. Zhao, J. Zhang, T. Zhou, H. Yang, Y. Lin, J. Han, G. Li, and H. Xu, “Time-aware software defined networking (Ta-SDN) for flexi-grid optical networks supporting data center application,” GlobeCom2013, Atlanta, USA (2013).

9. R. Nejabati, Y. Shu, B. J. Puttnam, W. Klaus, N. Wada, Y. Awaji, M. Channegowda, N. Amaya, H. Harai, Y. Ou, D. Simeonidou, M. Rashidi, T. Miyazawa, J. Sakaguchi, G. Zervas, S. Yan, and B. R. Rofoee, “First demonstration of software defined networking (SDN) over space division multiplexing (SDM) optical networks,” in Proceedings of ECOC2013, London, UK, Sep.2013, paper PDP4-f-3 (2013).

10. R. Casellas, R. Martínez, R. Munoz, L. Liu, T. Tsuritani, and I. Morita, “An integrated stateful PCE/OpenFlow controller for the control and management of flexi-grid optical networks,” OFC/NFOEC 2013, Anaheim, CA, USA, OW4G (2013).

11. J. Zhang, Y. Zhao, H. Yang, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, “First demonstration of enhanced software defined networking (eSDN) over elastic grid (eGrid) optical networks for data center service migration,” in Proceedings of OFC/NFOEC 2013, Anaheim, CA, USA, Mar.2013, paper PDP5B.1 (2013).

12. A. Lord, A. Autenrieth, P. Gunning, T. Szyrkowiec, J. Elbers, A. Lumb, and P. Wright, “First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack & OpenFlow orchestration,” in Proceedings of ECOC2013, London, UK, Sep.2013, paper PDP4-f-1 (2013).

13. L. Liu, W. R. Peng, R. Casellas, T. Tsuritani, I. Morita, R. Martínez, R. Muñoz, and S. J. Yoo, “Design and performance evaluation of an OpenFlow-based control plane for software-defined elastic optical networks with direct-detection optical OFDM (DDO-OFDM) transmission,” Opt. Express 22(1), 30–40 (2014). [CrossRef]   [PubMed]  

14. L. Liu, H. Choi, R. Casellas, T. Tsuritani, I. Morita, R. Martinez, and R. Munoz, “Demonstration of a dynamic transparent optical network employing flexible transmitters/receivers controlled by an OpenFlow–stateless PCE integrated control plane,” J. Opt. Commun. Netw. 5(10), A66–A75 (2013). [CrossRef]  

15. J. Zhang, Y. Zhao, X. Chen, Y. Ji, M. Zhang, H. Wang, Y. Zhao, Y. Tu, Z. Wang, and H. Li, “The first experimental demonstration of a DREAM-based large-scale optical transport network with 1000 control plane nodes,” Opt. Express 19(26), B746–B755 (2011). [CrossRef]   [PubMed]  
