
The first experimental demonstration of a DREAM-based large-scale optical transport network with 1000 control plane nodes

Open Access

Abstract

A scalable framework named Dual Routing Engine Architecture in Multi-layer, multi-domain and multi-constraint scenarios (DREAM) is proposed to address routing in large-scale dynamic optical networks. A DREAM-based optical transport network testbed with 1000 control plane nodes and multi-terabit-per-second ODUk electrical cross-connects is experimentally demonstrated for the first time. Two routing schemes based on DREAM, namely DRE Forward Path Computation (DRE-FPC) and Hierarchical DRE Backward Recursive PCE-based Computation (HDRE-BRPC), are deployed on the testbed and validated against the traditional hierarchical routing (HR) scheme. Experimental results confirm the good performance of DREAM.

©2011 Optical Society of America

1. Introduction

With the rapid growth of Internet users and the explosive increase in traffic demands, optical transport networks (OTN) are becoming increasingly complex in terms of topology construction and lightpath provisioning. Scalability and flexibility requirements must be met, especially in ultra-high-capacity, large-scale networking environments characterized by multi-layer, multi-domain and multi-constraint (i.e., triple M) conditions. Related requirements and GMPLS protocol extensions have been listed in recent IETF RFCs [1,2]. As a key technology for the scalability of optical networks, routing and resource assignment in triple M scenarios has gained much attention recently. A path computation element (PCE)-based architecture covering triple M scenarios was standardized by the IETF for multi-layer and multi-domain routing [3]. Several other RFCs related to PCE technology have also been published by the IETF, covering requirements for PCE discovery, the PCE Communication Protocol (PCEP), the Backward Recursive PCE-based Computation (BRPC) procedure, and so on [4-7]. In particular, a hierarchical PCE architecture has been proposed to determine the sequence of domains [8], and has been implemented and validated in multi-domain optical networks [9-11]. Other PCE extensions have been proposed for multi-layer optical networks [12,13]. Similar to the PCE, a remote routing controller (RC) was introduced into the ASON/GMPLS control plane [14] to provide comparable path computation capability. To combine the strengths of centralized and distributed routing, a novel scalable framework named Dual Routing Engine Architecture in triple M scenarios (DREAM) has been proposed [15], which takes advantage of both the distributed RC and the centralized PCE. It can not only employ distributed control approaches to achieve fast routing and path establishment, but also provide centralized path computation capabilities in multi-layer, multi-domain and multi-constraint scenarios, thereby achieving effective resource allocation and routing optimization [16].

The rest of the paper is organized as follows. Section 2 introduces the DREAM architecture and the cooperation modes between the Group Engine (GE) and the Unit Engine (UE). Section 3 proposes two routing schemes based on DREAM, i.e., DRE Forward Path Computation (DRE-FPC) and Hierarchical DRE Backward Recursive PCE-based Computation (HDRE-BRPC). Section 4 describes the DREAM-based OTN testbed with 1000 GMPLS-based control plane nodes and 4 optical transport nodes offering fully non-blocking ODUk (k = 0, 1, 2, 3) electrical cross-connect capacity of up to 3.2 Tbit/s, and presents the experimental results of DRE-FPC and HDRE-BRPC compared with HR. Section 5 concludes the paper.

2. A scalable framework for triple M routing in dynamic optical networks: DREAM architecture

DREAM is designed as a scalable framework for triple M routing, as shown in Fig. 1. It emphasizes the cooperation between the Group Engine (GE) and the Unit Engine (UE) to optimize performance for the entire network. These two routing engines and their cooperative relationship form the main feature of DREAM, i.e., the Dual Routing Engine (DRE). Both the GE and the UE maintain link and LSP Traffic Engineering Databases (TEDs) to keep status information (e.g., multi-layer network topologies) within the scope of their own management. A Routing Engine Selector (RES), implemented in the Connection Controller (CC) of each control plane node, chooses the appropriate routing engine for a given routing scheme.

Fig. 1 DREAM architecture.

The UE, embedded in the equipment, is mainly used for path computation and resource configuration in the local domain, while the GE can handle complicated, high-load computation owing to its strong computation capability, large storage space and network-wide visibility. In the GE, the Integrated Computation Element (ICE) performs policy-enabled, multi-constraint-aware, inter-layer and inter-domain path computation. The Multi-Constraint Tailor (MCT) provides different kinds of constraints to the ICE. The Message Policy Analyzer (MPA) acts as the broker of the GE, interpreting the messages and policies delivered through communication protocols such as the PCE communication protocol (PCEP). The OSPF-TE module updates the link TED and LSP TED through intra-domain and inter-domain information flooding.
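
To make the division of labor concrete, the following Python sketch illustrates how a Routing Engine Selector might dispatch path computation requests between a UE (local-domain shortest-path computation over its own TED) and a GE (inter-domain computation). The class names, method signatures and dictionary-based topology format are illustrative assumptions of this sketch, not the authors' implementation.

```python
import heapq

class UnitEngine:
    """UE sketch: shortest-path computation restricted to the local-domain TED."""
    def __init__(self, topology):
        self.topology = topology  # {node: {neighbor: link_cost}}

    def compute_path(self, src, dst):
        # Plain Dijkstra over the local-domain topology.
        dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            for v, w in self.topology.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if dst not in dist:
            return None
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

class GroupEngine:
    """GE sketch: placeholder for inter-domain, multi-constraint computation."""
    def compute_inter_domain_path(self, src, dst):
        raise NotImplementedError("delegated to DRE-FPC or HDRE-BRPC (Section 3)")

class RoutingEngineSelector:
    """RES sketch: picks the UE for local destinations, the GE otherwise."""
    def __init__(self, ue, ge, local_nodes):
        self.ue, self.ge, self.local_nodes = ue, ge, set(local_nodes)

    def route(self, src, dst):
        if dst in self.local_nodes:                    # fast, distributed case
            return self.ue.compute_path(src, dst)
        return self.ge.compute_inter_domain_path(src, dst)  # centralized case

# Example intra-domain request on a toy three-node domain.
ue = UnitEngine({"n1": {"n2": 1}, "n2": {"n1": 1, "n3": 1}, "n3": {"n2": 1}})
res = RoutingEngineSelector(ue, GroupEngine(), ["n1", "n2", "n3"])
print(res.route("n1", "n3"))  # ['n1', 'n2', 'n3']
```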

The GE and the UE each have their own advantages, but it is their cooperation that makes the DRE powerful. Through different cooperation modes, fast and accurate end-to-end routing can be achieved. The cooperation modes that can be used in DREAM are listed in Table 1.


Table 1. Cooperation Modes between GE and UE

The network-and-node mode deserves particular emphasis. As bandwidth demands grow, Tbit/s transmission links and Pbit/s switching nodes become necessary for next-generation optical networks. With current photonics technology, such node architectures will be very complicated and consume considerable power, so how the switching architecture inside a node is configured becomes critical for overall network performance. In the network-and-node cooperation mode, the GE optimizes path computation at the network level, especially under multiple constraints, while the UE optimizes the configuration of resources at the node level, such as time, space, wavelength, phase, polarization and sub-carrier.

3. Routing schemes based on DRE: DRE-FPC and HDRE-BRPC

Compared with the traditional hierarchical routing (HR) scheme shown in Fig. 2(a), two novel routing schemes are proposed based on the DRE, i.e., DRE-FPC and HDRE-BRPC, shown in Figs. 2(b) and 2(c). The three routing schemes are described below.

Fig. 2 Routing schemes.

  • A. HR

    In the hierarchical routing architecture, each domain is abstracted as a single node in the higher layer. One node in each domain is selected as the speaker node, which maintains the abstracted inter-domain topology. When the source control node receives a path computation request whose destination is not in the local domain, it resorts to the speaker node of the local domain. The speaker nodes then cooperate to compute a loose inter-domain path and return it to the source control node. The source node supplements additional path information if the route within its domain is incomplete, and this process is carried out sequentially via signaling messages through the downstream domains until a complete node list along the path is obtained.
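
A minimal Python sketch of the abstraction step is given below: each domain is collapsed to a single vertex of a higher-layer graph built from the inter-domain link list, and the speaker nodes' loose-path computation is modeled as a breadth-first search over that abstracted graph. The data layout and function names are assumptions made for illustration; the speaker-node protocol exchanges themselves are not modeled.

```python
from collections import deque

def abstract_topology(inter_domain_links):
    """Collapse each domain to a single vertex of the higher-layer graph.

    inter_domain_links: iterable of ((node_a, domain_a), (node_b, domain_b)).
    Returns an adjacency map keyed by domain identifier.
    """
    adj = {}
    for (_, dom_a), (_, dom_b) in inter_domain_links:
        adj.setdefault(dom_a, set()).add(dom_b)
        adj.setdefault(dom_b, set()).add(dom_a)
    return adj

def loose_domain_path(adj, src_domain, dst_domain):
    """BFS over the abstracted topology: a loose path crossing the fewest domains."""
    prev, queue = {src_domain: None}, deque([src_domain])
    while queue:
        d = queue.popleft()
        if d == dst_domain:
            seq = []
            while d is not None:
                seq.append(d)
                d = prev[d]
            return list(reversed(seq))
        for nxt in adj.get(d, ()):
            if nxt not in prev:
                prev[nxt] = d
                queue.append(nxt)
    return None  # destination domain unreachable

# Example: four domains chained A-B-C-D through border-node links.
links = [(("a3", "A"), ("b1", "B")),
         (("b4", "B"), ("c1", "C")),
         (("c5", "C"), ("d1", "D"))]
print(loose_domain_path(abstract_topology(links), "A", "D"))  # ['A', 'B', 'C', 'D']
```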

  • B. DRE-FPC

    In DRE-FPC, when the destination is in the local domain, path computation is completed by the UE. When the destination is not in the local domain, the end-to-end path is obtained through the GEs. In the latter case, the entrance GE first determines the GE sequence and computes an inter-domain loose path; each section of the path is then completed in the forward direction by the downstream GEs. The detailed procedure is shown in Fig. 3.

Fig. 3 DRE-FPC flowchart.
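
As an illustration of the forward complementation step, the sketch below assumes the entrance GE has already produced the GE sequence and the loose list of inter-domain border hops; each downstream GE then expands its own strict segment in order. The expand_segment interface and the StubGE class are hypothetical names introduced for this sketch, not part of the authors' implementation.

```python
def dre_fpc_expand(ge_sequence, border_hops, src, dst):
    """Forward expansion of a loose inter-domain path (DRE-FPC sketch).

    ge_sequence: GEs ordered from the source domain to the destination domain.
    border_hops: [(egress_of_domain_i, ingress_of_domain_i+1), ...] chosen by
                 the entrance GE as the loose inter-domain path.
    Returns the strict end-to-end node list, or None if any domain blocks.
    """
    strict_path, entry = [], src
    for i, ge in enumerate(ge_sequence):
        exit_node = border_hops[i][0] if i < len(border_hops) else dst
        segment = ge.expand_segment(entry, exit_node)  # strict intra-domain route
        if segment is None:
            return None
        strict_path.extend(segment)  # segments join across inter-domain links
        entry = border_hops[i][1] if i < len(border_hops) else None
    return strict_path

class StubGE:
    """Trivial stand-in for a domain's Group Engine (lookup table of routes)."""
    def __init__(self, intra_routes):
        self.intra_routes = intra_routes  # {(entry, exit): [node, ...]}
    def expand_segment(self, entry, exit_node):
        return self.intra_routes.get((entry, exit_node))

# Example: two domains connected by the border link a3-b1.
ge1 = StubGE({("a1", "a3"): ["a1", "a2", "a3"]})
ge2 = StubGE({("b1", "b5"): ["b1", "b4", "b5"]})
print(dre_fpc_expand([ge1, ge2], [("a3", "b1")], "a1", "b5"))
# ['a1', 'a2', 'a3', 'b1', 'b4', 'b5']
```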

  • C. HDRE-BRPC

    As in DRE-FPC, when the destination is in the local domain, path computation is completed by the UE, and when the destination is not in the local domain, the end-to-end path is obtained through the GEs. Unlike DRE-FPC, however, in HDRE-BRPC a GE sequence is first obtained by the parent GE, and the path computation proceeds backwards along this sequence, from the last child GE to the first. When a path computation request arrives at a GE and the destination node is found not to be in the local domain, the GE forwards the request to the other GEs. When the GE in charge of the destination node's domain receives the request, it launches the path computation along the GE sequence. A shortest path tree is built during the backward recursion, and the shortest end-to-end path is selected from it. The flowchart is shown in Fig. 4.

    Fig. 4 HDRE-BRPC flowchart.
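
The backward recursion can be sketched as follows: starting in the destination domain, each child GE extends a virtual shortest-path tree rooted at the destination, keyed by the candidate ingress border nodes of its domain, until the source domain is reached. The intra_cost interface and the data layout are assumptions of this sketch; in the real procedure each GE executes only its own step and passes the partial tree upstream via PCEP, which is collapsed into a single function here for readability. A stub GE similar to the one shown for DRE-FPC can be used to exercise this function.

```python
def hdre_brpc(ge_sequence, boundary_links, src, dst):
    """Backward-recursive computation along a GE sequence (HDRE-BRPC sketch).

    ge_sequence:    child GEs ordered source domain -> destination domain.
    boundary_links: boundary_links[i] lists candidate inter-domain links
                    (egress_of_domain_i, ingress_of_domain_i+1).
    Each GE offers intra_cost(a, b) -> (cost, node_list) or None.
    Inter-domain link costs are omitted for brevity.
    Returns (total_cost, end_to_end_node_list), or None if blocked.
    """
    # Virtual shortest-path tree: best (cost, path-to-dst) per ingress node
    # of the domain currently being processed, starting at the destination.
    vspt = {}
    for _, ingress in boundary_links[-1]:
        res = ge_sequence[-1].intra_cost(ingress, dst)
        if res:
            vspt[ingress] = res
    # Walk the intermediate domains backwards, extending the tree.
    for i in range(len(ge_sequence) - 2, 0, -1):
        new_vspt = {}
        for _, ingress in boundary_links[i - 1]:
            best = None
            for egress, down_ingress in boundary_links[i]:
                if down_ingress not in vspt:
                    continue
                res = ge_sequence[i].intra_cost(ingress, egress)
                if res is None:
                    continue
                cost = res[0] + vspt[down_ingress][0]
                if best is None or cost < best[0]:
                    best = (cost, res[1] + vspt[down_ingress][1])
            if best:
                new_vspt[ingress] = best
        vspt = new_vspt
    # Source domain: attach src to the best branch of the tree.
    best = None
    for egress, down_ingress in boundary_links[0]:
        if down_ingress not in vspt:
            continue
        res = ge_sequence[0].intra_cost(src, egress)
        if res is None:
            continue
        cost = res[0] + vspt[down_ingress][0]
        if best is None or cost < best[0]:
            best = (cost, res[1] + vspt[down_ingress][1])
    return best
```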

4. Experimental results

The performance of the proposed DREAM architecture was evaluated on a large-scale optical transport network testbed consisting of a control plane, a transport plane, a management plane and a service plane, as shown in Fig. 5. The distributed control plane, with a total of 1000 GMPLS-based nodes, was implemented on an array of virtual machines created with VMware software running on IBM servers. Since each virtual machine has its own computation resources and operating system, each control node, including its UE, could independently run the extended GMPLS protocols that support DRE cooperation, such as the PCEP and OSPF-TE extensions shown in Fig. 6. Tens of control nodes form a routing domain, in which a GE is deployed. The division into domains and the number of nodes per domain are configured dynamically. The multi-granularity transport plane consisted of 4 OTN/ROADM nodes built from commercial ZTE transmission equipment. Each transport node supports a fully non-blocking electrical cross-connect capacity of up to 3.2 Tbit/s, while OCh switching in the optical layer, based on a ROADM fabric, can reach 12.8 Tbit/s, as verified using a concatenation method (Fig. 7).

Fig. 5 Large-scale OTN testbed.

Fig. 6 Protocol extensions.

Fig. 7 Cross-connect capacity of OTN nodes.

In addition, each transport node provides a great variety of client-side interfaces such as OTUk, STM-N and Ethernet. Besides performance management, fault management and configuration management in the multi-layer and multi-domain environment, the management plane is also responsible for the division of routing domains. Compared with the usual ASON functional architecture, a separate service plane has been introduced as middleware between users and networks to handle various service logics independently of the physical infrastructure [17].

In our experiment, a network topology consisting of 1000 control plane nodes was built according to an edge, metro and core network configuration, as shown in Fig. 8, and was divided into 20 domains, each implemented as a ring or mesh. Electrical sub-layers of ODU0/1/2/3 and optical sub-layers of OCh/fiber were considered for multi-layer routing. We also took into account multiple constraints such as cross-layer/cross-domain traffic engineering and physical transmission impairments. Constraint-based path computation is a basic function, especially for TE-LSP establishment; available bandwidth, diversity, Shared Risk Link Group (SRLG), optical impairments, wavelength continuity and other constraints may all need to be considered. However, it is difficult to compute an optimal path under these constraints within the general GMPLS/ASON routing architecture, whereas the centralized operation of the GE makes constraint-based path computation straightforward. In addition to the available bandwidth, wavelength continuity and link weight constraints, linear physical impairments, including OSNR and chromatic dispersion (CD) accumulated along the fiber and at each optical network element, are considered in reconfigurable all-optical networks to determine whether the signal quality (BER) on a particular lightpath meets the transmission requirements.

Fig. 8 Experimental network topology.
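
The following sketch shows the style of screening a GE might apply to a candidate lightpath: per-link available bandwidth, a common free wavelength across all links, and accumulated linear impairments (OSNR combined as additive noise, chromatic dispersion summed). All thresholds, units and the per-link record format are assumptions made for illustration, not values measured on the testbed.

```python
import math

def path_osnr_db(links):
    """Approximate end-to-end OSNR (dB): per-link noise contributions add linearly."""
    inv = sum(10 ** (-l["osnr_db"] / 10.0) for l in links)
    return -10.0 * math.log10(inv)

def feasible(links, demand_gbps, osnr_min_db=18.0, cd_max_ps_nm=800.0):
    """Check bandwidth, wavelength continuity, OSNR and chromatic dispersion."""
    # 1. Available bandwidth on every link along the path.
    if any(l["free_gbps"] < demand_gbps for l in links):
        return False
    # 2. Wavelength continuity: at least one lambda free on all links.
    common = set(links[0]["free_lambdas"])
    for l in links[1:]:
        common &= set(l["free_lambdas"])
    if not common:
        return False
    # 3. Accumulated linear impairments against assumed thresholds.
    if path_osnr_db(links) < osnr_min_db:
        return False
    if sum(l["cd_ps_nm"] for l in links) > cd_max_ps_nm:
        return False
    return True

# Example with two made-up links.
path = [
    {"free_gbps": 40, "free_lambdas": [1, 2, 3], "osnr_db": 26.0, "cd_ps_nm": 300.0},
    {"free_gbps": 100, "free_lambdas": [2, 5], "osnr_db": 24.0, "cd_ps_nm": 350.0},
]
print(feasible(path, demand_gbps=10))  # True with these example numbers
```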

Three routing schemes were implemented and compared on the testbed, i.e., HR, DRE-FPC and HDRE-BRPC. HR, which may be found in ITU-T G.8080, is commonly used for multi-domain routing, while DRE-FPC and HDRE-BRPC are based on the DRE. For both DRE schemes, when the destination is in the local domain, path computation is completed by the UE; when the destination is not in the local domain, the end-to-end path is obtained through the GEs. The two schemes differ in the latter case. In DRE-FPC, an inter-domain loose path is first obtained by the entrance GE and each section is then completed in the forward direction by the downstream GEs, whereas in HDRE-BRPC a GE sequence is first obtained by the parent GE and the path computation proceeds backwards along the GE sequence, from the last child GE to the first.

To evaluate the service scalability of DREAM, we increased the network traffic from 300 to 500 Erlang with 1000 nodes and 20 domains, and varied the ratio of services confined to the local domain from 0.3 to 0.7. The average blocking probability was calculated upon the arrival of the 10000th service. As shown in Fig. 9, the two proposed DRE strategies yield lower blocking probability than HR, and HDRE-BRPC outperforms DRE-FPC. For all three schemes, the blocking probability decreases as the local-domain service ratio increases, indicating that inter-domain services are more likely to be blocked because they consume more resources; this is seen more clearly in Fig. 10(a). At the same time, as the proportion of inter-domain services grows, the path setup delay increases markedly, because more inter-domain services traverse longer paths with more hops, as shown in Fig. 10(b). For the same reason, the bandwidth utilization and wavelength utilization decrease as the proportion of inter-domain services decreases, as shown in Figs. 10(c) and 10(d).

Fig. 9 Blocking probability performance of DREAM.

Fig. 10 Performance of HDRE-BRPC for different local-domain service ratios.
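
A minimal sketch of how a blocking statistic of this kind could be gathered is shown below: Poisson arrivals with exponential holding times give the offered load in Erlang (arrival rate times mean holding time), requests are split between local-domain and inter-domain destinations according to the given ratio, and the blocking probability is the blocked fraction among the first 10000 arrivals. The try_setup callback stands in for an actual routing scheme (HR, DRE-FPC or HDRE-BRPC); its interface and the toy admission rule are assumptions of this sketch.

```python
import random

def measure_blocking(try_setup, offered_erlang, local_ratio,
                     n_requests=10000, mean_holding=1.0, seed=0):
    """Estimate blocking probability over the first n_requests arrivals.

    Offered load in Erlang equals arrival_rate * mean_holding; local_ratio is
    the share of requests whose destination lies in the source's own domain.
    """
    rng = random.Random(seed)
    arrival_rate = offered_erlang / mean_holding
    t, blocked = 0.0, 0
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)           # next Poisson arrival instant
        local = rng.random() < local_ratio           # intra- vs inter-domain demand
        holding = rng.expovariate(1.0 / mean_holding)
        if not try_setup(arrival_time=t, local=local, holding=holding):
            blocked += 1
    return blocked / n_requests

# Toy admission rule standing in for a real routing scheme: pretend local
# requests are blocked 2% of the time and inter-domain requests 10%.
toy_rng = random.Random(1)
toy = lambda arrival_time, local, holding: toy_rng.random() > (0.02 if local else 0.10)
print(measure_blocking(toy, offered_erlang=400, local_ratio=0.5))
```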

5. Conclusion

To overcome the scalability challenges of future optical networks in triple M scenarios, we have proposed a novel routing architecture, DREAM, and experimentally demonstrated a DREAM-based OTN testbed with 1000 control plane nodes and 4 optical transport nodes offering multi-terabit-per-second ODUk electrical cross-connect capability. Experimental results show that the proposed DRE strategies, especially HDRE-BRPC, achieve much better performance and contribute to improved service scalability and dimension scalability.

Acknowledgments

This work was supported in part by 973 Program of China (2010CB328204), NSFC project (60932004), 863 program (2008AA01A328, 2009AA01Z255), and RFDP Project (20090005110013).

References and links

1. K. Shiomoto, D. Papadimitriou, J. L. Le Roux, M. Vigoureux, and D. Brungard, “Requirements for GMPLS-based multi-region and multi-layer networks (MRN/MLN),” RFC5212, July 2008.

2. D. Papadimitriou, M. Vigoureux, K. Shiomoto, D. Brungard, and J. L. Le Roux, “Generalized MPLS (GMPLS) protocol extensions for multi-layer and multi-region networks (MLN/MRN),” RFC6001, Oct. 2010.

3. A. Farrel, J. P. Vasseur, and J. Ash, “A path computation element (PCE)-based architecture,” RFC4655, Aug. 2006.

4. J. Ash and J. L. Le Roux, “Path computation element (PCE) communication protocol generic requirements,” RFC4657, Sept. 2006.

5. J. L. Le Roux, “Requirements for path computation element (PCE) discovery,” RFC4674, Oct. 2006.

6. J. P. Vasseur and J. L. Le Roux, “Path computation element (PCE) communication protocol (PCEP),” RFC5440, Mar. 2009.

7. J. P. Vasseur, R. Zhang, N. Bitar, and J. L. Le Roux, “A backward-recursive PCE-based computation (BRPC) procedure to compute shortest constrained inter-domain traffic engineering label switched paths,” RFC5441, Apr. 2009.

8. D. King and A. Farrel, “The application of the path computation element architecture to the determination of a sequence of domains in MPLS & GMPLS,” draft-king-pce-hierarchy-fwk-01.txt, July 2011.

9. R. Casellas, R. Muñoz, and R. Martinez, “Lab trial of multi-domain path computation in GMPLS controlled WSON using a hierarchical PCE,” OFC/NFOEC 2011, Los Angeles, CA, USA, Mar. 2011.

10. A. Giorgetti, F. Paolucci, F. Cugini, and P. Castoldi, “Hierarchical PCE in GMPLS-based multi-domain wavelength switched optical networks,” OFC/NFOEC 2011, Los Angeles, CA, USA, Mar. 2011.

11. R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, and M. Tsurusawa, “Dynamic virtual link mesh topology aggregation in multi-domain translucent WSON with hierarchical-PCE,” ECOC2011, Geneva, Switzerland, Sept. 2011.

12. F. Cugini, N. Andriolli, G. Bottari, P. Iovanna, L. Valcarenghi, and P. Castoldi, “Designated PCE election procedure for traffic engineering database creation in GMPLS multi-layer networks,” ECOC2010, Torino, Italy, Sept. 2010.

13. E. Oki, T. Takeda, J. L. Le Roux, and A. Farrel, “Framework for PCE-based inter-layer MPLS and GMPLS traffic engineering,” RFC5623, Sept. 2009.

14. D. Cheng, “ASON routing architecture and requirements for remote route query,” ITU-T G.7715.2, Feb. 2007.

15. Y. Zhao, J. Zhang, R. Jing, D. Wang, and X. Fu, “Protocol extension requirement for cooperation between PCE and distributed routing controller in GMPLS networks,” draft-zhaoyl-pce-dre-01.txt, Oct. 2010.

16. J. Zhang, X. Chen, Y. Ji, M. Zhang, H. Wang, Y. Zhao, Y. Zhao, Y. Tu, Z. Wang, and H. Li, “Experimental demonstration of a DREAM-based optical transport network with 1000 control plane nodes,” ECOC2011, Geneva, Switzerland, Sept. 2011.

17. J. Zhang, L. Wang, X. Chen, and W. Gu, “AMSON: an extended architecture for adaptive service provisioning in transport networks,” OFC/NFOEC 2008, Los Angeles, CA, USA, Mar. 2008.

