Optica Publishing Group

Dynamic virtual link mesh topology aggregation in multi-domain translucent WSON with hierarchical-PCE

Open Access

Abstract

We present a lab trial of a path computation architecture, designed and implemented for multi-domain translucent wavelength switched optical networks (WSON), using hierarchical Path Computation Elements (H-PCE). The approach is based on the dynamic aggregation of the domain topologies, represented as virtual link meshes at a higher level, in order to perform dynamic domain sequence selection. We detail the extended path computation procedures, involving both domain selection (considering inter-domain links) and impairment-aware segment expansion, along with the proposed control plane protocol extensions. We validate the approach by measuring the overall end-to-end path computation latency, highlighting the benefits of parallelization.

©2011 Optical Society of America

1. Introduction

Multi-domain, translucent Wavelength Switched Optical Networks (WSON) with a Generalized Multi Protocol Label Switching (GMPLS) control plane are becoming a key component in core transport architectures, since an operator’s network may include equipment from several vendors, and segmenting the network into domains increases overall scalability. The associated multi-domain path computation function is one of the main drivers behind the adoption (and subsequent extension) of Path Computation Elements (PCE) [1]. For this purpose, the method known as Backwards Recursive Path Computation (BRPC) [2] was standardized as an efficient approach, compared to the per-domain method, provided that the domain sequence is either known or can be discovered.

For arbitrary domain meshes, an approach named hierarchical PCE (H-PCE) is being proposed [3], in which e.g. the domain selection may be delegated to a higher-level entity (a parent PCE). In general, the term hierarchical PCE refers to a family of functional architectures where collaborating PCEs are coupled by a hierarchy relationship, in which each hierarchy level defines the roles and functionalities of each PCE (e.g. domain selection, topology aggregation, path segment computation, etc.).

In this paper, we extend our previous work [4], which was based on a two-level H-PCE with virtual nodes representing domains. Here, a transit domain’s connectivity is dynamically mapped to a mesh of virtual links, which constitutes the basis of the topology aggregation mechanism. We present the design and implementation of the complete architecture, detail the control plane protocol extensions, and validate the system empirically in the ADRENALINE testbed. In the scope of this work, a translucent WSON means sparse Optical-Electrical-Optical (OEO) 3R regeneration with optional wavelength conversion, i.e., we consider pools of 3R elements per node that use, at the output, fully tunable lasers (for the C-band), which allows their use as wavelength converters. Within a domain, the allocation of a 3R regenerator may be triggered by unsatisfactory signal quality (Optical Signal to Noise Ratio or OSNR below a threshold) and/or by failure to fulfill the wavelength continuity constraint (WCC). Note that, for simplicity, it is assumed that 3R regenerators are placed at domain boundaries, which is reasonable considering operator deployment modes and, in turn, reduces the WCC problem to a local issue within each domain.

2. H-PCE deployment model

The following deployment model, shown in Fig. 1(a), has been adopted for this work: a child PCE (c-pce) is deployed at each domain, which, in our case, corresponds to an Autonomous System (AS) with a single Open Shortest Path First-Traffic Engineering (OSPF-TE) area. A single parent PCE (p-pce) is deployed, being responsible for the domain sequence selection using aggregated topologies, as detailed in Section 4.


Fig. 1 (a) H-PCE deployment model: single p-pce and one c-pce per domain. (b) Multi-domain parent aggregated topology view (left) and detailed child domain 12 topology view (right).


Each c-pce establishes a persistent PCEP adjacency with the p-pce, which is used for in-band notifications, topology aggregation updates, and segment expansions. It is the responsibility of each c-pce to perform path computations (e.g. upon request from the p-pce) within its domain, using the detailed topology and impairment-aware algorithms. When these computations are part of an end-to-end path computation, they are referred to as segment expansions.

3. Topology aggregation mechanism

The topology aggregation mechanism refers to the procedure by which a domain is represented at a higher level. This representation is always a trade-off among confidentiality (not disclosing topology internals), performance, and scalability, given the number of network elements to be managed at higher levels.

In this work, all c-pces cooperate in a distributed way to construct the parent Traffic Engineering Database (TED), composed of per-domain virtual link meshes. Each c-pce is responsible for dynamically updating its own virtual mesh, either upon a change in the domain or periodically. To this end, a mesh of paths between border nodes is computed, using either a Minimum Cost Path algorithm (objective function, OF, code 1) or a Maximum Residual Bandwidth algorithm (OF code 3), implemented, respectively, as a modified Dijkstra/Constrained Shortest Path First (CSPF) algorithm that takes the WCC into account, and as a two-step Shortest-Widest path algorithm. The former computes the shortest path between two endpoints; the latter uses two iterations: in the first, the path with maximum bottleneck bandwidth is found, using bandwidth as the criterion for comparison and link relaxation; in the second, links with less than that maximum bandwidth are trimmed, yielding the shortest path whose links all have (at least) the maximum available bandwidth between the endpoints.
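The two-step Shortest-Widest computation can be sketched as follows. This is a minimal illustration, not the paper's implementation; the graph encoding (adjacency dicts mapping each neighbor to a (cost, bandwidth) pair) and function names are assumptions:

```python
import heapq

def widest_bottleneck(graph, src, dst):
    """First iteration: maximum bottleneck bandwidth via a modified Dijkstra
    that relaxes on bandwidth instead of cost."""
    best = {src: float("inf")}
    pq = [(-float("inf"), src)]
    while pq:
        neg_bw, u = heapq.heappop(pq)
        bw = -neg_bw
        if u == dst:
            return bw
        for v, (cost, link_bw) in graph.get(u, {}).items():
            cand = min(bw, link_bw)
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(pq, (-cand, v))
    return 0  # no path

def shortest_widest(graph, src, dst):
    """Second iteration: trim links below the maximum bandwidth, then run a
    plain Dijkstra on the pruned graph (cf. OF code 3 in the text)."""
    bmax = widest_bottleneck(graph, src, dst)
    trimmed = {u: {v: c for v, (c, bw) in nbrs.items() if bw >= bmax}
               for u, nbrs in graph.items()}
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, bmax, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in trimmed.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (d + cost, v))
    return None
```

On a toy graph with two parallel routes, the wider route wins even when an equally short narrower one exists, which is exactly the intended tie-break.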

Several metrics are bound to each resulting path: the additive traffic engineering (TE) metric, obtained from the underlying link cost model, and the convex (minimum over the path links) residual bandwidth. Each path induces a virtual link, using dynamically allocated unnumbered interfaces. The residual path bandwidth defines the link unreserved bandwidth attribute for all 8 priorities (mapped to TE classes), and the link Shared Risk Link Group (SRLG) list is the union of the SRLGs of the path links.
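The attribute derivation for an induced virtual link reduces to three aggregations over the path links. The sketch below uses illustrative field names (not the paper's data model), and collapses the 8 per-priority unreserved-bandwidth values into one for brevity:

```python
def virtual_link_attributes(path_links):
    """Derive virtual TE link attributes from the inducing path:
    additive TE metric, convex (min) residual bandwidth, union of SRLGs."""
    return {
        "te_metric": sum(l["te_metric"] for l in path_links),
        # in practice tracked per priority / TE class (8 values)
        "unreserved_bw": min(l["unreserved_bw"] for l in path_links),
        "srlgs": set().union(*(l["srlgs"] for l in path_links)),
    }
```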

Virtual links are then announced to the p-pce and, together with the inter-domain links, allow the p-pce to maintain an aggregated overall topology. Note that the parent’s resulting directed graph only includes domain border nodes (cf. Fig. 1(b), right). The H-PCE architecture as defined also applies to transparent networks without significant changes, with, if appropriate, the additional consideration of end-to-end wavelength continuity, which needs to be addressed by the parent PCE both at domain sequence selection and at segment expansion. For this, we are currently considering PCEP protocol extensions and their adaptation to H-PCE, notably the addition of a Wavelength-Availability Type-Length-Value (TLV) to the virtual TE link and extra computation restrictions.

Similarly, in the proposed scenario, the transmission rates are the same for all wavelengths and assumed to be 10 Gbps, the requested “service” being a lightpath. This unifies bandwidth management and simplifies the computation of TE link attributes such as unreserved bandwidth or maximum Label Switched Path (LSP) bandwidth per priority. The multi-bit-rate case is out of scope and would require additional considerations for optimum resource usage, including operator policy and the addition of a Wavelength-Availability TLV to the virtual TE link reflecting the available end-to-end wavelengths, similar to what is currently being proposed at the IETF CCAMP WG regarding WSON.

4. Multi-domain path computation procedure

We detail the path computation procedure with the help of Fig. 2(a), considering domains 10, 12 and 13. Nodes within a domain Y have a node identifier of the form Y.0.50.Z, with Z ranging from 1 to the number of nodes within that domain (e.g. from node 10.0.50.1 within domain 10 to node 13.0.50.14 within domain 13). For this example, the optimum domain sequence is assumed to be 10-12-13, with 10.0.50.14 as the exit border node of the source domain and 12.0.50.14 as the exit border node of domain 12.


Fig. 2 (a) H-PCE multi-domain path computation procedure. (b) Detailed PCEP capture for an end to end path computation.


The method is as follows: a path computation client (PCC) located in node 10.0.50.1 requests a multi-domain path computation from the PCE of its domain (src-c-pce) (1), which forwards the request to the p-pce (2). The p-pce identifies the source and destination domains using the reachability information announced by the c-pces during the PCEP handshake (3).

The p-pce requests a Virtual Shortest Path Tree (VSPT) from the PCE of the destination domain (dst-c-pce) and an inverse VSPT (i-VSPT) from the PCE of the source domain (src-c-pce). The VSPT includes the paths from the destination domain entry border nodes to the destination endpoint [2], and we define the inverse VSPT as the tree containing the paths from the source endpoint to the source domain exit border nodes. The p-pce extends the aggregated topology with virtual links induced from the paths of the VSPT/i-VSPT (4), and performs a Core Path Computation on the directed graph formed by the virtual edges. The resulting Core Explicit Route Object (ERO) includes the source and destination endpoints as well as intermediate border nodes from transit domains (5).
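Steps (4)-(5) can be sketched as a graph extension followed by a shortest-path search. This is an illustration under assumed encodings: edges are (node, node, cost) triples, the parent TED already contains the virtual and inter-domain links, and the i-VSPT/VSPT contribute the endpoint-to-border edges:

```python
import heapq

def core_path(parent_ted, i_vspt_edges, vspt_edges, src, dst):
    """Extend the aggregated graph with the virtual edges induced by the
    i-VSPT (source endpoint -> exit border nodes) and the VSPT (entry border
    nodes -> destination endpoint), then run Dijkstra to obtain the Core ERO
    as a node sequence."""
    adj = {}
    for u, v, cost in list(parent_ted) + list(i_vspt_edges) + list(vspt_edges):
        adj.setdefault(u, []).append((v, cost))
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            ero = [dst]
            while ero[-1] != src:
                ero.append(prev[ero[-1]])
            return ero[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = u
                heapq.heappush(pq, (d + c, v))
    return None
```

With the domain sequence of the example (10-12-13), the resulting Core ERO contains the two endpoints plus the border nodes of the transit domain.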

For each Core ERO segment, a request is sent to the corresponding c-pce to expand the segment, which may cache the path that induced the virtual link or compute it dynamically using intra-domain computation (6). Note that two strengths of this approach are that, first, the p-pce can parallelize the expansion of path segments, significantly reducing overall path computation latency [4], and, second, within a domain, the responsible child PCE may compute the optimum segment using the full topology, including extended Traffic Engineering such as Impairment Aware attributes and regenerator placement, fulfilling all the requirements and constraints in the original request.
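The parallel expansion of step (6) maps naturally onto a thread pool. The sketch below is a simplification: `request_expansion` is a hypothetical callable standing in for a blocking PCEP request/reply exchange with the responsible c-pce:

```python
from concurrent.futures import ThreadPoolExecutor

def expand_segments(core_ero_segments, request_expansion, threads=4):
    """Issue segment-expansion requests to the c-pces in parallel, so the
    end-to-end latency approaches the slowest single expansion rather than
    the sum of all of them."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # map() preserves segment order, so the expanded EROs can be
        # concatenated directly into the end-to-end path (step 7)
        return list(pool.map(request_expansion, core_ero_segments))
```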

Finally, the p-pce combines the domain segments into an end-to-end path (7) and sends the reply to the src-c-pce, which ultimately forwards it to the end client (8). Figure 2(b) shows the capture of the control plane PCEP exchanges and the resulting strict ERO.

5. Intra-domain path computation algorithm

Within each translucent optical domain, PCE discovery is statically configured at each GMPLS-enabled connection controller, which acts as a Path Computation Client (PCC) at each request.

An OSNR-aware Routing and Wavelength Assignment (RWA) algorithm [5], with OF code 0x8000 (within the proprietary range), is deployed at each PCE within an OSPF-TE area or domain. It performs intra-domain path computation ensuring path segments with an OSNR level above a given threshold, allocating available 3R regenerators dynamically when needed, either to mitigate the wavelength continuity constraint or to guarantee a satisfactory end-to-end OSNR.
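The two triggers for 3R allocation described above reduce to a simple predicate. This is a toy decision rule for illustration only (parameter names are assumptions, and the actual algorithm [5] evaluates OSNR and wavelength availability per candidate path during RWA):

```python
def needs_3r(segment_osnr_db, osnr_threshold_db, wavelength_available):
    """A 3R regenerator is allocated at a domain border when the accumulated
    OSNR drops below the threshold and/or when no continuous wavelength is
    available (the 3R's tunable output laser then acts as a converter)."""
    return segment_osnr_db < osnr_threshold_db or not wavelength_available
```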

Each child PCE obtains a copy of the TED by parsing the OSPF-TE Node and Link Type-Length-Value (TLV) elements of the TE Link State Advertisements (LSAs), which were, in turn, extended with OSNR information and 3R regenerator availability. The PCE performs RWA, with explicit label and regenerator control by means of LABEL and REGENERATOR Explicit Route Object (ERO) sub-objects [5].

6. PCE communication protocol (PCEP) extensions

Although, functionally, TED management is decoupled from the path computation function and considered orthogonal to it (e.g. in a single-domain context, the mechanism by which a PCE obtains the TED is unspecified), in some cases both functions are slightly coupled, especially for multi-domain aspects such as domain border node identification, endpoint localization, reachability dissemination, inter-domain TE link management and topology aggregation for domain sequence selection.

Consequently, control plane protocol extensions not only address path computation procedures but also functions which are, from a simplistic perspective, part of the topology management functions. We briefly enumerate the PCEP protocol [6] extensions, which extend our previous work [4] and are being proposed for standardization at the IETF PCE working group [7].

Parent/Children relationships and mappings: PCE_ID and DOMAIN_ID TLVs are included in the PCEP OPEN object of the Open message. During handshake, a c-pce announces its ID and the domains it is responsible for.

Endpoint location and reachability information: aggregated per-domain end-point reachability is announced using TLVs that contain, notably, classless inter-domain routing (CIDR) IPv4 prefix sub-objects (e.g. 10.0.50.0/24 for domain 10). This allows the p-pce to locate a given endpoint for a PCEP request, and is proposed as an alternative to polling as discussed in [3,7].
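Endpoint localization from the announced CIDR prefixes is a straightforward lookup; the following sketch uses Python's standard `ipaddress` module and assumes disjoint prefixes (with overlapping prefixes, longest-prefix match would be the natural rule):

```python
import ipaddress

def locate_domain(endpoint, reachability):
    """Map a request endpoint to its domain, using the per-domain CIDR
    prefixes announced at PCEP handshake (e.g. 10.0.50.0/24 for domain 10)."""
    addr = ipaddress.ip_address(endpoint)
    for domain_id, prefix in reachability.items():
        if addr in ipaddress.ip_network(prefix):
            return domain_id
    return None  # endpoint unreachable / unknown
```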

Path Computation Procedures: a new bit is used in the Request Parameters (RP) object to request the inverse VSPT, so a child PCE knows that the parent is requesting the i-VSPT rather than a segment expansion.

Topology Summarization: within a domain, domain border nodes (ABRs and ASBRs) are learnt from Summary and External OSPF LSAs and forwarded to the parent, as are OSPF-TE Inter-AS-TE-v2 LSAs [8], which contain inter-domain link TE attributes and Remote AS Number / ASBR IDs. PCEP Notifications are used to carry topology updates in the vertical direction, using new PCEP TLVs that wrap them either as OSPF-TE LSU packets, OSPF-TE LSA updates or XML files with aggregated topology changes. A new proprietary metric type (201), Residual Path Bandwidth, is used within the METRIC object.

7. Testbed setup and performance evaluation

The performance evaluation of the H-PCE architecture involves several separate issues, regarding the trade-off in the topology aggregation mechanism and the path optimality, the path computation latency, parent PCE potential congestion, etc. In this work, we evaluate, respectively, the scalability of the topology aggregation mechanism, the benefits of parallel segment expansions and the end-to-end path computation latency.

For the first test, we obtain the average time it takes a c-pce to compute the virtual link mesh and update the p-pce TED, using a 50-node random graph with standard GMPLS TE link attributes (Fig. 3). As detailed in Table 1, the aggregation time depends, as expected, on the number of domain border nodes, Nb, since a total of Nb(Nb-1) paths needs to be computed. For example, a child PCE running on commodity hardware (Intel Core 2 Duo PC) is able to compute a full virtual link mesh with 7 border nodes (42 links) in less than 20 ms, including the announcement of the links to the parent PCE. This range seems reasonable for the targeted problem. Of course, the values may increase not only with the number of border nodes / virtual links but also with network congestion towards the parent PCE (the test was carried out in a LAN environment), the parent PCE congestion status, the number of managed child domains, the number of concurrent PCEP requests, and so on.
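The Nb(Nb-1) scaling of the mesh is worth making explicit, since it drives the quadratic growth of the aggregation time with the number of border nodes:

```python
def mesh_size(n_border):
    """Number of directed border-to-border paths (and hence virtual links)
    a c-pce must compute for a full mesh of Nb border nodes: Nb(Nb-1)."""
    return n_border * (n_border - 1)
```

For the 7-border-node case quoted in the text this gives the stated 42 virtual links; doubling the border nodes roughly quadruples the work.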


Fig. 3 Random 50-node topology used to evaluate virtual mesh aggregation latency c-pce → p-pce.



Table 1. Virtual mesh domain aggregation latency, Intel Core2 Duo E8400 3.00GHz, 50-node random graph

Next, the complete architecture has been implemented and deployed in the GMPLS control plane emulator of the ADRENALINE testbed, and evaluated using a four-domain network (Fig. 4) with 14 Label Switch Routers (LSRs) per domain.


Fig. 4 Multi-domain network topology, composed of 4 translucent optical domains (Autonomous Systems).


The control plane emulator of the ADRENALINE testbed [9,10] has 74 GMPLS-enabled controllers without associated optical hardware, enabling maximum flexibility in topology configuration (e.g., number of available wavelengths, connectivity, number of fibers per link, etc.). The GMPLS controllers can be inter-connected following any devised topology, by means of Ethernet point-to-point channels carried over emulated optical links. IP Control Channels (IPCC) in the Control Data Communications Network (DCN) are point-to-point IP interfaces over Ethernet interfaces (IEEE 802.1q VLANs) with optional Generic Routing Encapsulation (GRE) or IP-in-IP tunneling. The set of 74 GMPLS-enabled controllers is implemented in Linux-based routers with Intel Xeon 3.0 GHz, Core 2 Duo E6550 2.33 GHz and Intel Core 2 Duo E8200 2.66 GHz processors. The PCE is a network appliance that implements a Path Computation Element as a multi-threaded, asynchronous process, developed in C++. Dedicated thread pools are responsible for updating the TED and for the actual path computations. Extended functionality and algorithms are implemented as shared libraries / plug-ins, using an abstracted algorithm Application Programming Interface (API). It supports most current IETF PCE WG standards and has been successfully deployed in single and multi-domain scenarios within both WSON and MPLS Transport Profile (MPLS-TP) networks.

The second test highlights the benefits of parallelizing the parent-child expansion requests (Fig. 5), obtaining the end-to-end path computation latency for a variable number of threads in the parent (i.e. the ability to launch requests to children in parallel) and a variable average intra-domain computation latency (modeling more complex algorithms). To this end, a set of 10^4 requests was generated sequentially, following a Poisson process (after a computation, the next request was issued after waiting X seconds, where X follows a negative exponential distribution with an average of 1 s), selecting endpoints from domains 10-13, executed between two mesh computations. As expected, the longer the intra-domain path computation, the longer the end-to-end latency, which can be reduced by parallelizing the requests: the slope of the line depends on the ratio of involved domains to available threads, with single-thread (serial) expansion being the worst scenario. For considerably long intra-domain delays, the reduction for a fully parallel expansion is more than 50% (from ~200 ms to ~80 ms).
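The request workload described above (exponentially distributed inter-request gaps with a 1 s mean) can be reproduced with the standard library; the function name, seed and parameters are illustrative, not the testbed's generator:

```python
import random

def poisson_arrival_gaps(n_requests, mean_gap_s=1.0, seed=42):
    """Inter-request waiting times: after each computation the next request
    is issued after X seconds, with X drawn from a negative exponential
    distribution (i.e. Poisson arrivals)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.expovariate(1.0 / mean_gap_s) for _ in range(n_requests)]
```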


Fig. 5 End-to-end average path computation delay based on parallelization (parent threads) and intra-domain path latency (intra domain child algorithm).


As expected, there is a clear linear relationship between the intra-domain path computation time and the end-to-end delay, the slope depending on the number of threads available for segment expansion (the higher the thread count, the lower the slope), up to a number of threads equal to the domain sequence size. In order to evaluate higher moments of the end-to-end delay, we obtain the path computation histogram, shown in Fig. 6. Note that, in general, the shown values may increase with frequent or concurrent re-layout of the link meshes, a parameter left to the network operator. Like the previous key performance indicators, the overall computation latency confirms the applicability of the approach, at least for mid-sized domains, which is the size of domain meshes in scope for H-PCE. Note that the update also includes the transmission of the information to the parent PCE (in a dedicated LAN).
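The trend in Fig. 5 is captured by a first-order latency model: expansions run in batches of at most T threads, so the intra-domain delay is multiplied by ceil(D/T) and the slope saturates once T >= D. The constant overhead term (parent processing, PCEP round trips) is our assumption, not a measured value:

```python
import math

def e2e_latency_model(n_domains, n_threads, intra_delay_ms, overhead_ms=0.0):
    """First-order model of end-to-end path computation latency: the D
    segment expansions run in ceil(D/T) sequential batches of T threads."""
    batches = math.ceil(n_domains / max(1, n_threads))
    return batches * intra_delay_ms + overhead_ms
```

For the 4-domain case, going from 1 thread to full parallelism divides the expansion term by 4, consistent with the >50% reduction reported in the text once fixed overheads are added.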


Fig. 6 Histogram for the path computation latency in the 4-domain network, OSNR-aware algorithm with 3R allocation and full parallelization.


The shape of the histogram shows that most of the time the path computation latency is between 4 and 6 ms, with occasional spikes due to the non-real-time scheduling of the PCE’s operating system, concurrent tasks such as virtual layout re-computation, and network jitter (the latter being almost negligible).

Finally, for completeness and regarding the numerical results, the consideration of the wavelength continuity constraint at the multi-domain level is not expected to significantly increase concrete key performance indicators such as the end-to-end path computation latency or the setup delay (assuming PCE based RWA) although, as expected, the blocking probability would increase due to the additional restrictions.

8. Conclusions

We have presented a complete system deployed in a lab trial, evaluating both the scalability of the domain topology aggregation mechanism, based on a virtual-link full mesh, and the end-to-end path computation latency.

The H-PCE is considered a promising solution for multi-domain path computation with arbitrary domain meshes. It is of particular interest in multi-domain optical networks, since it combines dynamic domain selection, taking into account the aggregated state of each translucent domain and the state of inter-domain links, with advanced (e.g. impairment-aware) path computation within a domain. The scope of H-PCE is limited to small domain meshes, such as areas within an AS, or ASes with peering agreements within e.g. a single carrier. The solution has important strengths, such as minimizing overall path computation latency by parallelizing segment expansions, and it is complementary to the existing BRPC, allowing arbitrary meshes and different deployment models.

An open issue remains the ownership of the parent PCE, which, in this work, is assumed to be unique for the multi-domain network.

Acknowledgments

This work has been partially funded by the MICINN (Spanish Ministry of Science and Innovation) through the Cecyt project DORADO (TEC2009-07995), and by the European Community’s Seventh Framework Programme FP7 / 2007-2013 through the Integrated Project (IP) STRONGEST under grant agreement no 247674.

References and links

1. A. Farrel, J. P. Vasseur, and J. Ash, “A path computation element (PCE)-based architecture,” IETF RFC 4655 (2006), http://tools.ietf.org/html/rfc4655.

2. J. P. Vasseur, R. Zhang, N. Bitar, and J. L. Le Roux, “A BRPC procedure to compute shortest constrained inter-domain TE label switched path,” IETF RFC 5441 (2009), http://tools.ietf.org/html/rfc5441.

3. D. King, A. Farrel, Q. Zhao, and F. Zhang, “The application of the PCE architecture to the determination of a sequence of domains in MPLS and GMPLS,” Internet-Draft, draft-king-pce-hierarchy-fwk (work in progress), http://tools.ietf.org/html/draft-king-pce-hierarchy-fwk-06.

4. R. Casellas, R. Muñoz, and R. Martínez, “Lab trial of multi-domain path computation in GMPLS controlled WSON using a hierarchical PCE,” in Optical Fiber Communication Conference and Exposition and National Fiber Optic Engineers Conference (OFC/NFOEC 2011), Technical Digest (CD) (Optical Society of America, 2011), paper OThI5, http://www.opticsinfobase.org/abstract.cfm?URI=OFC-2011-OThI5.

5. R. Martinez, R. Casellas, R. Muñoz, and T. Tsuritani, “Experimental translucent-oriented routing for dynamic lightpath provisioning in GMPLS-enabled wavelength switched optical networks,” J. Lightwave Technol. 28(8), 1241–1255 (2010).

6. J. P. Vasseur and J. L. Le Roux, eds., “Path computation element (PCE) communication protocol (PCEP),” IETF RFC 5440 (2009), http://tools.ietf.org/html/rfc5440.

7. F. Zhang, Q. Zhao, O. González de Dios, R. Casellas, and D. King, “Extensions to path computation element communication protocol for hierarchical PCE,” Internet-Draft, draft-zhang-pce-hierarchy-extensions-00 (work in progress), http://tools.ietf.org/html/draft-zhang-pce-hierarchy-extensions-00.

8. M. Chen, R. Zhang, and X. Duan, “OSPF extensions in support of inter-autonomous system (AS) MPLS and GMPLS traffic engineering,” IETF RFC 5392 (2009), http://tools.ietf.org/html/rfc5392.

9. R. Muñoz, R. Martinez, and R. Casellas, “Challenges for GMPLS lightpath provisioning in transparent optical networks: Wavelength constraints in routing and signalling,” IEEE Commun. Mag. 47(8), 26–34 (2009).

10. CTTC Optical Networking Area GMPLS Control Plane Emulator wiki, http://wikiona.cttc.es/ona/index.php/GMPLS_Control_Plane_Emulator_with_PCE.
