
Providing deterministic quality of service in slotted optical networks


Abstract

This paper proposes an efficient framework for deterministic service guarantees in slot-based optical networks. The framework uses the combination of a control plane and a data plane to solve the complex problem of capacity allocation, slot-matching and traffic scheduling. The control plane implements admission control and capacity allocation to source-destination node pairs and the data plane handles traffic aggregation, buffering, and scheduling. We propose an efficient algorithm in the control plane for slot collision-resolution. In the data plane, we present a comprehensive aggregation and scheduling mechanism that realizes Service Curves assurance. We use the Time-Domain Wavelength Interleaved Network (TWIN) architecture for the proof of concept and conduct extensive simulations to assess the performance of the algorithm and scheduling mechanism.

©2006 Optical Society of America

1. Introduction

The massive growth of real-time traffic, from voice and gaming to video applications, over the best-effort Internet calls for a reconsideration of how quality of service (QoS) should be provided in the next-generation Internet (NGI). The existing Internet service, carried over synchronous digital hierarchy (SONET/SDH) networks and perceived as a simple transmission pipe, offers only best-effort delivery and cannot differentiate among applications with different QoS requirements. Consequently, it cannot support the transmission of multimedia traffic with quantitative QoS requirements, the likely source of future revenue for telecom carriers.

One of the leading solutions is the wavelength division multiplexing (WDM) network. WDM networks offer multiple channels, each with a bandwidth of 10 Gb/s, between a source-destination pair, whereas SONET is limited to one point-to-point channel. However, the traffic on current end-to-end wavelength connections has not scaled to the full capacity, limiting the economy of scale for end-to-end wavelength services. Several proposals address this problem, e.g., Optical Packet Switching (OPS) [1] and Optical Burst Switching (OBS) [2]. However, both require ultra-fast optical switches for packet forwarding and optical buffering for resolving packet contention (in OBS, optical buffering is optional but improves performance), and the technology for manufacturing these optical devices is still far from mature. Recently, a new optical transport network called the Time-Domain Wavelength Interleaved Network (TWIN), with a light core and intelligent edges, was proposed in Ref. [3]. TWIN is designed to serve as a metropolitan area network (MAN). It uses a transmission mechanism completely different from those of OPS and OBS, and it avoids practical difficulties such as the need for optical buffering and fast switching in the optical domain.

Briefly, TWIN [3] defines a slot-based optical network consisting of a simple core of wavelength-selective cross-connects (WSXCs), capable of merging incoming signals of the same wavelength onto the same outgoing fiber, and an intelligent edge whose fast tunable lasers emulate fast switching of data in the core. Figure 1 shows a simple TWIN architecture with two destination nodes G and F; the letter in each box (slot) denotes the source of that traffic burst. In TWIN, a unique wavelength λj is assigned to each destination node j, and source nodes tune to λj to transmit to node j. The route between each source-destination node pair is fixed in the TWIN architecture, and the routes from multiple sources to a destination form a multipoint-to-point tree (see the dashed-line and dotted-line trees in Fig. 1). Signals are aggregated at the aggregation devices (ADs) and routed in the network by passive components, namely the WSXCs. Once preprovisioned, each WSXC self-routes the optical signals on the fly based on their wavelengths. Compared with OPS [1] and OBS [2], TWIN physically eliminates the need for fast optical switching and optical buffering: tunable lasers at the sources emulate fast switching in the network core and enable efficient traffic grooming by tuning the wavelength at designated times. Inherent in the TWIN architecture, the number of nodes in the network is restricted by the feasible number of wavelengths in a fiber. This issue has been addressed in a recent work called TWIN with wavelength reuse (TWIN-WR) [4], which supports N nodes with roughly √N wavelengths and makes TWIN more practical for implementation. In this paper, we use the TWIN architecture to demonstrate the operation of our proposed slot-based optical framework.

Fig. 1. Traffic flow to nodes G and F in a TWIN network

As more and more multimedia traffic is transmitted over the Internet, there is a growing need for quality of service (QoS) models that support applications with strict performance requirements beyond the traditional best-effort IP service. However, providing QoS guarantees in WDM optical networks is a challenging task. Despite the large body of work on per-flow guarantees in the Internet, there is very little research in this area, especially on deterministic QoS provisioning in optical networks. Reference [5] proposed a differentiated service model for WDM-based optical networks; such a model offers only very coarse QoS provisioning, based on classifying each alternate optical lightpath according to its quality of optical transmission and reliability. Reference [6] introduced a hierarchical scheduling framework for WDM networks that offers QoS guarantees for two kinds of traffic applications. Reference [7] considered supporting service differentiation among traffic flows in an optical burst switching network by setting different offset times for the control signals of different flows. Most of these works provide statistical averages derived from queueing theory for equilibrium systems. Reference [8] provided deterministic QoS guarantees in WDM optical networks based on max-plus algebra [15]. However, the work in Ref. [8] assumes a passive star coupler, a single-hop local area network in which the distances among node pairs are the same, and it does not apply to a more complex network. To this end, our work provides deterministic QoS guarantees in a WDM optical network with a mesh topology, which allows different distances among node pairs.

In this paper, we present an efficient framework for deterministic service guarantees in slot-based optical networks. The framework uses the combination of a control plane and a data plane to solve the complex problem of capacity allocation, slot-matching and traffic scheduling. We apply the Time-Domain Wavelength Interleaved Network (TWIN) architecture for the proof of concept. Notably, our framework is completely different from OPS [1] or OBS [2]. In the framework, the control plane implements admission control and capacity allocation to source-destination node pairs and the data plane handles traffic aggregation, buffering, and scheduling. We propose an efficient algorithm in the control plane for slot collision-resolution. In the data plane, we present a comprehensive aggregation and scheduling mechanism that realizes Service Curves assurance. We conduct extensive simulations to assess the performance of the algorithm and scheduling mechanism.

Bandwidth provisioning without considering the effects of non-conformant greedy flows is not sufficient to provide end-to-end guarantees. Our mechanism solves this problem by monitoring the input traffic and redirecting non-conformant traffic to best-effort service. The delay bound of a flow can normally be resolved into a fixed component and a variable component [22]: the fixed component is due mainly to the fixed overheads of packet delivery, while the variable component is due mainly to queueing. Our proposed mechanism controls the queueing delay through specific assembly and scheduling schemes.

The idea of a fixed delay may be questionable for wide area networks that span regions with different climates. In this work, we consider TWIN only as a MAN; factors such as temperature and fiber position that affect long-haul networks have very little effect on TWIN. Hence the propagation delays between node pairs in TWIN can be deemed constant. Under this assumption, the end-to-end delay guarantee problem reduces to a queueing delay guarantee problem at the source node. This is a desirable simplification, because it would be extremely difficult to provide deterministic delay bounds for traffic flows if the propagation delays were uncertain (e.g., uncertain routes).

The paper is organized as follows. Section 2 describes the proposed framework. We introduce the details of the control-plane and data-plane functions in Sections 3 and 4, respectively, together with numerical results. Our conclusion is provided in Section 5.

2. Framework configuration overview

Fig. 2. TWIN network with a bandwidth broker controller

Figure 2 shows the main structure of our proposed framework. As introduced in Section 1, the framework consists of a control plane and a data plane. Most of the control plane is implemented in the Bandwidth Broker (BB) shown in Fig. 2. The flow of information in Fig. 2 shows how aggregated requests propagate to the BB of a TWIN, considered as an autonomous region [9]. The BB [9, 10] stores the state information of local connections, which becomes the input to the admission control function of the control plane. Each ingress node acts as the external interface of the control plane. On receiving a flow request, the control plane verifies whether the source-destination node pair has sufficient available capacity to meet the request before admitting the flow. Details are discussed in Section 3.

The data-plane function is implemented in every edge node, which is equipped with a slot assembly and scheduling mechanism. We consider how the mechanism handles multiple classes; the single-class case can be treated as a special case. Traffic is channeled into queues according to its delay budget (explained in Section 4.2.1). Just before a slot transmission, the node assembles traffic from these queues into a burst and transmits it in its assigned slot. Details of the process are discussed in Section 4.

3. Control plane

3.1 Overview

Capacity allocation and the matching of conflict-free source-destination time slots are two of the main functions of the control plane. Capacity, in terms of periodic slots, is allocated to the traffic between a source-destination node pair according to specific flow requests during admission control. While the TWIN architecture aggregates traffic from different source nodes to a destination node using a particular wavelength, it assumes that the source nodes have perfect knowledge of which slot to transmit in. Ross et al. [11] proposed a centralized controller for slot scheduling. In contrast, we propose to allocate periodic slots to edge nodes according to traffic requirements. In doing so, we encounter the problem of slot contention caused by the different propagation delays between different source-destination pairs for a particular destination node. To overcome timeslot conflicts at the source nodes, the control plane performs timeslot synchronization among all source nodes for each destination node.

Figure 3 illustrates the different propagation delays to a particular destination node. For node N, the propagation delays from nodes 1, 2, ..., N-1 are t1, t2, ..., tN-1, respectively. The problem is more challenging than the zero-delay case (Birkhoff-von Neumann switch) or the equal-delay case (star topology), which have been discussed in Refs. [11] and [12], respectively. We begin our analysis with a TWIN having zero or equal source-destination delays and then extend the results to the case of different delays.

Fig. 3. Slot transmission in a period

3.2 Propagation delay compensation

For a TWIN network of N nodes with zero or equal propagation delay among all node pairs, we assume that traffic arrivals at all edge nodes are uniformly distributed. The utilization of the network (defined as the average ratio of used slots to the total number of slots) is given by the following lemma:

Lemma 3.2.1: In a TWIN network of N nodes and zero or equal propagation delay between each source-destination node pair, the number of slots in a period for full reachability and 100% utilization is N-1.

Proof:

When every node has traffic for the other N-1 nodes, each node must make N-1 transmissions in a period, with one slot transmission to each destination; hence at least N-1 slots are needed for full reachability. Next, we show that N-1 slots are sufficient for 100% utilization. With no propagation delay, a burst sent in the m-th slot of the period is received in the same slot. Hence we can construct the following schedule: in the first slot, each node selects any one of the N-1 destinations such that no two nodes select the same destination, e.g., node 1 selects node 2, node 2 selects node 3, ..., and node N selects node 1. In fact, there are (N-1)! combinations with no source-destination conflict for all source nodes in the first slot.

In the second slot, each node selects any one of the remaining N-2 destinations (excluding the destination already selected in the previous slot), again without overlap, e.g., node 1 selects node 3, node 2 selects node 4, ..., and node N selects node 2. This procedure is repeated for the remaining N-3 slots; in the last, (N-1)-th, slot there is exactly one destination left for each node. Thus each node sends exactly one conflict-free burst to each of the other N-1 destination nodes, utilizing every slot in every period and achieving 100% network utilization. The same holds when all source-destination node pairs have an equal propagation delay, because the delay merely shifts the arrival times at the destinations by a constant without causing any collision among transmissions; it is a time-shifted version of the zero-propagation-delay case. ▪

Zero or equal propagation delay in the network are ideal cases that simplify the matching of input and output slots without the difficulty of slot collisions. In a practical network with a mesh topology, the random distribution of propagation delays is one of the main causes of performance degradation. An intuitive remedy is to equalize all propagation delays to that of the longest link among all node pairs, but such a solution unnecessarily increases the propagation delay between node pairs.

We solve this synchronization problem efficiently using the periodic property of the slot allocation. To satisfy Lemma 3.2.1, a small compensation delay can be added to the delay of each node pair so that a source node transmits into a particular slot of the N-1 slots at the destination node. We thereby compensate for the differences in propagation delay virtually, using the property of the periodic N-1 slots. This synchronization scheme is stated in the following theorem.

Theorem 3.2.1: In a network of N nodes, let δij denote the propagation delay between node i and node j, expressed in number of slots. If (δij mod (N-1)) is the same integer constant for all i, j, then the minimum number of slots in a period to achieve 100% utilization is N-1.

Proof:

We prove that when the condition “(δij mod (N-1)) is an integer constant” is satisfied, the conclusion is the same as zero or equal propagation delay case in lemma 3.2.1.

Assume for any node i and j,

$\delta_{ij} \bmod (N-1) = p, \quad 0 \le p < N-1,\ p \text{ a constant} \qquad (1)$

It is clear that a burst sent in the m-th slot of the current period arrives at the destination in slot q = (m + δij) mod (N-1) of a later period. Combined with assumption (1), q = (m + δij) mod (N-1) = (m + p) mod (N-1); hence q depends only on m. By the periodicity of the assignment, different values of m map to different values of q, so there is no slot overlap in any period. Hence the requirement of Lemma 3.2.1 is satisfied, fulfilling the condition for 100% utilization. ■

Theorem 3.2.1 fulfils the synchronization condition required in Lemma 3.2.1 using the properties of the periodic N-1 slots. To satisfy condition (1), a small compensating delay must be added to each node pair (i, j). Denote by βij (in number of slots) the compensating delay added to the source-destination link (i, j), by δ'ij the original propagation delay of link (i, j), and by δij the propagation delay after compensation, so that δij = δ'ij + βij. According to (1), (δ'ij + βij) mod (N-1) = p, where p is an integer constant and 0 ≤ p < N-1. If (δ'ij mod (N-1)) ≤ p, then βij = p - (δ'ij mod (N-1)) < N-1; if (δ'ij mod (N-1)) > p, then βij = (N-1) + p - (δ'ij mod (N-1)) < N-1. Therefore βij < N-1, i.e., every compensating delay is smaller than the period τ. Since the period τ is quite small compared to the longest propagation delay, this is a reasonable scheme.
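
To make the compensation concrete, the following minimal sketch (our own illustration, not part of the original scheme's specification) computes the compensating delay βij for a given original delay δ'ij and target residue p, and the destination slot q reached from a source slot m.

def compensating_delay(delta_prime, N, p):
    """Smallest beta (in slots) such that (delta_prime + beta) mod (N-1) == p."""
    return (p - delta_prime % (N - 1)) % (N - 1)

def destination_slot(m, delta, N):
    """Slot q in which a burst sent in source slot m arrives, given delay delta (in slots)."""
    return (m + delta) % (N - 1)

# Example: N = 5 nodes, target residue p = 2, original delay of 7 slots.
beta = compensating_delay(7, 5, 2)        # 7 mod 4 = 3, so beta = (2 - 3) mod 4 = 3
assert (7 + beta) % 4 == 2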

3.2.1 Electrical domain delay compensation

It is not obvious how such compensating delays can be added in the optical domain: a total of N-1 specific optical delays would be needed at each source node, one for synchronization to each of the N-1 destination nodes, which is impractical to implement. To overcome this problem, we add the compensating delays in the electrical domain and show that this is a feasible but suboptimal solution. Adding a compensating delay in the electrical domain means queueing a burst in the buffer for that delay before transmission. Because of this delay, the ideal condition of Theorem 3.2.1 is violated: while the slots at the destination nodes remain conflict-free, conflicts can arise at a source node among its traffic to different destination nodes. For example, consider a traffic burst between a node pair (i, j) designated for transmission in slot tij according to Theorem 3.2.1. The electrical delay compensation shifts the transmission slot from tij to ((tij + βij) mod (N-1)). A conflict arises if ((tij + βij) mod (N-1)) maps to the same slot for more than one node pair with source i.

In other words, electrical-domain compensation at the source nodes can lead to some conflicts. We next propose an algorithm to resolve these conflicts.

3.3 Conflict resolution algorithm

The theoretical estimation of the network utilization due to such slot conflicts at a source is given by the following lemma:

Lemma 3.3.1: In a TWIN network of N nodes using a period of N-1 slots, let δij denote the propagation delay between node i and node j. When (δij mod (N-1)) (∀i, j) is uniformly distributed in [1, N-1] and electrical-domain delay compensation is used, the average utilization of the network is given by

$\Phi = \dfrac{\sum_{k=1}^{N-1} C_{N-1}^{k}\,\mathrm{Str}(N-1,k)\,k!\,k}{(N-1)^{N}} \qquad (2)$

where Str(N-1, k) is the Stirling number of the second kind.

Proof: The N-1 slots in a period can be viewed as N-1 boxes. If two or more transmission times (after delay compensation) fall into the same box, only one can succeed. If (δij mod (N-1)) is uniformly distributed in [1, N-1], then the transmission slots to the destinations in a period after electrical compensation are also uniformly distributed in [1, N-1]. Hence the problem reduces to calculating the average throughput of each source node under a uniform arrival process, which is equivalent to the problem in Ref. [6]. ■
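
The formula of Lemma 3.3.1 can be evaluated directly. The sketch below is our own check (not from the paper), with the Stirling numbers computed by the standard recurrence; it also compares Φ with the balls-in-boxes expectation 1 - (1 - 1/(N-1))^(N-1), which tends to 1 - 1/e ≈ 0.632 for large N, consistent with the utilization of about 0.63 reported in Section 3.4.

from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def utilization(N):
    """Phi of Lemma 3.3.1: expected fraction of the N-1 slots carrying a successful burst."""
    M = N - 1
    num = sum(comb(M, k) * stirling2(M, k) * factorial(k) * k for k in range(1, M + 1))
    return num / M**N

N = 100
print(utilization(N))                      # ≈ 0.634
print(1 - (1 - 1 / (N - 1)) ** (N - 1))    # same value from the expected-occupancy identity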

Lemma 3.3.1 provides a theoretical estimate of the number of successful slots in a period for a network with uniformly random source-destination propagation delays. Notably, some available slots remain unused. We propose a simple Conflict-Resolution Algorithm (CRA) to improve performance.

CRA Algorithm:

While not Successful and not the end of free slots at the source node
    Search for the next free slot at the source node
    Compute the corresponding destination slot based on the propagation delay
    If a conflict-free source-destination slot pair is available
        Assign the source-destination slot pair
        Set Successful
    End-If
End-While

If Successful is set
    Accept the connection request
Else
    Reject the connection request
End-If

Since the BB runs the CRA algorithm, its computational complexity must be reasonable. To allocate a timeslot, the CRA algorithm searches for a free timeslot at the source node and the corresponding free timeslot at the destination node. If the destination slot is already occupied, the algorithm moves on to the next free timeslot at the source and recomputes the destination slot. Because there is a one-to-one mapping between the timeslots of a source-destination node pair after electrical delay compensation, the worst-case complexity of the search is a reasonable O(2N).
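
As a concrete illustration, the following Python sketch (our own rendering of the pseudocode above, with hypothetical data structures) performs the greedy search: it scans the free slots at the source, maps each to its destination slot via the compensated propagation delay, and accepts the first conflict-free pair.

def cra_assign(free_source_slots, occupied_dest_slots, delay_slots, period):
    """Return an assigned (source_slot, destination_slot) pair, or None to reject.
    free_source_slots: set of free slot indices at the source node
    occupied_dest_slots: set of slot indices already taken at the destination
    delay_slots: compensated propagation delay between the pair, in slots
    period: number of slots per period (N-1)"""
    for s in sorted(free_source_slots):
        q = (s + delay_slots) % period           # corresponding destination slot
        if q not in occupied_dest_slots:
            free_source_slots.discard(s)         # assign the conflict-free pair
            occupied_dest_slots.add(q)
            return s, q
    return None                                  # no conflict-free pair: reject the request

# Example: period of 4 slots, source slots {0, 1, 2} free, destination slots {1, 2} taken.
print(cra_assign({0, 1, 2}, {1, 2}, delay_slots=1, period=4))   # (2, 3)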

In periodic slot assignment, the control plane may assign one or multiple slots within a single period or over multiple periods to a source-destination node pair. A further question is how to choose the period τ. The period τ defines the maximum waiting time for a service slot, and therefore the minimum delay bound that can be offered; higher delay bounds that are integer multiples of τ can also be provided. Hence τ should be as small as possible, subject to the minimum slot size and the number of slots in each period.

In summary, the CRA algorithm searches for conflict-free source-destination slots for capacity allocation between the node pair. If such a slot pair is not available, no further capacity can be allocated to the source-destination node pair. We apply ns-2 simulations to assess the performance of the combination of our delay compensation scheme and the CRA algorithm in the next subsection.

3.4 Simulation results

We conduct the first simulation for a TWIN of 100 nodes with 99 slots in each period τ. The TWIN is a randomly generated mesh network. The modulus (with respect to N-1 = 99) of the propagation delay between any source-destination node pair is uniformly distributed between 1 and 99, in accordance with the condition of Lemma 3.3.1. In the experiment, we assume that the aggregate traffic from each source requires one slot per period, and two cases are considered. In the first case, conflicts are treated as failures with no further action. In the second case, a conflict triggers the CRA algorithm to search for an alternative conflict-free source-destination slot pair in the period; if no such pair is available, the request is rejected.

Table 1 shows the results of the first simulation (first column). In the table, a network load of 1 means that every node has a slot of traffic to send to every other node in every period τ. In this case, conflicts reduce the utilization to 0.63, which is close to the prediction of Lemma 3.3.1 and to recent results in Ref. [13]. After applying the CRA algorithm, the resulting utilization improves by about 40 to 50% for network loads from 0.7 to 1.

In experiment 2, we repeat the simulation for TWIN networks with 200 and 400 nodes; the other conditions are the same as in experiment 1. Table 1 shows the results (second and third columns). The results show a similar improvement in utilization with the CRA algorithm. The performance is slightly better for networks with more nodes, because more available slots give more choices for conflict resolution.

To determine the effect of increasing the number of slots per period τ, we conducted a third experiment: for a network of N = 200 nodes, we double the number of slots in each period to 2(N-1). In this experiment, each node requests two slots, and CRA may allocate non-consecutive slots in the period. The results are shown in Table 2; only small improvements are observed when CRA is used.

Table 1. Network utilization in TWIN networks with different numbers of nodes

Table 2. Network utilization in a 200-node TWIN network for 2-slot allocation

Table 3. Network utilization in a 200-node TWIN network for random 1- to 4-slot allocation

In our final experiment, we assume that an aggregate flow can request a random number of slots, from 1 to 4, in a 200-node TWIN. This experiment assesses the impact of traffic demand on network utilization. Table 3 shows the simulation results; there is little difference from experiment 2. Experiments 2 and 3 imply that there is little advantage in increasing the number of slots per period once the additional overheads are taken into account.

3.5 Periodic timeslot service

We apply service curve theory [14, 15] to quantify the service provided by the periodic timeslots assigned to a source-destination node pair. Owing to space limitations, readers are referred to [14, 15, 16] for basic definitions and applications of service curves. The service offered by a stream of periodic slots is defined as follows:

Definition 3.5.1: The service offered by a stream of periodic slots to an aggregate flow I between a source-destination pair is given by the slotted service curve

$S_I(t) = \sum_{k=1}^{\infty} w_I\, b\, u(t - z_I k \tau), \quad \text{where } u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases} \qquad (3)$

where wI is the number of slots assigned to the aggregate flow I, b is the number of bits in a timeslot, and zIτ is the period of the assignment for aggregate flow I.

The timeslot duration and period are set by the parameters wI and zI; by varying these two parameters, we can construct a slotted service curve with a specific timeslot duration and period. Figure 4 shows the slotted service curve for wI = 1 and zI = 1. For ease of exposition, the following notation is used in the rest of this paper: S̄I(t) denotes the arrival curve of the aggregate flow I, SI(t) its service curve, AI(t) its actual traffic envelope, and dI its queueing delay budget. To represent an individual flow i between this source-destination pair, we replace the subscript by the lowercase letter i. The slotted service curve offers a "staircase" service with a step of wIb bits after each period zIτ and no service between steps. The proposed slotted service curve has useful properties that support deterministic QoS guarantees for admission control and slot scheduling in the data plane in Section 4.
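
Since u(·) in Eq. (3) is the unit step, the slotted service curve reduces to wIb·⌊t/(zIτ)⌋ for t ≥ 0. The short sketch below is our own illustration of this staircase; the parameter values in the example are arbitrary.

import math

def slotted_service_curve(t, w_I, b, z_I, tau):
    """S_I(t) = sum_{k>=1} w_I*b*u(t - z_I*k*tau) = w_I*b*floor(t/(z_I*tau)) for t >= 0."""
    if t < 0:
        return 0
    return w_I * b * math.floor(t / (z_I * tau))

# Example: one slot of b = 10**4 bits per period tau = 1 second (w_I = z_I = 1).
print([slotted_service_curve(t, 1, 10**4, 1, 1.0) for t in (0.0, 0.5, 1.0, 2.5)])  # [0, 0, 10000, 20000]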

Fig. 4. The arrival curve and service curve of an aggregate flow I

4. Data plane

The key functions of the data plane are traffic policing and the scheduling mechanism. The main purpose of traffic policing is to separate conformant from non-conformant traffic within a flow, preventing non-conformant traffic from an aggressive flow from degrading the performance of conformant flows. We assume that a token bucket parameterized by each flow's traffic specification is used to identify conformant and non-conformant traffic. The scheduling mechanism schedules traffic according to its QoS requirements and treats non-conformant traffic as best-effort traffic in the network.

In the next subsection, we describe how class-based traffic is assembled into a slot before transmission.

4.1 Class-based traffic assembly process

Figure 5 depicts the class-based buffering and scheduling mechanism. The mechanism consists of three subprocesses: 1) class-based buffering; 2) class-based traffic assembly; and 3) a periodic slot service of wib bits per ziτ seconds. Figure 5 shows M+1 buffers for M real-time traffic classes and one best-effort traffic class. Also shown are the flow-specific traffic monitors S̄i*, ∀i, which identify conformant and non-conformant traffic and forward them to the corresponding buffers.

While arriving traffic is buffered continuously, the traffic for a transmission slot is assembled just before the transmission. The highest-priority backlogged traffic takes precedence over the rest; that is, the slot is filled with backlogged traffic starting from the highest-priority class, followed by the next class, and so on. A slot is transmitted periodically regardless of whether it is full, so the presence of best-effort traffic allows more efficient utilization of the allocated capacity. To implement class-based buffering, a classification scheme that provides a quantifiable QoS-to-class mapping is required. To address this issue, we apply the traffic classification scheme of Ref. [22]; a brief review of the scheme is presented next.
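
A minimal sketch of the assembly step just described (our own illustration with hypothetical queue structures): just before transmission, the slot is filled from the class buffers in priority order, with best-effort traffic (class 0) taking whatever room remains.

from collections import deque

def assemble_slot(class_queues, slot_bits):
    """Fill one slot of slot_bits from per-class FIFO queues of (packet_id, size_bits).
    class_queues: dict mapping class index to deque; class M is the highest priority,
    class 0 is best effort. Returns the list of packets placed in the slot."""
    burst, remaining = [], slot_bits
    for cls in sorted(class_queues, reverse=True):        # M, M-1, ..., 1, then 0 (best effort)
        queue = class_queues[cls]
        while queue and queue[0][1] <= remaining:
            packet = queue.popleft()
            burst.append((cls, packet))
            remaining -= packet[1]
    return burst

# Example: a 10 kbit slot shared by class 2, class 1 and best-effort (class 0) traffic.
queues = {2: deque([("a", 4000)]), 1: deque([("b", 3000)]), 0: deque([("c", 2000), ("d", 5000)])}
print(assemble_slot(queues, 10_000))   # [(2, ('a', 4000)), (1, ('b', 3000)), (0, ('c', 2000))]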

Fig. 5. The class-based traffic assembly and scheduling process

4.2 Flow classification

There are a limited number of works on flow classification in the literature [19, 20, 21], probably because of the difficulty of mapping specific deterministic QoS guarantees to a particular class. Typical parameters for flow classification include the type of application, end-to-end delay, ingress-egress addresses and/or port numbers, and precedence. In most cases, the main purpose of flow classification is to provide class-based service differentiation to admitted flows.

In Ref. [22], a simple flow classification procedure that provides a quantitative QoS-to-class mapping is proposed, in which flows are classified into classes according to their delay budgets. We define a queueing delay vector D = {DM, DM-1, .., D1}, where DM < DM-1 < .. < D1 are the delay bounds of classes M to 1, respectively. The basic idea is that all flows classified into a particular class share the same class-based delay bound. The following definition specifies how a flow is classified into a particular class, as shown in Fig. 6.

Definition 4.2.1: A flow i is classified into class k if its queueing delay budget di satisfies

$D_k \le d_i \le D_{k-1}, \quad \text{for } 1 \le k \le M. \qquad (4)$

By classifying flows into classes with different delay bounds, we can compute the aggregate arrival curve of each traffic class and provide minimum aggregate service curves to these classes for queueing-delay control [15]. We next provide a detailed analysis showing how the scheduler provides class-based queueing delay bounds.
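
A small sketch of the mapping (our own illustration, assuming the convention of Definition 4.2.1 and Proposition 4.4.1 that a flow joins the class whose delay bound is the largest one not exceeding its budget; flows with budgets tighter than DM are not admitted as QoS traffic in this sketch):

def classify(delay_budget, D):
    """Map a flow's queueing-delay budget to a class.
    D: dict {k: D_k} with D[M] < D[M-1] < ... < D[1]; class M is the tightest.
    Returns the class whose bound is the largest one <= delay_budget, or 0 (best effort)."""
    eligible = [k for k, bound in D.items() if bound <= delay_budget]
    return min(eligible) if eligible else 0

# Example with the delay vector used in Section 4.5: D = {3: 0.020, 2: 0.040, 1: 0.080} (seconds).
D = {3: 0.020, 2: 0.040, 1: 0.080}
print(classify(0.050, D))   # class 2: D_2 = 0.040 <= 0.050 <= D_1 = 0.080
print(classify(0.010, D))   # 0: tighter than class 3 can guarantee, treated as best effort here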

Fig. 6. Flow classifications with M QoS classes and a best effort class (class 0)

4.3 Class-based scheduler

We assume the control plane allocates w slots per period τ to a source-destination node pair. The switch in Fig. 5 closes for the transmission of w slots every period τ. Just before the switch closes, traffic is assembled into the slot according to its priority class, as described in subsection 4.1.

We apply the idea of capacity allocation [22] to the M traffic classes. We define a capacity allocation vector α = {αM, αM-1, .., α1}, where αM < αM-1 < .. < α1 and Σk αk = 1, which reserves capacity for the various traffic classes. This does not preclude a network operator from allocating the full capacity to a single class, which is a special case of class-based scheduling. Using the allocation vector, we propose the following theorem for per-flow service guarantees by means of class-based traffic treatment.

Theorem 4.3.1: For a slotted service curve of wb bits per τ seconds allocated to an ingress-egress node pair with M classes and a capacity allocation vector α = {αM, αM-1, .., α1}, a flow j with arrival curve S̄j(t) can be admitted to class h (1 ≤ h ≤ M) by this optical slot scheduling mechanism if

$\bar{S}_j(t - D_h) + \sum_{k=1}^{M} \sum_{i \in F_k} \bar{S}_i(t - D_k) \le \sum_{k=1}^{\infty} w\,b\,u(t - k\tau) \quad \text{and} \qquad (5)$
$S_j(t) + \sum_{i \in F_h} S_i(t) \le \alpha_h\, w\, b, \quad \text{for } 0 \le t \le \tau \text{ and } 1 \le h \le M \qquad (6)$

where Fk (Fh) is the set of flows already accepted in class k (h).

Proof:

For M classes of traffic, we apply flow classification to all QoS flows according to their delay budgets. For a flow to be admitted to class M, we must ensure that the aggregate arrival curve of class M, shifted by DM, is upper bounded by the slotted service curve S(t). That is,

$\bar{S}_M(t - D_M) \le S(t) \quad \text{and} \qquad (7)$
$\sum_{i \in F_M} S_i(t) \le \alpha_M\, w\, b, \quad \text{for } 0 \le t \le \tau \qquad (8)$

If we extend to all existing flows in M classes, the following conditions are required:

$\sum_{k=1}^{M} \sum_{i \in F_k} \bar{S}_i(t - D_k) \le \sum_{k=1}^{\infty} w\,b\,u(t - k\tau) \quad \text{and} \qquad (9)$
$\sum_{i \in F_k} S_i(t) \le \alpha_k\, w\, b, \quad \text{for } 0 \le t \le \tau \text{ and } 1 \le k \le M \qquad (10)$

If a new flow j arrives, it could be admitted into class h if and only if the service curve required is still upper bounded by the slotted service curve. Hence the results in the theorem are obvious. This completes the proof. ■

4.4 Application examples

In this section, we consider two scenarios in which the traffic characteristics of a flow j are modeled either by the token bucket TB(Σj, ρj) or by the TSPEC (Mj, pj, Σj, ρj) of the IETF specifications. While the token bucket yields a neat solution, the TSPEC characterization provides a more realistic model of real traffic.

First we consider the case in which the in-profile traffic of a flow j can be characterized by the token bucket TB(Σj, ρj), where Σj and ρj denote the burstiness and long-term rate of flow j, respectively. In this context, it is not difficult to show that either of these parameters can determine whether the flow can be admitted. For ease of exposition, we neglect all out-of-profile traffic, as it is not protected by the scheduling mechanism. Applying Theorem 4.3.1, we obtain a set of simple and concise admission-control constraints in the following proposition.

Proposition 4.4.1: For a slotted service curve of wb bits per τ seconds allocated to an ingress-egress node pair with M classes and a capacity allocation vector α = {αM, αM-1, .., α1}, a flow j with requirement (Σj, ρj, dj) can be admitted to class h (1 ≤ h ≤ M, Dh ≤ dj ≤ Dh-1) by this optical slot scheduling mechanism if

$\sigma_j + \sum_{k=h}^{M} \sum_{i \in F_k} \sigma_i^k \le \dfrac{D_h\, w\, b}{\tau}\Bigl(1 - \sum_{k=h+1}^{M} \alpha_k\Bigr) \quad \text{and} \qquad (11)$
$\rho_j + \sum_{i \in F_h} \rho_i^h \le \dfrac{\alpha_h\, w\, b}{\tau} \qquad (12)$

where σik denotes the burstiness of flow i in class k and ρih denotes the long-term rate of flow i in class h and Fh is the set of existing flows (already admitted) in class h.

Proof:

To admit a flow j into class M, we must ensure that the traffic of the flow j is upper bounded by the envelope of the arrival curve deduced from its service curve. That is,

$\bar{S}_M(t) \le S_M(t + D_M) \qquad (13)$

To meet the delay bound, the following conditions must hold [18]:

$\sigma_j + \sum_{i \in F_M} \sigma_i^M \le \dfrac{D_M\, w\, b}{\tau} \quad \text{and} \qquad (14)$
$\rho_j + \sum_{i \in F_M} \rho_i^M \le \dfrac{\alpha_M\, w\, b}{\tau} \qquad (15)$

where σiM and ρiM represent the burstiness and long-term rate of flow i in class M, and FM is the set of existing flows in class M. Inequalities (14) and (15) ensure that the class-M traffic arrival curve is upper bounded by the slotted service curve shifted left by τ, thereby bounding the delay and limiting the capacity allocated to class-M traffic. When αM = 1, all the capacity is allocated to class M and we have the single-class case.

Similarly, a flow j could be admitted in class M-1 (delay bound D M-1) if

$\sigma_j + \sum_{k=M-1}^{M} \sum_{i \in F_k} \sigma_i^k \le \dfrac{D_{M-1}\, w\, b}{\tau}\,(1 - \alpha_M) \quad \text{and} \qquad (16)$
$\rho_j + \sum_{i \in F_{M-1}} \rho_i^{M-1} \le \dfrac{\alpha_{M-1}\, w\, b}{\tau} \qquad (17)$

If we extend this to class h, the results follow. This completes the proof. ■

It is worth noting that Proposition 4.4.1 separates the requirements into two constraints: the first deals with the burstiness of the flows, and the second with their long-term rates. Whether a flow j can be admitted depends on whether its burstiness or its long-term rate would cause the resulting traffic curve to exceed the guaranteed arrival curve deduced from the slotted service curve. By means of class-based traffic assembly and scheduling, the queueing delay bound achieved for higher-priority traffic is smaller than that of lower-priority traffic, independent of the capacity allocation.
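
The two constraints translate directly into an admission test. The sketch below is our own transcription of inequalities (11) and (12); the data structures and the parameter values in the example (w, b, τ chosen so that wb/τ = 4 Mb/s, and the class vectors of Section 4.5) are illustrative assumptions.

def admit(sigma_j, rho_j, h, admitted, D, alpha, w, b, tau, M):
    """Admission test of Proposition 4.4.1 for a flow (sigma_j, rho_j) requesting class h.
    admitted: dict {k: list of (sigma, rho)} of flows already admitted in each class
    D: dict {k: D_k} of class delay bounds; alpha: dict {k: alpha_k} of capacity shares."""
    capacity = w * b / tau                                   # slot capacity in bit/s
    # Burstiness constraint (11): classes h..M share the capacity left over by tighter classes.
    total_burst = sigma_j + sum(s for k in range(h, M + 1) for s, _ in admitted.get(k, []))
    burst_ok = total_burst <= D[h] * capacity * (1 - sum(alpha[k] for k in range(h + 1, M + 1)))
    # Long-term rate constraint (12): class h must fit within its own capacity share.
    total_rate = rho_j + sum(r for _, r in admitted.get(h, []))
    rate_ok = total_rate <= alpha[h] * capacity
    return burst_ok and rate_ok

# Example with the parameters of Section 4.5: 4 Mb/s capacity and TB(40 kbit, 0.5 Mb/s) flows.
D = {3: 0.020, 2: 0.040, 1: 0.080}
alpha = {3: 0.3, 2: 0.3, 1: 0.4}
print(admit(40_000, 0.5e6, 3, {}, D, alpha, w=1, b=4_000, tau=1e-3, M=3))   # True: first class-3 flow fits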

Fig. 7. Comparison of the service rates required for two arrival curves S̄1(t): (Σj, ρj) and S̄2(t): (Mj, pj, Σj, ρj)

Let us consider the more realistic IETF RSVP type of traffic for a flow j, whose characteristics are given by the quadruple (Mj, pj, Σj, ρj) together with a queueing delay budget dj. Instead of a full derivation based on Theorem 4.3.1, we sketch a simple graphical solution in Fig. 7 owing to lack of space. In the figure, we plot the arrival curves of the flow based on TB(Σj, ρj) and TSPEC(Mj, pj, Σj, ρj) as S̄1(t) and S̄2(t), together with the rates r1 and r2 needed to provide the deterministic guarantees according to S̄1(t) and S̄2(t). The graphical solution shows that for a given queueing delay budget dj, r1 ≥ r2: the tighter arrival curve based on the IETF specification allows a lower minimum rate for the same queueing delay budget and a more accurate computation of the service curve from a more realistic arrival curve. Consequently, using the IETF specification more flows can be admitted for a given capacity, leading to more efficient utilization of resources.
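
The comparison in Fig. 7 can also be checked numerically. The sketch below is our own illustration under a simplifying assumption: the service is abstracted as a constant rate r, so the worst-case queueing delay is the horizontal deviation between the concave, piecewise-linear arrival curve and the line rt, attained at t = 0 or at the knee t* = (Σj - Mj)/(pj - ρj). The parameter values are hypothetical; the check confirms r1 ≥ r2.

def min_rate_tb(sigma, rho, d):
    """Smallest constant rate meeting delay budget d for TB(sigma, rho): delay = sigma / r."""
    return max(rho, sigma / d)

def min_rate_tspec(m, p, sigma, rho, d, iters=60):
    """Smallest constant rate meeting delay budget d for TSPEC(m, p, sigma, rho),
    whose arrival curve is min(m + p*t, sigma + rho*t) with knee t* = (sigma - m)/(p - rho)."""
    t_star = (sigma - m) / (p - rho)
    def worst_delay(r):                       # horizontal deviation, attained at t = 0 or t = t*
        return max(m / r, (sigma + rho * t_star) / r - t_star)
    lo, hi = rho, min_rate_tb(sigma, rho, d)  # the token-bucket rate r1 always suffices
    if worst_delay(lo) <= d:
        return lo
    for _ in range(iters):                    # bisection on the monotone worst_delay(r)
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if worst_delay(mid) <= d else (mid, hi)
    return hi

# Example (hypothetical numbers): M_j = 1 kbit, p_j = 2 Mb/s, Sigma_j = 40 kbit, rho_j = 0.5 Mb/s, d = 40 ms.
r1 = min_rate_tb(40_000, 0.5e6, 0.040)
r2 = min_rate_tspec(1_000, 2e6, 40_000, 0.5e6, 0.040)
print(r1, r2, r1 >= r2)                       # r1 = 1000000.0, r2 ≈ 803030, True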

4.5 Simulation result

To assess the effectiveness of the admission scheme proposed in Section 4.4, we designed an experiment to test its performance. Assume that the capacity allocated to the source-destination node pair is 4 Mb/s, shared by three classes of traffic at the source node. For clarity, we assume that all traffic flows have the same token bucket parameters (ρ, Σ), with ρ = 0.5 Mb/s and Σ = 40 kbit. The delay bounds of the three classes are 20 ms, 40 ms, and 80 ms, respectively, and the capacity allocation vector is {α3, α2, α1} = {0.3, 0.3, 0.4}. We compute the number of flows that can be admitted into each class without violating the service contract.

According to Proposition 4.4.1, the number of flows n3 that can be admitted in class 3 (highest priority) must satisfy:

$n_3 \times 40\times10^{3} \le 20\times10^{-3} \times 4\times10^{6}$ and $n_3 \times 0.5\times10^{6} \le 0.3 \times 4\times10^{6}$, giving $n_3 = 2$.

Similarly, the number of flows n2 that can be admitted in class 2 must satisfy:

$n_2 \times 40\times10^{3} \le 40\times10^{-3} \times 4\times10^{6} \times (1 - 0.3)$ and $n_2 \times 0.5\times10^{6} \le 0.3 \times 4\times10^{6}$, giving $n_2 = 2$.

And finally, the number of flows n1 that can be admitted in class 1 must satisfy:

$n_1 \times 40\times10^{3} \le 80\times10^{-3} \times 4\times10^{6} \times (1 - 0.3 - 0.3)$ and $n_1 \times 0.5\times10^{6} \le 0.4 \times 4\times10^{6}$, giving $n_1 = 3$.

Fig. 8. Delay distribution with three classes of traffic

If the numbers of flows admitted to the various classes are limited to these values, the queueing delays of flows in the three classes should be bounded by 20 ms, 40 ms, and 80 ms, respectively. We simulated this scenario in Network Simulator 2 (ns-2) and obtained the delays of the admitted flows in each of the three classes; the results are plotted in Fig. 8. We observe that class 3, the highest class, has its delay bounded by 20 ms, with 90% of the traffic delayed by less than 15 ms. Class 2 has a larger delay bound of 40 ms, and 90% of its traffic is delayed by less than 35 ms. The situation is similar for class 1 traffic. These results show that our scheme provides strict QoS guarantees to flows as long as the admission control policy is observed.

5. Conclusion

Providing deterministic QoS guarantees in an optical network is a challenging issue. Most existing research focuses on providing end-to-end connectivity in wavelength-routed or star-coupler networks, and the few available results on QoS guarantees in star-coupler networks assume equal end-to-end delays and are not applicable to networks with mesh topologies. In this paper, we have proposed an efficient framework for deterministic service guarantees in slot-based mesh optical networks. The framework uses the combination of a control plane and a data plane to solve the complex problems of capacity allocation, slot matching, and traffic scheduling. In the control plane, we apply admission control and capacity allocation to source-destination node pairs, using electrical-domain delay compensation and a collision-resolution algorithm of complexity O(2N) for an optical network with N edge nodes. In the data plane, we have proposed a comprehensive aggregation, buffering, and scheduling mechanism that provides deterministic QoS guarantees through class-based traffic treatment. Our analysis proves that the framework realizes Service Curves assurance with class-based queueing-delay bounds, and our simulation results for the TWIN network validate the analysis and show that the class-based delay bounds are maintained. The simplicity and scalability of this framework make it suitable for implementation in slotted optical networks.

References and links

1. S. Yao, S. Dixit, and B. Mukherjee, “Advances in Photonic Packet Switching: an overview,” IEEE Commun. Mag. 38, 84–94, Feb. (2000).

2. C. Qiao and M. Yoo, “Optical Burst Switching (OBS) - A new Paradigm for an Optical Internet,” J. High Speed Nets. 8, 69–84 Jan. (1999).

3. I. Widjaja, I. Saniee, R. Giles, and D. Mitra, “Light core and intelligent edge for a flexible, thin-layered and cost-effective optical transport network,” IEEE Commun. Mag. 41, 30–36 (2003).

4. C. Nuzman and I. Widjaja, “Time-domain wavelength interleaved networking with wavelength reuse,” in Proc. IEEE INFOCOM’06, Mar. (2006).

5. N. Golmie, T. Ndousse, and D. Su, “A differentiated optical service model for WDM Networks,” IEEE Commun. Mag. 38, 68–73, Feb. (2000).

6. B. Li and Y. Qin, “Traffic scheduling in a Photonic Packet Switching System with QoS Guarantee,” J. Lightwave Technol. 16, 2281–2295 (1998).

7. M. Yoo, C. Qiao, and S. Dixit, “Optical burst switching for service differentiation in the next generation optical Internet,” IEEE Commun. Mag. 39, 98–104, Feb. (2001).

8. Maode Ma and M. Hamdi, “Providing deterministic quality-of-service guarantees on WDM Optical Networks,” IEEE J. Sel. Areas Commun. 18, 2072–2083 (2000).

9. B. Teitelbaum et al., QBone Architecture (v1.0), Internet 2 QoS Working Group Draft, Aug. 1999

10. H. A. Mantar, J. S. Hwang, I. T. Okumus, and S. J. Chapin, “A scalable model for interbandwidth broker resource reservation and provisioning,” IEEE J. Sel. Areas Commun. 22, 2019–2034 (2004).

11. K. Ross, N. Bambos, K. Kumaran, I. Saniee, and I. Widjaja, “Scheduling bursts in Time-Domain Wavelength Interleaved Networks,” IEEE J. Sel. Areas Commun. 21, 1441–1451 (2003).

12. M. Chen and T. -S. Yum, “A conflict-free protocol for Optical WDMA Networks,” in Proc. IEEE GLOBECOM, Dec. (1991).

13. I. Saniee and I. Widjaja, “Simplified layering and flexible bandwidth with TWIN,” in Proc. Workshop on Future Directions in Network Architecture, SIGComm, (2004).

14. R. L. Cruz, “Quality of service guarantees in virtual circuit switched networks,” IEEE J. Sel. Areas Commun. 13, 1048–1056, Aug. (1995).

15. J.-Y. Le Boudec and P. Thiran, Network Calculus - A Theory of Deterministic Queueing Systems for the Internet (Lecture Notes in Computer Science, Springer, 2001).

16. R. L. Cruz and C. M. Okino, “Service guarantees for window flow control,” in Proc. 34th Allerton Conf. on Comm., Cont. & Comp., Oct. (1996).

17. H. Sariowan, “A Service-curve approach to performance guarantees in integrated-service networks,” Ph.D. thesis, Dept of Electrical & Computer Engineering, UCSD, June (1996).

18. J. Schmitt, P. Hurley, M. Hollick, and R. Steinmetz, “Per-flow guarantees under class-based priority queueing,” in Proc. IEEE GLOBECOM, Nov. (2003).

19. J. Zheng, V. O. K. Li, and X. Yuan, “An adaptive flow classification Algorithm for IP switching,” in Proc. IEEE GLOBECOM, Nov. (1999).

20. W. Wang and C. C. Shen, “An adaptive flow classification scheme for data-driven label switching networks,” in Proc. IEEE ICC, June (2001).

21. K. Yasukawa, K. Baba, and K. Yamaoka, “Dynamic class assignment for stream flows considering characteristics of non-stream flow classes,” IEICE Trans. Commun. 11, 3242–3254, Dec. (2004).

22. C. K. Siew and M. H. Er, “A new multiservice provisioning mechanism with service curves assurance for per-class scheduling delay guarantees” in press for IEE Proc. Communications. Available at URL: http://www.icis.ntu.edu.sg/our_institute/staff/cksiew/revised_COM_2005_0246-uncorrected%20final%20draft.pdf
