Performance evaluation of multi-stratum resources integration based on network function virtualization in software defined elastic data center optical interconnect

Open Access

Abstract

Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks to accommodate data center services. This study extends that work to resource integration, which breaks the boundaries of individual network devices and can thereby enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in a software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. MSRI can accommodate data center services through resource integration when a single function or resource is too scarce to provision a service, and enhances the globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy-traffic-load scenario is also quantitatively evaluated on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with other provisioning schemes.

© 2015 Optical Society of America

1. Introduction

Recently, data-center-supported applications (such as cloud computing, remote storage and video on demand) have attracted much attention from service providers and network operators due to the fast evolution of the Internet [1]. Since data center services are typically diverse in terms of required bandwidth and usage patterns, the network traffic exhibits high burstiness and high-bandwidth characteristics, which poses a significant challenge for data center networking to provide more efficient interconnection with reduced latency and high bandwidth [2]. To accommodate these services with highly available, energy-efficient and flexible connectivity, the elastic optical network architecture has been proposed and experimentally demonstrated [3, 4], enabled by orthogonal frequency division multiplexing (OFDM) technology [5]. It can allocate the necessary spectral resources at a finer granularity through sub-wavelength, super-wavelength and multi-rate traffic accommodation tailored to a variety of user connection demands. Therefore, data center interconnection with elastic optical networks is a promising scenario for allocating spectral resources to applications in a highly dynamic, tunable and efficiently controlled manner [6].

With the rapid evolution of data-center-supported services, proprietary optical network hardware appliances deployed as the communication infrastructure cannot accommodate new applications at low cost and with high efficiency. Network function virtualization (NFV) [7] aims to address this problem by implementing network functions as software running on generic hardware, which can be consolidated onto industry-standard elements such as switches, computing and storage. As a result, the original optical network and application functions can be partitioned into basic elements [8, 9] in the elastic data center optical interconnect. For instance, a sliced transponder [10] can be realized by slicing the parallelized sub-transceivers of a single transponder into virtual sub-transponder resources, fully utilizing the transponder by offering a wide variety of virtual capacities. Moreover, a virtual path computation element (vPCE) [11] can be deployed on demand to maintain the quality of the virtual network functions (e.g., in terms of latency, request processing time, dedicated algorithms, etc.), running as a software application in a cloud computing environment (e.g., on a virtual machine).

On the other hand, many delay-sensitive data center services require high-level end-to-end quality of service (QoS) guarantees [12]. For actual high-bitrate data center services or virtual optical network (VON) requests, the integration of such atomic resource elements across the network and application stratums becomes a key issue in meeting the high-level performance requirements of these applications. Recently, software defined networking (SDN) [13, 14], a centralized control architecture, has provided maximum flexibility for operators and enabled unified control over various resources for the joint optimization of functions and services with a global view [15–17]. It is therefore very important to apply SDN techniques to integrate the multi-stratum resources in an elastic data center optical network.

Multi-stratum resilience between IP and optical networks that accommodates data center services was discussed in our previous work [18]. Building on it, in this paper we propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in a software defined elastic data center optical interconnect. Additionally, a resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. MSRI can accommodate data center services through resource integration when a single function or resource is too scarce to provision a service, and effectively enhances the globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture with RIM are experimentally verified on the control plane of our OpenFlow-based enhanced SDN (eSDN) testbed [19]. The performance of the RIM scheme under a heavy-traffic-load scenario is also quantitatively evaluated on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with other provisioning schemes.

The rest of this paper is organized as follows. We examine the related work in Section 2. Section 3 introduces the MSRI architecture. The resource integrated mapping scheme under this network architecture is proposed in Section 4. We then describe the testbed and present the numerical results and analysis in Section 5. Section 6 concludes the paper by summarizing our contributions and discussing future work in this area.

2. Related work

The data center optical network has been well studied in terms of both the transport plane and the control plane. In [8–10], the multi-flow optical transponder is proposed to implement efficient multi-layer optical networking, reducing equipment cost and adapting IP over elastic optical networks efficiently. In [7], the authors virtualize the data plane hardware with the converged technology of SDN and NFV to provide elastic control. Using NFV technology in optical networks, the virtual transport PCE is treated as a virtual network function deployed on demand [11] for each path computation request. In [18, 19], we focused on QoS guarantees for services in the elastic data center optical interconnection scenario, considering time-sensitive service differentiation and recovery in case of a disaster. Moreover, the computing resource allocation problem has been extensively investigated in the literature from different perspectives, such as the efficient use of resources [20] and energy consumption in data centers [21, 22]. In [20], the optimal networked cloud mapping problem is formulated as a mixed integer program whose objectives relate to the cost efficiency of mapping resources onto a shared substrate while interconnecting various islands of computing resources. The authors in [21] develop an energy-conserving resource allocation scheme with prediction for cloud computing systems; the prediction mechanism forecasts the trend of arriving jobs (dense or sparse) in the near future and their related features, so as to help the system make adequate decisions. In [22], a survey of research in energy-efficient computing is conducted, and the paper presents energy-efficient resource allocation policies and scheduling algorithms considering the QoS expectations and power usage characteristics of the devices.

3. MSRI architecture for software defined elastic data center optical interconnect

The multi-stratum resources integration (MSRI) architecture can be implemented on a software defined elastic data center interconnect and is designed to gather and schedule multiple stratum resources (i.e., optical network, computing and storage resources) in a controlled way using open systems. Note that we refer to the various kinds of resources as different “stratums” [3]. In this section, the core ideas and structure of the novel architecture are briefly outlined. After that, the functional building blocks of the controllers and the coupling relationships between them in the control plane are presented in detail.

3.1 MSRI architecture for elastic data center optical network

The MSRI architecture based on NFV in a software defined elastic data center optical interconnect is illustrated in Fig. 1. The distributed data centers are interconnected by an elastic optical network, which mainly consists of two stratums: the elastic optical network resources stratum (e.g., spectral sub-carriers) and the application resources stratum (e.g., computing and storage), as shown in Fig. 1. Each stratum is software defined with OpenFlow and controlled in a unified manner by an application resource controller (AC) and a network controller (NC), respectively. To realize MSRI based on NFV with an extended OpenFlow protocol (OFP), OpenFlow-enabled multi-flow transponders (MFTs) and bandwidth-variable optical switches (OF-BVOSs) with OFP agent software are required, as proposed and demonstrated in [7]. The motivations for the MSRI architecture in an elastic data center optical interconnect are twofold. First, MSRI emphasizes the cooperation between the AC and the NC to realize global optimization with joint interaction of multiple stratum resources using cross stratum optimization (CSO) [3]. Second, based on NFV, MSRI can break network devices into basic atomic elements to integrate computing, storage and network resources and to optimize resource utilization. For instance, the service in Fig. 1 is provided using two spectral paths and two servers: the traffic flow from the user to the data center destinations (marked with a red arrow) is composed of two paths, while two servers together constitute the destination for the user. Thus, MSRI can integrate the multiple resources (marked with a red dashed ring) to perform the service provisioning.


Fig. 1 The architecture of MSRI based on NFV for software defined elastic data center optical network.


3.2 Functional models of MSRI for software defined elastic data center network

To realize the architecture described above, the network and application resource controllers have to be extended to support the MSRI functions. The responsibilities of, and interactions among, the functional modules of the two controllers are shown in Fig. 2. In the NC, the virtualization manager module is responsible for virtualizing the required optical network resources and periodically exchanges information with the OF-enabled optical nodes through the OpenFlow protocol to perceive their state. The application monitor module in the AC manages and monitors the virtual application resources. When a virtual optical network request arrives, the MSRI control module performs the resource integrated mapping strategy. After the proposed scheme completes, the MSRI control module decides which application resources are integrated for the VON request and which optical network resources are provided for the service provisioning. It then forwards the request, including its parameters (e.g., bandwidth and latency), to the PCE module, which returns a success reply containing the information of the provisioned lightpath. Note that the PCE is capable of computing a network path or route based on a network graph and of applying computational constraints. To perform path computation with multi-stratum integration of optical and application stratum resources, the NC interacts with the AC through the network-application interface (NAI). After receiving the application resource information from the AC, the end-to-end multi-flow computation is carried out in the PCE, considering the CSO of optical and application resources. Note that the vPCEs, as virtualized network functions, provide the path computation for each VON request. The provisioning manager performs spectrum assignment for the computed path and provisions the lightpath using the extended OFP. Note that the OFP agent software embedded in each OF-BVOS maintains the optical flow table, models the node information in software and maps that content onto the physical hardware for control.

The database stores the virtual network and application resources for MSRI, while the AC obtains data center resource information periodically, or via an event-based trigger, through the application monitor module. In this work, we use a unified interface based on the OpenFlow protocol to control both the optical network and application resources, which realizes multi-stratum resources integration efficiently. In the data center network, we extend the OpenFlow protocol to invoke the VMware API to monitor and collect the data center resources and servers, and to control resource scheduling. The AC obtains the CPU and RAM utilization in a timely manner through the VMware API, which is used to evaluate the application resources.
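As a minimal sketch of this monitoring workflow, the loop below periodically collects per-server utilization and caches it for the NC to fetch over the NAI. The query functions are hypothetical stubs standing in for the VMware API calls, which the text does not detail.

```python
import random
import time

# Hypothetical stubs standing in for the VMware API calls mentioned in the
# text; real collection would go through the data center management interface.
def query_cpu_utilization(server_id):
    return random.uniform(0.0, 100.0)   # percent (stubbed)

def query_ram_utilization(server_id):
    return random.uniform(0.0, 100.0)   # percent (stubbed)

def monitor_application_resources(servers, database, cycles=3, period_s=1.0):
    """AC-side monitor: periodically gather CPU/RAM usage per data center
    server and cache it so the NC can fetch it over the NAI."""
    for _ in range(cycles):              # bounded here only for demonstration
        for server_id in servers:
            database[server_id] = {
                "cpu": query_cpu_utilization(server_id),
                "ram": query_ram_utilization(server_id),
            }
        time.sleep(period_s)

db = {}
monitor_application_resources(["dc-server-1", "dc-server-2"], db)
print(db)
```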


Fig. 2 The functional models of network and application resource controllers.


4. Resource integrated mapping scheme

Based on the functional architecture above, we propose a resource integrated mapping (RIM) scheme in the NC that maps the virtual optical network and application resources for integration, which can enhance resource utilization and perform service provisioning efficiently.

4.1 Problem statement

To make our proposal clear, we use an example to explain resource integrated mapping, illustrated in Fig. 3. As shown, there are two layers in the figure: the virtual network layer of the VON request and the physical elastic optical network layer. Note that the resources required by the VON request and the remaining resources in the optical network are both marked in Fig. 3. For simplicity, we assume that the request needs storage resources at the data center server nodes and network bandwidth in the elastic optical network; in fact, both the computing and storage resources of the data center are considered in the proposed scheme and in the experimental evaluation, and the example is simplified only to make the scheme clearer. The VON request abc needs 20, 40 and 80 GB of storage space at its three nodes, and 400, 100 and 40 Gbps of bandwidth among them. Under the traditional strategy, the VON would be blocked in this scenario, since there is not enough surplus storage (i.e., 80 GB) for virtual node c or enough network capacity (i.e., 400 Gbps) for link $l_{ab}$ of the VON. Based on the MSRI architecture, such resources can be virtualized and scheduled beyond the limits of any single physical entity through NFV. Therefore, the resource integrated mapping scheme maps virtual node c of the VON onto the two physical nodes C and D, while the two physical paths A-F-E and A-E together are mapped to virtual link $l_{ab}$. In this way, the multi-stratum resources are integrated to accommodate the VON request.
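Written as data, the example reads as follows. This representation is ours, for illustration only; the text does not specify how links $l_{bc}$ and $l_{ac}$ are mapped.

```python
# The Fig. 3 example expressed as data. Storage in GB, bandwidth in Gbps.
von_request = {
    "nodes": {"a": 20, "b": 40, "c": 80},                  # required storage
    "links": {("a", "b"): 400, ("b", "c"): 100, ("a", "c"): 40},
}

# Result of resource integrated mapping: virtual node c is split across
# physical nodes C and D, and virtual link l_ab is carried jointly by the
# physical paths A-F-E and A-E.
mapping = {
    "nodes": {"a": ["A"], "b": ["E"], "c": ["C", "D"]},
    "links": {("a", "b"): [["A", "F", "E"], ["A", "E"]]},
    # mappings of l_bc and l_ac are not spelled out in the text
}
```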


Fig. 3 The schematic diagram of RIM.


4.2 Network modeling

The multi-stratum resources integration architecture based on the elastic data center optical interconnect is represented as $G(V, L, F, A)$, where $V = \{v_1, v_2, \dots, v_N\}$ denotes the set of nodes equipped with bandwidth-variable optical cross-connect functions, $L = \{l_1, l_2, \dots, l_L\}$ is the set of bi-directional fiber links between the nodes in $V$, $F = \{\omega_1, \omega_2, \dots, \omega_F\}$ is the set of spectrum sub-carriers on each fiber link, and $A$ denotes the set of data center servers; $N$, $L$, $F$ and $A$ represent the numbers of network nodes, links, spectrum sub-carriers and data center nodes, respectively. From the users' point of view, what matters is the experienced QoS rather than which server provides the service. Each VON request can therefore be translated into the required network and application resources; for simplicity of the network model, these comprise the required numbers of allocated sub-carriers $\omega_1, \omega_2, \dots, \omega_j$ and application resources $ar_1, ar_2, \dots, ar_j$. For each data center server, two time-varying application stratum parameters describe the service condition of the data center application resources: memory (RAM) utilization and CPU usage. The application occupation based on the current application stratum parameters of each server is described in detail in [12]. We denote the $i$th VON traffic request as $VR_i(\omega_1, \omega_2, \dots, \omega_j, ar_1, ar_2, \dots, ar_j)$, where request $VR_{i+1}$ arrives after connection demand $VR_i$ in time order. The propagation delay of each link $l \in L$ is represented as $D_l$, and the sum of the delays of all links along a spectral path is the end-to-end propagation delay $PD_R = \sum_{l \in R} D_l$. In the multi-path scenario, $DD$ denotes the differential delay between multiple fiber-level flows that must be compensated at the receiving side, i.e., $DD = \left|\sum_{l \in R_1} D_l - \sum_{l \in R_2} D_l\right|$. The typical threshold of differential delay compensation using off-chip SDRAM is specified as 128 ms and is represented as $MDD$. According to the VON request and the status of the resources, the scheme chooses the appropriate data center server as the destination node of the VON. In addition, the requisite notations and their definitions are listed in Table 1.
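As a worked example of these delay definitions, the sketch below computes $PD_R$ and $DD$ for two candidate flows and checks them against the $MDD$ threshold; the topology and delay values are illustrative only.

```python
# Link propagation delays D_l in milliseconds (illustrative values).
link_delay = {("A", "F"): 5.0, ("F", "E"): 7.0, ("A", "E"): 10.5}

def path_delay(path, link_delay):
    """PD_R = sum of D_l over all links l on route R."""
    return sum(link_delay[edge] for edge in zip(path, path[1:]))

def differential_delay(path1, path2, link_delay):
    """DD = |PD_R1 - PD_R2| between two fiber-level flows."""
    return abs(path_delay(path1, link_delay) - path_delay(path2, link_delay))

MDD = 128.0  # ms, the off-chip-SDRAM compensation threshold from the text

dd = differential_delay(["A", "F", "E"], ["A", "E"], link_delay)
print(dd, dd <= MDD)   # 1.5 True -> the two flows can be combined
```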


Table 1. Notations and Definitions

4.3 Path cascading degree

The notion of path cascading degree is introduced to reflect the spectrum integration ability of a path for MSRI. On this basis, a novel RIM scheme is proposed that uses a path cascading degree (PCD)-based trigger mechanism together with the first fit (FF) resource assignment scheme to assign the available spectrum resources.

The spectrum occupation on a link is expressed as an $F$-bit binary array, in which each bit indicates the usage of the corresponding sub-carrier: the value 1 denotes a free sub-carrier, while the value 0 corresponds to an occupied one. The spectrum occupation on a route $R_{s,d}$ with $n_l$ links is then expressed as the link spectrum occupation matrix:

$$PO_R = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,i} & \cdots & a_{1,F} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,i} & \cdots & a_{2,F} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{n_l,1} & a_{n_l,2} & \cdots & a_{n_l,i} & \cdots & a_{n_l,F} \end{bmatrix} \tag{1}$$

In this matrix, each column vector $SO_i$ denotes the occupation state of the $i$th sub-carrier on the links along $R_{s,d}$, while each row $L_l$ indicates the spectrum usage on the corresponding link. From another perspective, the sub-carrier occupation of the entire path can be defined integrally as the path spectrum occupation vector, in which $a_i^R$ indicates the usage of the $i$th sub-carrier on the path $R_{s,d}$:

$$RO_R = L_1 \wedge L_2 \wedge \cdots \wedge L_{n_l} = \begin{bmatrix} a_1^R & a_2^R & \cdots & a_i^R & \cdots & a_F^R \end{bmatrix}, \qquad a_i^R = a_{1,i} \wedge a_{2,i} \wedge \cdots \wedge a_{n_l,i} \tag{2}$$
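A minimal sketch of this construction, assuming (consistently with the 1 = free convention) that the path vector of Eq. (2) is the element-wise AND of the per-link rows of Eq. (1):

```python
def path_occupation(link_rows):
    """Combine per-link occupation rows (1 = free, 0 = occupied) into the
    path occupation vector: a sub-carrier is free on the path only if it
    is free on every link (element-wise AND)."""
    return [min(column) for column in zip(*link_rows)]

PO_R = [
    [1, 1, 0, 1, 0, 1],   # L1
    [1, 0, 0, 1, 1, 1],   # L2
    [1, 1, 1, 1, 0, 1],   # L3
]
RO_R = path_occupation(PO_R)
print(RO_R)   # [1, 0, 0, 1, 0, 1]
```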

Along each link of the path, under the contiguity constraint, the number of possible accommodation statuses that use the $i$th sub-carrier on link $l$, over all possible path bandwidth demands, is denoted $m_i^l$, and the connecting bandwidth of the $k$th possible status is denoted $b_k$. The average bandwidth $b_i^l$ over all possible allocation statuses using the $i$th sub-carrier on link $l$ is then calculated as below; its value indicates the consecutiveness degree of the $i$th sub-carrier with respect to the adjacent available spectrum.

$$b_i^l = \sum_{k=1}^{m_i^l} b_k \Big/ m_i^l \tag{3}$$

In addition, the number of occupation state changes between the $i$th sub-carrier and its neighboring sub-carriers on link $l$, defined as $v_i^l$, is useful for estimating the degree of spectrum fragmentation on the link. In particular, a higher degree of fragmentation means that consecutive spectrum is harder to find on the link, and MSRI is then more likely to improve resource utilization. Thus, $b_i^l / v_i^l$ indicates the integration ability of link $l$ on the $i$th sub-carrier. For link $l$, we introduce the spectrum integration eigenvector to express its integration ability over all sub-carriers; specifically, the link spectrum integration eigenvector of link $l$ is defined below.

$$e^l = \begin{bmatrix} b_1^l/v_1^l & b_2^l/v_2^l & \cdots & b_i^l/v_i^l & \cdots & b_F^l/v_F^l \end{bmatrix} \tag{4}$$

For a path $R_{s,d}$ with $n_l$ links, an $n_l$-dimensional vector space, named the link spectrum integration eigenvector space $S_p$, is involved. The path cascading degree $D_R$ is defined in Eq. (5); its value represents the maximum variation of link integration ability along the path. Two indispensable parameters enter this definition: the path spectrum integration eigenvector $e^R$ and the mean vector $\bar{e}^l$ of the link spectrum integration eigenvector space $S_p$, both described in Eq. (6).

$$D_R = \frac{\operatorname{cov}(e^R, \bar{e}^l)}{\sqrt{D(e^R)}\sqrt{D(\bar{e}^l)}} = \frac{E(e^R \bar{e}^l) - E(e^R)E(\bar{e}^l)}{\sqrt{E\big((e^R)^2\big) - \big[E(e^R)\big]^2}\;\sqrt{E\big((\bar{e}^l)^2\big) - \big[E(\bar{e}^l)\big]^2}}, \quad E\big((e^R)^2\big) - \big[E(e^R)\big]^2 > 0, \; E\big((\bar{e}^l)^2\big) - \big[E(\bar{e}^l)\big]^2 > 0 \tag{5}$$

$$e^R = \begin{bmatrix} b_1^R/v_1^R & b_2^R/v_2^R & \cdots & b_i^R/v_i^R & \cdots & b_F^R/v_F^R \end{bmatrix}, \qquad \bar{e}^l = \sum_{l=1}^{n_l} e^l \Big/ n_l \tag{6}$$
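To make the definition concrete, the sketch below evaluates $D_R$ as the Pearson correlation of Eq. (5) between $e^R$ and $\bar{e}^l$; the eigenvector values are assumed for illustration and are not taken from Fig. 4.

```python
from statistics import mean

def pearson(x, y):
    """Path cascading degree D_R per Eq. (5): Pearson correlation between
    the path eigenvector e^R and the mean link eigenvector."""
    ex, ey = mean(x), mean(y)
    exy = mean(a * b for a, b in zip(x, y))
    vx = mean(a * a for a in x) - ex * ex
    vy = mean(b * b for b in y) - ey * ey
    return (exy - ex * ey) / (vx ** 0.5 * vy ** 0.5)   # requires vx, vy > 0

# Illustrative link eigenvectors e^l = [b_i^l / v_i^l] for a 3-link path
# (values assumed for demonstration).
e_links = [
    [2.0, 1.0, 0.5, 3.0],
    [1.5, 1.0, 1.0, 2.5],
    [2.5, 0.5, 0.5, 3.5],
]
e_bar = [mean(col) for col in zip(*e_links)]   # mean vector over S_p, Eq. (6)
e_path = [2.0, 0.5, 0.5, 3.0]                  # path eigenvector e^R (assumed)

D_R = pearson(e_path, e_bar)
print(round(D_R, 3))
```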

In this way, $D_R$ captures the greatest difference in concatenation uniformity among all links along the path; the path cascading degree therefore expresses the largest barrier that spectrum integration must overcome along the path. Figure 4 is a conceptual illustration of the notions in the proposed model. As shown in the figure, the traffic demand is deployed in a simple 6-node network along the path $R_{A,C}$, traversing links $l_1$ ($l_{A,F}$), $l_2$ ($l_{F,E}$), $l_3$ ($l_{E,D}$) and $l_4$ ($l_{D,C}$). According to the spectrum usage state of these links, the link spectrum occupation matrix $PO_{R_{A,C}}$ and the path spectrum occupation vector $RO_{R_{A,C}}$ are shown in Fig. 4, the latter calculated as in Eq. (2). Based on them, the average bandwidth $b_1^{l_1}$ and spectrum occupation variation $v_1^{l_1}$ of the 1st sub-carrier on link $l_1$ are worked out, and the two parameters of all sub-carriers on all links of the path are obtained incrementally in the same manner. The link spectrum integration eigenvectors along the path, $e^{l_1}$, $e^{l_2}$, $e^{l_3}$ and $e^{l_4}$ (colored blue), are formed from these parameters on the corresponding links and constitute the link spectrum integration eigenvector space, from which the mean vector $\bar{e}^l$ (colored red) is computed. The path cascading degree $D_R$ can then be calculated from the mean vector $\bar{e}^l$ and the path spectrum integration eigenvector $e^R$ (colored green). The computation process is illustrated in Fig. 4.


Fig. 4 Conceptualized illustration of path cascading degree.


4.4 RIM scheme

Based on the proposed PCD, we build the RIM scheme into the MSRI architecture for the software defined elastic data center interconnect. In detail, when a VON request arrives, the mapping proceeds in two steps (a control-flow sketch is given below).

Step 1: Assign the virtual nodes to substrate nodes that have enough application resources. If no single node has enough resources, select two close nodes for application-resource integration.

Step 2: Allocate enough sub-carriers to satisfy the bandwidth requirements. If no single path has enough consecutive resources, trigger the multi-flow computation and perform network integration.

The PCD serves as a trigger value that decides whether the multi-flow part of step 2 is performed. We first attempt the routing and spectrum assignment (RSA) strategy under the spectrum continuity and contiguity constraints. If no resources can be allocated, the multi-flow integration of step 2 is triggered when the path cascading degree of the current path is below the preset threshold. The algorithm selects the number of paths, subject to a multiple-paths constraint, by comparing the traffic estimate with the current demand. Based on the distribution of spectral resources, the traffic demand is then split into multiple flows over multiple optical fiber paths using the first fit allocation scheme, subject to the tolerable differential delay constraint. If no resources are found at this point, the VON request is blocked. If both steps succeed, the VON request is provisioned; otherwise, it is blocked. An outline of the RIM scheme is shown in Fig. 5.
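The following control-flow sketch mirrors the two steps and the PCD trigger described above. All helper functions are stubbed placeholders rather than the authors' code, and the threshold value is an assumption (the text presets a threshold without stating a number).

```python
PCD_THRESHOLD = 0.5   # preset trigger value; assumed for illustration

# Stubbed placeholders for the modules described in the text.
def map_nodes(req, app): return None                     # single-node mapping fails here
def integrate_two_close_nodes(req, app): return {"c": ["C", "D"]}
def rsa_first_fit(vlink, demand): return None            # single-path RSA fails here
def path_cascading_degree(vlink): return 0.3
def multi_flow_rsa(vlink, demand):                       # honors the MDD constraint
    return [["A", "F", "E"], ["A", "E"]]

def rim(von_request, app_resources):
    # Step 1: node mapping, integrating two close nodes if one is insufficient.
    node_map = (map_nodes(von_request, app_resources)
                or integrate_two_close_nodes(von_request, app_resources))
    if node_map is None:
        return "blocked"
    # Step 2: spectrum allocation; multi-flow integration is triggered only
    # when single-path RSA fails and the PCD falls below the threshold.
    for vlink, demand in von_request["links"].items():
        if rsa_first_fit(vlink, demand) is None:
            if path_cascading_degree(vlink) >= PCD_THRESHOLD:
                return "blocked"
            if multi_flow_rsa(vlink, demand) is None:
                return "blocked"
    return "provisioned"

print(rim({"links": {("a", "b"): 400}}, {}))   # -> provisioned
```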


Fig. 5 Flowchart of RIM scheme.


5. Experimental demonstration and performance evaluation

To evaluate the overall feasibility and efficiency of the proposed architecture, we set up an elastic optical network with data centers on our testbed, as shown in Fig. 6. The testbed is deployed on the control plane only, due to the lack of MFT-enabled hardware; data plane verification is left for our future work. We use Open vSwitch (OVS) as the software agent to emulate the optical nodes in the data plane and to interwork with the controller to support MSRI via the OpenFlow protocol. The data center servers and the software agents are realized on an array of virtual machines created with VMware ESXi V5.1 running on IBM X3650 servers. The virtualized operating systems make it easy to set up the experimental topology, which is based on NSFNet and comprises 14 nodes and 21 links. For the OpenFlow-based MSRI control plane, the NC server supports the proposed architecture and is deployed as three virtual machines for MSRI control, vPCE computation and network virtualization, while the AC server acts as a CSO agent to monitor the application resources of the data center networks. The vPCE, as a virtualized network function, provides the path computation for each VON request. Each controller server controls its corresponding resources, while the database servers maintain the traffic engineering database and related configuration. We deploy a VON service information generator attached to the AC, which produces batches of VON services for the experiments. Note that the AC manages the data center servers and their application resources through the VMware software, which can gather the CPU and storage resources and configure and control the virtual machines via an internal API in the data centers.


Fig. 6 Experimental testbed for MSRI and demonstrator setup.


Based on the testbed described above, we have designed and experimentally verified MSRI for VON services in the elastic data center optical interconnect. The experimental results are shown in Figs. 7(a) and 7(b). Figure 7(a) presents the signaling procedure for MSRI using OFP, via a Wireshark capture taken in the NC. As shown in Fig. 7(a), 10.108.65.249 and 10.108.50.74 denote the IP addresses of the NC and AC respectively, while 10.108.50.21 and 10.108.51.22 are the IP addresses of the related emulated OF-BVOSs. Here, the existing OpenFlow messages retain their original functions and are reused to simplify the implementation in this paper; new message types will be defined to support new functionalities in future work. When a VON request arrives, the NC prepares to provide the required data center resources for service accommodation, and then sends the request for multi-stratum resources integration to the AC via a UDP message. Note that we use a UDP message to simplify the procedure and reduce the performance pressure on the controllers. After receiving the application resource information from this interworking, the NC performs the RIM scheme to integrate the application and optical network stratum resources, computes the paths considering the CSO of optical network and application resources, and then provisions the spectral paths by controlling all corresponding OF-BVOSs along the computed path via flow mod messages. On receiving the setup success reply via packet-in, the NC returns the MSRI success reply to the AC and updates the application usage to keep the two synchronized. Figure 7(b) shows a snapshot of the extended flow mod message for VON provisioning, which verifies the OFP extension for MSRI. In a packet-switched network, OpenFlow abstracts the data plane as flow entries, each defined by rule, action and stats, representing the packet's characteristics and the switch's action. For the MSRI control of optical networks, the flow entry of the optical flow table is extended. In the MSRI architecture, the rule is extended with path ID, bandwidth, storage, grid, channel spacing and central frequency, which are the main characteristics of the data center optical network; 32 bits are allocated for each extended field. The actions of an optical node mainly include add, switch, drop and configure, to set up a path to the port/label with a specified adaptation function (e.g., modulation format), and delete, to tear down a path and restore the original state of the equipment. Various combinations of rule and action realize the control of the optical nodes. The statistics function is responsible for monitoring flow properties to support service provisioning. As shown, there are 6x32 bits for the MSRI extension, covering path ID, bandwidth, storage, grid, channel spacing and central frequency.
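As an illustration of this 6x32-bit rule extension, the sketch below packs the six fields into a fixed-width byte string. The field encodings and units are our assumptions; the text specifies only the field names and the 32-bit width of each.

```python
import struct

def pack_msri_rule(path_id, bandwidth, storage, grid, spacing, center_freq):
    """Pack the six extended rule fields as big-endian unsigned 32-bit
    integers (6 x 32 bits = 192 bits total)."""
    return struct.pack("!6I", path_id, bandwidth, storage,
                       grid, spacing, center_freq)

rule = pack_msri_rule(
    path_id=7,
    bandwidth=400,        # e.g., Gbps (encoding assumed)
    storage=80,           # e.g., GB (encoding assumed)
    grid=1,               # flex-grid identifier (assumed)
    spacing=125,          # e.g., channel spacing in 0.1 GHz units (assumed)
    center_freq=1931000,  # e.g., central frequency in 0.1 GHz units (assumed)
)
print(len(rule) * 8, "bits")   # 192 bits = 6 x 32
```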


Fig. 7 (a) Wireshark capture of message sequence for MSRI, (b) extended flow mod message in NC.


We also evaluate the performance of MSRI with the RIM scheme under a heavy-traffic-load scenario and compare it with the traditional CSO, multi-data center (MDC) and split-multi-flow (SMF) schemes [18] in terms of path blocking probability, resource occupation rate and path provisioning latency, using virtual machines. The traditional CSO scheme accommodates a service request from the user to the destination node immediately, based on the CSO of network and application resource status. With the multi-data center scheme, the destination of a service request can be provided by multiple data center servers if a single server does not have enough application resources. With the split-multi-flow scheme, otherwise-wasted noncontiguous spectral fragments can be utilized for new traffic requests, which reduces the blocking probability. The traffic requests are set up with bandwidths randomly distributed from 500 Mbps to 400 Gbps. We assume the CPU utilization in the data center is drawn from 0.1% to 1% per demand, the storage occupied in a server ranges from 1 GB to 10 GB per service request, and the hard disk size is 1 TB. Requests arrive at the network following a Poisson process, and the results are extracted from 100,000 generated demands per execution. We assume the bandwidth of a sub-carrier is 12.5 GHz, a typical value in elastic optical networks. In the RIM scheme, many factors can lead to differential delay between multiple paths, comprising deterministic and non-deterministic components. The deterministic differential delay is constant for the duration of a connection and contains propagation delay and equipment delay, while the emission and queuing delays arising at the transmitting node and in pointer buffers cause the non-deterministic differential delay. Although many factors contribute, the differential delay is primarily attributed to propagation delay. Therefore, we consider propagation delay as the differential delay constraint for simplicity, in line with the typical values recommended in ITU-T G.709 and by commercial framer devices. Here, the typical threshold of differential delay compensation using off-chip SDRAM is specified as 128 ms. Note that we use the classical first fit algorithm for spectrum assignment, sketched below.
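A minimal sketch of the classical first fit spectrum assignment under the contiguity constraint, assuming for simplicity a fixed capacity of 12.5 Gbps per 12.5 GHz slot (the actual capacity per slot depends on the modulation format):

```python
import math

SLOT_GHZ = 12.5   # sub-carrier bandwidth assumed in the evaluation

def first_fit(occupation, demand_gbps, gbps_per_slot=SLOT_GHZ):
    """Classical first fit: return the first run of contiguous free slots
    (1 = free, matching the convention above) wide enough for the demand,
    or None if the demand is blocked on this path."""
    need = math.ceil(demand_gbps / gbps_per_slot)
    run_start = None
    for i, bit in enumerate(occupation):
        if bit == 1:
            if run_start is None:
                run_start = i                   # a new free run begins
            if i - run_start + 1 >= need:
                return list(range(run_start, run_start + need))
        else:
            run_start = None                    # run broken by occupied slot
    return None   # blocked; RIM may trigger multi-flow integration here

print(first_fit([1, 0, 1, 1, 1, 1, 0, 1], 40))   # -> [2, 3, 4, 5]
```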

Figure 8(a) compares the path blocking probability of the four schemes in the NSFNet topology, which is shown in Fig. 9. Path blocking probability measures both network and application blocking, the latter measured by CPU and memory overflow. It can be seen clearly that the RIM scheme reduces the path blocking probability more effectively than the other schemes, especially when the network is heavily loaded. The reason is that the RIM scheme can integrate the application resources of multiple data center servers into one node of the VON request and, on that basis, aggregate multiple optical paths for one link of the request. As a result, fewer VON requests are blocked, since the proposed scheme breaks the limitations of network devices and data center servers through NFV. Compared with the RIM scheme, the MDC scheme provides resource integration only in the application stratum, while the SMF scheme only aggregates multiple optical fiber paths from the source to the same destination. The resource utilization of the four schemes is compared in Fig. 8(b). The resource utilization reflects the percentage of occupied resources out of the entire elastic optical network and application resources. The proposed RIM scheme significantly outperforms the other schemes in resource occupation rate. That is because the RIM scheme can flexibly integrate network and application resources for VON requests, so resources that would be left idle under the other schemes are utilized effectively. The path provisioning latencies of the schemes are compared in Fig. 8(c). Note that the latency reflects the average provisioning delay after a VON request arrives. The proposed RIM scheme reduces the path provisioning latency compared with the others, because it maps multi-stratum resources with parallel computing, which keeps the computation time low.


Fig. 8 Comparison on (a) path blocking probability, (b) resource occupation rate and (c) path provisioning latency among various schemes in heavy traffic load scenario.



Fig. 9 Network topology of NSFNet.


6. Conclusion

To enhance the QoS guarantees of data center services in elastic optical networks, this paper presents an MSRI architecture based on network function virtualization in a software defined elastic data center optical interconnect. Additionally, the RIM scheme is introduced for MSRI on the proposed architecture, which can evaluate the network status and integrate optical network and application resources. The functional architecture is described in this paper. The feasibility and efficiency of MSRI are verified on the control plane of our OpenFlow-based eSDN testbed. We also quantitatively evaluate the performance of the RIM scheme under a heavy-traffic-load scenario in terms of path blocking probability, resource utilization and provisioning latency, and compare it with the traditional CSO, MDC and SMF schemes. The results indicate that MSRI with the RIM scheme can utilize elastic optical network and data center resources effectively and enhance the end-to-end responsiveness of data center services, while reducing the blocking probability.

Our future MSRI work includes two aspects. One is to improve the performance of the RIM scheme and to extend the testbed to a large-scale network topology, considering NFV applications across multiple layers and domains. The other is to develop new message types to support new functionalities for MSRI on our eSDN testbed.

Acknowledgments

This work has been supported in part by National Natural Science Foundation of China (NSFC) project (61501049, 61271189, and 61571058), the Fundamental Research Funds for the Central Universities (2015RC15), Fund of State Key Laboratory of Information Photonics and Optical Communications (BUPT), P. R. China, and the Science and Technology Project of State Grid Corporation of China (SGIT0000KJJS1500008).

References and links

1. M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center network architecture,” Comput. Commun. Rev. 38(4), 63–74 (2008).

2. C. Kachris and I. Tomkos, “A survey on optical interconnects for data centers,” IEEE Commun. Surv. Tutor. 14(4), 1021–1036 (2012).

3. H. Yang, J. Zhang, Y. Zhao, Y. Ji, J. Han, Y. Lin, and Y. Lee, “CSO: cross stratum optimization for optical as a service,” IEEE Commun. Mag. 53(8), 130–139 (2015).

4. O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo, “Elastic optical networking: a new dawn for the optical layer?” IEEE Commun. Mag. 50(2), s12–s20 (2012).

5. W. Shieh, “OFDM for flexible high-speed optical networks,” J. Lightwave Technol. 29(10), 1560–1577 (2011).

6. I. Tomkos, S. Azodolmolky, J. Sole-Pareta, D. Careglio, and E. Palkopoulou, “A tutorial on the flexible optical networking paradigm: state of the art, trends, and research challenges,” Proc. IEEE 102(9), 1317–1337 (2014).

7. R. Nejabati, S. Peng, M. Channegowda, B. Guo, and D. Simeonidou, “SDN and NFV convergence: a technology enabler for abstracting and virtualising hardware and control of optical networks (invited),” in Proceedings of Optical Fiber Communication Conference (OFC 2015) (Optical Society of America, 2015), paper W4J.6.

8. M. Jinno, H. Takara, Y. Sone, K. Yonenaga, and A. Hirano, “Multiflow optical transponder for efficient multilayer optical networking,” IEEE Commun. Mag. 50(5), 56–65 (2012).

9. J. Fernandez-Palacios, V. López, B. Cruz, and O. Dios, “Elastic optical networking: an operators perspective,” in Proceedings of European Conference and Exhibition on Optical Communications (ECOC 2014) (Optical Society of America, 2014), paper Mo.4.2.1.

10. T. Tanaka, A. Hirano, and M. Jinno, “Advantages of IP over elastic optical networks using multi-flow transponders from cost and equipment count aspects,” Opt. Express 22(1), 62–70 (2014).

11. R. Vilalta, R. Muñoz, R. Casellas, R. Martínez, V. López, and D. López, “Transport PCE network function virtualization,” in Proceedings of European Conference and Exhibition on Optical Communications (ECOC 2014) (Optical Society of America, 2014), paper We.3.2.2.

12. H. Yang, J. Zhang, Y. Ji, Y. Tan, Y. Lin, J. Han, and Y. Lee, “Performance evaluation of data center service localization based on virtual resource migration in software defined elastic optical network,” Opt. Express 23(18), 23059–23071 (2015).

13. R. Martínez, R. Casellas, R. Vilalta, and R. Muñoz, “Experimental assessment of GMPLS/PCE-controlled multi-flow optical transponders in flexgrid networks,” in Proceedings of Optical Fiber Communication Conference (OFC 2015) (Optical Society of America, 2015), paper Tu2B.4.

14. L. Liu, W. R. Peng, R. Casellas, T. Tsuritani, I. Morita, R. Martínez, R. Muñoz, and S. J. B. Yoo, “Design and performance evaluation of an OpenFlow-based control plane for software-defined elastic optical networks with direct-detection optical OFDM (DDO-OFDM) transmission,” Opt. Express 22(1), 30–40 (2014).

15. M. Channegowda, R. Nejabati, M. Rashidi Fard, S. Peng, N. Amaya, G. Zervas, D. Simeonidou, R. Vilalta, R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, A. Autenrieth, J. P. Elbers, P. Kostecki, and P. Kaczmarek, “Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed,” Opt. Express 21(5), 5487–5498 (2013).

16. S. Das, G. Parulkar, and N. McKeown, “Why OpenFlow/SDN can succeed where GMPLS failed,” in Proceedings of European Conference on Optical Communication (ECOC 2012) (Optical Society of America, 2012), paper Tu.1.D.1.

17. F. Paolucci, F. Cugini, N. Hussain, F. Fresi, and L. Poti, “OpenFlow-based flexible optical networks with enhanced monitoring functionalities,” in Proceedings of European Conference and Exhibition on Optical Communications (ECOC 2012) (Optical Society of America, 2012), paper Tu.1.D.5.

18. H. Yang, J. Zhang, Y. Zhao, Y. Ji, J. Wu, Y. Lin, J. Han, and Y. Lee, “Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect,” Opt. Express 23(10), 13384–13398 (2015).

19. H. Yang, J. Zhang, Y. Zhao, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, “Performance evaluation of time-aware enhanced software defined networking (TeSDN) for elastic data center optical interconnection,” Opt. Express 22(15), 17630–17643 (2014).

20. C. Papagianni, A. Leivadeas, S. Papavassiliou, V. Maglaris, C. Cervello-Pastor, and A. Monje, “On the optimal allocation of virtual resources in cloud computing networks,” IEEE Trans. Comput. 62(6), 1060–1071 (2013).

21. C. Wang, W. Hung, and C. Yang, “A prediction based energy conserving resources allocation scheme for cloud computing,” in Proceedings of IEEE International Conference on Granular Computing (GrC 2014) (IEEE, 2014), pp. 320–324.

22. A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing,” Future Gener. Comput. Syst. 28(5), 755–768 (2012).
