
Demonstration of low latency Intra/Inter Data-Centre heterogeneous optical Sub-wavelength network using extended GMPLS-PCE control-plane


Abstract

This paper reports on the first user/application-driven multi-technology optical sub-wavelength network for intra/inter Data-Centre (DC) communications. Two DCs, each with a distinct sub-wavelength switching technology (frame-based synchronous TSON and packet-based asynchronous OPST), are interconnected via a WSON inter-DC network. The intra/inter DC testbed demonstrates ultra-low-latency (packet delay <270 µs and packet-delay-variation (PDV) <10 µs), flexible data-rate traffic transfer through point-to-point, point-to-multipoint, and multipoint-to-(multi)point connectivity, making it highly suitable for cloud-based applications and high performance computing (HPC). The extended GMPLS-PCE-SLAE control-plane enables application-driven end-to-end sub-wavelength path setup and resource reservation across the multi-technology data-plane, and has been assessed for as many as 25 concurrent requests.

©2013 Optical Society of America

1. Introduction

Cloud-based applications and network-centric services, such as virtualized PCs, Video/Game on Demand (VoD, GoD), Storage Area Networks (SAN), and data replication, have transformed traditional DCs into massive-scale computing infrastructures [1,2] with highly complex interconnectivity requirements, demanding any-to-any server communication with stringent QoS guarantees. Data Centres (DCs), as the main propellers of the "everything in the cloud" paradigm, with ever-increasing responsibilities in storing, processing, rendering and searching, should therefore employ highly effective intra/inter-DC networking in terms of connectivity, bandwidth and latency, so that services reach end users seamlessly and with the highest possible quality of experience.

Meanwhile, the hierarchical opaque L2/L3 DC networks most commonly deployed today impose scalability restrictions, resource inefficiency, non-optimal Quality of Service (QoS) and limited resiliency for delivering future application services [3]. As such, future applications could benefit from flexible, ultra-low-latency, finely granular optical network technologies able to integrate seamless provisioning of combined intra/inter-DC cloud-based computing and network services [4]. Such a network could deliver enhanced resource-usage efficiency and network scalability.

In our previous work [5] we reported on the full implementation of a data-centre networking solution based on a multi-technology sub-wavelength networking testbed. In this paper we elaborate on what was reported in [5], with more detailed and extensive descriptions, and with additional results from the advanced DC data-plane implementation enabling transparent traffic grooming and management. The paper is organised as follows: section 2 introduces the implemented testbed, which comprises the intra-DC networks of two prototype-scale single-tier distributed DCs [6] interconnected via an inter-DC core network.

Section 3 explains the intra-DC advanced sub-wavelength switching solutions of the two different technologies, which are interconnected through a WSON network. The data-plane of the testbed consists of two different optical sub-wavelength switched intra-DC research prototype testbeds: a) an asynchronous tuneable Optical Packet Switch Transport (OPST) ring [7], and b) a synchronous, multi-wavelength and topology-flexible Time-Shared Optical Network (TSON) [8]. The implemented data-plane enables multi-bitrate (100 Mbps-5.7 Gbps), transparent and very low latency/PDV point-to-point Ethernet data delivery.

Section 4 describes the implemented control-plane: an enhanced Generalised Multi-Protocol Label Switching (GMPLS) stack exploiting an extended Path Computation Engine (PCE) that supports sub-wavelength path finding, together with a resource allocation tool called the Sub-Lambda Assignment Engine (SLAE) for the time-shared TSON network. The control-plane implementation enables user/application-driven dynamic end-to-end sub-wavelength connection establishment with just enough bandwidth and duration. The bandwidth flexibility and scalability, connection transparency with resource efficiency, and network/IT resource awareness of the implemented testbed address the critical requirements of intra-DC networking environments. The two DC sub-wavelength technologies use pre-established connections over a WSON network for inter-DC communications.

Finally, section 5 presents the extended data-plane evaluations, along with the evaluation of the integrated control- and data-plane for end-to-end path setup and data transfer. The data-plane of the intra/inter DC networking solution has been evaluated for point-to-point, point-to-multipoint, and multipoint-to-(multi)point connectivity services, where it demonstrates data delivery with very low latency (<270 µs) and PDV (<10 µs). The performance of the multi-technology data-plane along with its enhanced unified control-plane has also been assessed for all phases of operation, individually and combined.

2. Intra/Inter DC test-bed and scenario

Figure 1 displays a complete view of the implemented testbed, including the topological scenario in Fig. 1(a), a more detailed implementation view in Fig. 1(b), and the testbed with the corresponding devices and components in Fig. 1(c). The testbed contains 11 optical nodes in the data-plane in total, comprising two intra-DC networks and the inter-DC network. In the control-plane, 8 server VM nodes are used, in addition to switches, to set up the out-of-band control communications. A number of servers are connected to each DC network for running applications inside and across the integrated network. The two intra-DC sub-wavelength technologies are TSON [6], implemented in a 4-node star topology, and the 3-node ring of the tuneable OPST system [7]. The two intra-DC networks are interconnected via a 4-node partial-mesh WSON network. Figure 1(a) shows the proposed distributed DC networks with intra- and inter-DC communications for a VM migration use case: upon request from either a user or an application to the control-plane, the content in use is moved to a more suitably located DC, aiming to improve the user quality of experience (QoE). Figure 1(b) provides a more detailed view of the implemented data-plane and control-plane nodes of the TSON, OPST, and WSON networks. In the data-plane, the blue boxes in the TSON domain are FPGA optoelectronic modules used for electrical/optical operations at the edge and for fast switch control in the core of the TSON network. These modules communicate with a software agent in the control-plane via CORBA interfaces. For the inter-DC network, the orange boxes represent the 4 optical nodes based on 3D MEMS optical systems; paths on this network are set up statically, and it carries Ethernet traffic between the two DC networks. At the other end lies the second DC, with the OPST system (indicated in green) as another intra-DC network solution. The communication between the control-plane and OPST is via a RESTful Web Services connection between a software agent in the control-plane and the management system of the OPST testbed. The control-plane developed on top (shown in orange) is based on GMPLS deployed as distributed software stacks on several VMs: one for each TSON node and one for the OPST system. In addition, two VMs act as User Network Interfaces (UNI), and one VM contains the PCE + SLAE resource allocation elements as the remaining control-plane nodes. Figure 1(c) displays the components and devices used in the implementation; the FPGA boards, servers, OPST boxes and the remaining items visualise the actual network deployment.

Fig. 1 (a) Network topology and formation for DC applications, (b) network implementation details in data-plane and control-plane, (c) network topology and formation view with the corresponding components.

3. Data-plane of (sub)wavelength switching for intra/inter DC communications

TSON: TSON is a multi-wavelength, fully bi-directional, synchronous and frame-based yet flexible system, with a 1 ms frame and 31 time-slices per frame. The TSON network implementation consists of FPGA optoelectronic platforms integrated with advanced optical components to enable high-performance processing and transparent switching and transport [8,9]. For the FPGA platforms we use Xilinx Virtex 6 HXT boards (156.25 MHz clock frequency) with multiple 10 Gbps DWDM SFP + transceivers (for control and transport). The optical layer of TSON uses four 2x2 PLZT switches [10] with 10-ns switching speed, EDFAs, and MUX/DEMUXes.

TSON edge nodes use the FPGA platforms to process Ethernet packets and generate optical time-slices from them at the ingress TSON edge, and to regenerate Ethernet frames from time-sliced optical bursts at the egress TSON edge node. In order to send and receive Ethernet and optical time-slices, each TSON edge node (nodes 1, 2 and 4 in Fig. 1) uses four SFP + transceivers: two 1310 nm, 10 km reach transceivers for end-point server traffic and control, and two DWDM 80 km reach transceivers at 1544.72 nm and 1546.12 nm. Figure 2 illustrates the function blocks developed for TSON edge nodes for high-performance Ethernet-TSON and TSON-Ethernet conversions. At the ingress edge node, client Ethernet frames are buffered based on their destination MAC address (4 MAC buffers are implemented due to the available memory capacity) and the buffered traffic is then aggregated in TX FIFOs to form optical bursts (up to 8x1500 byte Ethernet frames in one time-slice) [8]. The bursts are released on the allocated time-slices and wavelength(s), which are set inside the FPGA LUTs by the control-plane. Each flow of traffic, identified by destination MAC, can be allocated from one time-slice per frame up to all time-slices on both available wavelengths. This provides flexible bandwidth from 100 Mbps (the bitrate achievable per time-slice, including all the overhead required for bursty optical TX and RX and the switching gaps inserted in each time-slice) up to 5.7 Gbps over two wavelengths, with 100 Mbps granularity. It should be noted that if the network is set up in advance and proactively, it can operate with even greater bandwidth flexibility, establishing connections as low as 100 Kbps using only one frame of communication. On receiving the optical bursts, the egress TSON edge node segregates and extracts the Ethernet packets and sends them out.
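To make the time-slice granularity concrete, the following minimal sketch maps a requested flow rate to a per-frame time-slice allocation, using only the figures quoted above (1 ms frames, 31 time-slices per frame, ~100 Mbps of usable capacity per slice, two wavelengths). It is an illustration of the arithmetic, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): mapping a requested
# bandwidth to TSON time-slice allocations, assuming the figures quoted in the
# text: 1 ms frames, 31 time-slices per frame, ~100 Mbps of usable capacity per
# time-slice (after burst overhead and switching gaps), and two wavelengths.
import math

SLICES_PER_FRAME = 31
WAVELENGTHS = 2
MBPS_PER_SLICE = 100          # usable rate per time-slice quoted in the text

def slices_needed(requested_mbps: float) -> int:
    """Number of time-slices per frame needed for a requested flow rate."""
    return math.ceil(requested_mbps / MBPS_PER_SLICE)

def max_rate_mbps() -> float:
    """Upper bound when every slice on every wavelength is allocated."""
    return SLICES_PER_FRAME * WAVELENGTHS * MBPS_PER_SLICE

if __name__ == "__main__":
    for req in (100, 950, 3000):
        n = slices_needed(req)
        print(f"{req} Mbps -> {n} time-slice(s) per frame "
              f"({n * MBPS_PER_SLICE} Mbps granted)")
    print("full allocation bound:", max_rate_mbps(), "Mbps "
          "(~5.7 Gbps in practice once per-slice overhead is included)")
```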

Fig. 2 TSON FPGA function blocks.

TSON core nodes employ the fast PLZT optical switches (node 3 in Fig. 1). This node uses the four 2x2 PLZT switches (one switch per direction and wavelength) to direct the incoming optical time-sliced signals towards the appropriate output ports, as defined by the control-plane. The TSON core node uses the same type of high-performance FPGA board for PLZT control, but does not generate any traffic. The FPGA LUTs are filled from the control-plane through a customised Ethernet communication carrying PLZT switching information, changing the switching state per time-slice on the PLZT switches so as to establish and maintain optical LSPs across the TSON domain.
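The sketch below illustrates the kind of per-time-slice look-up table such a core node could consult; the "bar"/"cross" state names and the table layout are assumptions for illustration, since the actual FPGA LUT format is not described in the text.

```python
# Hypothetical sketch of a per-time-slice switching schedule for one 2x2 PLZT
# switch. The control-plane writes entries; the switch driver reads the state
# at every time-slice boundary.
from enum import Enum

class SwitchState(Enum):
    BAR = 0    # in0 -> out0, in1 -> out1
    CROSS = 1  # in0 -> out1, in1 -> out0

class PlztScheduleLUT:
    """Switching state per (wavelength, time-slice index) for one 2x2 switch."""
    def __init__(self, slices_per_frame: int = 31):
        self.slices_per_frame = slices_per_frame
        self.table = {}

    def program(self, wavelength: str, time_slice: int, state: SwitchState):
        # Written by the control-plane when an LSP is set up.
        self.table[(wavelength, time_slice % self.slices_per_frame)] = state

    def state_for(self, wavelength: str, time_slice: int) -> SwitchState:
        # Default to BAR when no LSP uses the slice.
        return self.table.get((wavelength, time_slice % self.slices_per_frame),
                              SwitchState.BAR)

lut = PlztScheduleLUT()
lut.program("1544.72nm", 5, SwitchState.CROSS)
print(lut.state_for("1544.72nm", 5), lut.state_for("1546.12nm", 5))
```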

TSON requires global frame synchronisation among TSON nodes to meet the precision needed for its operations. The network-wide synchronisation is implemented by connecting the FPGA boards to a statically selected master clock node via dedicated synchronisation links, and using a 3-way frame synchronisation protocol, as shown in Fig. 3, to tune and maintain a global frame.

Fig. 3 TSON data-plane synchronisation protocol.

The master clock node regularly sends synchronisation frames (generated within just one FPGA clock cycle: 156.25 MHz → 6.4 ns) with time stamps to the clock slave nodes. The slave nodes use the time stamps and the known trip time between the nodes to compensate for clock variations and drift. Time-slice synchronisation is not needed in TSON since the link delay between network nodes is engineered to be a multiple of the time-slice duration. As the network expands geographically, the period of the regular synchronisation messages can be adjusted to maintain synchronisation and avoid major drift due to the larger trip times.
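As a minimal sketch of the drift-compensation idea, the function below computes a slave's clock offset from a timestamped sync frame, assuming a known, fixed one-way trip time on the dedicated sync link; the names and numbers are illustrative only, not the protocol's actual message format.

```python
# Minimal sketch: clock drift compensation from a timestamped sync frame,
# assuming the one-way trip time on the dedicated sync link is known and fixed.
def clock_offset_ns(master_timestamp_ns: int,
                    slave_rx_time_ns: int,
                    known_trip_time_ns: int) -> int:
    """Offset to subtract from the slave clock so frames stay aligned.

    The master's timestamp plus the fixed propagation delay is when the sync
    frame *should* arrive by the slave's clock; any difference is drift.
    """
    expected_arrival = master_timestamp_ns + known_trip_time_ns
    return slave_rx_time_ns - expected_arrival

# Example: master stamps 1_000_000 ns, fibre trip time 5_000 ns, slave sees
# the frame at 1_005_640 ns on its own clock -> slave is 640 ns fast.
print(clock_offset_ns(1_000_000, 1_005_640, 5_000))  # -> 640
```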

Network resources in TSON are time-slices over the available wavelengths. The testbed uses the SLAE to allocate free time-slices to path (LSP) requests for connections across the TSON domain. The allocation uses flexible, non-contiguous time-slices across all the wavelengths to establish the paths (Fig. 4). The allocated network resources are then confirmed at each TSON edge and core node for TX/RX and switching accordingly.
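The following sketch captures the spirit of that non-contiguous assignment: free (wavelength, slice) pairs are picked wherever they are available, not necessarily adjacent. The first-fit policy shown here is an assumption for illustration, not the SLAE's documented algorithm.

```python
# Illustrative first-fit, non-contiguous time-slice assignment across
# wavelengths (policy assumed; the real SLAE algorithm is not reproduced here).
def allocate_slices(free_map: dict, slices_needed: int):
    """Pick `slices_needed` free (wavelength, slice) pairs, first fit."""
    allocation = []
    for wavelength, free_slices in free_map.items():
        for s in sorted(free_slices):
            allocation.append((wavelength, s))
            if len(allocation) == slices_needed:
                # Commit: remove the chosen slices from the free map.
                for w, sl in allocation:
                    free_map[w].discard(sl)
                return allocation
    return None  # blocking: not enough free slices for this LSP request

free = {"lambda1": {0, 2, 3, 7}, "lambda2": set(range(31))}
print(allocate_slices(free, 6))  # slices 0,2,3,7 on lambda1 + 0,1 on lambda2
```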

Fig. 4 TSON resource allocation for setting up LSPs: (a) allocated slices for LSP1, (b) allocated slices for LSP2 in addition to LSP1.

OPST: The second sub-wavelength technology is the OPST system, which is packet-based and asynchronous (this OPST system is a research prototype testbed), indicated by nodes 9-11 in Fig. 1. OPST collapses optical transport at layer 0 up to Ethernet switching at layer 2 under the same internal ring of network transport, control and management. Using this approach it transforms the entire ring into a distributed L2 switch with optical packet transfer that operates as a single new network element (Fig. 5). The collapsing of layers 0 to 2 is achieved by efficient processing of Ethernet traffic from clients, and then using ultra-fast, nanosecond-tuneable laser transmitters to place and route the packets on the ring over the wavelength designated for each destination node inside OPST (three wavelengths are used in the OPST testbed in total, as there are three OPST nodes). The destination node drops the optical Ethernet packets using wavelength-selective switches and burst-mode receivers and directs them out.

Fig. 5 OPST system data-plane and internal management system.

The OPST data-plane function blocks are shown in Fig. 6. When Ethernet frames enter the OPST system, they are queued in a Virtual Output Queue (VOQ) based on their destination MAC address. The queued frames are then sent out on the wavelength allocated to their destination MAC using fast tunable lasers (FTL). The wavelength allocation per destination is carried out by an internal Dynamic Bandwidth Allocation (DBA) element. On the receiving end, OPST nodes utilise burst-mode receivers to drop the packets addressed to them, which are sent over the wavelength corresponding to that destination node. The received optical packets are then sent out on 10GE links.
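A minimal sketch of the VOQ idea follows: frames are queued per destination MAC, and each destination node is reached on its own wavelength (three wavelengths for the three ring nodes). The MAC-to-node and node-to-wavelength mappings below are invented for illustration; the real DBA logic is internal to the OPST system.

```python
# Sketch of per-destination virtual output queueing with a fixed
# destination-node -> wavelength mapping (mapping invented for illustration).
from collections import defaultdict, deque

DEST_WAVELENGTH = {"node9": "lambda_a", "node10": "lambda_b", "node11": "lambda_c"}

class VirtualOutputQueues:
    def __init__(self, mac_to_node: dict):
        self.mac_to_node = mac_to_node
        self.queues = defaultdict(deque)      # one FIFO per destination MAC

    def enqueue(self, frame: bytes, dest_mac: str):
        self.queues[dest_mac].append(frame)

    def next_burst(self, dest_mac: str):
        """Drain one frame and report the wavelength the FTL should tune to."""
        node = self.mac_to_node[dest_mac]
        return self.queues[dest_mac].popleft(), DEST_WAVELENGTH[node]

voq = VirtualOutputQueues({"00:11:22:33:44:55": "node10"})
voq.enqueue(b"...ethernet frame...", "00:11:22:33:44:55")
print(voq.next_burst("00:11:22:33:44:55")[1])   # -> lambda_b
```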

Fig. 6 OPST system function blocks for sending and receiving optical bursts.

In order to avoid any collisions in the optical domain, the OPST system uses an Optical Media Access Control (OMAC) mechanism, which employs Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). The OMAC adjusts the burst emission time by snooping on the receiving channel and confirming the availability of the wavelength for transmission.
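The toy sketch below mirrors that control flow only: sense the target wavelength, transmit if idle, otherwise defer. The channel model, timings and deferral limit are invented purely for illustration and do not reflect the OMAC's actual timing logic.

```python
# Toy CSMA/CA control flow for burst emission (channel model invented).
import random

def channel_busy(wavelength: str) -> bool:
    # Stand-in for snooping the receive path of `wavelength`.
    return random.random() < 0.3

def send_with_csma_ca(wavelength: str, max_deferrals: int = 10) -> bool:
    deferrals = 0
    while channel_busy(wavelength):
        deferrals += 1
        if deferrals > max_deferrals:
            return False          # give up; a real MAC would keep backing off
        # Defer emission: the real OMAC shifts the burst emission time rather
        # than looping, but the decision logic is the same in spirit.
    return True                   # wavelength idle: emit the optical burst

print("burst sent" if send_with_csma_ca("lambda_b") else "deferred out")
```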

WSON network: The inter-DC network is based on a 4-node (nodes 5-8) bidirectional WSON partial mesh, built using a 3D MEMS switch platform as shown in Fig. 1.

4. Extended GMPLS-PCE-SLAE control-plane

The GMPLS architecture as defined within the IETF CCAMP WG is designed to be fully agnostic of specific deployment models and transport environments. GMPLS is built upon the MPLS procedures and broadens the applicability of those mechanisms beyond the single data-plane envisioned by the original MPLS specifications.

Connections controlled by GMPLS fulfil both the users' need for tailored bandwidth per connection and the network operator's wish to maintain a controllable and manageable network infrastructure. The natively generalised control approach that GMPLS applies to the underlying data-plane also allows multiple switching technologies to be handled under a single control-plane instance.

The implemented multi-technology GMPLS stack (Fig. 7) delivers, for the first time, specific extensions and procedures to support sub-wavelength switching granularity: sub-wavelength network resource modelling, the Sub-Lambda Assignment Engine (SLAE) for TSON, enhanced GMPLS + PCE routing algorithms, and OSPF-TE and RSVP-TE protocol extensions for sub-wavelength resource reservation. The extended GMPLS control-plane addresses intra-DC networking requirements by combining IT and network resource allocation, and by letting the applications drive the network path establishment.
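As a hedged illustration of what a sub-wavelength label might carry in the extended signalling (compare the Time-Slice Label Object and Flexible Time-Slice Assignment TLV shown later in Fig. 8), the sketch below models a label as a wavelength identifier plus a 31-bit slice bitmap. The field names and layout are assumptions; the actual object formats are defined by the project's protocol extensions and are not reproduced in this paper.

```python
# Assumed (illustrative) representation of a sub-wavelength label: one DWDM
# channel plus a bitmap of the time-slices assigned on it.
from dataclasses import dataclass

@dataclass
class TimeSliceLabel:
    wavelength_id: int            # which DWDM channel the slices live on
    slice_bitmap: int = 0         # 31-bit map: bit i set => time-slice i used

    def assign(self, slice_index: int):
        self.slice_bitmap |= (1 << slice_index)

    def slices(self) -> list:
        return [i for i in range(31) if (self.slice_bitmap >> i) & 1]

label = TimeSliceLabel(wavelength_id=1)
for s in (0, 2, 3, 7):
    label.assign(s)
print(label.slices())   # -> [0, 2, 3, 7]
```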

Fig. 7 GMPLS main blocks and vertical structure.

In operation, the GMPLS edge controller is triggered from the UNI interface to set up an end-to-end sub-wavelength lightpath. It invokes the extended PCE to compute a TSON + OPST multi-layer route. The PCE then calls the SLAE for time-slice allocation over the TSON region, and the SLAE allocates free time-slices using its database (Fig. 8). After path and time-slice computation across the two DC networks, the GMPLS edge controller starts RSVP-TE signalling to set up the multi-layer path over the TSON and OPST domains. In the control-plane, the GMPLS stack at each hop (the whole OPST ring constitutes a single hop, while each TSON node is controlled as an independent entity) communicates with the corresponding data-plane node for resource reservation.
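The following pseudocode-style sketch only mirrors the order of operations just described (UNI trigger → PCE route → SLAE time-slice assignment for the TSON region → RSVP-TE signalling hop by hop, with the OPST ring as a single hop). All function bodies are placeholders; none of these are real control-plane APIs.

```python
# Order-of-operations sketch of end-to-end sub-wavelength path setup
# (all functions are stubs, not the real GMPLS/PCE/SLAE interfaces).
def compute_route(src: str, dst: str) -> list:
    # PCE: TSON + OPST multi-layer route (stubbed with a fixed example path).
    return ["tson-edge-1", "tson-core-3", "tson-edge-4", "wson", "opst-ring"]

def assign_time_slices(route: list, bandwidth_mbps: int) -> dict:
    # SLAE: only the TSON portion of the route needs time-slice assignment.
    tson_hops = [h for h in route if h.startswith("tson")]
    return {h: list(range(bandwidth_mbps // 100)) for h in tson_hops}

def rsvp_te_signal(route: list, slices: dict) -> bool:
    # RSVP-TE: reserve resources hop by hop via each hop's agent.
    for hop in route:
        print(f"reserving on {hop}: slices={slices.get(hop, 'n/a (packet hop)')}")
    return True

def setup_lightpath(src: str, dst: str, bandwidth_mbps: int) -> bool:
    route = compute_route(src, dst)
    slices = assign_time_slices(route, bandwidth_mbps)
    return rsvp_te_signal(route, slices)

setup_lightpath("server-dc1", "server-dc2", 300)
```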

Fig. 8 a) Time-Slice Label Object; b) Flexible Time-Slice Assignment TLV; c) Time-Slice availability sub-TLV; d) Calendar event sub-TLV.

The communication between the control-plane and the data-plane uses a purpose-built Transport Network Resource Controller (TNRC) module as a control-plane to data-plane translator. The TNRC is implemented as an Abstract Part (AP) and Specific Parts (SP), where the AP communicates with the SP specific to whichever data-plane technology lies underneath, making the GMPLS control-plane agnostic of the data-plane. The developed SPs communicate with TSON using a CORBA interface, and with OPST using XML RESTful Web Services [12]. The TSON controller in turn uses customised Ethernet communications to exchange information with the TSON FPGA platforms for TX/RX and switching purposes.
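The sketch below illustrates that Abstract Part / Specific Part split as a plain adapter pattern: one technology-agnostic interface in front, with a technology-specific part translating to the real southbound channel (CORBA for TSON, RESTful XML for OPST). Class and method names are illustrative, not the real TNRC API.

```python
# Adapter-style sketch of the TNRC Abstract Part / Specific Part split
# (illustrative names; southbound calls are represented by print statements).
from abc import ABC, abstractmethod

class SpecificPart(ABC):
    @abstractmethod
    def reserve(self, resource: dict) -> bool: ...

class TsonSP(SpecificPart):
    def reserve(self, resource: dict) -> bool:
        # Would push time-slice/LUT updates to the FPGA agent over CORBA.
        print("TSON CORBA agent <-", resource)
        return True

class OpstSP(SpecificPart):
    def reserve(self, resource: dict) -> bool:
        # Would POST an XML document to the OPST management system (REST).
        print("OPST REST endpoint <-", resource)
        return True

class AbstractPart:
    """Technology-agnostic face of the TNRC used by the GMPLS stack."""
    def __init__(self, sp: SpecificPart):
        self.sp = sp
    def reserve(self, resource: dict) -> bool:
        return self.sp.reserve(resource)

AbstractPart(TsonSP()).reserve({"slices": [0, 2, 3], "wavelength": 1})
```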

5. Evaluation and results

Transparent traffic grooming and management: By taking advantage of statistical multiplexing of optical connections, the integrated sub-wavelength data-plane enables traffic management that provides various transparent connectivity services with arbitrary bit rates: point-to-point, point-to-multipoint (a traffic segregation scenario), and multipoint-to-(multi)point (a traffic aggregation scenario). TSON achieves this functionality by exploiting its provisioning system to set up LSPs with various time-slice allocation patterns, along with burst switching inside the TSON domain, whilst OPST, with its packet-based asynchronous system, delivers these services using its collapsed L2-L0 routing and transport. Figure 9 displays three transparent traffic management use cases deployed on the implemented testbed. These use cases illustrate transparent services of multipoint-to-point in Fig. 9(a), multipoint-to-multipoint in Fig. 9(b), and a traffic engineering use case deploying multipath routing of data in Fig. 9(c). On the right side of Fig. 9, the topological formation of the different networks, TSON (DC1), WSON (inter-DC) and OPST (DC2), can be seen, arranged as a star, partial mesh, and ring respectively. On the left side, the corresponding data-delivery latency results measured for a number of use cases are presented.

Fig. 9 Different connectivity services: (a) multipoint-to-point, (b) multipoint-to-multipoint, (c) multi-path routing.

Data-plane performance: In Fig. 9(a), Ethernet traffic is delivered across the multi-technology testbed with total bit rates of 1 Gbps, 2 Gbps, and 3 Gbps. At DC 1, the Ethernet frames are aggregated into optical bursts at the TSON ingress edge node and emitted into the TSON core. At the core, optical aggregation of traffic takes place: the optical bursts from each TSON edge node are combined and switched towards the egress TSON edge node using the appropriate time-slices. At the egress TSON edge node, the original, now aggregated, Ethernet traffic is retrieved and sent to the WSON inter-DC network. This Ethernet data is then transferred to the OPST ring in DC 2, and from there delivered to the client. In this scenario, TSON is provisioned to aggregate the smaller portions of traffic from two ingress edge nodes (500 Mbps, 1 Gbps and 1.5 Gbps per ingress link, in accordance with the total bit rate) into one egress point. The data-plane delay measurements of this scenario are shown in Fig. 9 with the corresponding bars for different traffic rates and frame sizes. The delay is at its highest (330 µs) for the lowest total applied traffic rate of 1 Gbps, and it drops as the data rate increases. This is explained by the TSON data aggregation and burst generation blocks in the electronic architecture, which queue Ethernet frames and hold them until the release point on the assigned time-slices, producing delays that shrink as the rate of the incoming traffic grows. The difference in delay between 1500 Byte and 64 Byte frames is also observable: the smaller 64 Byte frames experience less propagation delay inside OPST, despite needing to stay longer in the TSON aggregation mechanism to fill a full burst before emission.
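A back-of-the-envelope sketch makes the inverse trend explicit: the time to accumulate a burst's worth of data at the ingress buffer scales inversely with the input rate. The 12,000-byte burst size (8 x 1500-byte frames) comes from the text; the rest is simple arithmetic, not a model of the measured end-to-end delay, which also includes fixed per-hop components.

```python
# Burst fill time vs. input rate: why aggregation delay shrinks at higher rates.
BURST_BYTES = 8 * 1500   # up to 8 x 1500-byte Ethernet frames per time-slice

def burst_fill_time_us(input_rate_gbps: float) -> float:
    """Time to accumulate one full burst at the given sustained input rate."""
    bits = BURST_BYTES * 8
    return bits / (input_rate_gbps * 1e9) * 1e6   # microseconds

for rate in (1.0, 3.0, 5.7):
    print(f"{rate} Gbps -> {burst_fill_time_us(rate):.0f} us to fill a burst")
# 1 Gbps -> 96 us, 3 Gbps -> 32 us, 5.7 Gbps -> ~17 us: the same inverse trend
# seen in the measured latencies, on top of fixed propagation/processing delays.
```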

In Fig. 9(b), the use of the testbed for multipoint-to-multipoint communication is showcased: the traffic is bundled at the ingress of DC1 by TSON, to be transmitted over one channel to DC2 across the WSON core network. The traffic in DC2 is segregated based on the destination MAC addresses and delivered. The latency measurements are shown on the left side of Fig. 9(b); the difference in data delivery latency between the two ports in DC2 is caused by the longer trip time for port 1, due to the fibre span in OPST between the two ports. It should be noted that the maximum bit rate for 64 Byte Ethernet frames is limited to ~2.2 Gbps in this setup because of the greater ratio of overhead to payload compared with 1500 Byte Ethernet frames.

In Fig. 9(c), a multipath scenario is showcased, in which the data is transported between the two DCs over two different routes. The traffic is segregated in DC1 by TSON, transferred across the WSON inter-DC network, then aggregated in the OPST DC2 and finally delivered to the end user. The segregation has been carried out for two different data-rate cases: (1) two routes with identical data rates (50% of the total per route) between the two DCs, and (2) two routes with one delivering 67% of the total traffic input while the second carries the remaining 33% across the TSON-WSON-OPST DC networks. This use case is of traffic engineering importance, showcasing the ability of the implemented testbed to perform transparent multi-path routing and load balancing for DC networks. The latency results for the 50% segregation case at multiple bit rates are shown in the central chart of Fig. 9(c), whilst the network delay performance for the 67%/33% segregation pattern is shown in the left chart of Fig. 9(c). The higher delay for the larger 1500 Byte packets is mostly due to the higher packet serialisation delays in the OPST nodes.

The slight difference in the average data delivery delays across the scenarios is caused by the number and processing times of the nodes involved in the OPST and TSON networks; as can be seen, the second scenario, with six nodes across TSON and OPST involved in packet processing, results in the highest network delay.

End-to-end evaluations: One of the main objectives of this work is to use the extended and customised GMPLS control-plane to serve multiple concurrent application-driven end-to-end path-setup requests across the integrated DC testbeds, enabling automatic, dynamic and flexible server-to-server communications. The complete end-to-end setup time for single and concurrent lightpath requests, from GMPLS invocation (at the UNI gateway) until transmission of data, has been measured for the different phases and technologies of operation, as shown in Fig. 10.

Fig. 10 Control-plane latency (service establishment) results for concurrent path requests.

For a single request, the measurements show that the PCEP + PCE operations take less than 2 seconds. With the SLAE and GMPLS signalling added for the TSON network only, the delay increases slightly to just above 2 seconds. Adding the OPST to the end-to-end path setup procedure increases the delay to up to 100 seconds. The evaluation also covers higher numbers of concurrent requests, which show a linear increase because the requests were queued and handled sequentially.

For the busiest scenario of 25 concurrent requests, the path computation and control-plane operations take up to 10 seconds in total. Adding the TSON data-plane with the SLAE to the measurements, the latency rises to 12 seconds for the 25 parallel requests. With the OPST system added, this value increases to around 400 seconds. The main reason behind this unexpected increase in establishment time is the specific management software used in this OPST prototype.

The end-to-end testbed evaluations also include delay measurements, shown in Fig. 11. According to the results, the OPST system shows very low latency data delivery thanks to its asynchronous optical packet switching mechanism, delivering ultra-low latency (<40 µs) and very low PDV (<10 µs) independent of the traffic load. The TSON system, on the other hand, delivers higher but still very low latency (<260 µs) and can achieve very low PDV (<5 µs) for higher data rates, due to its packet buffering and time-sliced aggregation mechanisms. For TSON it should be noted that the higher the bit rate, the faster the aggregation and buffering; this is why the end-to-end intra/inter-DC TSON-WSON-OPST latency drops from 270 µs (at 1 Gbps) to ~150 µs (at 5.7 Gbps) for 1500 Byte Ethernet frames, while the PDV remains below 10 µs. In Fig. 12, the measured end-to-end PDV for different packet sizes is displayed: the 1500 Byte Ethernet frames experience the most variable queuing delays in the TSON buffers, as larger packets are more likely to be shifted to later bursts.

Fig. 11 Data-plane latency results for different bitrates.

Fig. 12 (a) Data-plane PDV for 64 Byte packets, (b) Data-plane PDV for 1500 Byte packets.

6. Conclusion

We have demonstrated for the first time a fully implemented multi-technology, multi-layer, intra/inter-DC heterogeneous sub-wavelength network, composed of advanced optical control- and data-plane solutions. The data-plane of the DC network has 11 nodes in total: TSON in DC1 (4-node star), WSON as the inter-DC network (4-node partial mesh), and OPST in DC2 (3-node ring). The enhanced GMPLS + PCE + SLAE control-plane sets up application-driven paths across the two sub-wavelength technologies of TSON (synchronous and frame-based, with bit-rate support from 100 Mbps to 5.7 Gbps in 100 Mbps steps) and OPST (asynchronous and packet-based, with bit-rate support up to 5.6 Gbps), which are interconnected through pre-established WSON connections.

The implemented network demonstrates flexible connectivity patterns at various bit rates, with point-to-point, point-to-multipoint and multipoint-to-(multi)point capabilities, by taking advantage of the statistical multiplexing nature of the sub-wavelength technologies, enabling efficient optical traffic management for intra-DC communications. The network data-plane evaluations show very low latency data delivery across the individual and integrated testbeds (TSON with <270 µs and OPST with <40 µs, with PDV <10 µs for higher data rates), which is of great importance for DC networks. The integrated data-plane, enhanced with the IT/network-resource-aware, technology-agnostic, sub-wavelength-capable, unified GMPLS control-plane, has also been evaluated for application-driven path setup under concurrent path requests.

Acknowledgment:

This work is supported by the EC through IST STREP project MAINS (INFSO-ICT-247706) and PIANO + ADDONAS as well as EPSRC grant EP/I01196X: Transforming the Internet Infrastructure: The Photonics Hyperhighway.

References and links

1. C. F. Lam, L. Hong, B. Koley, Z. Xiaoxue, V. Kamalov, and V. Gill, “Fiber optic communication technologies: What's needed for datacenter network operations,” IEEE Commun. Mag. 48(7), 32–39 (2010). [CrossRef]  

2. A. Vahdat, L. Hong, Z. Xiaoxue, and C. Johnson, “The emerging optical data center,” in Optical Fiber Communication Conference and Exposition (OFC/NFOEC 2011), pp. 1–3.

3. Enterasys Networks, Inc., “Data Center Networking – Connectivity and Topology Design Guide” (2011), http://www.enterasys.com/company/literature/datacenter-design-guide-wp.pdf.

4. S. J. B. Yoo, Y. Yawei, and W. Ke, “Intra and inter datacenter networking: The role of optical packet switching and flexible bandwidth optical networking,” in conference of Optical Network Design and Modeling (ONDM) (2012), Vol. 16, pp.1–6.

5. B. R. Rofoee, G. Zervas, Y. Yan, D. Simeonidou, G. Bernini, G. Carrozzo, N. Ciulli, J. Levins, M. Basham, J. Dunne, M. Georgiades, A. Belovidov, L. Andreou, D. Madrigal, J. Aracil, V. Lopez, and J. P. F. Palacios, “First demonstration of ultra-low latency intra/inter DC heterogeneous multi-technology optical sub-wavelength network using extended GMPLS-PCE control-plane,” in European Conference and Exhibition on Optical Communication (ECOC) Post Deadline (Th.3.D), 2012.

6. Juniper Networks, “Cloud Ready Data Center Network Design Guide”, http://www.juniper.net/us/en/local/pdf/design-guides/8020014-en.pdf, (2012).

7. G. S. Zervas, B. R. Rofoee, Y. Yan, D. Simeonidou, G. Bernini, G. Carrozzo, and N. Ciulli, “Control and transport of Time Shared Optical Networks (TSON) in metro areas,” in Future Network & Mobile Summit (FutureNetw), pp. 1–9, 4–6 July 2012.

8. Y. Yan, G. S. Zervas, Y. Qin, B. R. Rofoee, and D. Simeonidou, “High Performance and Flexible FPGA-Based Time Shared Optical Network (TSON) Metro Node” in European Conference and Exhibition on Optical Communication (ECOC) (We.3.D), 2012.

9. G. S. Zervas, J. Triay, N. Amaya, Y. Qin, C. Cervelló-Pastor, and D. Simeonidou, “Time Shared Optical Network (TSON): a novel metro architecture for flexible multi-granular services,” Opt. Express 19(26), B509–B514 (2011). [CrossRef]   [PubMed]  

10. K. Nashimoto, K. Kudzuma, and D. Han, “Nano-second response, polarization insensitive and low-power consumption PLZT 4×4 matrix optical switch,” in conference of Optical Fiber Communication Conference and Exposition (OFC/NFOEC), 2011.

11. P. Pan and T. Nadeau, “Software-Defined Network (SDN) Problem Statement and Use Cases for Data Center Applications,” IETF Internet Draft, October 2011, http://tools.ietf.org/html/draft-pan-sdn-dc-problem-statement-and-use-cases-01.

12. MAINS deliverable 2.2, http://ist-mains.eu/wiki/index.php?title=File:MAINS_WP2_D2.2_XML1_v0.5.zip.
