

Tuesday, March 27, 2018

The hyper-converged network becoming more common in HCI

It is commonly said that hyper-converged infrastructure systems virtualize storage, compute and networking resources. That last one, networking, is not like the other two in an HCI implementation. In an HCI system, it is easy to take physical compute and storage resources and turn them into logical pools of virtual resources delivered to the virtual machines (VMs) at the end-user level. In each of those two cases, you are virtualizing just a few things -- CPU cycles and RAM for compute, or capacity for storage. In contrast, implementing a hyper-converged network means virtualizing the physical switches and controllers and dividing up among the VMs the bandwidth, throughput, access and other features they provide. Virtualizing how the end user accesses network resources, however, is not the same thing as making the resource itself virtual.

Because the network is vital to all the other virtualized resources, it is arguably the most important aspect of an HCI system. If the network function doesn't operate properly, the VMs can experience everything from slowdowns to simply not being available to the end user. In other words, no network means no VMs to work on.

One way a hyper-converged network is used by nearly all HCI platforms is by virtualizing the network interface card (NIC) in the VMs. This is also known as I/O virtualization, because it virtualizes the network input and output of the VM. Both Microsoft's Hyper-V hypervisor and VMware's ESX support this, and Microsoft added NIC teaming to Windows Server 2016, allowing virtual NICs to be combined into a pooled virtual network adapter to reduce connectivity and throughput issues for any one VM.
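The pooling behavior of NIC teaming can be sketched as a toy model. This is illustrative Python, not the Windows Server or hypervisor API; the class, method names and flow-ID format are invented for the example:

```python
# Toy model (not a real teaming API) of NIC teaming: several virtual NICs
# pooled into one logical adapter, with per-flow hashing so one flow stays
# on one member NIC, and surviving members absorb traffic on failure.
import zlib

class NicTeam:
    def __init__(self, nic_names):
        self.nics = list(nic_names)

    def pick_nic(self, flow_id):
        # Hash-based distribution: a given flow consistently maps to
        # the same member NIC, avoiding packet reordering.
        return self.nics[zlib.crc32(flow_id.encode()) % len(self.nics)]

    def fail(self, nic):
        # Remove a failed member; remaining NICs take over transparently.
        self.nics.remove(nic)

team = NicTeam(["vnic0", "vnic1", "vnic2"])
first = team.pick_nic("vm1:10.0.0.5:443")
team.fail(first)
print(team.pick_nic("vm1:10.0.0.5:443"))  # flow remaps to a surviving NIC
```

The design point the sketch captures is that teaming gives the VM one logical adapter whose capacity and resilience come from the pool, rather than from any single physical or virtual NIC.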
Virtualizing networks

To be truly on the same level as the compute and storage resources in a hyper-converged platform, a hyper-converged network would need to take over the function of the physical switches and controllers, turning them into a logical pool from which any set amount could be assigned to each VM. That would require much of the network hardware to be contained in the HCI appliance or replicated via software in the appliance. VMware is one of the vendors with the most fully formed hyper-converged network technology, via its NSX product. While NSX was originally developed to work solely in a VMware HCI system, last year the company released NSX-T, which can bring network virtualization to operations running the Kernel-based Virtual Machine hypervisor or ones based on OpenStack. Nutanix is working on adding networking to its HCI platform through its Acropolis infrastructure stack. These networking features include microsegmentation services and orchestration through open APIs.

Software-defined networking and HCI

Software-defined networking (SDN) is exactly as stated: running software that virtualizes management of typical network hardware, such as switches, firewalls and routers. SDN is often based on the OpenFlow protocol, although other technologies, such as OpenDaylight and even OpenStack, can be used. One of the advantages of SDN is that it allows a network admin to manage the flow of data through a network from a central control console, instead of having to adjust the individual hardware components separately.
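The central-console flow management that SDN provides can be illustrated with a minimal OpenFlow-style match-action table. This is a sketch in plain Python, not any real controller's API; the field names, priorities and table-miss behavior are simplified assumptions:

```python
# Illustrative sketch of an OpenFlow-style match-action table, the kind
# of structure a centralized SDN controller programs into each switch.
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict          # e.g. {"ip_dst": "10.0.0.5", "tcp_dst": 80}
    actions: list        # e.g. ["output:2"] or ["drop"]
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.rules = []

    def install(self, rule):
        # The controller pushes rules to switches from one control point.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet):
        # Highest-priority rule whose match fields all agree wins.
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        # Simplified table-miss: drop (real switches often punt to the
        # controller instead).
        return ["drop"]

table = FlowTable()
table.install(FlowRule({"ip_dst": "10.0.0.5", "tcp_dst": 80}, ["output:2"], priority=10))
table.install(FlowRule({"ip_dst": "10.0.0.5"}, ["output:1"], priority=1))

print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # ['output:2']
print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # ['output:1']
```

The point of the sketch is the management model: changing a rule in one place changes forwarding behavior everywhere the controller has installed it, instead of touching each box individually.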

Thursday, March 22, 2018

The future of SDN in 2018 moves toward increased automation

The market for classic SDN -- defined by a standards-based separation of the control plane and the data plane and tied together by the OpenFlow protocol -- continues to hum along quietly in the background, growing slowly, but growing nonetheless. Nemertes Research found 13% of enterprises run some OpenFlow in their infrastructures, and another 13% plan to do so by 2018. This OpenFlow deployment is typically only in part of the data center and, in many cases, is entirely virtualized, with the controller running on a virtual machine and the data plane consisting of virtual switches only.

The future of SDN could show variations of the technology becoming more prominent among enterprises, as organizations look increasingly toward programmability and automation.

The key word for the future of SDN this year is more -- more software-defined networking vendors, more options and more deployments.


Vendors like Big Switch Networks also pitch parallel deployment scenarios, in which a software-defined network lives alongside a conventional network. This option could power a management or monitoring network, or possibly provide a more flexible, affordable and enhanced security zone that connects two conventional network segments.

The future of SDN offers many options

At the forefront, different variations of SDN -- especially software-defined WAN -- are growing quickly and spreading widely. Whether we're talking about Cisco Application Centric Infrastructure, VMware NSX or one of the other systems, they all share many of the goals found in classic SDN, including the following:

centralized control;
control policies focused more on services and applications than on addresses and ports;
network virtualization to make it possible to deliver a logical network over an existing physical network;
a network that is cheaper to operate, more automated and more resilient; and
network programmability, usually made possible through APIs on the controller.
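Controller-side programmability typically means building a policy object and pushing it to the controller over a REST API. The sketch below is hypothetical: the payload schema and the endpoint in the comment are invented for illustration and do not match any particular vendor's API:

```python
# Hypothetical sketch of controller-API programmability. The schema below
# is invented; real controllers (OpenDaylight, NSX, ACI) expose their own
# REST models, but the shape of the workflow is similar.
import json

def build_policy(app_name, allow_ports):
    # Note the policy is phrased in terms of an application/service,
    # not individual addresses and ports on each box.
    return {
        "policy": {
            "name": f"allow-{app_name}",
            "match": {"application": app_name},
            "action": {"permit_tcp_ports": sorted(allow_ports)},
        }
    }

payload = build_policy("web-frontend", {443, 80})
print(json.dumps(payload, indent=2))

# A client would then push this to the controller, e.g. (invented URL):
#   PUT https://controller.example.net/api/policies/allow-web-frontend
```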

More than 30% of organizations have SDN in place, and Nemertes is tracking steady upward trends for the future of SDN. These trends are driven by a number of factors.

http://searchsdn.techtarget.com/tip/The-future-of-SDN-in-2018-moves-toward-increased-automation

Tuesday, October 10, 2017

NoviFlow and Lumina Networks Partner to Deliver Industry’s First SD-Core Networking Solution

THE HAGUE, Netherlands, Oct. 09, 2017 (GLOBE NEWSWIRE) -- NoviFlow Inc., a leading vendor of high-performance OpenFlow-based switching and routing solutions, and Lumina Networks, Inc., a market-leading provider of vendor-independent SDN controllers and applications powered by OpenDaylight™, today announced a partnership to deliver a Software-Defined Core (SD-Core) using the world's highest-performance MPLS and Segment Routing-based forwarding plane. SD-Core redefines a carrier-grade architecture and technology set for deploying scalable MPLS and Segment Routing-based networks using SDN and white box technologies, enabling network providers to evolve their core networks to SDN while retaining the protocols, products and reliability of their existing networks.

The solution builds on the needs of large MPLS networks to scale beyond the capabilities of typical core routers, while reducing costs and enabling modern SDN and VNF-based architectures. Distributing the control and forwarding planes into separate elements allows large network providers to adopt SDN, scale their control plane and adopt white box switches while keeping their existing core router vendor through a cap-and-switch strategy.

Unlike competing white box solutions, NoviWare, NoviFlow's high-performance OpenFlow forwarding plane software, has been designed to address the complex MPLS requirements of carrier-grade networks. It delivers over a million simultaneous flows using a programmable match-action pipeline, supporting all OpenFlow 1.3 and 1.4 actions, instructions and match fields, along with key OpenFlow 1.5 features. Lumina's SDN Controller provides OpenFlow control of NoviWare's data plane while working with the network's legacy routing software to integrate seamlessly with existing network nodes.
Interfaces for provisioning, service assurance and operational tasks are made available via REST APIs and can be integrated with orchestrators to support TOSCA NFV service definitions. The solution is built to allow the gradual migration of MPLS services -- point-to-point E-Line, L3VPN, E-Tree, etc. -- to the new infrastructure, allowing the capping of investment in legacy core routers while the network evolves to deliver new SDN-based services.

Dominique Jodoin, president and CEO of NoviFlow, explained, “The alliance between Lumina and NoviFlow provides a compelling new end-to-end solution for MPLS and Segment Routing, extending the radical CAPEX and OPEX benefits of SDN/OpenFlow beyond the network's edge and into the spine and core. NoviWare provides a single uniform open standards-based forwarding plane interface for OpenFlow, gRPC, and soon gNMI and P4 Runtime, finally delivering the programmable network that Tier 1 carriers, network infrastructure providers, and enterprises have been waiting for. NoviFlow is honored to join forces with a key player such as Lumina Networks to offer unprecedented feature/performance in commercial SDN solutions.”

“Taking specific SDN use cases out of the lab and deploying them for real customers, with real problems, is the cornerstone of our value,” said Andrew Coward, CEO of Lumina Networks. “It's rare that a new solution is simultaneously lower-cost, higher-performance and ready for production, so working with NoviFlow to evolve the core of our MPLS customers' networks is a real demonstration of the power of SDN used with white box technology.”
“However, we don't expect customers to rip and replace their existing networks, but rather cap their investments in traditional core routers, start migrating non-mission-critical traffic, build confidence and then move to a considered migration of traffic, blending old and new networks together -- a ‘journey to the SD-Core,’ so to speak.”

NoviFlow's NoviWare running on NPUs and Barefoot's Tofino will be demonstrated live in NoviFlow's booth (#A10) at the SDN and NFV World Congress on October 10-13, 2017 in The Hague, Netherlands. Lumina will also be demonstrating the SD-Core solution live in its booth (#A09). We invite all interested parties to come by and see the best in SDN/OpenFlow solutions!
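The Segment Routing label-stack mechanics that an SD-Core forwarding plane relies on can be sketched as a toy model. The functions and SID values below are invented for illustration and are not NoviWare's actual pipeline:

```python
# Toy model of Segment Routing over MPLS: the ingress node encodes an
# explicit path as a stack of segment IDs (MPLS labels), and each node
# along the path consumes the top segment. No per-flow state is needed
# in the core -- the path lives in the packet header.

def encode_path(segment_ids):
    """Ingress push: the first hop's SID ends up on top of the stack."""
    return list(segment_ids)

def pop_segment(label_stack):
    """A node consumes the top SID and forwards on the remainder."""
    return label_stack[0], label_stack[1:]

stack = encode_path([16001, 16005, 16009])   # hypothetical node SIDs
visited = []
while stack:
    sid, stack = pop_segment(stack)
    visited.append(sid)
print(visited)  # nodes are traversed in segment-list order
```

This is the property that lets a central controller steer traffic: changing the segment list at the ingress changes the path end to end, without reconfiguring transit routers.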

https://globenewswire.com/news-release/2017/10/10/1143505/0/en/NoviFlow-and-Lumina-Networks-Partner-to-Deliver-Industry-s-First-SD-Core-Networking-Solution.html

Tuesday, August 23, 2016

OpenFlow controller design killing SDN, say network boffins

OpenFlow's architecture is inefficient, capping performance while sucking unnecessary power. That's the conclusion of a bunch of comp-sci boffins at Australian brain box Data61 and Sydney University, who assessed four major OpenFlow controllers -- NOX, Maestro, Floodlight and Beacon. Their paper is on arXiv. Poor old OpenDaylight was also tested but not reported: “the performance [was] too low to provide any insightful comparison”. To cut to the chase: none of the controllers tested got anywhere close to line speed, whether running on a network processor (based on Tilera chips) or on a Xeon E5-2450-based server (more on the configurations later). On the CBench software-defined networking (SDN) controller benchmark, the best the Tilera setup achieved was just under five million requests per second, compared to a line rate of 29 million requests per second. It seems Intel's long years of work understanding packet processing are paying off: on the x86 setup, Beacon was able to hit 20 million requests per second; the maximum of the other controllers was 7 million requests per second.

http://www.theregister.co.uk/2016/08/22/sdn_controller_design_sucks_claim_network_boffins/

Saturday, June 25, 2016

The next phase of SDN: Software defined campus networks

Enterprise and university campus networks have enjoyed decades of architectural permanence. For years, these networks have been built with cookie-cutter designs, with the only critical decision points being the number of ports and users. But with the new challenges presented today -- more devices (both in number and type), mobility, security and diverse application traffic -- the management of these networks is finally coming to the forefront. Software-defined networking (SDN) is an ideal methodology to push policies to campus networks in a systematic and automated way.

OpenFlow, one of the cornerstones of SDN, was built to facilitate the separation of control from forwarding within network devices. One aspect of this is that it also allows operators to centralise the control of these devices, thereby simplifying the task of managing the network. And the campus network needs management simplification. Between bring your own device (BYOD) and the Internet of Things (IoT), networks are becoming more complex every day. Gartner projects that 6.4 billion connected things will be in use worldwide in 2016 -- up 30 percent from 2015 -- and that the number will reach 20.8 billion by 2020.

http://www.cloudcomputing-news.net/news/2016/jun/24/next-phase-sdn-software-defined-campus-networks/

Monday, November 16, 2015

QLogic 100Gb Ethernet Technology Showcased in Software-Defined Networking Demonstration at SC15 Conference

AUSTIN, Texas, Nov. 16, 2015 (GLOBE NEWSWIRE) -- SC15 - Today, QLogic® announced that its 100Gb Ethernet (100GbE) technology would be showcased as part of the SCinet Network Research Exhibition (NRE) demonstration of large data flow transfers across a highly intelligent software-defined networking (SDN) OpenFlow ring at SC15. In this demonstration, QLogic 100Gb Ethernet Adapter technology is being leveraged by the California Institute of Technology, Dell, EchoStreams, Stanford University and the University of Michigan to enable large data flows to traverse various paths, based on either the shortest or the fastest available routes, using a custom SDN application spanning connections on the show floor as well as from remote sites over dark fibre.

http://money.cnn.com/news/newsfeeds/articles/globenewswire/6024540.htm