The enterprise is taking a hard look at machine learning (ML) as a means to bring network infrastructure into the 21st century, and many leading organizations have already implemented the technology in production environments. But is ML the panacea vendors make it out to be? And can it produce the kind of autonomous, intent-based environments that currently populate the hype cycle?

The good news about machine learning is that it does not necessarily require a huge upfront investment. Leading cloud providers are rolling out ML-as-a-Service (MLaaS) options, implementing the technology on the very infrastructure that will likely support the next-generation applications best suited to machine learning: the cloud.

Google, for one, recently rolled out an MLaaS offering based on technology it acquired from the UK firm DeepMind back in 2014. The goal is to let network administrators create autonomous virtual networks on the Google Cloud that leverage the reams of unstructured data, such as telemetry, historical patterns and traffic analysis, already flowing around network deployments.

But is this enough to craft a truly smart network, one that can create optimized environments on the fly based on the needs of applications? Not exactly, according to Juniper's Kireeti Kompella. In an interview with Data Center Knowledge, Kompella said that such a vision is possible, but it will require careful coordination among a number of cutting-edge technologies and the vendors that design them.

By itself, machine learning can craft network configurations from a list of profiles using past performance and other data. But to build a truly self-driving network, the enterprise will also need a closed-loop monitoring system overseeing SLAs, peering, LSPs and other parameters. In this way, the network can measure current performance against evolving objectives and adapt itself accordingly, without human intervention or input.
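The closed-loop pattern described above can be illustrated with a minimal sketch: measure current performance against an SLA objective and, on a violation, select a new configuration profile based on past performance. All function names, field names and values here are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch of one closed-loop monitoring iteration:
# compare a measurement to an SLA objective and adapt the active
# configuration profile without human input. Names are illustrative.

def measure_latency_ms() -> float:
    """Stand-in for real telemetry collection (e.g. streamed from devices)."""
    return 42.0  # placeholder reading


def closed_loop_step(sla_target_ms, current_profile, profiles):
    """Compare the observed value to the objective; reselect a profile if violated."""
    observed = measure_latency_ms()
    if observed <= sla_target_ms:
        return current_profile  # objective met, no change needed
    # Objective violated: pick the profile whose past performance looks best.
    best = min(profiles, key=lambda p: p["expected_latency_ms"])
    return best["name"]


profiles = [
    {"name": "low-latency-path", "expected_latency_ms": 20.0},
    {"name": "default-path", "expected_latency_ms": 50.0},
]
print(closed_loop_step(30.0, "default-path", profiles))  # SLA violated, so reselect
```

In a real deployment the loop would run continuously, and the "select best profile" step is where a learned model replaces the simple minimum used here.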
We also shouldn’t expect ML to improve on basic networking attributes like latency, says Savvius Inc.’s Jay Botelho. By nature, latency is highly difficult to predict: in any given millisecond, a surge of traffic can hit a specific switch or router, producing the inevitable packet queue. The best way to fight latency, then, is not to micromanage every bit in transit but to push traditional monitoring capabilities as close to the user as possible. As data becomes more distributed, even a highly intelligent management regime will be of little benefit if it remains locked in a centralized deployment model.

Probably the one area where ML can already make a major contribution to network performance is security. As enterprise software strategist Louis Columbus noted on Forbes recently, ML can address the five key threats facing the enterprise today without incurring the millions in costs associated with traditional measures. ML’s constraint-based pattern-matching capabilities, for example, are ideal for thwarting compromised-credential attacks, while its ability to scale Zero Trust Security (ZTS) measures allows it to support enterprise-wide risk-scoring models. In addition, it provides for highly customized multi-factor authentication workflows that keep tabs on a changing workforce, while at the same time enabling predictive analytics and insight into emerging threat profiles. And lastly, it can stay a step ahead of rapidly evolving malware as hackers rework their code to subvert reactive defensive measures.

Machine learning will prove to be an invaluable tool for enterprise network management, but only if it is deployed as part of an overarching framework that also incorporates software-defined networking (SDN), network functions virtualization (NFV), automation, and broad policy and governance architectures.
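To make the constraint-based pattern-matching idea concrete, here is a minimal sketch of how compromised-credential detection might score a login against constraints learned from a user's history. The fields (location, hour) and the scoring scheme are assumptions chosen for illustration; production systems learn far richer behavioral models.

```python
# Illustrative sketch of constraint-based pattern matching for
# compromised-credential detection: learn simple constraints from a
# user's login history, then score new logins by how many constraints
# they violate. All fields and thresholds are hypothetical.

def learn_constraints(history):
    """Learn the set of usual locations and login hours for a user."""
    return {
        "locations": {e["location"] for e in history},
        "hours": {e["hour"] for e in history},
    }


def risk_score(event, constraints):
    """Count how many learned constraints the login event violates."""
    score = 0
    if event["location"] not in constraints["locations"]:
        score += 1  # login from an unfamiliar location
    if event["hour"] not in constraints["hours"]:
        score += 1  # login at an unusual hour
    return score


history = [{"location": "NYC", "hour": 9}, {"location": "NYC", "hour": 14}]
constraints = learn_constraints(history)
print(risk_score({"location": "NYC", "hour": 9}, constraints))     # familiar login -> 0
print(risk_score({"location": "Sydney", "hour": 3}, constraints))  # anomalous login -> 2
```

A score above some threshold would then feed the kind of enterprise-wide risk-scoring model the article mentions, for example by triggering a step-up multi-factor authentication challenge rather than blocking outright.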
Giving a machine the ability to learn means it will soon be able to do a lot on its own, but ultimately it must rely on related systems — and the people who guide them — to put that learning to the most productive use.