#Cloudcomputing is a fabulous cure-all of fixes and solutions for all our previous frustrations with technology, one that simply snaps onto existing business systems as soon as the engineers hit the ‘go live’ button. It should, hopefully, be fairly easy to see what’s wrong with this statement. While it is true that cloud computing services are supplied ‘from the backend’ by a cloud services provider using a datacenter to shoulder the workload, the working integration of these technologies, once grafted onto a firm’s operational requirements, can very often fall somewhere short of perfect.

Cloud is difficult

A piece of cloud software has a variety of client-specific instances (‘client’ meaning a device or an application), meaning it has a different shape, job and configuration everywhere it is deployed. But it gets worse. A piece of cloud software must also work inside what is often a complex set of connected processes with different data sets. Although we may have designed the software to work perfectly in the developer lab, the reality of these complex processes and changing data shapes means that cloud computing can become a very difficult thing indeed.
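To make that ‘different shape everywhere it is deployed’ point concrete, here is a minimal sketch assuming a hypothetical service whose behaviour is driven entirely by per-client configuration files. The file names, fields and defaults are illustrative, not taken from any particular product; the point is that one codebase ends up doing a different job in every deployment.

```python
import json
from dataclasses import dataclass, field

# Hypothetical per-deployment configuration for one cloud service.
# The same codebase behaves differently for every client instance.
@dataclass
class DeploymentConfig:
    client_id: str                      # device or application consuming the service
    region: str = "eu-west-1"           # where the backing datacenter sits
    batch_size: int = 100               # tuned per workload
    feature_flags: dict = field(default_factory=dict)  # behaviour toggled per client

def load_config(path: str) -> DeploymentConfig:
    """Read a client-specific JSON file into a typed config object."""
    with open(path) as f:
        raw = json.load(f)
    return DeploymentConfig(
        client_id=raw["client_id"],
        region=raw.get("region", "eu-west-1"),
        batch_size=raw.get("batch_size", 100),
        feature_flags=raw.get("feature_flags", {}),
    )

if __name__ == "__main__":
    # Two deployments of the *same* software, two different shapes.
    for path in ("client_a.json", "client_b.json"):
        cfg = load_config(path)
        print(cfg.client_id, cfg.region, cfg.batch_size, cfg.feature_flags)
```

Multiply those toggles by the number of connected processes and data sets each instance touches, and the ‘developer lab versus reality’ gap starts to look less like a surprise and more like a certainty.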
If we accept the cloud difficulty factor as one of the greatest truths associated with this technology, then we can perhaps look at ways to tackle the problem. Cloud is supposed to be a so-called ‘solution’, but in reality we often need solutions for our solutions, such as Application Performance Management (APM) and debugging tools, before we can move forwards.
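For a flavour of what even the simplest APM-style tooling does, here is a minimal sketch using nothing beyond the Python standard library. Real APM products instrument far more than a single timing decorator, so treat this as the core idea rather than a representative implementation.

```python
import time
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("apm-sketch")

def traced(func):
    """Record how long each call takes -- the basic measurement behind APM."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.2f ms", func.__name__, elapsed_ms)
    return wrapper

@traced
def handle_request(payload: dict) -> dict:
    # Stand-in for a real cloud service endpoint.
    time.sleep(0.05)
    return {"status": "ok", "echo": payload}

if __name__ == "__main__":
    handle_request({"user": "demo"})
```

Collect those timings across every instance and data shape described above and you have, in embryo, the performance baseline against which cloud ‘difficulty’ can actually be measured.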
#HPE has tabled a technology aimed at addressing some of the cloud difficulty issues faced by companies now looking to implement virtualized computing layers. The firm’s new multi-cloud management software, HPE #OneSphere, was first announced at HPE Discover Madrid in November 2017. It is a Software-as-a-Service (#SaaS) based #hybridcloud management solution for on-premises IT and public clouds.
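Because a product like this is consumed as SaaS, the practical interaction is typically a REST call from your own tooling. The sketch below is purely illustrative: the base URL, endpoint path and response fields are hypothetical stand-ins, not HPE’s documented OneSphere API, and it shows only the general shape of asking a multi-cloud management layer which clouds it federates.

```python
import requests

# Hypothetical endpoints for a SaaS multi-cloud management service.
# These names are illustrative only; consult the vendor's own API
# documentation for the real paths and schemas.
BASE_URL = "https://example-tenant.cloud-mgmt.example.com/rest"

def list_providers(session_token: str) -> list:
    """Ask the management layer which clouds (on-prem and public) it knows about."""
    resp = requests.get(
        f"{BASE_URL}/providers",
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("members", [])

if __name__ == "__main__":
    for provider in list_providers("demo-token"):
        print(provider.get("name"), provider.get("type"))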