Hyperscale technology looks at data, resources, and service delivery differently. In the hyperscale cloud, the network, compute, and storage software are uncoupled from the hardware. Its primary characteristics are what attract those who need massive amounts of data capacity, and need it immediately. The hyperscale cloud is built on speed: speed to build, speed to deploy, and speed to respond.
The cornerstones of the hyperscale cloud, according to a Citrix white paper, include:
- A virtualized infrastructure that can be programmatically controlled
- A highly modular, scalable, and available application architecture
- A DevOps approach to managing and orchestrating hyperscale cloud infrastructure and applications
Companies like Google, Amazon, and Facebook, all operating in the hyperscale cloud, have reaped efficiency gains from those cornerstones and have found that this speed in building, deploying, and responding (this “hyperness”) can bring great cost savings, high scalability, and immense flexibility.
But it’s the “can” in that last sentence that you need to worry about.
Michael Sullivan, senior editor at BCC Research, said “the complexity of re-architecting legacy workloads to run in a hyperscale environment is a daunting challenge for enterprise IT organizations. So, the movement to hyperscale, while attractive, is slow as companies determine where they can best leverage the technology.”
And part of that complexity is optimization in the hyperscale cloud. Without optimizing your hyperscale cloud, you’ll spend more, needlessly.
“The RightScale 2018 State of the Cloud Survey showed that enterprise cloud spending will grow rapidly over the next year, and yet 35 percent of cloud spend is wasted. As a result, optimizing cloud costs is the top initiative for cloud users in 2018,” said Michael Crandell, CEO of RightScale.
The enterprise challenges of hyperscale cloud computing
Change is hard for the best of us, especially when we’re convinced the way we’re doing something is best.
Do you have an automation mindset?
Think of all the times you’ve been told you absolutely must have a 2N infrastructure, a full duplicate of every component standing by in case the primary ones fail. So, instead of the 20 servers you actually need, you pay for 40 to ensure redundancy. That’s a lot of potential waste.
But it doesn’t need to be that way if you can work your way to more of a DevOps mindset and automate all that can be automated.
TechTarget writer Robert Gates suggests IT pros “reassess high availability and reliability, availability and serviceability features…and break away from the idea that redundancy and resiliency must be built into every piece of hardware to prevent failure.”
And that’s what hyperscale computing has done. The data center workers are no longer “replacing failed network cards and hard drives, updating firmware, or scheduling maintenance windows,” says Microsoft’s Rick Bakken. “They’re running the automation and ignoring hardware failures because those are taken care of automatically.” So N+1 redundancy can be enough.
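To make that mindset concrete, here’s a minimal sketch of the reconcile-and-replace loop behind it. This isn’t any particular vendor’s API; the node pool, health check, and provisioning calls are hypothetical stand-ins (simulated in memory here) for whatever your cloud provider actually exposes.

```python
import random
import time

DESIRED = 20  # N: the servers the workload actually needs
SPARES = 1    # N+1: one hot spare instead of a full 2N duplicate (40 servers)

# --- Hypothetical stand-ins for a cloud provider's API, simulated here -----
pool = [f"node-{i}" for i in range(DESIRED + SPARES)]

def is_healthy(node):
    return random.random() > 0.02  # simulate an occasional hardware failure

def provision_node():
    node = f"node-{time.time_ns()}"
    pool.append(node)
    print(f"provisioned {node}")

def deprovision_node(node):
    pool.remove(node)
    print(f"retired {node}")
# ----------------------------------------------------------------------------

def reconcile():
    """Retire failed nodes and top the pool back up to N + spares."""
    for node in list(pool):
        if not is_healthy(node):
            deprovision_node(node)
    shortfall = (DESIRED + SPARES) - len(pool)
    for _ in range(max(shortfall, 0)):
        provision_node()

if __name__ == "__main__":
    for _ in range(5):  # in production this loop would simply run forever
        reconcile()
        time.sleep(1)
```

The point isn’t the code itself but the posture: failures are expected, detected, and repaired by the loop, so nobody has to over-provision to 2N just to sleep at night.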
Is your IT staff cutting edge?
Although a majority of companies have already moved some of their work to the cloud, many retain a staff of IT pros (think enterprise sysadmins working on maintenance and OpenStack, or swapping out a five-node Hadoop cluster on premises for a five-node cluster in the cloud) who aren’t necessarily qualified to run things in the cloud, which requires a new skill set.
To deal with the big data issues inherent in working at hyperscale:
- Sysadmins will need to know how to administer Hadoop and NoSQL databases, as well as how to configure and manage components of the infrastructure
- Developers will need proficiency in Python, Scala, and Java, plus exposure to AWS services like Kinesis and Lambda (see the sketch after this list)
- Data analysts and scientists will need to learn how to build algorithms and then how to automate those algorithms to work with the data
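As a taste of what that Kinesis and Lambda exposure looks like, here’s a minimal sketch in Python of a producer pushing events into an Amazon Kinesis stream and the Lambda handler that consumes them. The stream name `clickstream` and the event fields are made up for illustration, and the sketch assumes AWS credentials are already configured for boto3.

```python
import base64
import json

import boto3  # pip install boto3; assumes AWS credentials are configured

STREAM = "clickstream"  # hypothetical stream name, for illustration only

def send_event(user_id, action):
    """Producer side: push one JSON event into the Kinesis stream."""
    kinesis = boto3.client("kinesis")
    kinesis.put_record(
        StreamName=STREAM,
        Data=json.dumps({"user_id": user_id, "action": action}).encode("utf-8"),
        PartitionKey=str(user_id),  # keeps one user's events in order
    )

def handler(event, context):
    """Consumer side: a Lambda function triggered by the same stream."""
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print(f"user {payload['user_id']} did {payload['action']}")
```

A producer like send_event would run inside your application, while handler would be deployed as a Lambda function with the stream configured as its event source. None of this resembles on-premises sysadmin tooling, which is exactly the skills gap.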
As the report written for Citrix notes, “Unless companies understand how different the hyperscale cloud is and what changes they will need to make to exploit it, they will not be able to harness it effectively to achieve its benefits. In fact, network operators can make life more complicated for themselves if they try to migrate network functions designed for traditional environments and to apply existing operational practices to the hyperscale cloud.”
How to get hyperscale cloud optimization right
In-house network operations staff need to augment their network operations expertise with new hyperscale cloud operations skills, either by bringing in new recruits or by working with a managed service partner with proven expertise in the hyperscale cloud, according to Caroline Chappell, a principal analyst at Heavy Reading.
For many, bringing on new staff with the needed technical skills is hard to do because those workers are in high demand and come with hefty price tags. That’s when businesses turn to Pointivity to guide them through deployment, ensure they have the right services, and manage the allocation of resources, turning off what isn’t needed.
But before you make the leap to hyperscale computing, you need a strategy to figure out what you’re going to put where, and we can help with that. Then we’ll help you with automated resource provisioning along with the analytics to dynamically automate placement, sizing, and capacity so you can achieve the best performance and resource efficiency.
Give us a call at 858-777-6900, or email us at info@pointivity.com.