Why Enterprises Should Use Hyperscale Data Center Techniques

by Scott Walker
13 August 2020

When contemplating what data center customers will need over the next one to three years, several things come to mind.

First, hybrid cloud will continue to be a popular trend, and with good reason. Uncontrollable costs from public cloud service providers are driving customers to pull workloads out of the public cloud and into a more economical hybrid environment. Some customers have also reported performance issues when demand on the public cloud is high.

Next, many customers are asking for larger footprints and higher power density. In fact, it’s not uncommon to see densities hit 20kW per rack. These higher densities are a real problem for legacy data center providers that designed their buildings around 4-5kW-per-rack installations, back when a 10kW load was considered high density. We’re long past that now. Data center operators who can build to suit can handle these 20kW-and-higher requirements, which is what customers need to run their mission-critical applications.
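To put numbers on that shift, here is a minimal sketch; the 1MW hall and the specific densities are assumptions for illustration, not figures from any particular facility:

```python
# Illustrative only: how rack density changes the layout math for a
# fixed critical-power budget. The 1 MW hall is an assumed figure.
HALL_POWER_KW = 1_000  # assumed IT power available in the hall

for density in (5, 10, 20):  # kW per rack
    racks = HALL_POWER_KW // density
    print(f"{density:>2} kW/rack -> {racks:>3} rack positions, "
          f"each rejecting {density} kW of heat")
```

The same megawatt feeds a quarter as many racks at 20kW as at 5kW, and each rack position now has to reject four times the heat the legacy cooling design was sized for.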

The bottom line: to get the most cost-effective, efficient use of a data center, enterprises need to adopt hyperscale techniques. But how?

Let’s start with utilization rates. Enterprises typically get about 30 percent utilization of their infrastructure when measured on a 24x7x365 basis, whereas hyperscalers get 70-80 percent, more than double the enterprise figure. If enterprises can double their utilization rate, they can buy half the infrastructure they normally buy and still serve the same workload demand. That saves a lot of money.
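The arithmetic behind that claim is worth making explicit. A minimal sketch, where the workload figure is an assumption chosen for the example:

```python
# Illustrative only: servers needed to serve a fixed average workload
# at different utilization rates. The workload figure is an assumption.
import math

WORKLOAD = 3_000  # assumed average demand, in fully-busy-server equivalents

def servers_needed(utilization: float) -> int:
    """Servers required when each runs at the given average utilization."""
    return math.ceil(WORKLOAD / utilization)

print(servers_needed(0.30))  # enterprise today: 10,000 servers
print(servers_needed(0.60))  # doubled utilization: 5,000 servers
print(servers_needed(0.75))  # hyperscaler range: 4,000 servers
```

Doubling utilization from 30 to 60 percent halves the fleet, and reaching the hyperscaler range shrinks it further still.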

To improve their utilization rate, enterprises have a choice: build that capability on their own, or buy a hyperconverged system that essentially does it for them. A hyperconverged system gives them public cloud economics in a private cloud environment. There are also quite a few composable systems from major OEMs that use similar techniques.

A few years ago, I sponsored an infrastructure TCO study that still rings true today. It showed that most of the cost of running a server is not the server itself. The TCO of running a server has three major components: 1) the cost of the server, 2) administration and management, and 3) space, power, and cooling. The server itself represents about 20% of the total, administration and management about 70%, and space, power, and cooling the remaining 10%.
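To see what that split implies, here is a minimal sketch applying the study’s percentages to an assumed fleet budget (the dollar figure is invented for illustration):

```python
# Illustrative only: applying the study's 20/70/10 TCO split to an
# assumed fleet budget, to show where the savings actually live.
FLEET_TCO = 10_000_000  # assumed annual fleet TCO for the example, in dollars

SPLIT = {
    "server hardware": 0.20,
    "administration and management": 0.70,
    "space, power, and cooling": 0.10,
}

for component, share in SPLIT.items():
    print(f"{component:<32} ${FLEET_TCO * share:>12,.0f}")

# A 10% discount on hardware saves 0.10 * 0.20 = 2% of TCO;
# a 10% cut in admin/management saves 0.10 * 0.70 = 7% of TCO.
print(f"hardware discount saves  ${FLEET_TCO * 0.10 * 0.20:,.0f}")
print(f"admin reduction saves    ${FLEET_TCO * 0.10 * 0.70:,.0f}")
```

On those numbers, a 10% improvement in administration and management is worth three and a half times a 10% discount on the hardware.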

So, enterprises that want to reduce costs should look closely at the fact that 70% of their server costs are tied up in administration and management. Hyperscalers have done exactly that. Their investments in software, machine learning, and automation drive utilization rates to two to three times that of the average enterprise, creating world-class TCO and programmability across their data center infrastructure.
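The hyperscalers don’t publish their schedulers, but the core idea behind much of that automation is workload consolidation: packing jobs onto as few machines as possible. A minimal first-fit-decreasing sketch, with job sizes invented for the example:

```python
# Illustrative only: first-fit-decreasing bin packing, the textbook
# version of the consolidation that scheduler automation performs.
# Job sizes are invented; real schedulers weigh CPU, memory, and more.

def consolidate(jobs: list[float], capacity: float = 1.0) -> list[list[float]]:
    """Pack jobs (as fractions of a server) onto as few servers as possible."""
    servers: list[list[float]] = []
    for job in sorted(jobs, reverse=True):
        for server in servers:
            if sum(server) + job <= capacity:
                server.append(job)
                break
        else:
            servers.append([job])  # no existing server fits: provision another
    return servers

jobs = [0.6, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1]  # assumed per-job demand
packed = consolidate(jobs)
print(f"{len(packed)} servers instead of {len(jobs)}, "
      f"avg utilization {sum(jobs) / len(packed):.0%}")
```

Here the same demand runs on three machines at roughly 87 percent utilization instead of eight mostly idle ones; the hyperscalers layer demand forecasting and live migration on top of this basic idea.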

Scott Walker
Senior Vice President of Sales