The Dawning of the Age of the Modern Data Center

by Bill Kleyman
14 June 2013

A View from the 2013 Uptime Institute Symposium

For anyone interested in data centers, the Uptime Institute Symposium is one of the highlights of the year. The 2013 Symposium was no exception. Over 1,300 attendees heard from almost 100 speakers talking about “The Global Digital Infrastructure Evolution.” 

I believe this digital infrastructure evolution is leading to a direct modernization of the data center. Over the past three years new types of drivers have, literally, forced data center operators to adapt or get out. Now, we are seeing the emergence of the modern data center – sometimes referred to as data center 2.0.

Marketing terminology aside, there are direct links to the advancement of today's data center 2.0 environment. First of all, business drivers have evolved, and dramatically so: there are far more users, more devices and an almost overwhelming amount of new data. With all of these new trends, more organizations are turning to the only technological platform that can support this type of growth: the data center.

Those who have adapted and stayed ahead of the technological curve will readily attest that "it's good to be in the data center business." Why? Organizations of all verticals and sizes are finding ways to extend their business by utilizing data center offerings. In fact, the 2013 Uptime Symposium Survey indicates that 70% of data center operators built a new site or renovated an existing one in the past five years. The reality is that this trend will only continue.

In a recent blog post, "2013 IT Predictions," we discussed the top technologies to watch in 2013. The top two, cloud computing and the distributed data center, topped the conversation at this year's Uptime Symposium as well. Furthermore, it's great to see green computing and the green certification process picking up steam. The idea is not only to reduce the impact on the environment but also to lower costs.

So, now that we’re halfway through the year – it’s time to look at how the modern data center is evolving and how this correlates to the recent results from the 2013 Uptime Symposium!

Cloud Resiliency. As more organizations embrace cloud services as part of their IT platform, the data center has to be prepared for the new demands and challenges that revolve around the cloud. Remember, the cloud isn't perfect, and there have been numerous instances of outages. Organizations that reside solely in the cloud are the ones that, at the end of the day, pay the price for those outages. So, to adapt to these new demands, data center operators are striving to become more resilient and agile when it comes to cloud computing. In fact, cloud-level resiliency was one of the top topics of conversation at the Uptime Symposium. Although some designs are constrained by the available load-balancing and advanced networking solutions, overall resiliency can be achieved with a great deal of success. In working with a distributed, resilient data center environment, there are core benefits to be gained:

  • You create redundancy at the hardware level as well as the software layer.
  • Distributed data centers become logically connected data center environments.
  • WAN utilization and distribution becomes easier to control and manage.
  • You’re able to create global active-active data center mirroring and load-balancing.
  • New types of cloud-based disaster recovery and business continuity services can be achieved.
  • Higher availability can be delivered on a global scale. Failover can even be made completely transparent, end-to-end.
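As a rough illustration of that last point, transparent failover can be modeled as routing each request to the first healthy site in a prioritized list. This is a minimal sketch only; the endpoint names and health-check logic are hypothetical stand-ins for what a real global load balancer or GSLB service would do:

```python
# Hypothetical endpoints for two logically connected data centers.
# In practice these would sit behind a global load balancer with
# real health probes (e.g., an HTTP GET to a /health URL).
ENDPOINTS = ["dc-east.example.com", "dc-west.example.com"]

def is_healthy(endpoint, failed=frozenset()):
    """Stand-in for a real health probe: healthy unless marked failed."""
    return endpoint not in failed

def route_request(endpoints, failed=frozenset()):
    """Return the first healthy endpoint, so failover is transparent
    to the caller: it simply gets a working site back."""
    for ep in endpoints:
        if is_healthy(ep, failed):
            return ep
    raise RuntimeError("all data centers unavailable")

# Normal operation routes to the primary site...
print(route_request(ENDPOINTS))
# ...and an outage at the primary fails over to the secondary.
print(route_request(ENDPOINTS, failed={"dc-east.example.com"}))
```

The same priority-list idea underlies active-active designs; there, both sites serve traffic simultaneously and the "list" is effectively reordered per request.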

Data Center Operating System. It's no longer just about data center management. It's about advanced data center infrastructure management (DCIM) and the creation of the data center operating system. More organizations are deploying advanced DCIM solutions, and for good reason: as the data center continues to become a distributed entity, there will be an even greater need for intelligent DCIM solutions. The Uptime Symposium 2013 survey results indicate the kind of adoption DCIM is seeing: about 70% of respondents said they have either deployed DCIM or plan to deploy it within the next 12-24 months. That's significant, since DCIM isn't always an inexpensive solution. The bottom line, however, is that this type of management is very necessary. In working with DCIM, several drivers are pushing executives toward these decisions: energy efficiency, asset visibility and, most importantly, capacity planning. The data center will continue to be an integral part of any organization, and the ability to properly plan space and resource utilization can only come from a good data center management solution. At the end of the day, advanced DCIM solutions will be the required technology for creating truly agile, optimized data centers capable of handling today's demands.
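To make the capacity-planning point concrete, here is a minimal sketch of the kind of projection a DCIM tool automates: estimating when a room's power capacity will be exhausted given a steady growth rate. The figures are purely illustrative, not from the survey:

```python
def months_until_full(current_kw, capacity_kw, monthly_growth_rate):
    """Estimate months until IT power load exhausts room capacity,
    assuming compounding growth at a fixed monthly rate."""
    months = 0
    load = current_kw
    while load < capacity_kw:
        load *= 1 + monthly_growth_rate
        months += 1
        if months > 600:  # guard against negligible growth rates
            return None
    return months

# Hypothetical example: 600 kW of IT load in a 1 MW room, growing 3%/month.
print(months_until_full(600, 1000, 0.03))  # → 18
```

Real DCIM suites layer asset inventories, live sensor feeds and what-if modeling on top of this arithmetic, but the underlying question is the same: how long until we run out of power, space or cooling?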

Efficient Systems and Storage. The data center has been looking at converged platforms and unified systems to increase efficiency and reduce costs. These platforms are high-density computing solutions able to deliver a lot of power within a smaller chassis. From there, administrators are able to place more users and workloads on a smaller number of servers. Among efficient technologies, SSD and flash controllers have been making their mark. According to the Uptime Symposium, flash storage is capable of slashing the operating costs of storage and can help save on energy and space. Still, full deployment of flash and SSD controllers is somewhat limited: the arrays are still a bit expensive, and there is serious concern about their suitability for long-term storage. When the right niche is found, however, SSD and flash are amazing technologies. Anything that requires high IOPS or extreme drive performance should look at a flash or SSD array. This means big data analytics, virtual desktop infrastructure (VDI), and high-utilization temporary storage.

Go Green to Save Some Green. When energy-efficiency initiatives first emerged, many argued that to go green, you would have to spend a lot of green. As technology has improved, that mentality has certainly changed, and many data center executives are now seeing quite the opposite: to go green means to save some green! Year over year, the Uptime Symposium 2013 Survey shows a 10-point jump (2012: 48%, 2013: 58%) in green certifications. This means more data centers are actively deploying new energy-efficient technologies to support data center functions. Here's the really interesting stat: among data center environments hosting over 5,000 servers, adoption of PUE tracking sits at 90%. When it comes to data center green efficiency, here are some hot solutions:

  • Low-power servers. 10% of the power using 10% of the space.
  • Onsite clean power generation. Low carbon, grid-independent technology.
  • Chiller-free data centers. Reduces capex by 15%-30% and helps slash PUE.
  • Power-proportional computing. Helps reduce waste and power consumption.
  • Memristors. Enable the creation of powerful, low-energy unified devices! (Note: Some of these technologies are still in the lab.)
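Since PUE comes up repeatedly in these survey results, it's worth showing the metric itself. Power Usage Effectiveness is simply total facility power divided by IT equipment power, so a lower number means less overhead going to cooling, power distribution and lighting. The figures below are illustrative only:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power.
    1.0 is the theoretical ideal (all power reaches IT gear); legacy
    facilities often sit near 2.0, meaning a watt of overhead per IT watt."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical site drawing 1,500 kW overall to power 1,000 kW of IT gear:
print(pue(1500, 1000))  # → 1.5
```

This is why chiller-free designs "slash PUE": removing mechanical cooling shrinks the numerator while the IT load in the denominator stays the same.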

Is Modular Relevant? Here's the good news: the conversation around modular data centers continues, and there is still a lot of interest in the technology. That said, both confidence in the technology and adoption have been pretty slow. The Uptime Institute 2013 Survey showed that a whopping 83% of respondents either had no interest in or no plans for the pre-fab modular data center market. Only 9% had actually deployed a modular data center. There are a few reasons why. The IT market likes to be confident in the technologies it deploys, and modular data centers can be quite an investment. So, when 61% of respondents indicate that they're only "somewhat confident" in the technology, we're going to see these low adoption rates. Despite the positives, like rapid deployment, reliable designs and a pretty good return, many shops are still very wary of deploying modular data center platforms. At the end of the day, economies of scale and the somewhat limited flexibility of deployments have put a heavy weight on modular data center implementations.

Communication is Everything. This was an interesting survey result, but also a bit troubling. One of the core functions of a data center operator is to communicate the performance of the data center to all appropriate business stakeholders. This is an absolutely vital process that allows business goals to align directly with those of the IT department and the data center environment. So, how good is data center operator communication these days? According to the Uptime Symposium 2013 Survey results, 40% of enterprise operators have no scheduled reporting to the C-level. Wait, what? Enterprise data centers are not small shops. The statistics get even more interesting: only 42% of enterprise data center departments have any real, regular communications with the C-suite! The others report metrics and give updates annually, upon request or, at best, quarterly. In an ever-evolving business and IT world, these numbers must change. Regular reporting has to occur between vital C-level personnel and data center operators.

For an environment so focused on the physical, the modern data center must be extremely agile. This means adapting to new trends, technologies and demands pushed down by users and the business. Let's face facts: the amount of data and data center capacity is only going to grow, and the number of users and the volume of information connecting into the cloud are growing at a tremendous pace. We've mentioned these numbers before, but it's important to look at them again:

  • According to IBM, the end-user community has, so far, created over 2.7 zettabytes of data. In fact, so much data has been created so quickly, that 90% of the data in the world today has been created in the last two years alone.
  • Currently, Facebook processes over 240 billion photos from its entire user base. Furthermore, the company stores, analyzes, and accesses over 32 petabytes of user-generated data.
  • In a recent market study, research firm IDC forecast that the big data market will grow from $3.2 billion in 2010 to $16.9 billion in 2015.

These growing business segments translate to more business for the data center. New types of technologies revolving around management, storage and even cloud computing will have direct impacts on the evolving data center infrastructure. Moving forward, data center administrators will need to evolve continuously with the needs of the market to stay ahead of the competition. Remember, at the core of almost any technology, solution, platform or cloud instance sits a very important processing engine: the data center.

Bill Kleyman

Virtualization and Cloud Computing Industry Expert