Bill Kleyman's blog

Data Center 2014: Top 10 technologies and how they impact you

Welcome to 2014! By now we’ve gone through most, if not all, of our budgets and we are setting plans for the future. As we look back on the past two years we see a clear acceleration in the IT world. Users are connecting in new ways, there is more content to be delivered – and this whole cloud thing just won’t let up. In fact, the recent Cisco Global Cloud Index report shows that while the amount of traffic crossing the Internet and IP WAN networks is projected to reach 1.4 zettabytes per year in 2017, data center traffic is already 2.6 zettabytes per year – and by 2017 it will triple to reach 7.7 zettabytes per year, a 25 percent CAGR. The higher volume of data center traffic is due to the inclusion of traffic inside the data center (definitions of Internet and WAN traffic typically stop at the boundary of the data center).

Cisco Global Cloud Index

Cisco goes on to state that global cloud traffic crossed the zettabyte threshold in 2012, and by 2017 over two-thirds of all data center traffic will be based in the cloud. Cloud traffic will represent 69 percent of total data center traffic by 2017.

Significant promoters of cloud traffic growth are the rapid adoption of and migration to cloud architectures, along with the ability of cloud data centers to handle significantly higher traffic loads. Cloud data centers support increased virtualization, standardization, and automation. These factors lead to increased performance, as well as higher capacity and throughput.

Cloud computing aside, other technologies have also driven this increased usage of the Internet, WAN services and data center platforms. Greater resource availability, data center optimization and, of course, mobility are all contributing to the boom in data center demand. So, looking ahead at 2014 – what are some of the major technological solutions that will impact the data center? What are the key trends that will change the way a data center delivers content? How can data centers continue to meet the demands of both the users and the business? Finally, how can these top 10 technologies and trends impact your industry vertical and business? Let’s find out!

  1. The Hybrid Cloud.

    What to look for: This technology is getting pretty hot. With so much cloud adoption happening, organizations need a good way to interconnect a wide variety of resources. A hybrid cloud is no longer defined by a simple connection between a private and a public data center. Now, data centers can interconnect resources spanning a variety of environments. This means individual pieces can live in a private, public or "as-a-Service" delivery model, and all of these components can be connected to form a powerful hybrid cloud platform. More services, many more users and a more interconnected world will be the driving forces behind the hybrid cloud.

    Who it will impact: Any organization looking at the cloud will examine a hybrid cloud infrastructure. This can range from educational institutions to government organizations. Remember, hybrid cloud platforms are now also becoming compliant and regulation-ready.

  2. Apps and APIs.

    What to look for: The application world is completely changing – and so is the mechanism that allows these applications to interconnect and function in today’s IT world. The remarkable thing is that applications are becoming completely hardware-agnostic; they aim to deliver a clean user experience and the underlying data on any device. The future of the application world will revolve around constant connectivity, ease of use, and the power of the data center. New types of APIs are also allowing applications to access key resources much faster. This type of granular interconnect creates better and more agile cloud-centric applications. Remember, all of these future cloud applications reside within the data center.

    Who it will impact: Application developers, data center cloud providers, and organizations creating their own apps should take note. In fact, any organization looking to deliver applications via the cloud needs to understand just how powerful new applications and APIs can really be.

  3. The Next-Generation Cloud.

    What to look for: Over the next few years, many of the traditional cloud platforms we’ve come to know will evolve. Already we are seeing new ways to route cloud traffic and distribute data. The next-generation cloud will allow for greater interconnectivity, better resource utilization, and a more powerful environment for the end-user. As more devices connect and more content is delivered, web-based communication will only grow in importance.

    Who it will impact: This truly impacts all verticals. Since cloud-based traffic is bound to increase, organizations will need to utilize WAN-based resources much more effectively.

  4. Fog Computing.

    What to look for: Just when you started to make sense of the cloud, now you need to see through the fog! It may just be a short-lived buzz term, but the concept of fog computing is for real – we’re taking our cloud environments to the edge – literally. More environments are working to bring critical data closer to the user. This allows for a lot more data processing and rich content delivery. Creating these micro-clouds and fog networks allows for quite a bit of versatility for organizations. Plus, edge infrastructure doesn’t have to be expensive. Creating a fog computing environment can include branch offices, big data processing points, and even content distribution solutions.

    Who it will impact: Creating an edge network has its direct benefits. However, organizations looking to create big data processing points, or stream content, should look at a Fog. Companies like Netflix get the idea. Bringing the data closer to the user helps with delivery and overall performance.

  5. Everything-as-a-Service.

    What to look for: Service delivery models only continue to grow and evolve. We now have the Internet of Everything and even the concept of everything-as-a-service. Let me give you an example. By utilizing software development kits (SDKs) and the APIs we discussed in #2, an emerging service platform known as backend-as-a-service (BaaS) can directly integrate various cloud services with both web and mobile applications (see the short sketch at the end of this item). Open BaaS platforms already aim to support every major client platform, including iOS, Android, Windows, and BlackBerry. Furthermore, BaaS platforms aim to further enhance the mobile computing experience by integrating with cloud-ready data center providers. These new types of services will continue to grow, and data center platforms will strive to be your one-stop source for all service delivery needs.

    Who it will impact: Service-based data center options are powerful solutions for organizations to take advantage of. For example, private-sector enterprises can deliver key services from distributed data centers and not have to utilize their own resources.
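
    As a rough illustration of the BaaS idea above, the sketch below shows a client persisting and querying data through a hypothetical REST endpoint. The URL, header names, and fields are assumptions made for the example, not any real provider's API.

```python
import requests

# Hypothetical BaaS endpoint and credentials -- stand-ins, not a real provider's API.
BAAS_URL = "https://api.example-baas.com/v1/classes/GameScore"
HEADERS = {
    "X-Api-Key": "YOUR_APP_KEY",       # app-level key issued by the (hypothetical) BaaS provider
    "Content-Type": "application/json",
}

def save_score(player, score):
    """Persist a record in the backend without running any server-side code of our own."""
    resp = requests.post(BAAS_URL, json={"player": player, "score": score}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()                  # typically the stored object's id and timestamps

def top_scores(limit=10):
    """Query the same backend; filtering and sorting are handled by the service."""
    resp = requests.get(BAAS_URL, params={"order": "-score", "limit": limit}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    save_score("alice", 4200)
    print(top_scores())
```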

  6. Mobility Management.

    What to look for: If you’re not ready for the mobility revolution – it’s time to get on this very fast-moving train. Users, devices and data are becoming a lot more mobile. But don’t think that this conversation is about devices alone. Users are consuming information in entirely new ways and your organization must be ready to facilitate these demands. This means managing mobile resources, users, and data center resources. Whether these are mobile application servers, virtual hosts, or entire racks dedicated to a mobile user platform – be ready to accommodate the new way that users compute.

    Who it will impact: The stats don’t lie. Users are becoming more mobile and accessing their information in new ways. This impacts pharma, healthcare, government, education – and everything in between. Lawyers, doctors and professionals are connecting to their data from a number of different devices.

  7. Software-Defined Technologies.

    What to look for: Software-defined technologies now span network, storage, and compute. We are able to do brilliant things with hardware systems by introducing a more intelligent logical layer. This layer allows for better configuration, optimized resource utilization, and a more efficient data center infrastructure. SDx will create greater resiliency on a global scale by allowing complex connections to happen at a simplified level. A single hardware controller can now create thousands of independent logical connections spanning numerous networks – no more 1-to-1 mapping (see the toy sketch at the end of this item). The future of intelligent hardware utilization revolves around multi-tenancy and high-density solutions.

    Who it will impact: It’s hard to identify just one vertical that will benefit from this. Government entities and public-sector organizations already leverage SDx technologies to accomplish a wide range of tasks. Whether it’s logically segmenting a storage array or creating a globally distributed, resilient data center connection – software-defined technology is making its mark on the industry.
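
    As a toy illustration of that many-from-one idea (not any particular vendor's SDN or SDx API), the sketch below shows a single controller object carving one physical fabric into many isolated, per-tenant logical segments.

```python
from dataclasses import dataclass, field

@dataclass
class LogicalSegment:
    tenant: str
    vlan_id: int
    subnet: str

@dataclass
class Controller:
    """One controller, many logical networks -- the physical switch fabric is untouched."""
    next_vlan: int = 100
    segments: dict = field(default_factory=dict)

    def provision(self, tenant, subnet):
        seg = LogicalSegment(tenant, self.next_vlan, subnet)
        self.segments[tenant] = seg
        self.next_vlan += 1      # a real SDx stack would push flow rules or overlay config here
        return seg

ctrl = Controller()
for name in ("hr", "finance", "dev"):
    print(ctrl.provision(name, f"10.{len(ctrl.segments)}.0.0/24"))
```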

  8. Web-Ready Content and Device Agnosticism.

    What to look for: Much like the mobility revolution of #6, the content delivered to these devices will have to be optimized as well. On top of it all, maintaining device agnosticism is crucial. Users want access to their applications and data regardless of OS or hardware. This is why new types of applications and rich content will be delivered to a variety of users located all over the world. Intelligent data center automation controls will optimize each user’s connection through automated experience orchestration. That is, engines will dynamically define the user experience based on device, connection, location, security, and other factors (a simple rules-engine sketch follows at the end of this item). This is the future of how users will consume their information.

    Who it will impact: Cloud service providers and organizations based in the cloud will look to leverage this trend heavily. Users care about their apps and data. So, all organizations looking to optimize the user experience must look at web-content delivery. Whether you’re a healthcare shop granting access to a benefits app or a finance firm allowing users to conduct complex trades – mobility and security will be critical.
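
    To picture the experience-orchestration engine described above, here is a minimal sketch; the device classes, bandwidth thresholds, and profile fields are invented for illustration and are not from any specific product.

```python
def delivery_profile(device, bandwidth_mbps, region, secure_network):
    """Return a content-delivery profile based on the user's current context.
    Device classes, thresholds, and profile names are illustrative only."""
    profile = {"codec": "h264", "resolution": "1080p", "edge_pop": region, "extra_auth": False}

    if device in ("phone", "tablet"):
        profile["resolution"] = "720p"      # smaller screens, lighter payloads
    if bandwidth_mbps < 3:
        profile["resolution"] = "480p"      # degrade gracefully on poor links
    if not secure_network:
        profile["extra_auth"] = True        # step-up authentication off trusted networks
    return profile

print(delivery_profile("phone", 2.1, "us-east", secure_network=False))
```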

  9. Converged Infrastructure.

    What to look for: This technological platform will continue to pick up steam. The direct integration of storage, network, compute and pure IOPS (Input/Output Operations Per Second) has created a platform capable of high levels of resource optimization and workload delivery. We’re able to place more users per blade, deliver richer content, and create a data center model that follows the sun. Basically, we’re creating mobility within the data center. These new platforms take up less space and are much easier to manage. Furthermore, converged systems create even more capabilities for edge networks and organizations entering the cloud environment.

    Who it will impact: Although a lot of organizations can benefit from a converged system, some can benefit more than others: call centers, schools, hospitals, data-entry organizations and any other shop with a large number of users doing similar things. By combining virtualization with a converged infrastructure, organizations are able to optimize their resources while still increasing user density.

  10. The Personal Cloud | The Evolution of the User.

    What to look for: A typical user may carry two or three devices that connect to the cloud. What if this person is a techie? What if we take into account all of the devices they have at home as well? The reality is that the user is evolving and now maintains a continuous connection to the cloud across multiple devices. This trend will continue as users connect cars, homes, refrigerators, thermostats and other devices directly to the Internet. Moving forward, a user’s personal cloud will identify who they are, which devices they use and how to best optimize their experience. This means creating the same experience regardless of device or OS, controlling apps and devices remotely, and introducing even greater levels of personal cloud security. Right now, the personal cloud is mostly a concept applied to an individual user’s cloud experience. In the future, a personal cloud may represent a user’s overall cloud persona.

    Who it will impact: Not only will this impact the user environment, it will impact everyone who interacts with it as well. Organizations looking to optimize the user experience and deliver new types of content will need to be aware of how users’ computing habits evolve. Service delivery, application development, and workload management will all evolve as the cloud and the user continue to change.

The modern data center has truly become the home of everything. We’re seeing entire businesses born from a cloud model as executives take direct advantage of new data center resources. The next couple of years will certainly be interesting. We’ll see more cloud-centric workloads deployed as the modern user becomes even more mobile. In our 2013 IT Predictions blog we looked at more consumerization, a lot more data and a new type of computing platform. Now, all of these technologies are in place and continue to evolve. There is more big data and business intelligence, there is a lot more mobility on the user front, and we are certainly seeing a lot more data center convergence take place. At the heart of it all – where so many new technologies and solutions live – sits the all-important data center. Looking further ahead, we know that the data center will continue to serve a critical role in the evolution of IT. We’ll see even more data center automation, more distributed technologies – and even the use of intelligent robotics. One thing will always be true – it’ll be up to the IT professional, cloud architect, or technology executive to use these powerful tools to align business goals with IT solutions.


Welcome to Next-Generation DCIM (Data Center Infrastructure Management)

We are in the era of on-demand data delivery. The proliferation of cloud computing and information sharing has created a sort of data center boom. There are more users, more devices and a lot more services being delivered from the modern data center environment.

In fact, entire organizations and applications are being born directly within the cloud. To really put this trend in perspective, consider this: a 2013 Cisco Cloud Index report indicated that cloud traffic crossed the zettabyte (10^21 bytes) threshold in 2012 and that, by 2016, nearly two-thirds of all data center traffic will be based in the cloud.

Cisco Cloud Index Report - 2013

Efficient computing, converged infrastructure, multi-tenant platforms, and next-generation management are the key design points for the modern data center environment. Because we are placing more services into our data centers, there is a greater need for visibility into multiple aspects of daily operations.

The picture of the next-generation management platform is that of a truly unified management plane

So what does that really mean? A unified data center infrastructure management platform removes the “physical” barriers of a typical control system. The reason for this shift lies in the job the modern data center is now tasked with performing and the levels of monitoring that must occur. Next-generation DCIM (Data Center Infrastructure Management) will remove logical and physical walls and unify the entire data center control process.

"Everything-as-a-Service" Controls. The modern data center is now considered to be at the heart of "The Internet of Everything." This means that more services, information, and resources are being delivered through the data center than ever before. As the data center continues to add on new functions and delivery models – administrators will need to have visibility into everything that is being delivered. Whether it’s monitoring SLAs or a specific as-a-service offering – next-generation data center management must have integration into the whole infrastructure.

Cluster-Ready Real-Time Monitoring. True data center distribution is happening: sites are interconnected, sharing resources, and delivering content to millions of end-users. With advancements in modern data center infrastructure, bandwidth and optimized computing, data center architecture has gone from singular units to truly clustered entities. This level of connectivity also requires new levels of distributed management and control. Not only is this vital for proper resource utilization and load-balancing; cluster-ready monitoring also creates greater data center resiliency. By having complete visibility into all data center operations across the board, administrators are able to make better, proactive decisions.
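
A rough sketch of what cluster-ready, proactive monitoring could look like follows; the site names, metrics, and thresholds are invented, and the poll_site() function stands in for a real DCIM, SNMP, or IPMI collector.

```python
import random  # stands in for real telemetry collectors in this sketch

SITES = ["chicago-dc1", "dallas-dc2", "amsterdam-dc1"]
THRESHOLDS = {"inlet_temp_c": 27.0, "cpu_util_pct": 85.0, "ups_load_pct": 80.0}

def poll_site(site):
    """Placeholder for a real collector -- returns fake readings near each limit."""
    return {metric: random.uniform(0.6, 1.1) * limit for metric, limit in THRESHOLDS.items()}

def check_cluster():
    alerts = []
    for site in SITES:
        readings = poll_site(site)
        for metric, value in readings.items():
            if value > 0.9 * THRESHOLDS[metric]:   # alert proactively at 90% of the limit
                alerts.append(f"{site}: {metric} at {value:.1f} (limit {THRESHOLDS[metric]})")
    return alerts

for alert in check_cluster():
    print("PROACTIVE ALERT:", alert)
```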

Big Data Management Engines. The growth of cloud services has created an influx of new types of offerings for end-users. IT consumerization, BYOD and mobility have all become very hot topics at numerous large organizations. As the cloud continues to grow, more users will be utilizing these resources – and with more users comes a direct increase in data. This is where next-generation data center management comes in. Big data engines will sit both within the cloud and on the edge of the cloud network – also known as the Fog. There, data centers will have direct tie-ins to big data analytics engines running on virtual and physical platforms. Because this data and the information-gathering process are so crucial, complete visibility into the entire process is vital. This means that large data centers acting as hubs for big data analytics will have visibility into and control over storage, networking, compute, infrastructure and more.

Logical and Physical Management. The days of bare-metal accumulation are over. Modern data centers are highly virtualized and highly efficient. New data centers are being built around optimal power, cooling and resource-control mechanisms. On top of that sit highly efficient, high-density servers capable of great levels of multi-tenancy. Although some siloed monitoring operations may still occur, the overall infrastructure must be unified in terms of management and control. This means having granular visibility into core data center operations, which include:

  • Power
  • Cooling
  • Aisle/Control
  • Alerts/Monitoring
  • Advanced Environmental Monitoring
  • Proactive Alerting and Remediation
  • Security (physical, structural, rack, etc.)

On top of that, data center administrators must also be able to see the workloads which are running on top of this modern infrastructure. This means understanding how virtual resources are being utilized and how this is impacting the underlying data center environment.

Cloud Orchestration and Automation. The cloud computing boom also created a big need for better workload control and automation, spanning both the hardware and software layers. From a next-generation management perspective, there needs to be the ability to create hardware and software profiles which can then be applied to physical and virtual resources for deployment. When this approach is tied into intelligent load-balancing solutions, you have a truly end-to-end cloud orchestration and automation solution. Although we’re not quite at those levels yet, next-generation data center management solutions are directly integrating workload automation options. It is becoming easier to deploy hardware and simpler to provision workloads, which means faster content and data delivery to the corporation and the end-user. Next-generation data center management will plug into both the physical and logical layers and facilitate new levels of cloud automation and orchestration (a rough sketch of profile-driven provisioning follows below).
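
As a sketch of the profile idea described above (the profile fields and backend calls are assumptions, not a specific product's API), an orchestration engine can treat a hardware/software profile as a template and stamp out identical workloads from it:

```python
# A minimal sketch of profile-driven provisioning against a generic virtualization backend.

WEB_TIER_PROFILE = {
    "vcpus": 4,
    "memory_gb": 8,
    "image": "web-server-gold",   # software stack pre-baked into the image
    "network": "dmz",
    "count": 3,
}

def provision(profile, backend):
    """Ask the backend for 'count' identical instances built from one profile."""
    return [backend.create_vm(name=f"web-{i}",
                              vcpus=profile["vcpus"],
                              memory_gb=profile["memory_gb"],
                              image=profile["image"],
                              network=profile["network"])
            for i in range(profile["count"])]

class FakeBackend:
    """Stand-in for a real hypervisor or cloud API."""
    def create_vm(self, **spec):
        print("creating", spec)
        return spec

provision(WEB_TIER_PROFILE, FakeBackend())
```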

Mobile Data Center Visibility. It’s not like your data center will just get up and move around – at least not in most cases. However, mobile visibility into your data center is now a genuine need. This means controlling some data center functions from mobile devices and delivering direct web-based controls as well. Furthermore, because the data center is becoming more interconnected, there will be more functions and roles to control. Various types of administrators and managers will require specific controls within single and clustered data center models. Role-based administration and management will evolve from the standard PC to true mobility. All of this translates to more efficient engineering, administration and management of your entire data center infrastructure.

Single Pane of Glass – Complete Control. At the end of the day, it all comes down to how well you can manage your data center. There’s no arguing that there will be more requirements around the future data center platform. As the number of services the data center delivers continues to increase, complete visibility will become even more important. There will be more plug-ins, monitoring tools, and clustered nodes to watch, all while trying to maintain resiliency. Next-generation monitoring UIs and control platforms must be intuitive, easy to understand, simple to navigate, and allow the administrator to truly optimize the entire data center infrastructure.

Tomorrow’s data centers must also have tomorrow’s visibility and control. The nature of the data center is changing. It’s now the hub that delivers core services, user applications, and entire platforms. IT consumerization and the increase in Internet utilization are partially the reason for the data center boom. However, the natural progression of technology has taken our entire infrastructure into a highly resilient and agile cloud platform. The ability to stay connected and have massive content delivered directly to your end-point is truly impressive.

As the business evolves, the data center infrastructure will have to follow suit. Already business initiatives are directly aligned with the capabilities of the respective IT department. This correlation will only continue to increase as the data center becomes the home of everything. And with the next-generation data center - there must be next-generation management and control.

The Dawning of the Age of the Modern Data Center

A View from the 2013 Uptime Institute Symposium

For anyone interested in data centers, the Uptime Institute Symposium is one of the highlights of the year. The 2013 Symposium was no exception. Over 1,300 attendees heard from almost 100 speakers talking about “The Global Digital Infrastructure Evolution.” 

I believe this digital infrastructure evolution is leading to a direct modernization of the data center. Over the past three years, new drivers have forced data center operators to adapt or get out. Now, we are seeing the emergence of the modern data center – sometimes referred to as data center 2.0.

Marketing terminology aside, there are clear reasons behind the advancement of today’s data center 2.0 environment. First of all, business drivers have evolved – a lot. There are many more users and devices, and an almost overwhelming amount of new data. With all of these new trends, more organizations are turning to the only technological platform that can support this type of growth: the data center.

Those who were able to adapt and stay ahead of the technological curve will readily attest that "it’s good to be in the data center business." Why? Organizations of all verticals and sizes are finding ways to extend their business by utilizing data center offerings. In fact, the 2013 Uptime Symposium survey indicates that 70% of data center operators built a new site or renovated a site in the past five years. The reality is that this trend will only continue.

In a recent blog post, "2013 IT Predictions", we discussed the top technologies to be aware of for 2013. Well, the top two – cloud computing and the distributed data center – topped the conversation at this year’s 2013 Uptime Symposium. Furthermore, it’s great to see green computing and the green certification process picking up as well. The idea is not only to reduce the impact on the environment, but also to lower costs.

So, now that we’re halfway through the year – it’s time to look at how the modern data center is evolving and how this correlates to the recent results from the 2013 Uptime Symposium!

Cloud Resiliency. As more organizations embrace cloud services as part of their IT platform, the data center has to be prepared for new demands and challenges that revolve around the cloud. Remember, the cloud isn’t perfect and there have been numerous instances of outages. Organizations that reside solely in the cloud are the ones who, at the end of the day, pay the price for those outages. So, to adapt to these new demands, data center operators are striving to become more resilient and agile when it comes to cloud computing. In fact, cloud-level resiliency was one of the top topics of conversation at the Uptime Symposium. Although some designs are constrained by the available load-balancing and advanced networking solutions, an effective resiliency architecture can be achieved. In working with a distributed, resilient data center environment, there are a lot of core benefits to be gained:

  • You create redundancy at the hardware level as well as the software layer.
  • Distributed data centers become logically-connected, data center environments.
  • WAN utilization and distribution becomes easier to control and manage.
  • You’re able to create global active-active data center mirroring and load-balancing.
  • New types of cloud-based disaster recovery and business continuity services can be achieved.
  • Higher availability can be delivered on a global scale, and failover can even be made completely transparent end-to-end.

Data Center Operating System. It’s no longer just about data center management. It’s about advanced data center infrastructure management (DCIM) and the creation of a data center operating system. More organizations are deploying advanced DCIM solutions for all the right reasons. As the data center continues to become a distributed entity, there will be an even greater need to deploy intelligent DCIM solutions. The Uptime Symposium 2013 survey results show the kind of adoption DCIM is seeing: about 70% of respondents said they have either deployed DCIM or plan to deploy it within the next 12-24 months. That’s significant, since DCIM isn’t always an inexpensive solution. The bottom line, however, is that this type of management is very necessary. In working with DCIM, several drivers are pushing executives toward the decision: energy efficiency, asset visibility and – extremely importantly – capacity planning. The data center will continue to be an integral part of any organization, and the ability to properly plan space and resource utilization can only be done with a good data center management solution. At the end of the day, advanced DCIM solutions are going to be the required technology to create truly agile, optimized data centers capable of handling today’s demands.

Efficient Systems and Storage. The data center has been looking at converged platforms and unified systems to increase efficiency and reduce costs. These platforms are high-density computing solutions able to provide a lot of power within a smaller chassis. From there, administrators are able to place more users and workloads on a smaller number of servers. Among efficient technologies, SSD and flash controllers have been making their mark. According to the Uptime Symposium, flash storage is capable of slashing the operating costs of storage and can help save on energy and space. Still, full deployment of flash and SSD controllers is somewhat limited: the arrays are still expensive and there is a serious concern about long-term storage. When the right niche is found, however, SSD and flash are amazing technologies. Anything that requires high IOPS or extreme drive performance should look at a flash or SSD array – big data analytics, virtual desktop infrastructure (VDI), and high-utilization temporary storage, for example.

Go Green to Save Some Green. When energy-efficiency initiatives first emerged, many argued that to go green you would have to spend a lot of green. As technology improves, this mentality has certainly changed, and many data center executives are now seeing quite the opposite: to go green means to save some green! Year over year, the Uptime Symposium 2013 survey shows a 10-point jump (2012: 48%, 2013: 58%) in green certifications. This means that more data centers are actively deploying new energy-efficient technologies to support data center functions. Here’s the really interesting stat: for data center environments hosting over 5,000 servers, there is 90% adoption of PUE-focused technologies. When it comes to data center green efficiency, here are some hot solutions (a quick worked PUE example follows the list):

  • Low-power servers. 10% of the power using 10% of the space.
  • Onsite clean power generation. Low carbon, grid-independent technology.
  • Chiller-free data centers. Reduces capex by 15%-30% and helps slash PUE.
  • Power-proportional computing. Helps reduce waste and power consumption.
  • Memristors. Enable the creation of powerful, low-energy unified devices! (Note: some of these technologies are still in the lab.)
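
Since PUE comes up repeatedly here, it helps to see the arithmetic: PUE is total facility power divided by IT equipment power, so a facility running at 1.8 spends 0.8 W on cooling, power conversion, and lighting for every watt that reaches the IT load. A minimal calculation:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Example: 1,800 kW drawn at the utility meter to run a 1,000 kW IT load.
print(pue(1800, 1000))   # 1.8  -- 800 kW of overhead
print(pue(1070, 1000))   # 1.07 -- close to the theoretical floor of 1.0
```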

Is Modular Relevant? Here’s the good news: the conversation around modular data centers continues, and there is still a lot of interest in the technology. Beyond that, though, both confidence in the technology and actual adoption have been low. The Uptime Institute 2013 survey showed that a whopping 83% of respondents either had no interest in or no plans around the pre-fab modular data center market. Only 9% have actually deployed a modular data center. There are a few reasons why. The IT market likes to be confident in the technologies it deploys, and modular data centers can be quite an investment. So, when 61% of respondents indicate that they’re only “somewhat confident” in the technology, we’re going to see these low adoption rates. Despite the positives – rapid deployment, reliable designs and a pretty good return – many shops are still very wary of deploying modular data center platforms. At the end of the day, economies of scale and the somewhat limited flexibility around deployments have weighed heavily on modular data center implementations.

Communication is Everything. This was an interesting survey result – but also a bit troubling. One of the core functions of a data center operator is to communicate the performance of the data center to all appropriate business stakeholders. This is an absolutely vital process which allows business goals to align directly with those of the IT department and the data center environment. So, how good is data center operator communication these days? According to the Uptime Symposium 2013 survey results, 40% of enterprise operators have no scheduled reporting to the C-level. Wait, what? Enterprise data centers are not small shops. The statistics get even more interesting: only 42% of enterprise data center departments have any regular communications with the C-suite. The others report metrics and give updates annually, upon request – or, at best, quarterly. In an ever-evolving business and IT world, these numbers must change. Regular reporting has to occur between C-level personnel and data center operators.

For an environment so focused on the physical element, the modern data center must be extremely agile. This means adapting to new trends, technologies and demands pushed down by users and the business. Let’s face facts: the amount of data and data center capacity is only going to grow. The number of users and the amount of information connecting into the cloud are growing at a tremendous pace. We’ve mentioned these numbers before, but it’s important to look at them again:

  • According to IBM, the end-user community has, so far, created over 2.7 zettabytes of data. In fact, so much data has been created so quickly, that 90% of the data in the world today has been created in the last two years alone.
  • Currently, Facebook processes over 240 billion photos from its entire user base. Furthermore, it stores, analyzes, and accesses over 32 petabytes of user-generated data.
  • In a recent market study, research firm IDC released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015. 

These growing business segments translate to more business for the data center. New types of technologies revolving around management, storage and even cloud computing will have direct impacts on the evolving data center infrastructure. Moving forward, data center administrators will need to continuously evolve to the needs of the market to stay ahead of the competition. Remember, at the core of almost any technology, solution, platform or cloud instance sits a very important processing power – the data center.

Where the cloud lives – A look at the evolving distributed computing environment

The idea behind cloud computing has been around for some time. In fact, one of the very first scholarly uses of the term “cloud computing” was in 1997, at the annual INFORMS meeting in Dallas. However, the way the cloud has evolved into what we are capable of using today shows the amount of creativity and technological innovation that is possible. Distributed computing platforms are helping IT professionals conquer distance and deliver massive amounts of information to numerous points all over the world.

This is all made possible by a number of technologies all working together to bring cloud computing to life. Oftentimes, however, there is still some confusion around cloud computing – not so much how it works; but where it lives. Many will argue that virtualization gave birth to the cloud. Although server, application, network and other types of virtualization platforms certainly helped shape and mold cloud computing, there are other – very important – pieces to the cloud puzzle.

One IT landscape with many clouds in the sky

The very first concept that needs to be understood is that there isn’t one massive cloud out there controlling or helping to facilitate the delivery of your data. Rather, there are numerous interconnected cloud networks out there which may share infrastructure as they pass each other in cyberspace. Still, at the core of any cloud, there are key components which make the technology function well.

  • The data center. If the cloud has a home, it would have to be a data center – or, more specifically, a neighborhood of data centers. Data centers house the integral parts that make the cloud work. Without servers, storage, networking, and a solid underlying infrastructure, the cloud would not exist today. Furthermore, new advancements in high-density computing are helping further advance the power of the cloud. For example, Tilera recently released its 72-core GX-72 processor: a 64-bit system-on-chip (SoC) equipped with 72 processing cores, 4 DDR memory controllers and a big-time emphasis on I/O. Now, cloud architects are able to design and build a truly “hyper-connected” environment with an underlying focus on performance.

    Beyond the computing power, the data center itself acts as a beacon for the cloud. It provides the resources for the massive number of concurrent connections that the cloud requires, and it will do so more efficiently over the years. Even now, cloud data centers are striving to be more and more efficient. The Power Usage Effectiveness (PUE) metric has been a great way for many cloud-ready data centers to manage the energy overhead associated with running a facility, and more and more data centers are trying to approach a 1.0 rating as they continue to deliver more data and do it more efficiently. With the increase in data utilization and cloud services, there is no doubt that the data center environment will continue to play an integral part in the evolution of cloud computing.

  • Globalization. The cloud is spreading – and it’s spreading very fast. Even now, data centers all over the world are creating services and options for cloud hosting and development. No longer an isolated front – cloud computing is truly being leveraged on a global scale. Of course, technologies like file-sharing were already a global solution. However, more organizations are becoming capable of deploying their own cloud environment. So, when we say that the cloud lives in a location – we mean exactly that.

    Historically, some parts of the world simply could not host or create a robust cloud environment. Why? Their geographic region could not support the amount of resources that a given cloud deployment may require. Fortunately, this is all changing. At the 2012 Uptime Symposium in Santa Clara, CA, we saw an influx of international data center providers all in one room competing for large amounts of new business. The best part was that all of these new (or newly renovated) data centers had truly advanced technologies capable of allowing massive cloud instances to traverse the World Wide Web. This is a clear indication that, geographically, the cloud is expanding and that there is a business need for it to continue to do so.

  • Consumerization. One of the key reasons the cloud is where it is today is the cloud consumer. IT consumerization, BYOD, and the influx of user-owned connected devices have generated an enormous amount of data. All of this information now needs to span the Internet and utilize various cloud services. Whether it’s a file-sharing application or a refrigerator that can alert its owner that it’s low on milk, all of these solutions require the cloud. Every day, we see more devices connect to the cloud – and by devices we don’t just mean phones, laptops, or tablets. Now, we have cars, appliances, and even entire houses connecting to cloud services. The evolution of the cloud revolves around the demands created by the end-user. This, in turn, forces the technology community to become even more innovative and progressive when it comes to cloud computing.

    As more global users connect to the Internet with their devices – the drive to grow the cloud will continue. This is why, in a sense, the cloud will eventually live with the end-user. Even now new technologies are being created to allow the end-user to utilize their own "personal cloud." This means every user will have their own cloud personality with files, data, and resources all completely unique to them.

  • Cloud Connectors. As mentioned earlier, there isn’t just one large cloud out there for all users to access. The many private, public, hybrid, and community clouds comprise one massive interconnected cloud grid. In that sense, the evolution of the cloud created an interesting, and very familiar, challenge: a language barrier. For example, as one part of an enterprise grows its cloud presence, another department might begin a parallel cloud project on a different platform. Now there is a need to connect the two clouds together. But what if these two environments are built on completely different cloud frameworks? It’s in this sense that we deploy the cloud “Babel Fish” in the form of APIs. These APIs act as cloud connectors that help organizations extend, merge, or diversify their cloud solutions.

    It’s not a perfect technology, and even now not all cloud platforms can fully integrate with one another. However, the APIs are getting better and more capable of supporting large cloud platforms. New technologies like CloudStack and OpenStack help pave the way for the future of cloud connectivity and APIs. These open-source cloud software platforms help organizations create, manage, and deploy infrastructure cloud services (a generic sketch of the connector idea follows below).
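
As a generic sketch of the connector idea (the endpoints and token fields below are hypothetical, and this is not the actual CloudStack or OpenStack API), bridging two clouds can be as simple as asking both the same question over authenticated HTTP:

```python
import requests

# Hypothetical endpoints for two different cloud platforms the business needs to bridge.
CLOUDS = {
    "private": {"url": "https://cloud-a.internal/api/v2", "token": "TOKEN_A"},
    "public":  {"url": "https://cloud-b.example.com/api/v2", "token": "TOKEN_B"},
}

def list_instances(cloud):
    """Same logical question ('what is running?') asked of two unrelated platforms."""
    cfg = CLOUDS[cloud]
    resp = requests.get(f"{cfg['url']}/instances",
                        headers={"Authorization": f"Bearer {cfg['token']}"})
    resp.raise_for_status()
    return resp.json()

def inventory():
    """A tiny 'connector': one consolidated view built from per-cloud API calls."""
    return {cloud: list_instances(cloud) for cloud in CLOUDS}
```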

In a cloudy world – bring an umbrella!

Let’s face facts – it’s not always sunny in the cloud. As the technology continues to mature, IT professionals are still learning the best practices and optimal ways to keep the cloud operational. This isn’t proving altogether easy. There are more users, more infrastructure, and more bad guys taking aim at various cloud infrastructures. Just as important as understanding how the cloud functions and where it resides is knowing the "weather forecast" within the cloud computing environment.

  • Attacks. Although this is the darker side of the cloud, it still needs to be analyzed. As more organizations move toward a cloud platform, it’s only logical to assume that these cloud environments will become targets. Even now, attacks against cloud providers are growing. They can be direct intrusions or general infrastructure attacks, and regardless of the type, all malicious intrusions can have devastating results on a cloud environment. Over the past few months, one of the biggest threats against cloud providers has been the influx of DDoS attacks. A recent annual Arbor Networks survey showed that 77% of the data center administrators who responded experienced more advanced, application-layer attacks, and that such attacks represented 27 percent of all attack vectors. The unnerving part is that the ferocity of these attacks continues to grow, with attack traffic spiking at 100 Gbps as far back as 2010.

    Cloud services aren’t always safe either. On February 28, 2013, Evernote saw its first signs of an intrusion. Passwords, emails and usernames were all accessed, and the provider ended up requiring its nearly 50 million users to reset their passwords. The damage from these types of attacks isn’t always just the data. Evernote also had to release a public response – and those are always difficult to do. In the responding blog post, Phil Libin, Evernote’s CEO and founder, said the following: "Individual(s) responsible were able to gain access to Evernote user information, which includes usernames, email addresses associated with Evernote accounts and encrypted passwords." These types of intrusions can only serve as learning points to create better and more secure cloud environments.

  • Outages. If you place your cloud infrastructure into a single data center, you should know that your cloud environment can and will go down. No major cloud provider is safe from some type of disaster or outage. Furthermore, cloud computing services are still an emerging field, and many data center and cloud providers are still trying to figure out how to create a truly resilient environment.

    The most important point to take away here is that a cloud outage can happen for almost any reason. For example, a few administrators for a major cloud provider forgot to renew a simple SSL certificate. What happened next is going to be built into future cloud case studies. Not only did this cause an initial failure of the cloud platform, it created a global, cascading event that took down numerous other cloud-reliant systems. Who was the provider? Microsoft Azure – the very same Azure platform that had $15 billion pumped into its design and build-out. Full availability wasn’t restored for 12 hours – and up to 24 hours for many customers. About 52 other Microsoft services relying on the Azure platform experienced issues, including the Xbox Live network. This type of outage will create (and hopefully answer) many new questions about cloud continuity and infrastructure resiliency.

As with any technological platform, careful planning and due diligence have to go into all designs and deployments. It’s evident from the pace of today’s technology usage that the world of cloud computing is only going to continue to expand. New conversations around big data further demonstrate the need for the cloud. The ability to transmit, analyze and quantify massive amounts of data is going to fall onto the cloud’s shoulders. Cloud services will be created to churn through massive amounts of information on a distributed plane. Even now, open-source platforms are being used to help control and distribute the data. Projects like Hadoop and the Hadoop Distributed File System (HDFS) are already being deployed by large companies to control their data and keep it agile.
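
To ground the Hadoop/HDFS reference, here is the classic word-count example written for Hadoop Streaming, which lets any program that reads stdin and writes stdout act as the mapper and reducer; treat it as a minimal sketch, not a production job.

```python
#!/usr/bin/env python
# Word count for Hadoop Streaming: run this file as both mapper and reducer, e.g.
#   hadoop jar hadoop-streaming.jar -mapper "wordcount.py map" -reducer "wordcount.py reduce" ...
import sys
from itertools import groupby

def mapper():
    for line in sys.stdin:                       # HDFS blocks are fed to mappers line by line
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Streaming sorts mapper output by key, so identical words arrive grouped together.
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```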

With more users, more connection points and much more data – cloud computing lives in a growing and global collection of distributed data centers.  It is critical that cloud developers and participants select their data center platforms carefully with an emphasis on 100% reliability, high-density power, energy-efficient cooling, high-performance networking, and continual scalability. Moving forward, the data center will truly be the main component as to where the cloud resides.

Prediction: The Biggest Surprises for IT in 2013

It's 2013, and already some fun and exciting technological announcements have been made. A 1TB USB flash drive has been built and there are plans for even more cloud computing platforms. 2012 was a year full of virtualization, the true push for the cloud, and a lot of initiatives revolving around infrastructure efficiency.

So what's 2013 going to bring? Let's find out!

  1. Much more cloud computing.
    When cloud computing started out – we had a platform that was built around the wide area network. From there, corporations would pass data over a public or private connection from one end-point to another. Now, we have public, private, hybrid, and community cloud platforms. What’s next? Personal, mobile, and the “everywhere” cloud. Many organizations are working hard to make cloud computing not only a corporate platform, but something that will work on an individual user basis. As more resources and bandwidth open up, there will be more focus on the cloud element.
     
    Remember one important point: it’s not always sunny in the cloud. Over the course of 2012, we saw several outages. For example, Amazon Web Services suffered a few outages during which companies like Instagram, Netflix and Pinterest were effectively knocked out of business for the duration of the downtime. As more organizations place their environments into the cloud, they will have to build in appropriate redundancies as well. Furthermore, there are compliance and security concerns. Right now, only a handful of data centers are able to host PCI-compliant equipment. As far as breaches are concerned, Dropbox is a perfect example of where cloud security can have its drawbacks.
     
    Still, working with a cloud computing platform will have its advantages. Look for more replication, distribution, accessibility, and DR projects to take place which directly revolve around cloud computing.
     
  2. Distributed data center management.
    The modern data center has really evolved. In 2012, we saw data centers become an even more integral part of every organization. In 2013, companies will continue to replicate, distribute, and optimize their IT systems across multiple data centers. Because of so many new moving parts, data center management will be huge in 2013. Major management vendors will strive to combine the software and hardware layers to create a single pane of glass for administrators. The idea will be to unify processes and make the management process more transparent.

    Along with that, better alerting and monitoring capabilities will be introduced in 2013. Virtualization, networking, storage, and other IT components are being directly tied into the data center environment. There are many tools on the market today which can manage individual components well – but not together. Look for products which will truly unify the entire data center infrastructure and the workloads that it supports.
     

  3. New types of virtualization.
    Indeed, we will have even more types of virtualization technologies in 2013. Already we have server, application and desktop virtualization platforms – the latter of which has been gaining momentum. Moving forward, we’re going to see more security, storage and even user virtualization. The general term for this IT push is software-defined technologies. Logically segmenting a storage controller to create “virtual” independent units is a form of storage virtualization, and creating virtual firewalls to sit at various points within an organization is a great way to stay secure.

    Another big push will be around the user. User virtualization will take user settings, policies and even data and allow it all to travel with the user. This means hardware, software, and a complete device-agnostic environment. There will be a big focus on the end-user in 2013 and the devices that they use. Beyond the devices, there will be a direct focus on the experience and mobility of the session and workload accessed by the user.
     

  4. Next-generation security.
    I’m never a big fan of buzz words – however we do have to live with them. Next-generation security will see a big push in 2013. This means more virtualized security appliances, new types of monitoring engines, and even greater security around the cloud. Organizations are able to more logically segment their networks with advanced security appliances. The great part about this platform is that it doesn’t have to be physical. Beyond the new types of security devices available, new security mechanisms will see their rise in 2013.

    Tools like intrusion prevention/detection systems (IPS/IDS) and data loss prevention (DLP) will all grow tremendously as more organizations try to use the cloud as a powerful infrastructure platform. More information and more data will flow through a cloud model and will therefore have to be secured. Another big push in 2013 will be the securing of mobile and BYOD devices. Look for new types of Mobile/Enterprise Device Management platforms which will not only manage non-corporate devices coming in, but also control the flow of data to these devices. The ability to interrogate devices based on patch level, OS, and even whether they are rooted will become the norm in 2013 as more users bring their own devices to work.
     

  5. Big data.
    In 2013, we will see a near explosion in the amount of data that both the private and public sectors will have to manage. To put this into perspective, in 2012, the Obama administration officially announced the Big Data Research and Development Initiative. The mission of the initiative is to examine how government can leverage big data and large data sets to help increase efficiency and potentially solve problems at the federal level. Whether it’s data correlation or quantifying data that’s been gathered, working with big data can be a challenge. For the US government, this program is composed of 84 different big data programs spread across six departments.

    In the private sector, big data has been playing an equally important and growing role. To give you an idea, Facebook now has over 40 billion photos to manage, and companies like Walmart handle more than 1 million customer transactions per hour. In 2013, look for big data products to really take the spotlight. Open-source platforms like Hadoop and MongoDB are working hard to help organizations create environments where huge data sets can be analyzed efficiently. With current trends, there will be even more data to process in 2013.
     

  6. High-density computing.
    In 2013, high-density computing will come down to a single question: how many users or workloads can you fit on a single machine? High-density computing will become a hot topic in 2013 as more organizations work to become more efficient with their IT infrastructure. This will involve cloud computing, virtualization and better hardware management, but the focus will be on the hardware itself. This doesn’t stop at the server level, however. High-density computing will also include components like storage and networking. There will be greater demands from storage environments as well as the networking infrastructure.
     
  7. Data center green initiatives.
    Data center environments not only want to be efficient – they now want to do it in a green way. 2013 will see the data center transform itself from a power-hungry behemoth to a truly agile, distributed environment. Power sources will become more creative, cooling mechanisms will be more efficient, and the data center will begin to require less while still delivering more! A recent survey by the Uptime Institute shows that the average data center PUE is 1.8. Now, more than ever, the Power Usage Effectiveness rating has become an important metric for measuring data center efficiency. Already some of the leading organizations (Google and Microsoft) are showing ratings of 1.07 or just a bit higher. In 2013, look for that number to keep decreasing and approach the theoretical floor of 1.0 for many organizations.

    To stay competitive in today's growing data center market, providers will strive to be as resilient and efficient as possible. Not only does this translate well for the end-user, it also bodes well for the environment and further green initiatives.
     

  8. IT Consumerization and BYOD.
    Many users now utilize three or four devices in a given day to access their information. In many cases, these devices are also pulling data from the corporate data center environment. This might be as simple as email or as complex as virtual desktops or applications. 2013 will be the year of the consumerized IT user. There will be new types of tablets, new types of end-points and many more ways to connect into an environment. IT shops will have to adjust to this new type of environment and secure the data that flows between the devices and the data center.

    Because of BYOD and IT consumerization, 2013 will also see the rise of new types of management platforms. Mobile/Enterprise Device Management (MDM/EDM) solutions will help manage the end-user environment and next-generation security will further help lock down how the data flows through the WAN.
     

  9. A new breed of architects will rise.
    Over the previous year, cloud computing and a more diverse infrastructure created a new challenge for IT managers: knowledge levels and competency. In the past, IT groups were separated into storage, networking, server and virtualization teams. 2013 will see the rise of the cloud architect. There will be a very real need for engineers or architects who are able to communicate the power of a diverse, agile infrastructure to IT managers and executives. These architects will need to understand all of the underlying components and how they work together to create the cloud platform. These folks are the liaison between the executive teams and the IT groups within an organization.

    The demand for these types of professionals will only continue to grow in 2013. New types of cloud offerings from major software, hardware and even data center organizations are all the reason that the cloud architect is in such demand.
     

  10. The new corporate end-point.
    The whole conversation around the corporate end-point has almost reached a boiling point. In 2012, we saw the rise of the thin client as more organizations tried to get rid of their older, more sluggish and large PCs. Now, with more virtualization and bandwidth control, organizations are looking to further offset PC costs with zero clients. These are tiny units which connect to an application or desktop delivery controller and pull all of the information from there. Nothing is stored at the end-point. What does this create? In 2013, look for zero clients that will break the $100 mark and utilize 5 watts of power or less. These devices will be designed around a rip/replace model with absolutely minimal configuration requirements.

Technology seems to be advancing at a screaming pace. New ways to connect an infrastructure or make a certain component more efficient are being developed on a seemingly monthly basis. In 2013, make sure you don’t get swept away with the hype of a certain product or platform. As with any piece of technology, always exercise caution. A good planning cycle can save a lot of time and headache in the long run. Furthermore, prior to spending any budgetary dollars, IT managers must be able to answer the following three questions:

  1. Is there a good business case?
  2. Do we have the competency and infrastructure to support this platform?
  3. What is the ROI?

It should be an interesting year full of new initiatives and ways to make environments more agile and capable of growth. At the end of the day, when deploying new platforms or systems, IT executives must be able to align their business vision with their technology infrastructure to stay effective in the industry.

