Infrastructure

Why Interconnection Matters to Wholesale Colocation

Providing large-scale, secure space and power has been the focus of wholesale data center providers for many years. Until recently, innovation in wholesale data center solutions has centered on designs that improve power resiliency and building efficiency. As a result, wholesale customers today are receiving data centers that are more flexible, scalable, and cost-effective than ever before.


The cloud and the surge in wholesale data center demand are changing the industry, but not in the way that many expected. Interconnection between wholesale data centers and public cloud environments is becoming a key decision criterion rather than an afterthought or “nice-to-have”. Interconnection has become a major component of the wholesale data center colocation strategy.

The Hybrid Cloud Changes Everything

The demand for interconnection is being driven by changing market dynamics, specifically in hybrid cloud. Over the past five years, enterprise organizations have been successfully adopting the public cloud as a complement to, rather than a replacement for, their enterprise or colocation environments. The term “hybrid cloud” came from the desire of enterprises to use a combination of clouds, in-house data centers, and external colocation facilities to support their IT environments.

The challenge with having a hybrid environment arises from the need for the data centers and the cloud providers to interconnect and share data securely and with low network latency as part of one extended environment. Within a data center, servers and storage can be directly connected. A hybrid cloud environment doesn’t have the luxury of short distances, so bring on the innovation.

From Carriers to Interconnection

The first interconnection solutions were provided by telecommunications service providers. Dark fiber, lit fiber, and Internet access were all leveraged to interconnect hybrid cloud environments. However, as cloud deployments grew and matured, these network solutions became difficult to secure and manage.

Next, data center providers began to offer “direct” connections within the data center to bring cloud and colocation environments into one location, allowing interconnections to be provided through fiber cross-connects. This approach, however, restricts where companies can place their colocation environments and limits the types of cloud environments to which they can connect.

The newest solutions are being introduced by interconnection platform providers, which leverage the concepts of network exchange points and software-defined networking (SDN) to provide a flexible solution that can be self-managed. These innovative solutions solve many key network challenges introduced by hybrid clouds.

The Keys to Interconnection – Dedicated, secured, and many-to-many

Beyond the simple one-to-one interconnection of a cloud environment to a wholesale colocation environment, an interconnection platform is designed to allow many-to-many connections in a dedicated and secured fashion. With an interconnection platform, wholesale colocation environments can be connected to multiple cloud providers (multi-cloud) and multiple cloud locations (availability zones). This design opens the door to unique options for companies to architect their IT environments to optimize resiliency and availability while minimizing cost and complexity. The SDN aspect of the interconnection platform allows customers to manage their connections in real time without involvement from the provider. They can turn up, turn down, and change individual connections at any time, usually through a simple web-based user interface.
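To make that concrete, here is a minimal sketch of what self-service provisioning against such a platform might look like. The base URL, endpoint, provider name, and field names are all illustrative assumptions, not any particular vendor’s API:

```python
# Hypothetical sketch of self-service interconnection via an SDN platform's
# REST API. The base URL, endpoint, provider name, and field names are
# illustrative assumptions only; each real platform defines its own API.
import requests

API = "https://interconnect.example.com/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

# Turn up a dedicated virtual circuit from a colocation port to a cloud
# on-ramp, sized in real time, with no provider ticket required.
resp = requests.post(f"{API}/connections", headers=HEADERS, json={
    "name": "colo-to-cloud-prod",
    "a_end": {"port": "cage-42-port-1"},                  # colocation side
    "z_end": {"provider": "example-cloud", "region": "us-west"},
    "bandwidth_mbps": 1000,
})
resp.raise_for_status()
conn = resp.json()

# Later: resize the same connection on demand, or delete it to turn it down.
requests.patch(f"{API}/connections/{conn['id']}",
               headers=HEADERS, json={"bandwidth_mbps": 5000})
requests.delete(f"{API}/connections/{conn['id']}", headers=HEADERS)
```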

Interconnection Goes Beyond the Cloud

The first generation of interconnection solutions focused on delivering dedicated private connections to the top cloud providers. As interconnection and cloud environments evolved, it became easier to move and share workloads across clouds and data centers. Web portals allowed users to configure, manage, and troubleshoot their cloud connections.

Today a next generation of interconnection is rolling out in wholesale data centers that extends the connectivity platform beyond the cloud, giving data center customers more options for interconnection partners. The first of these partners: SaaS providers. New interconnection platforms allow enterprises to connect directly to applications such as web conferencing, help desk, customer service, and human resources. Enterprise customers receive a dedicated and secure connection to the application that is easier to manage and integrate. SaaS providers, in turn, gain a “direct connection” offering for their software that improves the availability and performance of their application service.

The second new category of interconnection partners is other enterprises. Interconnect platforms now cover the globe to connect wholesale data centers in dozens of countries and through hundreds of points-of-presence (PoPs). Any enterprise connected to the interconnect platform becomes a potential interconnection partner. For example, your company may partner with a data provider or analytics service to deliver a solution to your customers. The interconnect platform makes it easy to connect your hybrid cloud environment to your partner’s hybrid cloud environment. You can even use the interconnect portal to invite new partners to join the ecosystem.

What Next? A Hybrid of Hybrids.

It’s clear that the hybrid computing model combining data centers and clouds with a global, seamless, and secured network is the direction that corporate IT is heading. To support hybrid computing, wholesale data centers have evolved beyond space, power, telecommunications, and security. Wholesale data centers have become a critical infrastructure platform for both cloud providers and enterprises. Interconnection now becomes a core element in wholesale data center solutions bringing together clouds and enterprises into a flexible and scalable hybrid of hybrids.

Shedding Light on Dark Fiber for Data Centers

When evaluating your data center connectivity, there are many reasons to consider dark fiber, including cost control, flexibility, security, and scalability. To quickly understand the basics of fiber optics, see my blog post, “Tech Primer: Dark Fiber, Lit and Wavelengths.”

Importance of Dark Fiber for Data Centers

The number of internet-connected devices, collectively known as the Internet of Things (IoT), is expected to exceed 20 billion by the year 2020, according to a recent Gartner report.

Gartner report: IoT growth from 2014 to 2020

Likewise, cloud usage has been escalating at a similar rate, year over year. The reliance on cloud platforms such as Amazon’s AWS, Microsoft Azure, IBM SoftLayer, and Google Cloud also continues to skyrocket, as indicated by the cloud revenues in this report from Synergy Research Group.

These growth markets are driving enterprises and online businesses to a level of network dependence that is becoming hyper-critical.

Growth of Cloud Providers - Synergy Research Group Report

Connectivity is King

A loss of network connectivity or degraded performance across a network connection can cause more than lost revenue. In some environments, such as healthcare, public safety, and the military, poor network performance could even cost lives.

How vital is your network? When stability, along with latency, security, and bandwidth, is at the forefront of the decision maker’s mind, dark fiber may be the answer.

RagingWire understands that connectivity is of paramount value in a data center. As such, RagingWire has both partnered with connectivity providers and made a significant capital investment in telecommunications infrastructure to serve our customers’ unique needs.

For example, in our Sacramento data center campus, we partner with multiple carriers to provide lit and dark fiber services that deliver excellent network performance of ~2ms latency to San Francisco, and ~4ms latency to the South Bay – a location jam-packed with cloud providers.
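As a sanity check on those numbers: light in fiber travels at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond one way. The quick sketch below, which assumes the quoted figures are round-trip times, backs out the fiber route lengths they imply:

```python
# Light in fiber travels at roughly 2/3 of c, or about 200 km per
# millisecond one way. Assuming the quoted figures are round-trip times,
# this backs out the fiber route lengths they imply.
KM_PER_MS_ONE_WAY = 200.0

def implied_route_km(rtt_ms: float) -> float:
    """Fiber route length implied by a round-trip latency."""
    return rtt_ms / 2 * KM_PER_MS_ONE_WAY

print(f"~2 ms RTT implies ~{implied_route_km(2):.0f} km of fiber")  # ~200 km
print(f"~4 ms RTT implies ~{implied_route_km(4):.0f} km of fiber")  # ~400 km
# Fiber routes run much longer than straight-line distance, so these are
# plausible for Sacramento-to-Bay-Area paths.
```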

In our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center.

In Garland, Texas, within the Dallas-Fort Worth Metroplex, RagingWire has built a fiber network that connects its 1,000,000-square-foot data center campus to over 128 locations in the Dallas-Fort Worth market, including popular carrier hotels and cloud providers.

The Good, the Bad, and the Ugly of Dark Fiber

Dark fiber may be the right decision for many of today’s infrastructure connectivity needs, but make sure you go into it with full awareness of its advantages and disadvantages.

The Good:

  • Cost-control: Dark fiber costs the same whether you intend to use 1Gb, 10Gb, or 100Gb.
  • Flexibility: You may run any protocol and any service. You may even choose to install your own multiplexing equipment and slice the fiber into multiple channels (generally 16 channels, though current off-the-shelf hardware allows for up to 160), each usable for 1Gb, 10Gb, or 100Gb; see the capacity sketch after this list.
  • Security: Public access telecommunications networks generally have multiple access points at various nodes throughout the network, whereas dark fiber routes are accessible only at each of the two endpoints of the fiber run.
  • Scalability: Service may be upgraded as required by simply using higher performance equipment. Available bandwidth on dark fiber is limited by only three things: Physics, current technology, and your budget.
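To illustrate the flexibility and scalability points, here is the simple capacity arithmetic for a single dark fiber pair, using the channel counts and line rates cited in the list. Actual reach and per-channel rates depend on distance and the optics you install:

```python
# Capacity arithmetic for a single dark fiber pair, using the channel
# counts and line rates cited above. Actual reach and per-channel rates
# depend on distance and the optics you install.
channels_typical = 16   # common off-the-shelf mux
channels_max = 160      # dense, current off-the-shelf systems
rate_gbps = 100         # per-channel rate; could also be 1 or 10

print(f"Typical: {channels_typical * rate_gbps:,} Gb/s")  # 1,600 Gb/s
print(f"Maximum: {channels_max * rate_gbps:,} Gb/s")      # 16,000 Gb/s
```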

The Bad:

  • Cost-control: When your bandwidth requirements are 1Gb or less, lit services will usually be less expensive than dark fiber once you factor in the fiber lease and the capital outlay for hardware. Additionally, long-distance dark fiber may be more expensive than purchasing a wavelength. You’ll have to do the math and figure out which meets your needs and budget (a hypothetical comparison follows below).
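Here is a hypothetical version of that math. Every price below is a placeholder; substitute your actual fiber lease, hardware, and lit-service quotes:

```python
# Hypothetical 36-month comparison; every price here is a placeholder.
# Substitute your actual fiber lease, hardware, and lit-service quotes.
months = 36

# Option A: a lit 1 Gb/s service from a carrier
lit_monthly = 1_500
lit_total = lit_monthly * months                   # $54,000

# Option B: dark fiber lease plus your own optics at both ends
dark_monthly = 2_500
optics_capex = 20_000
dark_total = dark_monthly * months + optics_capex  # $110,000

print(f"Lit:  ${lit_total:,}")
print(f"Dark: ${dark_total:,}")
# At 1 Gb/s, lit wins. But the dark fiber cost stays flat if you later
# move to 10 or 100 Gb/s, while lit pricing scales with bandwidth.
```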

The Ugly:

  • Reliability: Your architect will need to design around the fact that there is no built-in fault-tolerance or connectivity failure protection. This will usually require the purchase of a second diverse dark fiber path between your two locations.
  • Scalability and cost-control: Dark fiber is point-to-point. Unlike many other carrier products available, dark fiber does not allow for multiple end-points on a network. It may be necessary to purchase multiple fiber paths for larger networks.

Summary

When considering dark fiber from fiber providers instead of lit fiber or carrier services from telecom providers, it is beneficial to map your unique IT connectivity needs with the strengths and weaknesses of dark fiber. This mapping exercise should help shed some light on the best connectivity options for your custom environment.

Connectivity Questions

Is your data center carrier neutral? Carrier neutrality is vital when choosing a data center. You want your data center to freely allow interconnectivity between all carriers and other colocation providers. This protects your interests and allows for future scale, plus it maximizes your flexibility.

What types of lit connectivity are available? It is less important to focus on the number of carriers in the campus; instead focus on whether the carriers you care about are available. Also ask if their direct competitors are available. This will be helpful for bidding – to keep your primary carrier as cost competitive as possible.

Is dark fiber available? If so, where does it go? Does the data center have a dark fiber product or a partnership? Where does it go and is the pricing competitive? Does the data center have lit connectivity options or a partnership?

 

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers. Colocation data centers have 20% of enterprise applications, and cloud providers have 9%.

Webinar: Is Data Center Colocation the Right Approach for the Enterprise?

What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers will get a double hit from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

Kelly Morgan, Vice President, Services at 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of the NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs, with over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?

Tech Primer: Dark Fiber, Lit and Wavelengths

Some IT colleagues have asked me, “What is dark fiber and what’s the difference between lit and wavelengths?” Let’s begin by understanding the basic concepts of fiber optics and the difference between dark and lit fiber.

Difference between dark fiber and lit fiber

Unlike wire, which passes electricity through a metal conductor, fiber optic cables transmit data as light through specialized glass or plastic, allowing it to travel great distances. Fiber that isn’t currently being used and has no light passing through it is called dark fiber.

Utilizing this fiber, telecommunications carriers can provide “wavelength” services, also known as “waves.” This works by splitting the light into wavelength groups called colors or “lambdas”. Carriers sell these wavelengths to separate customers, then recombine the colors and transmit them across the fiber. Lit fiber, therefore, is fiber that has been lit with light by a carrier.

Dark and lit fiber explained

To better understand lit fiber’s wavelengths, think of a rainbow where each color is a channel of light. Remember Mr. "ROY G. BIV" from middle school: Red, Orange, Yellow, Green, Blue, Indigo, and Violet?

Wavelengths essentially split a single fiber into channels. Unlike copper wire, which uses an electrical signal, fiber optic communications use either a laser or an LED operating at a very high frequency. Fiber optic cables can carry much higher frequencies than copper cables. Traditional bandwidth throughput (1Gb/10Gb/40Gb/100Gb) easily fits into a single color channel. Each fiber can be split into hundreds of colors, but a typical lit fiber is split into sixteen colors or lambdas.
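For a sense of just how high those frequencies are, the arithmetic below computes the carrier frequency of 1550 nm light, the center of the C-band used by most DWDM systems:

```python
# The carrier frequency of 1550 nm light, the center of the C-band used
# by most DWDM systems. Frequency = speed of light / wavelength.
c = 299_792_458          # speed of light in a vacuum, m/s
wavelength_m = 1550e-9   # 1550 nm

freq_thz = c / wavelength_m / 1e12
print(f"{freq_thz:.1f} THz")  # ~193.4 THz, far above copper's GHz range
```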

The business of fiber optics

In the late 1990s, there was an uptick in the number of carriers building out dark fiber networks. In addition, there was a high degree of inter-carrier trading, a practice where carriers swapped dark fiber assets with other carriers to gain a foothold in markets where they were not present or had limited capacity. Inter-carrier trades, coupled with mergers and acquisitions, allowed even the smallest of carriers to offer competitive data transport agreements around the world.

However, a significant portion of this built capacity remained untapped for years. Carriers wanted to protect long-term telecommunications revenues and were reluctant to enable competitors in their high-margin wavelength services market. In addition, carriers did not want to cannibalize their often oversubscribed and lucrative Ethernet services market with inexpensive high-capacity fiber. For these reasons, many carriers today still do not sell dedicated fiber assets directly to customers.

New demand for bandwidth

Technology needs have changed over time. Enterprises have become more dependent on cloud services, interconnected infrastructures have grown in number, and the Internet of Things (IoT) has grown massively. All of this requires a data communications infrastructure that can scale rapidly, predictably, and on demand.

To fulfill these needs, dark fiber providers have entered the market and are working to provide massive bandwidth, low latency, and high quality connectivity to the end customer in the form of raw glass: dark fiber.

For additional information on the pros and cons of dark fiber versus lit services from carriers, read my blog post titled, “Shedding Light on Dark Fiber for Data Centers.”

White Paper and Webinar from Data Center Knowledge: “Strategic, Financial, and Technical Considerations for Wholesale Colocation”

One of the more interesting developments in the data center industry over the last few years has been the emergence of the wholesale data center market.

Think of wholesale data centers in the context of the traditional retail data center market. Wholesale data centers offer dedicated, multi-megawatt deployments spread over large footprints of many thousands of square feet. These deployments are configured as secured vaults, private suites and cages, or entire buildings.

In fact, RagingWire has made a strategic shift into wholesale data center solutions as was reported in Data Center Knowledge in the article, “RagingWire Pursuing Cloud Providers with New Focus on Wholesale.”

White Paper: Strategic Considerations for Wholesale Data Center Buyers

While RagingWire has been a leader in wholesale data center solutions, we have not seen much substantive analysis published on the wholesale space. So we decided to underwrite a research project with Data Center Knowledge to study wholesale colocation and publish a white paper and webinar entitled, “Strategic, Financial, and Technical Considerations for Wholesale Colocation.” Both the white paper and webinar are available free of charge.

You can watch/listen to the webinar by clicking here.

You can download the white paper by clicking here.

What will you learn from the white paper and webinar?

From a strategic perspective, there are a number of new applications, such as video, social media, mobile, big data, and content that are leading to new computing paradigms where the design, scale, and location of data centers become increasingly important.

The financial considerations point out how sales tax abatement, scale economics, and targeting top data center markets as part of your data center portfolio can be advantageous with wholesale data centers. For example, one customer of ours said that for every dollar they spend on colocation, they spend $10 on computing equipment. Say you are spending $1 million on wholesale colocation, leading to $10 million in equipment purchases. At a 5% sales tax rate, that’s a savings of $500,000. And equipment is often refreshed every 3-5 years!
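Spelled out as arithmetic, using the figures above:

```python
# The sales tax example above, spelled out. The 10:1 equipment ratio is
# the one customer's figure quoted in the text.
colocation_spend = 1_000_000   # annual wholesale colocation spend, $
equipment_ratio = 10           # $10 of equipment per $1 of colocation
sales_tax_rate = 0.05          # 5% sales tax

equipment_spend = colocation_spend * equipment_ratio   # $10,000,000
tax_savings = equipment_spend * sales_tax_rate
print(f"${tax_savings:,.0f}")                          # $500,000
# With equipment refreshed every 3-5 years, the savings recur each cycle.
```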

Finally, the section on technical considerations studies power density, energy efficiency, PUE and ASHRAE standards, DCIM (Data Center Infrastructure Management), and maintenance. Each of these technical elements can have a significant impact on the performance/cost of your wholesale data center, and ultimately on your business.
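Of the metrics listed, PUE (Power Usage Effectiveness) is the easiest to spell out: total facility energy divided by the energy delivered to IT equipment. A quick illustration, with made-up meter readings:

```python
# PUE (Power Usage Effectiveness): total facility energy divided by the
# energy delivered to IT equipment. Readings below are illustrative.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(12_000, 10_000))  # 1.2 -> 20% overhead (cooling, power losses)
# 1.0 is the theoretical ideal; lower PUE means a more efficient facility.
```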

RagingWire is proud to support this important research and pleased to share it with the industry.

To Share, or Not to Share? The infrastructure dilemma of a wholesale data center customer

Enterprise customers searching for a data center to house 200kW or more of critical infrastructure have a wide range of wholesale colocation providers to choose from. Besides deciding on the physical location to house their infrastructure, these customers must ask colocation providers important questions about redundancy, power billing options, network connectivity, high-density availability, scalability, and services such as DCIM or remote hands. One of the biggest challenges many of these enterprise customers face is deciding between the infrastructure delivery options available in the industry.

Most colocation providers follow one of two delivery models for providing infrastructure to wholesale customers: shared or dedicated. The traditional wholesale colocation design is based on dedicated infrastructure, where the customer is allocated a fixed infrastructure that may be isolated from other customers. Dedicated infrastructure can be difficult and costly to scale beyond the initial allocation and usually comes with lower availability due to the small number of fault domains.

In a shared infrastructure colocation design, the customer is allocated a portion of the facility’s total infrastructure. Often, these shared elements are oversubscribed, relying on multiple customers not reaching or exceeding their committed usage at the same time. Due to oversubscription of power, shared facilities can be less expensive, but riskier.
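To see why oversubscription cuts cost but adds risk, consider this illustrative example. The capacity and commitment figures are hypothetical; real ratios vary by provider and are rarely published:

```python
# Illustrative oversubscription math for a shared facility. Capacity and
# commitment figures are hypothetical; real ratios are rarely published.
ups_capacity_kw = 10_000
tenant_commitments_kw = [2_000, 3_000, 2_500, 3_500]

ratio = sum(tenant_commitments_kw) / ups_capacity_kw
print(f"Oversubscription ratio: {ratio:.2f}:1")  # 1.10:1
# The facility stays within capacity only while the tenants' simultaneous
# peak draw stays under 10,000 kW; that coincidence risk is the trade-off
# for the lower price.
```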

So, which infrastructure delivery model is the best fit for a wholesale customer? Is there a third option?

Data Center: Shared vs. Dedicated Infrastructure

This white paper presents RagingWire’s distributed redundancy model, an enhancement of the shared and dedicated infrastructure models. The load is distributed at the UPS and generator level across the facility, using a patented 2N+2 electrical design. Because this scalable system does not oversubscribe infrastructure, customers are not at risk from the load or actions of other customers. The model also provides the highest level of provable availability in the industry, and it allows for a robust SLA for wholesale colocation: 100% availability with no exclusions for maintenance. The authors also compare the benefits and pitfalls of the three power delivery models and offer practical advice to businesses looking for wholesale colocation. Click here to download this white paper.

Earthquakes and Bay Area Data Centers: It’s Not If, but When

It’s been a long time since we’ve had a severe earthquake in the Bay Area, but today a 6.1-magnitude earthquake struck 6 miles southwest of Napa. If you’ve never experienced an earthquake, trust me, 6.1 is a big one and scary! We live in Napa, and our whole house was shaking at 3:20 AM!

As I help friends and family clean up today, I had a few thoughts to share with you. On a personal level, I’m thankful everyone is safe and accounted for. This earthquake had the potential to be much worse. Because the quake hit early in the morning, most people were home and asleep. Fortunately, the older buildings that were damaged were mostly unoccupied. All that we lost was stuff, and in the end, stuff doesn’t matter that much.

Bay Area Data Centers and Earthquake Risks

From a work perspective, it was a good reminder of why RagingWire treats natural disaster risk as a primary selection criterion when building our data centers. We call our Sacramento data center campus "The ROCK" for a reason: it’s built on bedrock and is far from the earthquake risk zone of Northern California. Even though we’re only driving distance from San Francisco (90 miles) and San Jose (120 miles), we are a world apart when it comes to natural disaster risks.

The last major earthquake in the Bay Area was the Loma Prieta quake in 1989, a magnitude 6.9 shaker that caused part of the Bay Bridge to collapse and interrupted the World Series. Back then, like today, Sacramento was unaffected, because Sacramento sits far from the active faults along the plate boundary and has essentially no earthquake risk.

In the 25 years since Loma Prieta, there have been many data centers built in the Bay Area. Memories are short, especially for IT people who weren’t here at the time. The Bay Area is a great place to live and work, but it isn’t an ideal place to put your critical IT infrastructure.

Remember, even if the data center building survives a major quake, the surrounding infrastructure is not resilient. Bridges, roads, power grids, fiber paths, and fuel suppliers are all vulnerable and have a direct impact on your operations and service availability. And there’s no question, another quake will hit the Bay Area.

It’s not a matter of IF, but WHEN.


From Silicon Valley to Data Center Alley

Since the birth of the integrated circuit in the 1950s, Silicon Valley has become the destination for high-tech entrepreneurship. The term "Silicon Valley," describing this region of Northern California, was coined in the 1970s and gained popularity in the 1980s with the emergence of the personal computer. In Silicon Valley, capacity and capability came together to create some of the greatest technical innovations in history.

This same dynamic of capacity meeting capability that happened decades ago in Silicon Valley is underway in Loudoun County, Virginia. We’re calling it Data Center Alley, the largest concentration of the best data centers in the world.

Capacity refers to the raw materials needed to create a thriving data center community: ample telecommunications; reliable, cost-effective utility power; and available land.

Hundreds of telecommunications providers include Data Center Alley as a link in their national and global networks. These networks interconnect using vast amounts of fiber installed in redundant loops throughout the area. The result is that 70% of the world’s internet traffic passes through Data Center Alley.

For utility power, we are fortunate to work with Dominion Virginia Power. Dominion recognized early on the potential for data centers in Northern Virginia and implemented a capacity model that ensured that sufficient power would be available to meet the needs of Data Center Alley at affordable prices. They worked closely with data center companies to configure their power delivery system so that power was highly reliable. Finally, Dominion has been a good steward of our energy infrastructure by maintaining an intelligent mix of available and environmentally sound energy sources.

The last raw material is land. Data centers need space in order to realize economies of scale. For example, our data center in Ashburn, Virginia is 150,000 square feet, and we purchased 78 acres of land in Ashburn to build a 1.5 million square foot data center campus. The land also needs to be located near telecommunications and utility supplies, with easy access.

Capability refers to the people, government policies, and culture that promote building great data centers and growing the data center industry.

The data center industry is a highly specialized field that requires deep expertise in engineering, design, construction, and operations. Much of this expertise comes from on-the-job experience. Data Center Alley has more than 40 data centers which support an outstanding talent pool of data center experts.

Government policies have been instrumental in the development of Data Center Alley. Virginia is one of the most pro-business states in the U.S., and the Loudoun County Board of Supervisors and Department of Economic Development are personally involved in helping data center companies be successful. For example, RagingWire customers can qualify for a Virginia sales tax exemption which could save them millions of dollars on purchases of computer equipment and other related data center infrastructure components.

Lastly, the culture in Data Center Alley is all about building. We put theory into practice and scale it. The result is that there are currently eight million square feet of data center space already built or in development in Data Center Alley.

Data Center Alley is starting to get some recognition. If you want to learn more, watch a segment on Data Center Alley from the Sunday morning news program “Government Matters.”

Welcome to Next-Generation DCIM (Data Center Infrastructure Management)

We are in the era of on-demand data delivery. The proliferation of cloud computing and information sharing has created a sort of data center boom. There are more users, more devices and a lot more services being delivered from the modern data center environment.

In fact, entire organizations and applications are being born directly within the cloud. To really put this trend in perspective, consider this: a 2013 Cisco Cloud Index report indicated that cloud traffic crossed the zettabyte (10^21 bytes) threshold in 2012. Furthermore, by 2016, nearly two-thirds of all data center traffic will be based in the cloud.


Efficient computing, converged infrastructures, multi-tenant platforms, and next-generation management are the key design points for the modern data center environment. Because we are placing more services into our data centers – there is greater need for visibility into multiple aspects of daily operations.

The picture of the next-generation management platform is that of a truly unified management plane.

So what does that really mean? A unified data center infrastructure management platform removes “physical” barriers of a typical control system. The reason for this necessary shift can be seen in the job that the modern data center is now tasked with performing and the levels of monitoring that must occur. Next-generation DCIM (Data Center Infrastructure Management) will remove logical and physical walls and unify the entire data center control process.

"Everything-as-a-Service" Controls. The modern data center is now considered to be at the heart of "The Internet of Everything." This means that more services, information, and resources are being delivered through the data center than ever before. As the data center continues to add on new functions and delivery models – administrators will need to have visibility into everything that is being delivered. Whether it’s monitoring SLAs or a specific as-a-service offering – next-generation data center management must have integration into the whole infrastructure.

Cluster-Ready Real-Time Monitoring. Data centers are now truly distributed: nodes are interconnected, sharing resources, and delivering content to millions of end-users. With advancements in modern data center infrastructure, bandwidth, and optimized computing, data center architecture has gone from singular units to truly clustered entities. This level of connectivity also requires new levels of distributed management and control. Not only is this vital for proper resource utilization and load-balancing; cluster-ready monitoring creates greater data center resiliency. By having complete visibility into all data center operations across the board, administrators are able to make better, proactive decisions.

Big Data Management Engines. The increase of cloud services has created an influx of new types of offerings to the end-users. IT consumerization, BYOD and mobility have all become very hot topics around numerous large organizations. As cloud continues to grow – more users will be utilizing these resources. With an increase of users comes the direct increase of data. This is where next-generation data center management comes in. Big data engines will sit both within the cloud and on the edge of the cloud network – also known as the Fog. There, these data centers will have direct tie-ins into big data analytics engines running on virtual and physical platforms. Because this data and the information gathering process is so crucial – complete visibility into the entire process is vital. This means that large data centers acting as hubs for big data analytics will have control visibility into storage, networking, compute, infrastructure and more.

Logical and Physical Management. The days of bare metal accumulation are over. Modern data centers are highly virtualized and highly efficient. New data centers are being built around optimal power, cooling, and resource control mechanisms. On top of that sit highly efficient, high-density servers capable of great levels of multi-tenancy. Although some siloed monitoring operations may still occur, the overall infrastructure must be unified in terms of management and control. This means having granular visibility into core data center operations, which includes:

  • Power
  • Cooling
  • Aisle/Control
  • Alerts/Monitoring
  • Advanced Environmental Monitoring
  • Proactive Alerting and Remediation
  • Security (physical, structural, rack, etc.)

On top of that, data center administrators must also be able to see the workloads which are running on top of this modern infrastructure. This means understanding how virtual resources are being utilized and how this is impacting the underlying data center environment.
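As a thought experiment, here is a minimal sketch of what that unified view might look like in code: one schema for readings that would otherwise live in separate power, cooling, and security tools. All names, metrics, and thresholds are hypothetical, not any real DCIM product’s API:

```python
# A minimal sketch of the "unified management plane" idea: one schema for
# readings that would otherwise live in siloed power, cooling, and security
# tools. All names and thresholds here are hypothetical, not a real DCIM API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    facility: str        # e.g. "sacramento-ca1"
    subsystem: str       # "power" | "cooling" | "environment" | "security"
    metric: str          # e.g. "ups_load_kw", "cold_aisle_temp_c"
    value: float
    timestamp: datetime

def over_threshold(reading: Reading, limits: dict[str, float]) -> bool:
    """Flag a reading that should raise a proactive alert."""
    limit = limits.get(reading.metric)
    return limit is not None and reading.value > limit

limits = {"cold_aisle_temp_c": 27.0, "ups_load_kw": 900.0}
r = Reading("sacramento-ca1", "cooling", "cold_aisle_temp_c",
            28.4, datetime.now(timezone.utc))
if over_threshold(r, limits):
    print(f"ALERT {r.facility}/{r.subsystem}: {r.metric}={r.value}")
```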

Cloud Orchestration and Automation. The cloud computing boom also created a big need around better workload control and automation. This actually spans both the hardware and software layer. From a next-generation management perspective, there needs to be the ability to create hardware and software profiles which can then be applied to physical and virtual resources for deployment. Finally, when this approach is tied into intelligent load-balancing solutions, you have a truly end-to-end cloud orchestration and automation solution. Now, although we’re not quite at those levels yet, next-generation data center management solutions are directly integrating workload automation options. It is becoming easier to deploy hardware and simpler to provision workloads. All of this means faster content and data delivery to the corporation and the end-user. Next-generation data center management will be able to have plug-ins into the physical and logical layer and facilitate new levels of cloud automation and orchestration. 

Mobile Data Center Visibility. It’s not like your data center will just get up and move around, at least not in most cases. However, mobile visibility into your data center has become a necessity. This means controlling some data center functions from mobile devices and delivering direct web-based controls as well. Furthermore, because the data center is becoming more interconnected, there will be more functions and roles to control. Various types of administrators and managers will require specific controls within single and clustered data center models. Role-based administration and management will evolve from the standard PC to true mobility. All of this will translate to a more efficient engineering, administration, and management layer across your entire data center infrastructure.

Single-Pane of Glass – Complete Control. At the end of the day it all comes down to how well you can manage your data center. There’s no arguing that there will be more requirements around the future data center platform. As the number of services that the data center delivers continues to increase, complete visibility will become even more important. There will be more plug-ins, monitoring tools, and clustered nodes to look at all while trying to control resiliency. The next-generation monitoring UI and control platforms must be intuitive, easy to understand, simple to navigate and allow the administrator to truly optimize the entire data center infrastructure.

Tomorrow’s data centers must also have tomorrow’s visibility and control. The nature of the data center is changing. It’s now the hub that delivers core services, user applications, and entire platforms. IT consumerization and the increase in Internet utilization are partially the reason for the data center boom. However, the natural progression of technology has taken our entire infrastructure into a highly resilient and agile cloud platform. The ability to stay connected and have massive content delivered directly to your end-point is truly impressive.

As the business evolves, the data center infrastructure will have to follow suit. Already business initiatives are directly aligned with the capabilities of the respective IT department. This correlation will only continue to increase as the data center becomes the home of everything. And with the next-generation data center - there must be next-generation management and control.

Making Every (Inter)Connection Count

It’s been an interesting week or two of data center news! “London Internet Exchange takes space in EvoSwitch.”  “Digital Realty announces Open Internet Exchange.” “Open-IX movement goes public.”

So what is happening here? What is the problem that is solved with “open” internet exchanges?

As a frequent North American Network Operators Group (NANOG) meeting participant, I’ve heard growing angst in the internet peering ranks about perceived points of failure presented by having single buildings in major internet hubs (e.g. New York, Ashburn, London, Amsterdam) house commercial internet exchanges. Remember Hurricane Sandy? Beyond geography, questions were raised over the treatment of telecommunications carriers and the manner in which interconnections are made as opposed to the European interconnection model (member-driven, multi-site, public).

The biggest problem Open-IX is trying to solve, however, has nothing to do with geographic diversity or carrier treatment. It’s simple economics. In the United States, the major Internet exchanges are concentrated in the hands of a few data center companies and those companies charge carriers a premium for the right to participate in the exchange. Open-IX lays this case out in their framework document as “The Interconnect Problem.”

RagingWire operates, from an interconnection point of view, in line with open internet exchange principles. All of the company’s data center facilities are carrier neutral.

RagingWire Data Centers - Carrier Meet Me Room

Carriers built into our data center aren’t our customers; they’re our partners in bringing highly available connectivity to our customers. Our network engineers are dedicated to building trusted, close relationships with all our carrier partners to make the ordering and provisioning process as easy and seamless as possible.

Open-IX is still in its infancy, but we look forward to continuing our long relationship with the participants. We share the desire to continually improve service and reduce costs for our customers. RagingWire is the nation’s leading data center colocation provider, focused on delivering 100% availability of power and cooling with easy access to internet connectivity and the industry’s best customer service. It’s all part of our commitment to making every connection count.

