Infrastructure

Be Ready to Outsmart the Inevitable

We can’t predict earthquakes.

I bet a lot of people don’t know that. With all our modern technology, it stands to reason that we must have some kind of earthquake warning system, like we do for tsunamis. However, in an article titled “Can you predict earthquakes?”, the United States Geological Survey (USGS) clearly says otherwise: “Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the foreseeable future.”

Okay then. They also mention that there is a 100% chance of an earthquake somewhere in the world today, given that several million earthquakes occur annually. That’s pretty scary stuff.

So we can’t see earthquakes coming, and we certainly can’t stop them. But there is something we can do. Be ready for the inevitable.

This is especially true if you happen to do business in one of the more earthquake-prone areas of the world, such as Silicon Valley in California. There will definitely be an impact if a major earthquake hits that area. The question is: how much of an impact will your business feel? For companies that will house their mission-critical computer equipment in RagingWire’s new Silicon Valley SV1 Data Center, the answer is probably “no impact”.

Why are we so sure? Our company has outsmarted devastating earthquakes before.

In 2011, the magnitude 9.1 Great East Japan Earthquake struck, damaging more than one million buildings and causing property damage totaling $235 billion, making it the costliest natural disaster on record. Yet the Tokyo data centers of NTT Communications (RagingWire’s parent company) withstood this earthquake with no damage. How?

NTT’s data centers use a base isolation system that incorporates four types of vibration-absorbing devices. This technology reduces seismic impact by up to 80% – more than enough cushioning to protect data vaults and the IT equipment within them.

RagingWire’s four-story Silicon Valley SV1 Data Center will be the first facility in Santa Clara to use the same proven earthquake-absorbing technology as NTT. Even better, SV1 will also be seismically braced on all floors to further dissipate shaking.

This all adds up to give you peace of mind that your essential equipment will be safe when the next earthquake strikes – and it will. Check out the SV1 Data Center highlights video below and contact us to learn more.

RagingWire Silicon Valley SV1 Data Center, Santa Clara, CA

Why are Data Center Designs Changing Right Now?

Data center designs are changing significantly, and for good reason.

Hyperscale cloud companies and large enterprises are driving new design requirements that include less redundancy, more space, lower costs, and shorter construction times.

Thus, we are at a key inflection point in the history of data centers. The market is changing, and colocation data centers must respond with new ways to provide everything that customers need, and nothing they don’t.

Data Center Knowledge: New Data Center Designs for Hyperscale Cloud and the Enterprise

RagingWire is proud to collaborate with Data Center Knowledge on a webinar and a white paper, both titled “New Data Center Designs for Hyperscale Cloud and the Enterprise”, that explain how this new design will be executed and how customers will benefit.

Click here to listen to the webinar, in which RagingWire VP of Product Management Bruno Berti shares how data center customers are benefiting from this new design, and RagingWire VP of Critical Facilities Engineering and Design Bob Woolley explains how data centers are delivering it. Bruno and Bob are preceded on the webinar by veteran technology journalist Scott Fulton, who gives an enthusiastic and entertaining retrospective of how we arrived at this point in the evolution of data center design.

Click here to download the white paper, which shows how this new design addresses the facility, power, cooling, telecommunications and security requirements of hyperscale and enterprise applications, while also lowering costs and improving overall data center performance and availability.

How can we improve safety in data centers?

While participating in a recent roundtable discussion among data center industry leaders, I was shocked to hear an estimate that at least 50% of data centers allow energized work. And it gets worse: depending on how you define energized work, that figure could be even higher.

Data Center Safety

Simply put, data center technicians who work on energized equipment put themselves and everyone around them at risk. According to Industrial Safety and Hygiene News, between 500 and 700 people die every year from arc flash incidents, and more than 2,000 people with arc flash injuries are treated annually in burn centers. The average cost of medical treatment for an arc flash injury is $1.5 million, with average litigation costs between $10 million and $15 million.

These arc flash accidents are absolutely devastating, and they are preventable. That’s why I believe that data center executives must step up and take a stand to prohibit working on energized equipment. The culture needs to change.

But how do we convince more data center operators to adopt a culture of compliance with current safety rules? What are the steps toward a safer workplace?

I delved into this topic in an article titled “It’s Time to Upgrade Data Center Safety” published recently on Data Center Frontier. Click here to give it a read.

Why Interconnection Matters to Wholesale Colocation

Providing large-scale, secure space and power has been the focus of wholesale data center providers for many years. Until recently, innovation in wholesale data center solutions has centered on developing data center designs that improve power resiliency and building efficiency. The result is that wholesale customers today receive data centers that are more flexible, scalable, and cost-effective than ever before.

Data Center Interconnection - RagingWire Data Centers

The cloud and surge in wholesale data center demand are changing the industry, but not in the way that many expected. Interconnection between wholesale data centers and public cloud environments is becoming a key decision criterion rather than an afterthought or a “nice-to-have”. Interconnection has become a major component of wholesale data center colocation strategy.

The Hybrid Cloud Changes Everything.

The demand for interconnection is driven by changing market dynamics, specifically the rise of the hybrid cloud. Over the past five years, enterprise organizations have successfully adopted the public cloud as a complement to, rather than a replacement for, their enterprise or colocation environments. The term “hybrid cloud” came from enterprises’ desire to utilize a combination of clouds, in-house data centers, and external colocation facilities to support their IT environments.

The challenge with a hybrid environment arises from the need for the data centers and cloud providers to interconnect and share data securely, with low network latency, as part of one extended environment. Within a data center, servers and storage can be directly connected. A hybrid cloud environment doesn’t have the luxury of short distances, so bring on the innovation.

From Carriers to Interconnection

The first interconnection solutions were provided by telecommunications service providers. Dark fiber, lit fiber, and Internet access were all leveraged to interconnect hybrid cloud environments. However, as cloud deployments grew and matured, these network solutions became difficult to secure and manage.

Next, data center providers began to offer “direct” connections within the data center, bringing cloud and colocation environments into one location so that interconnections could be provided through fiber cross-connects. This approach, however, restricts where companies can place their colocation environments and limits the types of cloud environments to which they can connect.

The newest solutions are being introduced by interconnection platform providers, which leverage the concepts of network exchange points and software defined networking (SDN) to provide a flexible solution that can be self-managed. These innovative solutions solve many key network challenges introduced by hybrid clouds.

The Keys to Interconnection – Dedicated, secured, and many-to-many

Beyond the simple one-to-one interconnection of a cloud environment to a wholesale colocation environment, an interconnection platform is designed to allow many-to-many connections in a dedicated and secured fashion. With an interconnection platform, wholesale colocation environments can be connected to multiple cloud providers (multi-cloud) and multiple cloud locations (availability zones). This design opens the door to unique options for companies to architect their IT environments to optimize resiliency and availability while minimizing cost and complexity. The SDN aspect of the interconnection platform allows customers to manage their connections in real time without involving the provider: they can turn up, turn down, and change individual connections at any time, usually through a simple web-based user interface.
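
To make that concrete, here is a minimal sketch of what self-service provisioning against such a platform’s API might look like. The endpoint, field names, and values below are illustrative assumptions for a hypothetical platform, not any specific provider’s actual interface.

```python
import requests

# Hypothetical interconnection platform API -- the endpoint and schema
# are illustrative assumptions, not a real provider's interface.
API = "https://api.example-interconnect.net/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def create_connection(colo_port_id, cloud_provider, region, mbps):
    """Provision a dedicated virtual connection from a colocation port
    to a cloud on-ramp, in real time, without provider involvement."""
    payload = {
        "source_port": colo_port_id,      # physical port in the wholesale vault
        "destination": {
            "type": "cloud",
            "provider": cloud_provider,   # multi-cloud: call once per provider
            "region": region,             # availability zone / on-ramp location
        },
        "bandwidth_mbps": mbps,           # turn up/down by changing this value
    }
    resp = requests.post(f"{API}/connections", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["connection_id"]

def resize_connection(connection_id, mbps):
    """Change bandwidth on an existing connection -- the SDN layer applies
    the change without a truck roll or a support ticket."""
    resp = requests.patch(f"{API}/connections/{connection_id}",
                          json={"bandwidth_mbps": mbps}, headers=HEADERS)
    resp.raise_for_status()

# Many-to-many: one colocation port fanned out to multiple clouds/regions.
# conn_a = create_connection("port-123", "aws", "us-west-1", 1000)
# conn_b = create_connection("port-123", "azure", "west-us", 500)
```

The point to notice is that bandwidth is just a field in a request: turning a connection up or down becomes an API call rather than a provisioning project.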

Interconnection Goes Beyond the Cloud

The first generation of interconnection solutions focused on delivering dedicated private connections to the top cloud providers. As interconnection and cloud environments evolved, it became easier to move and share workloads across clouds and data centers. Web portals allowed users to configure, manage, and troubleshoot their cloud connections.

Today, a next generation of interconnection is rolling out in wholesale data centers, extending the connectivity platform beyond the cloud to give data center customers more options for interconnection partners. The first of these partners: SaaS providers. New interconnection platforms allow enterprises to connect directly to applications such as web conferencing, help desk, customer service, and human resources. Enterprise customers receive a dedicated, secure connection to the application that is easier to manage and integrate. SaaS providers, in turn, gain a “direct connection” offering for their software that improves the availability and performance of their application service.

The second new category of interconnection partners is other enterprises. Interconnect platforms now cover the globe to connect wholesale data centers in dozens of countries and through hundreds of points-of-presence (PoPs). Any enterprise connected to the interconnect platform becomes a potential interconnection partner. For example, your company may partner with a data provider or analytics service to deliver a solution to your customers. The interconnect platform makes it easy to connect your hybrid cloud environment to your partner’s hybrid cloud environment. You can even use the interconnect portal to invite new partners to join the ecosystem.

What Next? A Hybrid of Hybrids.

It’s clear that corporate IT is heading toward a hybrid computing model that combines data centers and clouds over a global, seamless, and secure network. To support hybrid computing, wholesale data centers have evolved beyond space, power, telecommunications, and security; they have become a critical infrastructure platform for both cloud providers and enterprises. Interconnection is now a core element of wholesale data center solutions, bringing clouds and enterprises together into a flexible and scalable hybrid of hybrids.

Shedding Light on Dark Fiber for Data Centers

When evaluating your data center connectivity, there are many reasons to consider dark fiber, including cost control, flexibility, security, and scalability. For a quick introduction to the basics of fiber optics, see my blog post, “Tech Primer: Dark Fiber, Lit and Wavelengths.”

Importance of Dark Fiber for Data Centers

The number of internet-connected devices known as the Internet of Things (IoT) is expected to exceed 20 billion by 2020, according to a recent Gartner report.

Gartner report - IoT growth from 2014 - 2020

Likewise, cloud usage has been escalating at a similar rate, year over year. The reliance on cloud platforms such as Amazon’s AWS, Microsoft Azure, IBM SoftLayer, and Google Cloud also continues to skyrocket, as indicated by cloud revenues seen in this report from Synergy Research Group.

These growth markets are driving enterprises and online businesses to a level of network dependence that is becoming hyper-critical.

Growth of Cloud Providers - Synergy Research Group Report

Connectivity is King

A loss of network connectivity or degraded performance across a network connection can cause more than lost revenue. In environments like healthcare, public safety, and the military, poor network performance could even cost lives.

How vital is your network? When stability, latency, security, and bandwidth are at the forefront of the decision maker’s mind, dark fiber may be the answer.

RagingWire understands that connectivity is of paramount value in a data center. As such, RagingWire has both partnered with connectivity providers and made significant capital investments in telecommunications infrastructure to serve our customers’ unique needs.

For example, in our Sacramento data center campus, we partner with multiple carriers to provide lit and dark fiber services that deliver excellent network performance: ~2ms latency to San Francisco and ~4ms latency to the South Bay – a location jam-packed with cloud providers.

In our Ashburn, Virginia data center campus, we offer both lit and dark fiber services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center.

In Garland, Texas, within the Dallas-Fort Worth Metroplex, RagingWire has built a fiber network that connects its 1,000,000 square foot data center campus to over 128 locations in the Dallas-Fort Worth market, including popular carrier hotels and cloud providers.
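
As a sanity check on latency figures like these, you can estimate the delay imposed by fiber distance alone: light in glass travels at roughly two-thirds of its vacuum speed, or about 5 microseconds per kilometer. A quick back-of-the-envelope sketch follows; the route distances are assumptions, since real fiber paths run longer than straight-line mileage.

```python
# Rough fiber latency estimate: light in glass (refractive index ~1.5)
# travels at ~200,000 km/s, i.e. ~5 microseconds per km, one way.
US_PER_KM = 5.0

def round_trip_ms(route_km):
    """One-way propagation delay x2, in milliseconds. Ignores equipment
    and serialization delay, which add to the real-world total."""
    return 2 * route_km * US_PER_KM / 1000.0

# Assumed route distances (fiber paths exceed straight-line mileage):
print(round_trip_ms(200))  # ~2.0 ms -- e.g. Sacramento to San Francisco
print(round_trip_ms(400))  # ~4.0 ms -- e.g. Sacramento to the South Bay
```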

The Good, the Bad, and the Ugly of Dark Fiber

Dark fiber may be the right decision for many of today’s infrastructure connectivity needs, but make sure you go into it with full awareness of its advantages and disadvantages.

The Good:

  • Cost-control: Dark fiber costs the same whether you intend to use 1Gb, 10Gb, or 100Gb.
  • Flexibility: You may run any protocol and any service. You may even choose to install your own multiplexing equipment and slice the fiber into multiple channels (generally 16 channels, but current off-the-shelf hardware allows for up to 160), each usable for 1Gb, 10Gb, or 100Gb.
  • Security: Public access telecommunications networks generally have multiple access points at various nodes throughout the network, whereas dark fiber routes are accessible only at each of the two endpoints of the fiber run.
  • Scalability: Service may be upgraded as required by simply using higher-performance equipment. Available bandwidth on dark fiber is limited by only three things: physics, current technology, and your budget.

The Bad:

  • Cost-control: When your bandwidth requirements are 1Gb or less, lit services will usually be less expensive than dark fiber once you factor in the fiber lease and the capital outlay for hardware. Additionally, long-distance dark fiber may be more expensive than purchasing a wavelength. You’ll have to do the math and figure out which meets your needs and budget – see the sketch below.
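
As a rough illustration of that math, here is a simple break-even sketch comparing a lit wavelength service against leasing dark fiber plus your own optics. Every price below is a placeholder assumption to be replaced with real quotes.

```python
# Hypothetical monthly cost comparison: lit service vs. dark fiber.
# All prices are placeholder assumptions -- substitute real quotes.
LIT_10G_MONTHLY = 3_000   # managed 10Gb wavelength, per month
FIBER_LEASE     = 2_000   # dark fiber pair lease, per month
OPTICS_CAPEX    = 60_000  # your own transceivers/mux hardware
AMORTIZE_MONTHS = 36      # hardware depreciation period

def dark_fiber_monthly(channels=1):
    """Dark fiber cost is flat regardless of how many channels you light."""
    return FIBER_LEASE + OPTICS_CAPEX / AMORTIZE_MONTHS

def lit_monthly(channels=1):
    """Lit services are typically priced per wavelength/bandwidth tier."""
    return LIT_10G_MONTHLY * channels

for n in (1, 2, 4, 8):
    print(n, lit_monthly(n), round(dark_fiber_monthly(n)))
# With these placeholder numbers, lit wins at one channel
# (3,000 vs ~3,667/month), but dark fiber wins from two channels up.
```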

The Ugly:

  • Reliability: Your architect will need to design around the fact that dark fiber has no built-in fault tolerance or connectivity failure protection. This will usually require purchasing a second, diverse dark fiber path between your two locations.
  • Scalability and cost-control: Dark fiber is point-to-point. Unlike many other carrier products, it does not allow for multiple endpoints on a network, so it may be necessary to purchase multiple fiber paths for larger networks.

Summary

When weighing dark fiber from fiber providers against lit fiber or carrier services from telecom providers, it is worth mapping your unique IT connectivity needs to the strengths and weaknesses of dark fiber. This mapping exercise should help shed some light on the best connectivity options for your environment.

Connectivity Questions

Is your data center carrier neutral? Carrier neutrality is vital when choosing a data center. You want your data center to freely allow interconnectivity between all carriers and other colocation providers. This protects your interests, allows for future scale, and maximizes your flexibility.

What types of lit connectivity are available? Focus less on the number of carriers in the campus and more on whether the carriers you care about are available. Also ask whether their direct competitors are available – helpful leverage for keeping your primary carrier as cost-competitive as possible.

Is dark fiber available? If so, where does it go? Does the data center offer a dark fiber product or a partnership, and is the pricing competitive? Does the data center also have lit connectivity options or a partnership?


WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers, colocation data centers have 20% of enterprise applications, and cloud providers have 9%.

Webinar - Is Data Center Colocation the Right Approach for the Enterprise

What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers will get a double hit from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

Kelly Morgan, Vice President of Services at 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs and has over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?

Tech Primer: Dark Fiber, Lit and Wavelengths

Some IT colleagues have asked me, “What is dark fiber and what’s the difference between lit and wavelengths?” Let’s begin by understanding the basic concepts of fiber optics and the difference between dark and lit fiber.

Difference between dark fiber and lit fiber

Unlike wire, which passes electricity through a metal conductor, fiber optic cables use specialized glass or plastic that allows data to be transmitted over great distances as pulses of light. Fiber that isn’t currently in use, with no light passing through it, is called dark fiber.

Utilizing this fiber, telecommunications carriers can provide “wavelength” services, also known as “waves.” This works by splitting the light into wavelength groups called colors or “lambdas.” Carriers sell these wavelengths to separate customers, then recombine the colors and transmit them across the fiber. Lit fiber, therefore, is fiber that a carrier has lit with light.

Dark and lit fiber explained

To better understand lit fiber’s wavelengths, think of a rainbow where each color is a channel of light. Remember Mr. "ROY G. BIV" from middle school – Red, Orange, Yellow, Green, Blue, Indigo, and Violet?

Wavelengths essentially split a single fiber into channels. Unlike copper wire, which uses an electrical signal, fiber optic communications use either a laser or an LED operating at a very high frequency, and fiber optic cables can carry much higher frequencies than copper cables. Traditional bandwidth throughput (1Gb/10Gb/40Gb/100Gb) fits easily into a single color channel. Each fiber can be split into hundreds of colors, but a typical lit fiber is split into sixteen colors, or lambdas.
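
To put numbers on that, here is a short sketch of aggregate capacity per fiber as a function of channel count and per-channel rate. The sixteen-channel figure is the typical split mentioned above; 160 channels reflects the off-the-shelf upper end noted earlier, and the rates are common optical line rates.

```python
# Aggregate capacity of one fiber = channels (lambdas) x per-channel rate.
def fiber_capacity_gbps(channels, gbps_per_channel):
    return channels * gbps_per_channel

print(fiber_capacity_gbps(16, 10))    # typical lit fiber: 16 x 10Gb = 160 Gb/s
print(fiber_capacity_gbps(16, 100))   # same mux, faster optics      = 1.6 Tb/s
print(fiber_capacity_gbps(160, 100))  # dense WDM upper end           = 16 Tb/s
```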

The business of fiber optics

In the late 1990s, there was an uptick in the number of carriers building out dark fiber networks. There was also a high degree of inter-carrier trading – a practice in which carriers swapped dark fiber assets with other carriers to gain a foothold in markets where they had no presence or limited capacity. Inter-carrier trades, coupled with mergers and acquisitions, allowed even the smallest carriers to offer competitive data transport agreements around the world.

However, a significant portion of this built capacity remained untapped for years. Carriers wanted to avoid long-term losses in telecommunications revenue and were reluctant to enable competitors in their high-margin wavelength services market. They also did not want to cannibalize their often oversubscribed and lucrative Ethernet services market with inexpensive high-capacity fiber. For these reasons, many carriers today still do not sell dedicated fiber assets directly to customers.

New demand for bandwidth

Technology needs have changed over time. Growing enterprise dependence on cloud services, the spread of interconnected infrastructures, and massive growth in the Internet of Things (IoT) all require a data communications infrastructure that can scale rapidly, predictably, and on demand.

To fulfill these needs, dark fiber providers have entered the market and are working to provide massive bandwidth, low latency, and high quality connectivity to the end customer in the form of raw glass: dark fiber.

For additional information on the pros and cons of dark fiber versus lit services from carriers, read my blog post titled, “Shedding Light on Dark Fiber for Data Centers.”

White Paper and Webinar from Data Center Knowledge: “Strategic, Financial, and Technical Considerations for Wholesale Colocation”

One of the more interesting developments in the data center industry over the last few years has been the emergence of the wholesale data center market.

Think of wholesale data centers in the context of the traditional retail data center market. Wholesale data centers offer dedicated, multi-megawatt deployments spread over large footprints of many thousands of square feet. These deployments are configured as secured vaults, private suites and cages, or entire buildings.

In fact, RagingWire has made a strategic shift into wholesale data center solutions as was reported in Data Center Knowledge in the article, “RagingWire Pursuing Cloud Providers with New Focus on Wholesale.”

White Paper - Strategic Considerations for Wholesale Data Center Buyers

While RagingWire has been a leader in wholesale data center solutions, we have not seen very much substantive analysis published on the wholesale space. So we decided to underwrite a research project with Data Center Knowledge to study wholesale colocation and publish a white paper and webinar entitled, “Strategic, Financial, and Technical Considerations for Wholesale Colocation.” Both the white paper and webinar are available free of charge.

You can watch/listen to the webinar by clicking here.

You can download the white paper by clicking here.

What will you learn from the white paper and webinar?

From a strategic perspective, there are a number of new applications, such as video, social media, mobile, big data, and content that are leading to new computing paradigms where the design, scale, and location of data centers become increasingly important.

The financial considerations point out how sales tax abatement, scale economics, and targeting top data center markets as part of your data center portfolio can be advantageous with wholesale data centers. For example, one customer of ours said that for every dollar they spend on colocation, they spend $10 on computing equipment. Say you are spending $1 million on wholesale colocation, leading to $10 million in equipment purchases. At 5% sales tax, that’s a savings of $500,000. And equipment is often refreshed every 3-5 years!
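
Here is that arithmetic spelled out, with the refresh cycle folded in. The spend ratio, tax rate, and refresh period are the illustrative figures quoted above, not a forecast for any particular buyer.

```python
# Sales tax abatement savings on colocation-driven equipment purchases.
colo_spend      = 1_000_000   # annual wholesale colocation spend
equipment_ratio = 10          # $10 of equipment per $1 of colocation
sales_tax_rate  = 0.05        # 5% sales tax
refresh_years   = 4           # hardware refreshed every 3-5 years

equipment_spend = colo_spend * equipment_ratio       # $10,000,000
savings_per_buy = equipment_spend * sales_tax_rate   # $500,000
print(savings_per_buy)

# Over a decade, each refresh repeats the purchase -- and the savings:
cycles = 10 // refresh_years + 1                     # initial buy + refreshes
print(savings_per_buy * cycles)                      # ~$1.5M over ten years
```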

Finally, the section on technical considerations studies power density, energy efficiency, PUE and ASHRAE standards, DCIM (Data Center Infrastructure Management), and maintenance. Each of these technical elements can have a significant impact on the performance/cost of your wholesale data center, and ultimately on your business.

RagingWire is proud to support this important research and pleased to share it with the industry.

To Share, or Not to Share? The infrastructure dilemma of a wholesale data center customer

Enterprise customers searching for a data center to house 200kW or more of critical infrastructure have a wide range of wholesale colocation providers to choose from. Besides deciding on the physical location, these customers must ask colocation providers important questions about redundancy, power billing options, network connectivity, high-density availability, scalability, and services such as DCIM or remote hands. One of the biggest challenges many of these enterprise customers face is choosing between the infrastructure delivery options available in the industry.

Most colocation providers follow one of two delivery models for providing infrastructure to wholesale customers: shared or dedicated. The traditional wholesale colocation design is based on dedicated infrastructure, where the customer is allocated a fixed infrastructure that may be isolated from other customers. Dedicated infrastructure can be difficult and costly to scale beyond the initial allocation, and it usually comes with lower availability due to the small number of fault domains.

In a shared infrastructure colocation design, the customer is allocated a portion of the facility’s total infrastructure. Often, these shared elements are oversubscribed, relying on multiple customers not all reaching or exceeding their allocated usage at the same time. Due to this oversubscription of power, shared facilities can be less expensive, but riskier.
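
To see why oversubscription adds risk, consider a toy simulation: if each customer independently hits full draw some fraction of the time, the chance that aggregate demand exceeds the shared plant’s capacity is small but very real. All numbers below are illustrative assumptions, not measurements of any facility.

```python
import random

# Toy model of an oversubscribed shared facility. All numbers illustrative.
random.seed(1)
CUSTOMERS     = 20
ALLOC_KW      = 300      # each customer's contracted allocation
CAPACITY_KW   = 4_500    # shared plant covers only 75% of total allocations
PEAK_PROB     = 0.30     # chance a customer is at full draw at any moment
BASE_FRACTION = 0.50     # draw when not peaking

def trial():
    """One snapshot in time: does aggregate demand exceed capacity?"""
    demand = sum(ALLOC_KW if random.random() < PEAK_PROB
                 else ALLOC_KW * BASE_FRACTION
                 for _ in range(CUSTOMERS))
    return demand > CAPACITY_KW

overloads = sum(trial() for _ in range(100_000))
print(overloads / 100_000)   # a few percent of snapshots overload the plant
```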

So, which infrastructure delivery model is the best fit for a wholesale customer? Is there a third option?

Data Center - Shared vs. Dedicated Infrastructure

This white paper presents RagingWire’s distributed redundancy model, an enhancement of the shared and dedicated infrastructure models. The load is distributed at the UPS and generator level across the facility, using a patented 2N+2 electrical design. Because this scalable system does not oversubscribe its infrastructure, customers are not at risk from the load or actions of other customers. The model also provides the highest level of provable availability in the industry, and it allows for a robust SLA for wholesale colocation: 100% availability with no exclusions for maintenance. The authors also compare the benefits and pitfalls of the three power delivery models and offer practical advice to businesses looking for wholesale colocation. Click here to download this white paper.
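
For intuition about why more fault domains improve availability, here is a classic k-of-n redundancy calculation. This is a deliberately simplified model under assumed unit availabilities, not a representation of RagingWire’s patented 2N+2 design.

```python
from math import comb

def k_of_n_availability(needed, total, unit_availability):
    """Probability that at least `needed` of `total` independent units
    are up -- the classic k-of-n redundancy model. Simplified: ignores
    common-mode failures and maintenance windows."""
    p = unit_availability
    return sum(comb(total, up) * p**up * (1 - p)**(total - up)
               for up in range(needed, total + 1))

p = 0.999  # assumed availability of a single UPS/generator unit
print(k_of_n_availability(4, 5, p))   # N+1: one spare
print(k_of_n_availability(4, 6, p))   # N+2: two spares
print(k_of_n_availability(4, 8, p))   # 2N:  a full duplicate set
```

Each added spare lets the system survive one more simultaneous failure, which is why designs with more fault domains can make far stronger availability claims.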

Earthquakes and Bay Area Data Centers: It’s Not If, but When

It’s been a long time since we’ve had a severe earthquake in the Bay Area, but today a 6.1 magnitude earthquake struck 6 miles southwest of Napa. If you’ve never experienced an earthquake, trust me, 6.1 is a big one and scary! We live in Napa, and our whole house was shaking at 3:20 AM!

As I helped friends and family clean up today, I had a few thoughts to share with you. On a personal level, I’m thankful everyone is safe and accounted for. This earthquake had the potential to be much worse. Because the quake hit early in the morning, most people were home and asleep, and the older buildings that were damaged were fortunately mostly unoccupied. All that we lost was stuff, and in the end, stuff doesn’t matter that much.

Bay Area Data Centers and Earthquake Risks

From a work perspective, it was a good reminder of why RagingWire treats natural disaster risk as a primary selection criterion when siting our data centers. We call our Sacramento data center campus "The ROCK" for a reason: it’s built on bedrock and far from the earthquake risk zones of Northern California. Even though we’re within driving distance of San Francisco (90 miles) and San Jose (120 miles), we are a world apart when it comes to natural disaster risk.

The last major earthquake in the Bay Area was the Loma Prieta quake of 1989, a magnitude 6.9 shaker that caused part of the Bay Bridge to collapse and interrupted the World Series. Back then, like today, Sacramento was unaffected: the city sits far from the plate-boundary faults and has essentially no earthquake risk.

In the 25 years since Loma Prieta, many data centers have been built in the Bay Area. Memories are short, especially among IT people who weren’t here at the time. The Bay Area is a great place to live and work, but it isn’t an ideal place to put your critical IT infrastructure.

Remember, even if the data center building survives a major quake, the surrounding infrastructure is not resilient. Bridges, roads, power grids, fiber paths, and fuel suppliers are all vulnerable, and each has a direct impact on your operations and service availability. And there’s no question: another quake will hit the Bay Area.

It’s not a matter of IF, but WHEN.
