Data Center

Why Interconnection Matters to Wholesale Colocation

Providing large scale, secure space and power has been the focus of wholesale data center providers for many years. Until recently, innovation in wholesale data center solutions has centered on developing data center designs that improve power resiliency and building efficiency. As a result, wholesale customers today are receiving data centers that are more flexible, scalable, and cost effective than ever before.

The cloud and the surge in wholesale data center demand are changing the industry, but not in the way that many expected. Interconnection between wholesale data centers and public cloud environments is becoming a key decision criterion rather than an afterthought or “nice-to-have”. Interconnection has become a major component of the wholesale data center colocation strategy.

The Hybrid Cloud Changes Everything

The demand for interconnection is being driven by changing market dynamics, specifically the rise of hybrid cloud. Over the past five years, enterprise organizations have been successfully adopting the public cloud as a complement to, rather than a replacement for, their enterprise or colocation environments. The term “hybrid cloud” came from enterprises’ desire to use a combination of clouds, in-house data centers, and external colocation facilities to support their IT environments.

The challenge with having a hybrid environment arises from the need for the data centers and the cloud providers to interconnect and share data securely and with low network latency as part of one extended environment. Within a data center, servers and storage can be directly connected. A hybrid cloud environment doesn’t have the luxury of short distances, so bring on the innovation.

From Carriers to Interconnection

The first interconnection solutions were provided by telecommunications service providers. Dark fiber, lit fiber, and Internet access were all leveraged to interconnect hybrid cloud environments. However, as cloud deployments grew and matured, these network solutions became difficult to secure and manage.

Next, data center providers began to offer “direct” connections within the data center to bring cloud and colocation environments into one location, allowing interconnections to be provided through fiber cross-connects. This approach, however, restricts where companies can place their colocation environments and limits the types of cloud environments to which they can connect.

The newest solutions are being introduced by interconnection platform providers, which leverage the concepts of network exchange points and software defined networking (SDN) to provide a flexible solution that can be self-managed. These innovative solutions solve many key network challenges introduced by hybrid clouds.

The Keys to Interconnection – Dedicated, secured, and many-to-many

Beyond the simple one-to-one interconnection of a cloud environment to a wholesale colocation environment, an interconnection platform is designed to allow many-to-many connections in a dedicated and secured fashion. With an interconnection platform, wholesale colocation environments can be connected to multiple cloud providers (multi-cloud) and multiple cloud locations (availability zones). This design opens the door to unique options for companies to architect their IT environments to optimize resiliency and availability while minimizing cost and complexity. The SDN aspect of the interconnection platform allows the customer to manage their connections in real time without needing involvement from the provider. They can turn up, turn down, and change individual connections at any time, usually through a simple web-based user interface.
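To make the self-service model concrete, here is a minimal, hypothetical sketch of what provisioning a virtual connection through such a platform might look like. The base URL, endpoint, field names, and token are illustrative assumptions only, not any particular provider’s API; in practice each platform exposes its own portal and interface.

```python
# Hypothetical sketch of self-service interconnection provisioning via a REST API.
# The base URL, endpoint, fields, and token below are illustrative assumptions only;
# every interconnection platform exposes its own portal and API.
import requests

API_BASE = "https://portal.example-interconnect.net/api/v1"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                                  # placeholder credential

def create_virtual_connection(name, colo_port, provider, region, bandwidth_mbps):
    """Request a dedicated virtual circuit from a colocation port to a cloud on-ramp."""
    payload = {
        "name": name,
        "sourcePort": colo_port,          # physical port in the wholesale data center
        "destinationProvider": provider,  # public cloud, SaaS, or partner enterprise
        "destinationRegion": region,      # availability zone / on-ramp location
        "bandwidthMbps": bandwidth_mbps,  # can be turned up or down at any time
    }
    response = requests.post(
        f"{API_BASE}/connections",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: a 1 Gbps private connection from a colocation cage to a cloud region.
# create_virtual_connection("erp-to-cloud", "port-0123", "examplecloud", "us-west", 1000)
```

The point is the operating model: the customer, not the provider, turns connections up and down on demand through the portal or its API.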

Interconnection Goes Beyond the Cloud

The first generation of interconnection solutions focused on delivering dedicated private connections to the top cloud providers. As interconnection and cloud environments evolved, it became easier to move and share workloads across clouds and data centers. Web portals allowed users to configure, manage, and troubleshoot their cloud connections.

Today a next generation of interconnection is rolling out in wholesale data centers that extends the connectivity platform beyond the cloud, giving data center customers more options for interconnection partners. The first of these partners: SaaS providers. New interconnection platforms allow enterprises to directly connect to applications such as web conferencing, help desk, customer service, and human resources. Enterprise customers receive a dedicated and secure connection to the application that is easier to manage and integrate. SaaS providers, in turn, gain a “direct connection” offering for their software that improves the availability and performance of their application service.

The second new category of interconnection partners is other enterprises. Interconnect platforms now cover the globe to connect wholesale data centers in dozens of countries and through hundreds of points-of-presence (PoPs). Any enterprise connected to the interconnect platform becomes a potential interconnection partner. For example, your company may partner with a data provider or analytics service to deliver a solution to your customers. The interconnect platform makes it easy to connect your hybrid cloud environment to your partner’s hybrid cloud environment. You can even use the interconnect portal to invite new partners to join the ecosystem.

What Next? A Hybrid of Hybrids.

It’s clear that the hybrid computing model combining data centers and clouds with a global, seamless, and secured network is the direction that corporate IT is heading. To support hybrid computing, wholesale data centers have evolved beyond space, power, telecommunications, and security. Wholesale data centers have become a critical infrastructure platform for both cloud providers and enterprises. Interconnection now becomes a core element in wholesale data center solutions bringing together clouds and enterprises into a flexible and scalable hybrid of hybrids.

Data Center Knowledge: Hyperscale Cloud Case Study (webinar and white paper)

The cloud changes everything – the computers we buy (or don’t buy), the way we write applications, how we collect and store data, and the design and location of our data centers.

RagingWire is home to many top cloud providers. We are working with them to turn their requirements for space, power, cooling, telecommunications, and security into data center designs. You can see these designs deployed across our data center portfolio, including our CA3 Data Center in Sacramento, our TX1 Data Center in Dallas, and our VA3 Data Center in Ashburn, Virginia.

To help us better understand the impact of cloud computing on data centers, we hired Bill Kleyman, Featured Cloud and Data Center Analyst at Data Center Knowledge, one of the largest industry websites. We asked him to study how cloud providers and Fortune 1000 enterprises are optimizing their data centers worldwide, and to examine the unique data center requirements of Northern California, one of the top data center markets in the world.

Based on this research, Bill wrote the white paper “Hyperscale Cloud Case Study: Selecting the Right West Coast Data Center Solution” and produced a webinar on the subject, both featuring Groupon, a global leader in local and online commerce.

Click here to download the white paper and watch the webinar.

Here are some of the key findings from the white paper and webinar:

  • Cloud applications require data centers in key internet hub locations in order to manage network latency
  • Having a data center near Silicon Valley and the Bay Area is preferred, but it is best to be outside the earthquake zone in order to reduce risk and lower costs
  • Data center scalability and flexibility are critical to support ongoing cloud capacity
  • Rigid IT architectures are being replaced with hybrids
  • As applications scale, the flexibility of the cloud can be outweighed by escalating costs
  • Multi-megawatt, large footprint deployments are driving the need for wholesale data center colocation
  • Carrier neutrality and direct cloud connectivity are required, improving reliability and performance and reducing costs
  • Using a wholesale colocation provider provides significantly faster time to delivery than deploying a traditional powered shell

VIDEO: 451 Research on the Dallas Data Center Market

With over 100 analysts worldwide, 451 Research is one of the top industry analyst firms covering the competitive dynamics of innovation in technology and digital infrastructure, from edge to core.

We were honored that Kelly Morgan, Research Vice President, and Stefanie Williams, Associate Analyst, both from 451 Research, attended the grand opening of our Dallas TX1 Data Center on April 18, 2017.

Kelly’s team tracks hosting, managed services, and multi-tenant data center providers worldwide. They study providers, do market sizing, analyze supply and demand, and provide insights into the dynamics of the industry. In addition, 451 maintains two critical strategic tools: the Datacenter Knowledgebase, an authoritative database with more than 100 data points on 4,500 global data centers, and the M&A Knowledgebase of 50,000 tech transactions.

In short, 451 Research knows data centers!

After the grand opening celebration, we invited Kelly to spend a day with us to tour our TX1 Data Center and talk with our President and CEO, Doug Adams. This video shares highlights of the tour and conversation as well as Kelly’s insights into the booming Dallas data center market.

According to Kelly, Dallas is the third largest data center market in the U.S. with 100 leasable data centers measuring 3,000,000 square feet and 300 megawatts – and growing fast! 

RagingWire’s Dallas Data Center Campus sits on 42 acres of land and will ultimately have five interconnected buildings totaling 1,000,000 square feet with 80 megawatts of power. Phase 1 of the campus, known as the TX1 Data Center, has 230,000 square feet of space and 16 megawatts of power.

TX1 was designed for scalability, flexibility, and efficiency, ideal for cloud providers and Fortune 1000 enterprises. Vaults from 1 MW to 5 MW are available, as well as private suites and cages, with options for dedicated infrastructure and build-to-suit solutions. TX1 features a highly efficient, waterless cooling system that leverages available outside cool air and does not stress local water supplies. The campus has fiber connectivity to the carrier hotels, providing access to 70 telecommunications providers and direct connectivity to the major cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Would You Drive 90 Miles to Save $1 Million Per Year on Your Data Center?

One of the top data center markets in the world is Northern California, including Silicon Valley and the Bay Area.

According to the most recent report from Data Center Frontier and datacenterHawk, the Silicon Valley area is home to nearly 2.6 million square feet of commissioned data center space, representing 343 megawatts of commissioned power. That makes Silicon Valley the second-largest market for data center space in the U.S., trailing only Northern Virginia.

The problem is that the costs for power, people, and real estate in Northern California are some of the highest in the United States. Plus, data center supply in Northern California can be constrained, and there is the overhanging risk of earthquakes.

What if you could have the benefits of a Northern California data center with a lower price point, reduced earthquake risk, and available supply?

According to our recent analysis you could save nearly $8 million over a 7-year term by having your data center in Sacramento instead of San Francisco. The savings are between about $1 million and $7 million when compared to Phoenix, Reno, and Las Vegas.

So the question is, “Would you be willing to drive the 90 miles from Silicon Valley to Sacramento to save $1 million a year on your data center?”

Data Centers in Northern California

Base Rent – land and construction costs in Silicon Valley are high.

We all know that the costs of land and construction in Silicon Valley are high. Data from the National Association of Realtors, published in August 2016, showed that the median price for a home in the region around San Jose, California was over $1 million — a first for any metro area in the country.

The same factors that make your Silicon Valley home expensive are true for your Silicon Valley data center. Supply of land is scarce. Plus, the expertise to build and operate a data center in Silicon Valley is often hard to find, making these human resources expensive as well.

Power – the largest single line item in your data center Total Cost of Ownership (TCO).

For a large-footprint, hyperscale cloud or enterprise data center deployment, it’s not out of line to spend $2 on power for every $1 you spend on base rent.

The mistake in many data center TCO models is that the cost of power is viewed as a sunk cost rather than a variable cost – a value to be plugged in, not managed. The good news is that data center operators tend to negotiate better power rates than you could get yourself due to quantity discounts. In addition, your overall power consumption in a new, state-of-the-art colocation facility will probably be lower than in your own data center through the use of more efficient cooling technologies and automation systems.

The even better news is that wholesale data center power pricing through the Sacramento Municipal Utility District (SMUD) is the lowest in the state of California. For example, data center power pricing in San Francisco is about 12 cents per kilowatt hour. In Sacramento it’s 6.9 cents – almost half the price. For a typical 1 megawatt deployment, the savings in power is about $648,000 per year, for a total of nearly $5 million over seven years!
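To see how that annual figure comes together, here is a quick sketch of the arithmetic. Only the two utility rates come from the paragraph above; the 1.45 PUE used to account for cooling and other facility overhead is our own assumption for illustration, and actual overhead varies by site, load, and climate.

```python
# Back-of-the-envelope check of the annual power savings quoted above.
# The PUE (facility overhead) value is an assumption for illustration.
IT_LOAD_KW = 1_000            # 1 megawatt IT deployment
HOURS_PER_YEAR = 8_760
ASSUMED_PUE = 1.45            # total facility power / IT power (assumption)

RATE_SAN_FRANCISCO = 0.120    # $/kWh (figure quoted above)
RATE_SACRAMENTO = 0.069       # $/kWh, SMUD (figure quoted above)

annual_kwh = IT_LOAD_KW * ASSUMED_PUE * HOURS_PER_YEAR
annual_savings = annual_kwh * (RATE_SAN_FRANCISCO - RATE_SACRAMENTO)

print(f"Annual power savings:     ${annual_savings:,.0f}")   # ~$648,000
print(f"Seven-year power savings: ${annual_savings * 7:,.0f}")
```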

Planes, trains, and automobiles – which do you prefer?

How far do you want to be from your data center, and how do you want to get there? Sacramento is about 90 miles from the Bay Area. Reno is 220 miles, Las Vegas is 570 miles, and Phoenix is 750 miles. Would you rather drive or fly? Driving is probably the most flexible and cost effective option. A flight might take less effort than driving, but you need to make time for getting to the airport, parking, checking in, security, boarding, etc. Plus, you will need a hotel and transportation when you land, and a return flight. Airports can also be more susceptible to weather delays, and in an earthquake emergency, airports are often closed.

Networks and the speed of light.

We’re living in the most connected era in history. But even with all the fiber in the ground, network performance is still bounded by the speed of light. Network latency can be a critical variable in the end-user application experience. No one wants to be looking at the hourglass. Roundtrip network latency between Sacramento and the Bay Area is 3 milliseconds (ms). From Las Vegas, the roundtrip latency to the Bay Area is 15.3 ms, and from Phoenix it is 18.1 ms. These numbers make a big difference in application performance.
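To put those numbers in perspective, the sketch below computes a physical lower bound on roundtrip latency from distance alone, assuming light in fiber travels at about two-thirds of its vacuum speed and using the straight-line driving distances mentioned earlier. Actual fiber routes are longer and add equipment hops, which is why the measured figures above are higher.

```python
# Lower-bound roundtrip latency from propagation delay alone. Real fiber paths
# are longer than driving distances and add equipment hops, so measured
# latencies (like the figures above) will be higher.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3   # ~200,000 km/s in glass
MILES_TO_KM = 1.609

def min_roundtrip_ms(distance_miles):
    """Propagation-only roundtrip delay in milliseconds."""
    return 2 * (distance_miles * MILES_TO_KM) / FIBER_SPEED_KM_S * 1_000

for city, miles in [("Sacramento", 90), ("Reno", 220), ("Las Vegas", 570), ("Phoenix", 750)]:
    print(f"{city}: at least {min_roundtrip_ms(miles):.1f} ms roundtrip to the Bay Area")
```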

Environmental risk – earthquakes and severe weather.

The discussion around environmental risk and data centers in Silicon Valley or the Bay Area usually focuses on earthquakes. According to the U.S. Geological Survey, the Bay Area has the highest density of active faults per square mile of any urban center in the country. There is a 72% chance of a magnitude 6.7 or greater earthquake occurring over the next 30 years on one of these Bay Area faults (6.7 is the same size as the 1994 Northridge earthquake which caused 57 deaths and property damage estimated at $13-40 billion). The percentage shoots up to 89% for a magnitude 6 or greater quake. 

The good news is that once you get outside the Bay Area, the risk of earthquakes drops dramatically. Sacramento, for example, sits well away from the Bay Area’s active fault zones and is rated as a “very low risk”. However, not all data center locations outside the Bay Area have a low risk of earthquakes. For example, even though Reno is 218 miles away from the Bay Area, it has a similar earthquake risk to the Bay Area.

Regarding severe weather, the desert locations need to deal with extreme temperatures and drought conditions. During the years 1981-2015, Las Vegas averaged 75 days per year of 100+ degree temperatures. During the same time period, Phoenix averaged 110 days per year of 100+ degree temperatures. Sacramento averages 11 days per year of over 100 degree temperatures, with half of those days in July.

Sacramento may experience heavy “El Niño” rains and excessive snow melt from the Sierra Nevada Mountains, which can cause the rivers to overflow. Fortunately, Sacramento has spent billions of dollars over the last 20 years on a sophisticated system of levees and spillways, and has another $2.4 billion of flood-control projects in development. Record snowfall of 471 inches at Lake Tahoe from January through March 2017 was a good test for the flood control measures, and all of the Sacramento data centers were safe.

Run the numbers yourself

Northern California continues to stand out as a “must have” location as part of a global data center deployment. Sacramento has established itself as a primary spot for data centers in Northern California, offering available supply, lower costs, excellent telecommunications, and room to grow. Click here to use a total cost of ownership (TCO) calculator where you can run the numbers yourself. The business case is compelling.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers. Colocation data centers have 20% of enterprise applications, and cloud providers have 9%.

What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers will get a double hit from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

Kelly Morgan, Vice President Services from 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of the NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs. He has over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?

10+ years of 100% uptime for global video delivery provider

Can you imagine how great you would feel if your applications and websites never went down, even during maintenance windows? What if you could count on your servers, storage devices, and network equipment to always be powered on, plugged in to the network, and running with the right temperature and humidity?

Global video delivery provider MobiTV does not have to imagine, because they have experienced it in real life for more than 10 years. Recently, MobiTV and RagingWire announced they had celebrated their 10th anniversary of zero downtime.

MobiTV is a global leader in live and on-demand video delivery solutions. They collaborate with industry leading media partners such as NBC, CBS, ESPN, and Disney to deliver cutting edge “TV everywhere” solutions to mobile devices. They serve a global customer base of broadband/DSL operators, cable and IPTV operators, as well as wireless and over-the-top operators like AT&T, Deutsche Telekom, Sprint, T-Mobile and Verizon.

When you think of demanding, infrastructure-intensive applications, MobiTV sits near the top of the pyramid. So MobiTV houses their IT systems in one of California’s largest and most-reliable data centers, a 53MW campus from RagingWire.

Casey Fann, MobiTV’s vice president of operations and professional services, said, “RagingWire’s Sacramento data center campus is an ideal location for Bay Area companies looking for data centers with minimal earthquake risk, low-latency network access, and reduced power utility costs. By selecting RagingWire, MobiTV has a partner to facilitate the delivery of reliable, world-class live and on-demand media content with 100% uptime and impeccable customer service.”

You may find it surprising that, according to a Ponemon Institute study from January 2016, the average duration of a North American data center outage is 95 minutes, and the maximum outage was 415 minutes. The study found that 25% of these outages came from UPS system failures. It also measured the average cost of a data center outage at $740,357, with a maximum downtime cost of $2,409,991.

When you hear stats like this from Ponemon Institute, the ROI on uptime is compelling.

See the rest of MobiTV’s amazing story about 100% uptime for 10 years

Tech Primer: Dark Fiber, Lit Fiber, and Wavelengths

Some IT colleagues have asked me, “What is dark fiber and what’s the difference between lit and wavelengths?” Let’s begin by understanding the basic concepts of fiber optics and the difference between dark and lit fiber.

Difference between dark fiber and lit fiber

Unlike wire, which passes electricity through a metal conductor, fiber optic cables use specialized glass or plastic that allows data to be transmitted over great distances by passing light through the fiber. Fiber that isn't currently being used and has no light passing through it is called dark fiber.

Using this fiber, telecommunications carriers can provide “wavelength” services, also known as “waves.” This works by splitting the light into wavelength groups called colors or “lambdas”. Carriers sell these wavelengths to separate customers, then recombine the colors and transmit them across the fiber. Lit fiber, therefore, is fiber that a carrier has lit with light.

To better understand lit fiber’s wavelengths, think of a rainbow where each color is a channel of light. Remember Mr. "ROY G. BIV" from middle school – Red, Orange, Yellow, Green, Blue, Indigo, and Violet?

Wavelengths essentially split a single fiber into channels. Unlike copper wire, which uses an electrical signal, fiber optic communications use either a laser or an LED operating at a very high frequency. Fiber optic cables can carry much higher frequencies than copper cables. Traditional bandwidth throughput (1Gb/10Gb/40Gb/100Gb) easily fits into a single color channel. Each fiber can be split into hundreds of colors, but a typical lit fiber is split into sixteen colors, or lambdas.
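As a simple illustration of what that channelization buys you, here is the arithmetic for a lit fiber carrying sixteen lambdas. The 100 Gb/s per-channel rate is an assumption chosen from the throughputs listed above.

```python
# Illustrative aggregate capacity of one lit fiber using wavelength channels.
# The sixteen-lambda count follows the text above; the 100 Gb/s per-channel
# rate is an assumption chosen from the throughputs listed above.
LAMBDAS = 16
PER_CHANNEL_GBPS = 100

total_gbps = LAMBDAS * PER_CHANNEL_GBPS
print(f"{LAMBDAS} wavelengths x {PER_CHANNEL_GBPS} Gb/s = {total_gbps} Gb/s per fiber")
# Output: 16 wavelengths x 100 Gb/s = 1600 Gb/s per fiber
```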

The business of fiber optics

In the late 1990s, there was an uptick in the number of carriers building out dark fiber networks. In addition, there was a high degree of inter-carrier trading – a practice where carriers would swap dark fiber assets with other carriers in order to gain a foothold in markets where they were not present or had limited capacity. Inter-carrier trades coupled with mergers and acquisitions allowed even the smallest of carriers to offer competitive data transport agreements around the world.

However, a significant portion of this built capacity remained untapped for years. Carriers wanted to avoid losing long-term telecommunications revenues and were reluctant to enable competitors in their high-margin wavelength services market. In addition, carriers did not want to cannibalize their often oversubscribed and lucrative Ethernet services market with inexpensive high-capacity fiber. For these reasons, many carriers today still do not sell dedicated fiber assets directly to customers.

New demand for bandwidth

Technology needs have changed over time. Enterprises have become more dependent on cloud services, interconnected infrastructures have grown in number, and the Internet of Things (IoT) has grown massively. All of these trends require a data communications infrastructure that can scale rapidly, predictably, and on demand.

To fulfill these needs, dark fiber providers have entered the market and are working to provide massive bandwidth, low latency, and high quality connectivity to the end customer in the form of raw glass: dark fiber.

For additional information on the pros and cons of dark fiber versus lit services from carriers, read my blog post titled, “Shedding Light on Dark Fiber for Data Centers.”

White Paper and Webinar from Data Center Knowledge: “Strategic, Financial, and Technical Considerations for Wholesale Colocation”

One of the more interesting developments in the data center industry over the last few years has been the emergence of the wholesale data center market.

Think of wholesale data centers in the context of the traditional retail data center market. Wholesale data centers offer dedicated, multi-megawatt deployments spread over large footprints of many thousands of square feet. These deployments are configured as secured vaults, private suites and cages, and entire buildings.

In fact, RagingWire has made a strategic shift into wholesale data center solutions as was reported in Data Center Knowledge in the article, “RagingWire Pursuing Cloud Providers with New Focus on Wholesale.”

While RagingWire has been a leader in wholesale data center solutions, we have not seen very much substantive analysis published on the wholesale space. So we decided to underwrite a research project with Data Center Knowledge to study wholesale colocation and publish a white paper and webinar entitled, “Strategic, Financial, and Technical Considerations for Wholesale Colocation.” Both the white paper and webinar are available free of charge.

You can watch/listen to the webinar by clicking here.

You can download the white paper by clicking here.

What will you learn from the white paper and webinar?

From a strategic perspective, there are a number of new applications, such as video, social media, mobile, big data, and content that are leading to new computing paradigms where the design, scale, and location of data centers become increasingly important.

The financial considerations point out how sales tax abatement, scale economics, and targeting top data center markets as part of your data center portfolio can be advantageous with wholesale data centers. For example, one customer of ours said that for every dollar they spend on colocation they spend $10 on computing equipment. Say you are spending $1 million on wholesale colocation leading to $10 million in equipment purchases. At 5% sales tax, that’s a savings of $500,000.  And equipment is often refreshed every 3-5 years!
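To show how those refreshes compound, here is a minimal sketch of the same math extended over a lease term. Only the $1 million colocation spend, the 10x equipment multiplier, and the 5% sales tax rate come from the example above; the number of refreshes in the term is an assumption for illustration.

```python
# Illustrative sales tax abatement math, extending the example above over a term.
# The number of refreshes in the term is an assumption for illustration.
COLOCATION_SPEND = 1_000_000     # example above
EQUIPMENT_MULTIPLIER = 10        # $10 of equipment per $1 of colocation (example above)
SALES_TAX_RATE = 0.05            # 5% (example above)
REFRESHES_IN_TERM = 2            # assumed: two hardware refreshes over the term

equipment_purchase = COLOCATION_SPEND * EQUIPMENT_MULTIPLIER
savings_per_refresh = equipment_purchase * SALES_TAX_RATE
total_savings = savings_per_refresh * REFRESHES_IN_TERM

print(f"Savings per equipment purchase: ${savings_per_refresh:,.0f}")  # $500,000
print(f"Savings over two refreshes:     ${total_savings:,.0f}")        # $1,000,000
```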

Finally, the section on technical considerations studies power density, energy efficiency, PUE and ASHRAE standards, DCIM (Data Center Infrastructure Management), and maintenance. Each of these technical elements can have a significant impact on the performance/cost of your wholesale data center, and ultimately on your business.

RagingWire is proud to support this important research and pleased to share it with the industry.

In the Data Center Colocation Industry, Top Markets Matter

You may have seen some articles recently talking about the rise of secondary markets in the data center colocation industry. Cities often mentioned on a list of secondary data center markets include: Cleveland, Jacksonville, Kansas City, Milwaukee, Minneapolis, Nashville, and Pittsburgh.

For those of us involved in the data center industry, whether as suppliers or buyers, it is important to consider secondary markets as part of our data center strategy. But, it is also important to remember that top markets are top markets for a reason. The top six US markets (Northern Virginia, New York / New Jersey, Bay Area / Silicon Valley, Dallas-Fort Worth, Chicago, Los Angeles) represent over 70% of US sales, and the US represents over 50% of the world’s data center sales.

In short, in the data center colocation industry – top markets matter.

Let’s take a look at a few key considerations regarding data center colocation markets.

The Fundamentals of the Colocation Market are Sound

The good news is that whether you are looking at top markets, secondary markets, or a combination of the two, the fundamentals of the data center colocation industry continue to be strong.

Businesses of all sizes are taking advantage of the “pay as you go” model offered by colocation, shifting their financials from up-front capital expenses to ongoing operational expenses. Enterprises looking to replace aging in-house data centers or support the growth of their business applications are increasingly looking to colocation.  Cloud-based companies, both hosting and software applications, need a place for their systems to live. These cloud-based companies typically do not want to design, build, and operate their own data centers.

Economies of Scale Add Up

The data center colocation industry is vast, growing, and commoditizing all at the same time. This combination of attributes tends to drive economies of scale which can be more prevalent in top markets. The colocation providers that win in these market conditions have access to low-cost capital and then spread the infrastructure costs across a diverse and growing customer base. Scale economies are particularly strong in the wholesale colocation markets where multi-megawatt deployments are the norm.

Be Wary of Growth Rates

Most of the analysis of secondary markets talks about growth rates. As always, be wary of comparing growth rates across bases of different sizes. For example, the most recent report from 451 Research on data center supply in secondary markets lists Nashville as having 109,500 square feet of operational data center square footage. If the entire Nashville data center market grew by 50%, it would still not be as large as one of RagingWire’s data centers in the top market of Ashburn, Virginia.

“Competitive Mass” Helps Everyone

We’ve all heard of critical mass being required to get a market growing. The same concept can be applied to competition. Both buyers and suppliers benefit from having multiple providers of similar data center colocation offerings in the same market. Buyers benefit from having multiple options to compare, and the assurance of getting a fair price. Suppliers benefit from having access to the talent, technology, and potential customers that the market attracts.

It can be difficult to find “competitive mass” in secondary markets. For example, according to 451 Research, the top three providers in a secondary market typically have 50-70% of the operational space, and these market leaders vary from city to city. In addition, secondary markets tend to have lower utilization rates and absorption per year when compared to top markets, leading to reduced market efficiencies. According to 451 Research, top markets achieved utilization rates over 80%, while secondary markets had an average utilization rate of 68% in 2014.

The Drivers for Secondary Markets: Regulations, Network Optimization, Economic Development

As we have seen, there are a number of forces driving the top data center colocation markets. What’s driving the secondary markets? Regulations can require that data generated within a geography stay in data centers within that geography. For example, some hospitals build on-site data centers as part of their HIPAA compliance. Network optimization might drive you to have a data center in a secondary market as part of your global footprint. Finally, economic development incentives might attract data center companies to build a facility in a secondary market.

Conclusion: Top Markets Matter

Top data center markets are critical as part of your data center portfolio. Top markets offer dense fiber and robust telecommunications, reliable and cost effective utility power, experienced data center staff, and an economic environment that enables a data center ecosystem to thrive.

We expect that top markets will continue to drive the data center colocation industry while secondary data center markets will develop as spokes to the top market hubs, not as a replacement.
