Blogs

VIDEO: 451 Research on the Dallas Data Center Market

With over 100 analysts worldwide, 451 Research is one of the top industry analyst firms covering the competitive dynamics of innovation in technology and digital infrastructure, from edge to core.

We were honored that Kelly Morgan, Research Vice President, and Stefanie Williams, Associate Analyst, both from 451 Research, attended the grand opening of our Dallas TX1 Data Center on April 18, 2017.

Kelly’s team tracks hosting, managed services, and multi-tenant data center providers worldwide. They study providers, do market sizing, analyze supply and demand, and provide insights into the dynamics of the industry. In addition, 451 maintains two critical strategic tools: the Datacenter Knowledgebase, an authoritative database with more than 100 data points on 4,500 global data centers, and the M&A Knowledgebase of 50,000 tech transactions.

In short, 451 Research knows data centers!

After the grand opening celebration, we invited Kelly to spend a day with us to tour our TX1 Data Center and talk with our President and CEO, Doug Adams. This video shares highlights of the tour and conversation as well as Kelly’s insights into the booming Dallas data center market.

According to Kelly, Dallas is the third largest data center market in the U.S. with 100 leasable data centers totaling 3,000,000 square feet and 300 megawatts of power – and growing fast! 

RagingWire’s Dallas Data Center Campus sits on 42 acres of land and will ultimately have five interconnected buildings totaling 1,000,000 square feet with 80 megawatts of power. Phase 1 of the campus, known as the TX1 Data Center, has 230,000 square feet of space and 16 megawatts of power. 

TX1 was designed for scalability, flexibility, and efficiency, ideal for cloud providers and Fortune 1000 enterprises. Vaults from 1 MW to 5 MW are available, as are private suites and cages, with options for dedicated infrastructure and build-to-suit solutions. TX1 features a highly efficient, waterless cooling system that leverages available outside cool air and does not stress local water supplies. The campus has fiber connectivity to the carrier hotels providing access to 70 telecommunications providers and direct connectivity to the major cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Would You Drive 90 Miles to Save $1 Million Per Year on Your Data Center?

One of the top data center markets in the world is Northern California, including Silicon Valley and the Bay Area.

According to the most recent report from Data Center Frontier and datacenterHawk, the Silicon Valley area is home to nearly 2.6 million square feet of commissioned data center space, representing 343 megawatts of commissioned power. That makes Silicon Valley the second-largest market for data center space in the U.S., trailing only Northern Virginia.

The problem is that the costs for power, people, and real estate in Northern California are some of the highest in the United States. Plus, data center supply in Northern California can be constrained, and there is the overhanging risk of earthquakes.

What if you could have the benefits of having your data center in Northern California with a lower price point, reduced earthquake risk, and available supply?

According to our recent analysis, you could save nearly $8 million over a 7-year term by having your data center in Sacramento instead of San Francisco. Compared to Phoenix, Reno, and Las Vegas, the savings range from about $1 million to $7 million.

So the question is, “Would you be willing to drive the 90 miles from Silicon Valley to Sacramento to save $1 million a year on your data center?”

Data Centers in Northern California

Base Rent – land and construction costs in Silicon Valley are high.

We all know that the costs of land and construction in Silicon Valley are high. Data from the National Association of Realtors, published in August 2016, showed that the median price for a home in the region around San Jose, California was over $1 million — a first for any metro area in the country.

The same factors that make your Silicon Valley home expensive are true for your Silicon Valley data center. Supply of land is scarce. Plus, the expertise to build and operate a data center in Silicon Valley is often hard to find, making these human resources expensive as well.

Power – the largest single line item in your data center Total Cost of Ownership (TCO).

For a large-footprint, hyperscale cloud or enterprise data center deployment, it’s not out of line to spend $2 on power for every $1 you spend on base rent.

The mistake in many data center TCO models is that the cost of power is viewed as a sunk cost, not a variable cost – a value to be plugged in, not managed. The good news is that data center operators tend to negotiate better power rates than you could get yourself due to quantity discounts. In addition, your overall power consumption in a new state-of-the-art colocation facility will probably be lower than in your own data center through the use of more efficient cooling technologies and automation systems.

The even better news is that wholesale data center power pricing through the Sacramento Municipal Utility District (SMUD) is the lowest in the state of California. For example, data center power pricing in San Francisco is about 12 cents per kilowatt-hour. In Sacramento it’s 6.9 cents – almost half the price. For a typical 1 megawatt deployment, the savings in power is about $648,000 per year, for a total of nearly $5 million over seven years!
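
For readers who want to see how that figure pencils out, here is a minimal sketch of the arithmetic in Python. The PUE (Power Usage Effectiveness) value is an assumption added for illustration, to account for total facility draw beyond the IT load; it is not a number quoted in this post.

```python
# Rough sanity check on the power savings figure (a sketch, not an official model).
# The PUE value below is an illustrative assumption, chosen to show how total
# facility draw, not just IT load, drives the utility bill.

IT_LOAD_KW = 1_000          # 1 MW deployment
HOURS_PER_YEAR = 8_760
RATE_SF = 0.12              # $/kWh, San Francisco (approximate, from the post)
RATE_SMUD = 0.069           # $/kWh, Sacramento (SMUD, from the post)
PUE = 1.45                  # assumed Power Usage Effectiveness (illustrative)

annual_kwh = IT_LOAD_KW * HOURS_PER_YEAR * PUE
annual_savings = annual_kwh * (RATE_SF - RATE_SMUD)
print(f"Annual power savings: ${annual_savings:,.0f}")      # ~ $648,000
print(f"Seven-year savings:   ${annual_savings * 7:,.0f}")  # ~ $4.5M, i.e. nearly $5M
```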

Planes, trains, and automobiles – which do you prefer?

How far away do you want to be from your data center, and how do you want to get there? Sacramento is about 90 miles from the Bay Area. Reno is 220 miles, Las Vegas is 570 miles, and Phoenix is 750 miles. Would you rather drive or fly? Driving is probably the most flexible and cost-effective option. A flight might take less effort than driving, but you need to make time for getting to the airport, parking, checking in, security, boarding, etc. Plus you will need a hotel and transportation when you land, and a return flight. Airports can also be more susceptible to weather delays. In an earthquake emergency, the airports are often closed.

Networks and the speed of light.

We’re living in the most connected era in history. But even with all the fiber in the ground, network performance is still bounded by the speed of light. Network latency can be a critical variable in the end-user application experience. No one wants to be looking at the hourglass. Roundtrip network latency between Sacramento and the Bay Area is 3 milliseconds (ms). From Las Vegas, roundtrip latency to the Bay Area is 15.3 ms, and from Phoenix it is 18.1 ms. These network numbers make a big difference in application performance.
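
As a rough illustration of why distance matters so much, here is a minimal sketch that computes the physics floor on roundtrip latency. It assumes light travels at roughly 200,000 km/s in fiber and uses the driving distances above as a stand-in for fiber-route mileage, which in practice is longer, so real latency sits above these floors.

```python
# Back-of-the-envelope check: even at the speed of light in glass (~200,000 km/s,
# roughly two-thirds of c), distance sets a hard floor on roundtrip latency.
# Driving distances from the post stand in for fiber-route mileage, which is
# actually longer, so measured latency exceeds these floors.

SPEED_IN_FIBER_KM_S = 200_000
MILES_TO_KM = 1.609

routes = {            # city: (driving miles, quoted roundtrip latency in ms)
    "Sacramento": (90, 3.0),
    "Las Vegas": (570, 15.3),
    "Phoenix": (750, 18.1),
}

for city, (miles, quoted_ms) in routes.items():
    km = miles * MILES_TO_KM
    floor_ms = 2 * km / SPEED_IN_FIBER_KM_S * 1_000   # roundtrip, in milliseconds
    print(f"{city:<11} physics floor ~{floor_ms:4.1f} ms   quoted {quoted_ms} ms")
```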

Environmental risk – earthquakes and severe weather.

The discussion around environmental risk and data centers in Silicon Valley or the Bay Area usually focuses on earthquakes. According to the U.S. Geological Survey, the Bay Area has the highest density of active faults per square mile of any urban center in the country. There is a 72% chance of a magnitude 6.7 or greater earthquake occurring over the next 30 years on one of these Bay Area faults (6.7 is the same magnitude as the 1994 Northridge earthquake, which caused 57 deaths and property damage estimated at $13-40 billion). The percentage shoots up to 89% for a magnitude 6 or greater quake. 

The good news is that once you get outside the Bay Area, the risk of earthquakes drops dramatically. Sacramento, for example, is on a separate tectonic plate from the Bay Area and is rated as a "very low risk." However, not all data center locations outside the Bay Area have a low risk of earthquakes. For example, even though Reno is 218 miles away from the Bay Area, it has an earthquake risk similar to the Bay Area's.

Regarding severe weather, the desert locations need to deal with extreme temperatures and drought conditions. During the years 1981-2015, Las Vegas averaged 75 days per year of 100+ degree temperatures. During the same time period, Phoenix averaged 110 days per year of 100+ degree temperatures. Sacramento averages 11 days per year of 100+ degree temperatures, with half of those days in July.

Sacramento may experience heavy “El Niño” rains and excessive snowmelt from the Sierra Nevada Mountains, which can cause the rivers to overflow. Fortunately, Sacramento has spent billions of dollars over the last 20 years on a sophisticated system of levees and spillways, and has another $2.4 billion of flood-control projects in development. Record snowfall of 471 inches at Lake Tahoe from January through March 2017 was a good test for the flood control measures, and all of the Sacramento data centers stayed safe.

Run the numbers yourself

Northern California continues to stand out as a “must have” location as part of a global data center deployment. Sacramento has established itself as a primary spot for data centers in Northern California, offering available supply, lower costs, excellent telecommunications, and room to grow. Click here to use a total cost of ownership (TCO) calculator where you can run the numbers yourself. The business case is compelling.


What Makes a Data Center ‘Texas Ready’?

The Dallas / Fort Worth region is one of the top data center markets in the U.S. and growing fast. According to the recent Data Center Frontier Special Report on the Dallas Data Center Market, the Dallas/Fort Worth region has 275 megawatts (MW) of commissioned data center power and 200 MW in the pipeline.

RagingWire is part of the data center boom in North Dallas. We are planning to open our Dallas TX1 Data Center on April 18, 2017 as the first phase of a 1,000,000 square foot (sq. ft.) campus with 80 megawatts (MW) of power on 42 acres of land. The TX1 facility has 16 MW of power and 230,000 sq. ft. of total space, 118,000 sq. ft. of raised floor, 10,000 square feet of customer office space, and 28,000 sq. ft. of customer amenity space including a conference center, meeting rooms, and a gym. If you are interested in attending our grand opening, please request your invitation.

Like most successful colocation companies, RagingWire has a proven design that is the foundation for our data center portfolio. However, to build a great data center, the design template must be tailored to the unique aspects of the site and location. This was certainly the case for RagingWire in Dallas. Here is the list of critical items we believe make a data center “Texas Ready.”

Everything is bigger in Texas – including data centers.

RagingWire TX1 Data Center in Dallas, Texas
In downtown Dallas, you’ll find the carrier hotels where space is optimized for telecommunications access. North Dallas, which includes Richardson, Plano, Carrollton, and Garland, is where you’ll find the massive wholesale data centers which are in such high demand by Fortune 1000 enterprises and cloud companies. The new RagingWire TX1 data center is built to mega scale and also with the flexibility to configure the space and power to meet the needs of individual clients. Vaults are available in 1 MW, 2 MW, and 5 MW increments and can be subdivided.

An underpowered data center is just a building.

Texas Ready data centers need a substantial amount of utility power to support the millions of websites and applications running on the servers in the facility. The good news is that today’s data centers are highly efficient users of power. The bad news is that not all sites in the Dallas-Fort Worth metroplex have sufficient utility power for data centers. RagingWire is fortunate to be working directly with Garland Power & Light (GP&L) to build a massive 108 MVA substation right next to the TX1 Data Center. The substation has diverse feeds from GP&L and Oncor. GP&L is providing two new 138 kV lines connected to multiple transfer systems that can choose between two independent transmission lines connected to dedicated RagingWire transformers. Eventually this new substation will connect to the new 345 kV transmission system which will be the backbone for electrical power throughout Texas.

Texas can-do attitude and the U.S. power grid.

Texas is famous for its independent nature and can-do attitude. So it should be no surprise that Texas is the only state in the U.S. with its own power grid, operated by the Electric Reliability Council of Texas (ERCOT). ERCOT is a great benefit for national colocation providers that want to provide diverse access to the three power grids in the U.S. For example, RagingWire in Ashburn connects to the Eastern Grid. Our Sacramento campus connects to the Western Grid. And TX1 connects to ERCOT.

Be ready for extreme weather.

Since we’ve been in Texas, we’ve experienced drought, tornadoes, flash floods, and golf ball-sized hail. To be Texas Ready, a data center needs to be built to withstand extreme weather conditions, and the cooling systems should minimize water usage. The Enhanced Fujita scale (also known as the EF scale) uses wind speed to gauge the potential damage from a tornado. EF ratings go from EF-0 (65-85 mph winds) to EF-5 (over 200 mph winds). Everything from roof to foundation in our TX1 Data Center was designed to withstand an EF-3 tornado with 136 mph winds. For efficient and effective cooling even in drought conditions, we have installed one of the largest water-free mechanical systems in the U.S. Plus, our data center infrastructure management (DCIM) software is tuned to take advantage of the 5,000+ hours of free cooling per year expected in Dallas.

Interconnection is key.

Dallas is a destination market for data centers, meaning enterprises and cloud companies want to include Dallas as part of their global data center footprint. These companies need to distribute applications around the world for maximum performance and reliability. For this strategy to work, interconnection is key. Our TX1 data center is carrier neutral with a number of onsite carriers as well as dark fiber connections to the carrier hotels at the Infomart and 2323 Bryan, providing access to over 70 carriers and global interconnectivity.  For global secured networks, you can use the Arcstar Universal One virtual private network (VPN) from our parent company NTT Communications which offers high-quality, global network coverage in over 190 countries. In addition, TX1 is connected with RagingWire’s campuses in Sacramento, California and Ashburn, Virginia so workloads can be balanced and backed up across the country. Lastly, we offer direct, secure connections to the world’s largest cloud providers and a number of software providers.

RagingWire Data Center in the Lone Star State - Texas
The Lone Star State is a great place to have a data center, particularly as part of a national or global data center footprint. The core elements are in place to build a world-class data center – land, power, and fiber connectivity. The economy is strong, growing, and diverse. A data center community is emerging that attracts both suppliers and buyers of data centers. And the pride and optimism of Texas can’t be beat. We are thrilled that the RagingWire TX1 Data Center will be a new star in the great state of Texas.

Will Virtual Reality Make On-Site Data Center Tours Obsolete?

One of the time-honored traditions of data center buying is the on-site tour. For most data center buyers, building your short list of providers can occur remotely through conference calls and reading product literature. But the final selection requires a site visit and tour.

With data centers – seeing is believing.

RagingWire CA3 data center virtual reality tour
We see these tours all the time in Ashburn, Virginia, the largest market for data centers in the world. Ashburn is a destination market for data centers. Every day groups travel to Ashburn to tour the facilities that are the finalists for their data center deployment. Typically, these groups visit 3-5 data center providers over 2-3 days.

It is becoming more common to see “world tours” of data centers as companies deploy computing systems globally in an effort to improve systems performance and availability. Many of these tour groups are in the process of visiting Europe, Asia, and the U.S. We met with one tour group recently that was visiting 5 countries in 8 days!

Tour groups want to see the electrical, mechanical, and security systems of the data center. They walk the campus and look at the customer amenities and office space. Overall, they want to get a sense of how well the facility is run. Tours are important because a data center selection can result in a contract that lasts 5-10 years or longer.

What I’m wondering is, will virtual reality (known as VR) make these on-site data center tours obsolete?

The Challenge: A Remote Data Center You Need to Visit

In April 2015 we opened our CA3 Data Center in Sacramento, California. This facility draws most of its customers from the Bay Area and Silicon Valley. These customers are interested in CA3 because they get a mission-critical data center with low-cost power that is in a seismically safe area and within driving distance of their offices.

The thought process for these customers is that should an earthquake hit San Francisco, their computers will be safe in Sacramento. And when a disaster strikes, they are more likely to be able to get in a car and drive to Sacramento than to catch a flight out of San Francisco, Oakland, or San Jose, where flights may be delayed or the airports closed.

When considering a potential data center site that is not across town, its biggest benefit can also be its biggest challenge. For example, Bay Area entrepreneurs can be reluctant to drive the two hours to get to Sacramento. VR is a solution that brings the data center tour to them.

The Solution: A Virtual Reality Tour in 3D

RagingWire Sacramento CA3 Data Center 3D Virtual Reality
If you haven’t tried on a pair of virtual reality goggles and tested a sample app, you should. You will immediately be impressed with the potential of these devices and applications, and you’ll recognize that we are still in the early stages of development.

On the plus side, you will feel as if you are in another reality. The challenges can be navigating through the new reality, some discomfort from the goggles, and maybe a little motion sickness.

A well-developed VR tour creates a 360-degree space in 3D and allows you to go anywhere you want. Unlike traditional virtual tours that use animation and computer renderings, or 360-degree videos made with static individual scenes, today’s VR tour offers the depth and space of the actual data center and gives users complete control over their exploration of the environment. To move through the space, you just look at where you want to go and the system automatically takes you there.

A good VR tour will also offer a menu in the app that lets you go directly to a location.

Let Your Eyes Be Your Guide.

When you put on the goggles for the CA3 data center VR tour, you find yourself standing in front of the building. Look straight ahead and the front door opens so you can “walk” into the security lobby.  Enter through the mantrap and you are standing in a dramatic, two-story atrium. Spend some time on the secured patio where you can see the rock climbing wall. Pass by the workout room and the game room on your way to the terrazzo staircase. Go upstairs to the showcase conference room with a glass wall that lets you look out on one of the data vaults. Travel down the elevated skywalk overlooking rows of server racks and enter the observation room where you can see the electrical and mechanical systems. Finish your tour by standing on the balcony to view the rows of generators, massive cooling towers, and on-site, dedicated power substation. You’ve just done a walk-through of a data center and never taken a step.

What does VR mean for data center buyers?

While VR tours for data centers are still in version 1.0, the potential benefits are compelling.

See what you want, when you want. The new VR tours are self-directed and immersive. That means every view of the space is defined digitally. It’s not just a static image or animation.

Save money, time, and inconvenience. You get a full tour of a data center without planes, trains, or automobiles. You don’t have to deal with bad weather, bad traffic, and potentially bad tour guides.

Better support from the C-suite. You can show the data center to a broader audience of executives to help build support for your final selection.

Improved safety. Now you can visit the electrical room and other locked-down areas of the data center in complete safety.

Is the on-site data center tour obsolete?

Virtual reality is definitely a welcome innovation for data center buyers, but does it make on-site data center tours obsolete? Not yet.

Too often with innovations, we think in terms of “either/or.” Meaning you have a choice between either the new approach or the old. In most cases, innovation is initially more of an “and.” Both approaches work together as they progress down their development curves.

On-site tours of data centers aren’t going away any time soon as a result of virtual reality, especially for the large footprint, wholesale deployments that are currently driving the industry. Virtual reality will enhance the data center buying process by making on-site tours optional for some buyers and by making the data center experience available to a broader audience of decision makers and influencers.

Shedding Light on Dark Fiber for Data Centers

When evaluating your data center connectivity, there are many reasons to consider dark fiber, including cost control, flexibility, security, and scalability. For a quick overview of the basics of fiber optics, see my blog post, “Tech Primer: Dark Fiber, Lit and Wavelengths.”

Importance of Dark Fiber for Data Centers

The number of internet-connected devices, known as the Internet of Things (IoT), is expected to reach 20+ billion by the year 2020, according to a recent Gartner report.

Gartner report - IoT growth from 2014 - 2020
Likewise, cloud usage has been escalating at a similar rate, year over year. The reliance on cloud platforms such as Amazon’s AWS, Microsoft Azure, IBM SoftLayer, and Google Cloud also continues to skyrocket, as indicated by cloud revenues seen in this report from Synergy Research Group.

These growth markets are driving enterprises and online businesses to a level of network dependence that is becoming hyper-critical.

Growth of Cloud Providers - Synergy Research Group Report

Connectivity is King

A loss of network connectivity or degraded performance across a network connection can cause more than lost revenue. In environments like healthcare, public safety, and the military, poor network performance could even cost lives.

How vital is your network? When stability, latency, security, and bandwidth are at the forefront of the decision maker’s mind, dark fiber may be the answer.

RagingWire understands that connectivity is of paramount value in a data center. As such, RagingWire has both partnered with connectivity providers and made a significant capital investment in telecommunications infrastructure to serve our customers’ unique needs.

For example, in our Sacramento data center campus, we partner with multiple carriers to provide lit and dark fiber services that deliver excellent network performance of ~2ms latency to San Francisco, and ~4ms latency to the South Bay – a location jam-packed with cloud providers.

In our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center.

In Garland, Texas, within the Dallas-Fort Worth Metroplex, RagingWire has built a fiber network that connects its 1,000,000 square foot data center campus to over 128 locations in the Dallas and Fort Worth market, including popular carrier hotels and cloud providers.

The Good, the Bad, and the Ugly of Dark Fiber

Dark fiber may be the right decision for many of today’s infrastructure connectivity needs, but make sure you go into it with full awareness of its advantages and disadvantages.

The Good:

  • Cost-control: Dark fiber costs the same whether you intend to use 1Gb, 10Gb, or 100Gb.
  • Flexibility: You may run any protocol and any service. You may even choose to install your own multiplexing equipment and slice the fiber into multiple channels (generally 16 channels, but current off-the-shelf hardware allows for up to 160), each usable for 1Gb, 10Gb, or 100Gb.
  • Security: Public access telecommunications networks generally have multiple access points at various nodes throughout the network, whereas dark fiber routes are accessible only at each of the two endpoints of the fiber run.
  • Scalability: Service may be upgraded as required by simply using higher performance equipment. Available bandwidth on dark fiber is limited by only three things: Physics, current technology, and your budget.

The Bad:

  • Cost-control: When your bandwidth requirements are 1Gb or less, lit services will usually be less expensive than dark fiber once you factor in the fiber lease and the capital outlay for hardware. Additionally, long-distance dark fiber may be more expensive than purchasing a wavelength. You’ll have to do the math and figure out which meets your needs and budget (see the sketch after these lists).

The Ugly:

  • Reliability: Your architect will need to design around the fact that there is no built-in fault-tolerance or connectivity failure protection. This will usually require the purchase of a second diverse dark fiber path between your two locations.
  • Scalability and cost-control: Dark fiber is point-to-point. Unlike many other carrier products available, dark fiber does not allow for multiple end-points on a network. It may be necessary to purchase multiple fiber paths for larger networks.
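
Here is the kind of back-of-the-envelope comparison the cost-control bullet above refers to, as a minimal sketch. Every price in it is a hypothetical placeholder; substitute real quotes from your fiber provider and carriers, because the structure of the comparison, not the numbers, is the point.

```python
# Minimal sketch of the "do the math" comparison from the cost-control bullet.
# All prices are hypothetical placeholders -- plug in real quotes; the point is
# that dark fiber cost is flat while lit service cost scales with bandwidth.

def dark_fiber_monthly(lease_per_month, optics_capex, term_months):
    """Dark fiber: fixed lease plus your own optics, amortized over the term."""
    return lease_per_month + optics_capex / term_months

def lit_service_monthly(price_per_gbps, bandwidth_gbps):
    """Lit service: carrier charges that scale with the bandwidth you buy."""
    return price_per_gbps * bandwidth_gbps

TERM_MONTHS = 36
for gbps in (1, 10, 100):
    dark = dark_fiber_monthly(lease_per_month=2_500,      # hypothetical
                              optics_capex=15_000,        # hypothetical
                              term_months=TERM_MONTHS)
    lit = lit_service_monthly(price_per_gbps=400,          # hypothetical
                              bandwidth_gbps=gbps)
    better = "dark" if dark < lit else "lit"
    print(f"{gbps:>3} Gbps: dark ${dark:,.0f}/mo vs lit ${lit:,.0f}/mo -> {better}")
```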

Summary

When considering dark fiber from fiber providers instead of lit fiber or carrier services from telecom providers, it is beneficial to map your unique IT connectivity needs with the strengths and weaknesses of dark fiber. This mapping exercise should help shed some light on the best connectivity options for your custom environment.

Connectivity Questions

Is your data center carrier neutral? Carrier neutrality is vital when choosing a data center. You want your data center to freely allow interconnectivity between all carriers and other colocation providers. This protects your interests and allows for future scale, plus it maximizes your flexibility.

What types of lit connectivity are available? It is less important to focus on the number of carriers in the campus; instead focus on whether the carriers you care about are available. Also ask if their direct competitors are available. This will be helpful for bidding – to keep your primary carrier as cost competitive as possible.

Is dark fiber available? If so, where does it go? Does the data center have a dark fiber product or a partnership? Where does it go and is the pricing competitive? Does the data center have lit connectivity options or a partnership?

 

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers. Colocation data centers host 20% of enterprise applications, and cloud providers host 9%.

Webinar - Is Data Center Colocation the Right Approach for the Enterprise
What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers stand to benefit twice from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.


Kelly Morgan, Vice President Services at 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise Survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of the NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs. He has over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?

10+ years of 100% uptime for global video delivery provider

Can you imagine how great you would feel if your applications and websites never went down, even during maintenance windows? What if you could count on your servers, storage devices, and network equipment to always be powered on, plugged in to the network, and running with the right temperature and humidity?

Global video delivery provider MobiTV does not have to imagine, because they have experienced it in real life for more than 10 years. Recently, MobiTV and RagingWire announced they had celebrated their 10th anniversary of zero downtime.

10+ years of 100% uptime for global video delivery provider MobiTV

MobiTV is a global leader in live and on-demand video delivery solutions. They collaborate with industry-leading media partners such as NBC, CBS, ESPN, and Disney to deliver cutting-edge “TV everywhere” solutions to mobile devices. They serve a global customer base of broadband/DSL operators, cable and IPTV operators, as well as wireless and over-the-top operators like AT&T, Deutsche Telekom, Sprint, T-Mobile and Verizon.

When you think of demanding, infrastructure-intensive applications, MobiTV sits near the top of the pyramid. So MobiTV houses their IT systems in one of California’s largest and most-reliable data centers, a 53MW campus from RagingWire.

Casey Fann, MobiTV’s vice president of operations and professional services said, “RagingWire’s Sacramento data center campus is an ideal location for Bay Area companies looking for data centers with minimal earthquake risk, low-latency network access, and reduced power utility costs. By selecting RagingWire, MobiTV has a partner to facilitate the delivery of reliable, world-class live and on-demand media content with 100% uptime and impeccable customer service.”

You may find it surprising that, according to a Ponemon Institute study from January 2016, the average duration of a North American data center outage is 95 minutes, and the longest recorded outage was 415 minutes. The study found that 25% of these outages stemmed from UPS system failures. It also measured the average cost of a data center outage at $740,357, with a maximum downtime cost of $2,409,991.

When you hear stats like this from Ponemon Institute, the ROI on uptime is compelling.
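
As a rough illustration, here is a minimal sketch of that ROI argument. The per-outage cost comes from the Ponemon figures cited above; the outage frequency and the colocation cost premium are hypothetical assumptions added for the example.

```python
# Quick illustration of the uptime ROI argument (a sketch; outage frequency and
# the colocation premium are hypothetical, while the per-outage cost comes from
# the Ponemon figures cited above).

AVG_OUTAGE_COST = 740_357      # Ponemon Institute, January 2016
OUTAGES_PER_YEAR = 1           # hypothetical: one outage per year in-house
COLO_ANNUAL_PREMIUM = 250_000  # hypothetical: extra spend for a zero-downtime facility

expected_loss_avoided = AVG_OUTAGE_COST * OUTAGES_PER_YEAR
net_benefit = expected_loss_avoided - COLO_ANNUAL_PREMIUM
print(f"Expected outage loss avoided per year: ${expected_loss_avoided:,.0f}")
print(f"Net annual benefit after premium:      ${net_benefit:,.0f}")
```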

See the rest of MobiTV’s amazing story about 100% uptime for 10 years


Tech Primer: Dark Fiber, Lit and Wavelengths

Some IT colleagues have asked me, “What is dark fiber and what’s the difference between lit and wavelengths?” Let’s begin by understanding the basic concepts of fiber optics and the difference between dark and lit fiber.

Difference between dark fiber and lit fiber

Unlike wire, which passes electricity through a metal conductor, fiber optic cables use specialized glass or plastic that allows data to be transmitted great distances by passing light through the glass. Fiber that isn't currently being used and has no light passing through it is called dark fiber.

Utilizing this fiber, telecommunications carriers can provide something called “wavelength” services, also known as “waves.” This works by splitting the light into various wavelength groups called colors or “lambdas.” Carriers sell these wavelengths to separate customers, then recombine the colors and transmit them across the fiber. Therefore, lit fiber is fiber that has been lit with light by a carrier.

Dark and lit fiber explained
To better understand lit fiber’s wavelengths, think of a rainbow where each color is a channel of light. Remember Mr. "ROY G. BIV" from middle school – Red, Orange, Yellow, Green, Blue, Indigo, and Violet?

Wavelengths essentially split a single fiber into channels. Unlike copper wire, which uses an electrical signal, fiber optic communications utilize either a laser or an LED operating at a very high frequency. Fiber optic cables have the ability to carry much higher frequencies than copper cables. Traditional bandwidth throughput (1Gb/10Gb/40Gb/100Gb) will easily fit into a single color channel. Each fiber can be split into hundreds of colors, but a typical lit fiber is split into sixteen colors or lambdas.
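
To put rough numbers on that, here is a minimal sketch. The 16-channel split comes from this post and the 160-channel figure from the companion post on dark fiber; the 100 Gb per-channel rate is an illustrative choice, not a limit of the technology.

```python
# Rough capacity math for the wavelength splitting described above (a sketch;
# the per-channel rate is one common line speed, chosen for illustration).

CHANNELS_TYPICAL = 16     # typical lit-fiber split into 16 lambdas (from this post)
CHANNELS_MODERN = 160     # off-the-shelf DWDM count cited in the companion post
PER_CHANNEL_GBPS = 100    # e.g., one 100 Gb transceiver per lambda

print(f"{CHANNELS_TYPICAL} channels x {PER_CHANNEL_GBPS} Gb  = "
      f"{CHANNELS_TYPICAL * PER_CHANNEL_GBPS:,} Gb on one fiber")
print(f"{CHANNELS_MODERN} channels x {PER_CHANNEL_GBPS} Gb = "
      f"{CHANNELS_MODERN * PER_CHANNEL_GBPS:,} Gb on one fiber")
```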

The business of fiber optics

In the late 1990s, there was an uptick in the number of carriers building out dark fiber networks. In addition, there was a high degree of inter-carrier trading – a practice where carriers would swap dark fiber assets with other carriers in order to gain a foothold in markets where they were not present or had limited capacity. Inter-carrier trades coupled with mergers and acquisitions allowed even the smallest of carriers to offer competitive data transport agreements around the world.

However, a significant portion of this built capacity remained untapped for years. Carriers wanted to avoid potential long-term lost telecommunications revenues and were reluctant to enable competitors in their high margin wavelength services market. In addition, carriers did not want to cannibalize their often oversubscribed and lucrative Ethernet services market with inexpensive high-capacity fiber. For these reasons, many carriers today still do not sell dedicated fiber assets directly to customers.

New demand for bandwidth

Technology needs have changed over time. Enterprises have become more dependent on cloud services, interconnected infrastructures have grown in number, and the Internet of Things (IoT) has grown massively. All of these trends require a data communications infrastructure that can scale rapidly, predictably, and on demand.

To fulfill these needs, dark fiber providers have entered the market and are working to provide massive bandwidth, low latency, and high quality connectivity to the end customer in the form of raw glass: dark fiber.

For additional information on the pros and cons of dark fiber versus lit services from carriers, read my blog post titled, “Shedding Light on Dark Fiber for Data Centers.”

White Paper and Webinar from Data Center Knowledge: “Strategic, Financial, and Technical Considerations for Wholesale Colocation”

One of the more interesting developments in the data center industry over the last few years has been the emergence of the wholesale data center market.

Think of wholesale data centers in the context of the traditional retail data center market. Wholesale data centers offer dedicated, multi-megawatt deployments spread out over large footprints of many thousands of square feet. These deployments are configured as secured vaults, private suites and cages, and entire buildings.

In fact, RagingWire has made a strategic shift into wholesale data center solutions as was reported in Data Center Knowledge in the article, “RagingWire Pursuing Cloud Providers with New Focus on Wholesale.”

White Paper - Strategic Considerations for Wholesale Data Center Buyers
While RagingWire has been a leader in wholesale data center solutions, we have not seen very much substantive analysis published on the wholesale space. So we decided to underwrite a research project with Data Center Knowledge to study wholesale colocation and publish a white paper and webinar entitled, “Strategic, Financial, and Technical Considerations for Wholesale Colocation.” Both the white paper and webinar are available free of charge.

You can watch/listen to the webinar by clicking here.

You can download the white paper by clicking here.

What will you learn from the white paper and webinar?

From a strategic perspective, there are a number of new applications, such as video, social media, mobile, big data, and content that are leading to new computing paradigms where the design, scale, and location of data centers become increasingly important.

The financial considerations point out how sales tax abatement, scale economics, and targeting top data center markets as part of your data center portfolio can be advantageous with wholesale data centers. For example, one customer of ours said that for every dollar they spend on colocation they spend $10 on computing equipment. Say you are spending $1 million on wholesale colocation leading to $10 million in equipment purchases. At 5% sales tax, that’s a savings of $500,000.  And equipment is often refreshed every 3-5 years!
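
Here is that arithmetic made explicit in a minimal sketch. The refresh-cycle count is an illustrative assumption based on the 3-5 year refresh interval mentioned above; the other inputs come straight from the example in the paragraph.

```python
# The sales tax arithmetic from the paragraph above, made explicit (a sketch;
# the number of refresh cycles is an illustrative assumption based on the
# 3-5 year refresh interval mentioned in the post).

ANNUAL_COLO_SPEND = 1_000_000        # $1M on wholesale colocation
EQUIPMENT_MULTIPLIER = 10            # ~$10 of equipment per $1 of colocation
SALES_TAX_RATE = 0.05                # 5% sales tax, abated in qualifying markets
REFRESH_CYCLES = 2                   # hypothetical: two refreshes over a long lease

equipment_spend = ANNUAL_COLO_SPEND * EQUIPMENT_MULTIPLIER
savings_per_purchase = equipment_spend * SALES_TAX_RATE
print(f"Savings per equipment purchase: ${savings_per_purchase:,.0f}")   # $500,000
print(f"Savings across {REFRESH_CYCLES} refreshes:     "
      f"${savings_per_purchase * REFRESH_CYCLES:,.0f}")
```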

Finally, the section on technical considerations studies power density, energy efficiency, PUE and ASHRAE standards, DCIM (Data Center Infrastructure Management), and maintenance. Each of these technical elements can have a significant impact on the performance/cost of your wholesale data center, and ultimately on your business.

RagingWire is proud to support this important research and pleased to share it with the industry.

In the Data Center Colocation Industry, Top Markets Matter

You may have seen some articles recently talking about the rise of secondary markets in the data center colocation industry. Cities often mentioned on a list of secondary data center markets include: Cleveland, Jacksonville, Kansas City, Milwaukee, Minneapolis, Nashville, and Pittsburgh.

For those of us involved in the data center industry, whether as suppliers or buyers, it is important to consider secondary markets as part of our data center strategy. But, it is also important to remember that top markets are top markets for a reason. The top six US markets (Northern Virginia, New York / New Jersey, Bay Area / Silicon Valley, Dallas-Fort Worth, Chicago, Los Angeles) represent over 70% of US sales, and the US represents over 50% of the world’s data center sales.

In short, in the data center colocation industry – top markets matter.

Let’s take a look at a few key considerations regarding data center colocation markets.

The Fundamentals of the Colocation Market are Sound

The good news is that whether you are looking at top markets, secondary markets, or a combination of the two, the fundamentals of the data center colocation industry continue to be strong.

Businesses of all sizes are taking advantage of the “pay as you go” model offered by colocation, shifting their financials from up-front capital expenses to ongoing operational expenses. Enterprises looking to replace aging in-house data centers or support the growth of their business applications are increasingly looking to colocation.  Cloud-based companies, both hosting and software applications, need a place for their systems to live. These cloud-based companies typically do not want to design, build, and operate their own data centers.

Economies of Scale Add Up

The data center colocation industry is vast, growing, and commoditizing all at the same time. This combination of attributes tends to drive economies of scale which can be more prevalent in top markets. The colocation providers that win in these market conditions have access to low-cost capital and then spread the infrastructure costs across a diverse and growing customer base. Scale economies are particularly strong in the wholesale colocation markets where multi-megawatt deployments are the norm.

Be Wary of Growth Rates

Most of the analysis of secondary markets talks about growth rates. As always, be wary of comparing growth rates across bases of different sizes. For example, the most recent report from 451 Research on data center supply in secondary markets lists Nashville as having 109,500 square feet of operational data center square footage. If the entire Nashville data center market grew by 50%, it would still not be as large as one of RagingWire’s data centers in the top market of Ashburn, Virginia.

“Competitive Mass” Helps Everyone

We’ve all heard of critical mass being required to get a market growing. The same concept can be applied to competition. Both buyers and suppliers benefit from having multiple providers of similar data center colocation offerings in the same market. Buyers benefit from having multiple options to compare, and the assurance of getting a fair price. Suppliers benefit from having access to the talent, technology, and potential customers that the market attracts.

It can be difficult to find “competitive mass” in secondary markets. For example, according to 451 Research, the top three providers in a secondary market typically have 50-70% of the operational space and these market leaders vary from city to city. In addition, secondary markets tend to have lower utilization rates and absorption per year when compared to top markets, leading to reduced market efficiencies. According to 451 Research, top markets achieved utilization rates over 80% while secondary markets had an average utilization rate of 68% in 2014.

The Drivers for Secondary Markets: Regulations, Network Optimization, Economic Development

As we have seen, there are a number of forces driving the top data center colocation markets. What’s driving the secondary markets? Regulations can require that data generated within a geography stay in data centers within the geography. For example, some hospitals build on-site data centers as part of their HIPAA regulations. Network optimization might drive you to have a data center in a secondary market as part of your global footprint. Finally, economic development incentives might attract data center companies to build a facility in a secondary market.

Conclusion: Top Markets Matter

Top data center markets are critical as part of your data center portfolio. Top markets offer dense fiber and robust telecommunications, reliable and cost effective utility power, experienced data center staff, and an economic environment that enables a data center ecosystem to thrive.

We expect that top markets will continue to drive the data center colocation industry while secondary data center markets will develop as spokes to the top market hubs, not as a replacement.

