Data Center

Do you reach customers around the world?

You can’t get around it. Global reach is key to maximizing revenue.

But to reach customers anywhere and everywhere, companies need to reliably distribute applications around the world. For this strategy to work, solid interconnection is key.

So where do you find the perfect city to set up your data center for global reach? Go where the hyperscalers go. Those social media, cloud, and e-commerce giants need servers customized to meet the needs of millions – maybe even billions – of users. They depend on an architecture that expands and contracts, scaling up or down with flexible memory, storage, and networking capabilities.

By going to the same market that the hyperscalers are in, other companies — perhaps like yours — can utilize parts of those hyperscalers’ platforms. Plus, when you see that hyperscalers that depend so much on global reach have built their own data centers in a particular area, it’s a strong indication that other companies should feel secure setting up their own data centers there too.

For instance, let’s look at Dallas-Fort Worth. Facebook recently built a new data center there. Why? Well, what Facebook found out, you can find out too.

Dallas-Fort Worth offers a dense fiber network, low risk of natural disasters, reliable and affordable utility power on a standalone grid, a business-friendly environment, and a significant concentration of wholesale colocation and cloud providers. For these reasons (and more), Dallas is a prime destination for companies looking for a large data center footprint with turnkey or build-to-suit infrastructure.

RagingWire Dallas Data Center

Clearly, Dallas is a smart choice to set up a data center, whether your business calls the State of Texas home or somewhere else in the country. So, if it’s time to sunset your current data center, or you’re looking to expand into a new facility, take a good look at Dallas.

At our 42-acre Dallas data center campus, we have the flexibility to support hyperscale cloud deployments, large enterprise IT shops, emerging technologies such as artificial intelligence and machine learning, healthcare IT, and any other industry vertical.

Your global connectivity begins in Dallas. Click here to explore our other data centers in the United States. In addition, we connect you to 160 data centers around the world as part of the network of our parent company, NTT.

Our 16-megawatt Dallas TX1 Data Center is ready for you today. You may find the answers there to help your company meet strategic business objectives – all while addressing both your current and future IT demands. Contact us to take a tour and find out how you can benefit from this mission-critical facility.

Be Ready to Outsmart the Inevitable

We can’t predict earthquakes.

I bet a lot of people don’t know that. With all our modern technology, it stands to reason that we must have some kind of earthquake warning system, like we do for tsunamis. However, in an article titled “Can you predict earthquakes?”, the United States Geological Survey (USGS) clearly says otherwise: “Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the foreseeable future.”

Okay then. They also mention that there is a 100% chance of an earthquake somewhere in the world today, given that several million earthquakes occur annually. That’s pretty scary stuff.

So we can’t see earthquakes coming, and we certainly can’t stop them. But there is something we can do. Be ready for the inevitable.

This is especially true if you happen to do business in one of the more earthquake-prone areas of the world, such as Silicon Valley in California. There will definitely be an impact if a major earthquake hits that area. The question is: how much of an impact will your business feel? For companies that will house their mission-critical computer equipment in RagingWire’s new Silicon Valley SV1 Data Center, the answer is probably “no impact”.

Why are we so sure? Our company has outsmarted devastating earthquakes before.

In 2011, the Great East Japan Earthquake struck with a magnitude of 9.1, damaging more than one million buildings and causing property damage totaling $235 billion. That total made it the costliest natural disaster in history. However, the Tokyo data centers of NTT Communications (RagingWire’s parent company) withstood this earthquake with no damage. How?

NTT’s data centers utilize a base isolation system that incorporates four types of vibration-absorbing devices. This technology reduces seismic impact by up to 80%, which is more than enough cushioning to protect data vaults and the IT devices within them.

RagingWire’s four-story Silicon Valley SV1 Data Center will be the first facility in Santa Clara to use the same proven earthquake-absorbing technology that NTT does. Even better, SV1 will also be seismically braced on all floors to further dissipate any shaking.

This all adds up to give you peace of mind that your essential equipment will be safe when the next earthquake strikes – and it will. Check out the SV1 Data Center highlights video below and contact us to learn more.

RagingWire Silicon Valley SV1 Data Center, Santa Clara, CA

Why are Data Center Designs Changing Right Now?

Data center designs are changing significantly, and for good reason.

Hyperscale cloud companies and large enterprises are driving new design requirements that include less redundancy, more space, lower costs, and shorter construction times.

Thus, we are at a key inflection point in the history of data centers. The market is changing, and colocation data centers must respond with new ways to provide everything that customers need, and nothing they don’t.

Data Center Knowledge: New Data Center Designs for Hyperscale Cloud and the Enterprise

RagingWire is proud to collaborate with Data Center Knowledge to produce a webinar and a white paper, both titled “New Data Center Designs for Hyperscale Cloud and the Enterprise”, that explain how this new design will be executed, and how customers will benefit.  

Click here to listen to the webinar, in which RagingWire VP of Product Management Bruno Berti shares how data center customers are benefitting from this new design, and RagingWire VP of Critical Facilities Engineering and Design Bob Woolley explains how data centers are delivering it. Bruno and Bob are preceded on the webinar by technology journalist Scott Fulton, a 30-year industry veteran, who gives an enthusiastic and entertaining retrospective of how we got to this point in the evolution of data center design.

Click here to download the white paper, which shows how this new design addresses the facility, power, cooling, telecommunications and security requirements of hyperscale and enterprise applications, while also lowering costs and improving overall data center performance and availability.

How can we improve safety in data centers?

While participating in a recent roundtable discussion among data center industry leaders, I was shocked to hear of an estimate that at least 50% of data centers allow energized work. And it gets worse. Depending on how you define energized work, that figure could be even higher.

Data Center Safety

Simply put, data center technicians who work on energized equipment put themselves and others around them at risk. According to Industrial Safety and Hygiene News, between 500 and 700 people die every year from arc flash incidents. More than 2,000 people with arc flash injuries are treated annually in burn centers. The average cost of medical treatment for an arc flash injury is $1.5 million, with average litigation costs between $10 million and $15 million.

These arc flash accidents are absolutely devastating, and they are preventable. That’s why I believe that data center executives must step up and take a stand to prohibit working on energized equipment. The culture needs to change.

But how do we convince more data center operators to adopt a culture of compliance toward current safety rules? What are the steps forward to a safer workplace?

I delved into this topic in an article titled “It’s Time to Upgrade Data Center Safety” published recently on Data Center Frontier. Click here to give it a read.

How to Build Data Centers Faster, Better, and More Cost Effectively

There is no question that the data center market is booming. The research firm MarketsandMarkets projects that the data center colocation market will grow from $31.52 billion in 2017 to $62.30 billion by 2022.

Those numbers are impressive, but they’re only part of the story. Client expectations are changing as well. When evaluating data centers, clients now demand:

Speed – Savvy customers now expect construction cycles of six to nine months for new state-of-the-art data centers, as opposed to the 12- to 18-month cycles of only a few years ago.

Cost-Efficiency – Construction projects that cost $10 million per megawatt a few years ago are in the range of $7 million per megawatt today. Customers expect to see that downward cost trend continue.

Aesthetics – Exteriors must be attractive, and interiors must be comfortable – all while integrating mission critical infrastructure for power, cooling, telecommunications, and security.

RagingWire VA3 Data Center – How to Build Data Centers Faster, Better, and More Cost Effectively

With so much at stake, the mega-billion-dollar question is: How can construction managers stay ahead of the industry growth rates while exceeding the new expectations of clients?

Construction managers will need to:

  1. Firmly control the “Project Triangle”
  2. Effectively deploy and manage the supply team
  3. Creatively marry form and function

Let’s take a closer look at each of these objectives.

Firmly Control the “Project Triangle”

To complete a data center project in six to nine months, construction managers must control scope, budgets, and schedule, otherwise known as the three legs of the “Project Triangle.”

Scope must be managed with crystal clarity, ensuring alignment with your company’s business goals (markets, clients, scale), and must be well documented in the owner’s project requirements (OPR) that drive the data center design. Change management must be agile to adapt to innovation and changing conditions.

Budgets need to be aligned with scope and schedule to deliver on the project’s business case pro forma.

Schedules should be end-to-end, covering permitting, supply chain, design, construction, commissioning, and fit-out, and should use earned value management (EVM) to stay on plan for time, cost, and resources, as the sketch below illustrates.
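
To make the EVM idea concrete, here is a minimal sketch of the standard earned-value metrics. The dollar figures are invented for illustration; only the formulas are the standard EVM definitions.

```python
# Minimal EVM sketch. The project values below are hypothetical;
# only the formulas are the standard earned-value definitions.
planned_value = 40_000_000  # PV: budgeted cost of work scheduled to date ($)
earned_value = 36_000_000   # EV: budgeted cost of work actually completed ($)
actual_cost = 39_000_000    # AC: actual cost of the work completed ($)

schedule_variance = earned_value - planned_value  # negative = behind schedule
cost_variance = earned_value - actual_cost        # negative = over budget
spi = earned_value / planned_value                # schedule performance index
cpi = earned_value / actual_cost                  # cost performance index

print(f"SV: {schedule_variance:+,} USD  SPI: {spi:.2f}")
print(f"CV: {cost_variance:+,} USD  CPI: {cpi:.2f}")
```

An SPI or CPI below 1.0 flags a schedule or budget slip early enough to correct course before it threatens a six-to-nine-month delivery window.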

Effectively Deploy and Manage the Supply Team

To further accelerate the construction cycle and the time it takes to build data centers, top data center companies are turning the supply chain concept into a precisely organized “supply team” of program managers, infrastructure manufacturers, and construction partners.

The biggest differences between a supply chain and a supply team are how the work gets done and the nature of the work itself.

Supply teams leverage the expertise and capabilities of internal resources as well as resources of supplier partners. By identifying key skill sets, selecting the right team members, integrating closely with the business plan, and managing and measuring team performance, a successful supply team can be deployed and reconfigured with predictable timelines for project milestones.

The nature of the construction project is changing too. Years ago, critical infrastructure was largely assembled onsite. This process slowed the production schedule by consuming considerable manpower, non-concurrent time, and space. Today, industry leading data center companies partner with key suppliers to design infrastructure components which are then built at the factory and shipped to the construction site for installation. The result is better quality, lower costs, and faster delivery.

The effect of a well-managed supply team can be profound. For example, our newest data center in Ashburn, Virginia, which features 245,000 square feet of space and 16 megawatts of power, will complete its integrated systems test (IST) approximately six months from the start of precast construction.

Creatively Marry Form and Function

Building a world-class data center requires addressing local environmental and weather conditions. In one location the power utility might have unique requirements for transmission and delivery. In another location, the local government might have special zoning or aesthetic regulations. Depending on the region, data centers must be prepared for snow, ice, hurricanes, tropical storms, droughts and any other harsh elements.

For example, our Dallas TX1 Data Center was built to withstand an EF-3 tornado with winds of 136 mph. To address water quality and drought conditions, we installed one of the largest water-free cooling systems in the U.S.

In addition, new data centers must provide a work environment for technology professionals that sparks collaboration, creativity, and comfort, while adding beauty and character to the surrounding neighborhood. Our data centers include multi-function meeting spaces, lounges, exercise rooms, and architecturally significant exteriors so that tech professionals used to working for Bay Area or Silicon Valley companies will feel right at home.

Plan. Build. Improve. Repeat.

The best data center construction teams relentlessly focus on improvement. We can always do better, and we’re focused on learning ways to improve our time to market, cost-efficiency, functionality, and flexibility for future projects.

Internally, my team’s mantra is “Credible, Capable, Best-in-Class.” We strive to do what we say, expand our knowledge and skills, and be the best at what we do. We maintain an intense focus on benchmarking all elements of project performance and metrics. This benchmarking allows us to target and track continuous improvement in cost, schedule, manpower levels, safety, and quality.

Why Interconnection Matters to Wholesale Colocation

Providing large-scale, secure space and power has been the focus of wholesale data center providers for many years. Until recently, innovation in wholesale data center solutions has centered on developing data center designs that improve power resiliency and building efficiency. The result is that wholesale customers today receive data centers that are more flexible, scalable, and cost-effective than ever before.

Data Center Interconnection - RagingWire Data Centers

The cloud and the surge in wholesale data center demand are changing the industry, but not in the way that many expected. Interconnection between wholesale data centers and public cloud environments is becoming a key decision criterion rather than an afterthought or “nice-to-have”. Interconnection has become a major component of the wholesale data center colocation strategy.

The Hybrid Cloud Changes Everything.

The demand for interconnection is being driven by changing market dynamics, specifically in hybrid cloud. Over the past five years, enterprise organizations have been successfully adopting the public cloud as a complement to, rather than a replacement for, their enterprise or colocation environments. The term “hybrid cloud” came from enterprises’ desire to utilize a combination of clouds, in-house data centers, and external colocation facilities to support their IT environments.

The challenge with having a hybrid environment arises from the need for the data centers and the cloud providers to interconnect and share data securely and with low network latency as part of one extended environment. Within a data center, servers and storage can be directly connected. A hybrid cloud environment doesn’t have the luxury of short distances, so bring on the innovation.

From Carriers to Interconnection

The first interconnection solutions were provided by telecommunications service providers. Dark fiber, lit fiber, and Internet access were all leveraged to interconnect hybrid cloud environments. However, as cloud deployments grew and matured, these network solutions became difficult to secure and manage.

Next, data center providers began to offer “direct” connections within the data center to bring cloud and colocation environments into one location, allowing interconnections to be provided through fiber cross-connects. This approach, however, restricts where companies can place their colocation environments and limits the types of cloud environments to which they can connect.

The newest solutions are being introduced by interconnection platform providers, which leverage the concepts of network exchange points and software-defined networking (SDN) to provide a flexible solution that can be self-managed. These innovative solutions solve many key network challenges introduced by hybrid clouds.

The Keys to Interconnection – Dedicated, secured, and many-to-many

Beyond the simple one-to-one interconnection of a cloud environment to a wholesale colocation environment, an interconnection platform is designed to allow many-to-many connections in a dedicated and secured fashion. With an interconnection platform, wholesale colocation environments can be connected to multiple cloud providers (multi-cloud) and multiple cloud locations (availability zones). This design opens the door to unique options for companies to architect their IT environments to optimize resiliency and availability while minimizing cost and complexity. The SDN aspect of the interconnection platform allows customers to manage their connections in real time without needing involvement from the provider. They can turn up, turn down, and change individual connections at any time, usually through a simple web-based user interface, as the sketch below suggests.
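
To make the self-service model concrete, here is a purely illustrative sketch of what turning a connection up and down might look like through such a platform’s API. The endpoint, field names, and port identifiers are all hypothetical, not any specific provider’s interface.

```python
# Hypothetical interconnection-platform API sketch; the endpoint and
# field names are illustrative, not a real provider's interface.
import requests

API = "https://api.interconnect.example/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <your-token>"}

# Turn up a dedicated 1 Gbps virtual connection from a colocation
# port to a cloud on-ramp, with no ticket to the provider required.
resp = requests.post(f"{API}/connections", headers=HEADERS, json={
    "a_side_port": "colo-port-42",      # port in your colocation cage
    "z_side": "cloud-onramp-us-west",   # target cloud availability zone
    "bandwidth_mbps": 1000,
})
resp.raise_for_status()
conn_id = resp.json()["id"]

# Resize in real time, or delete to turn the connection down.
requests.patch(f"{API}/connections/{conn_id}", headers=HEADERS,
               json={"bandwidth_mbps": 500})
requests.delete(f"{API}/connections/{conn_id}", headers=HEADERS)
```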

Interconnection Goes Beyond the Cloud

The first generation of interconnection solutions focused on delivering dedicated private connections to the top cloud providers. As interconnection and cloud environments evolved, it became easier to move and share workloads across clouds and data centers. Web portals allowed users to configure, manage, and troubleshoot their cloud connections.

Today a next generation of interconnection is rolling out in wholesale data centers that extends the connectivity platform beyond the cloud, giving data center customers more options for interconnection partners. The first of these partners: SaaS providers. New interconnection platforms allow enterprises to directly connect to applications such as web conferencing, help desk, customer service, and human resources. Enterprise customers receive a dedicated and secure connection to the application that is easier to manage and integrate. SaaS providers, in turn, now have a “direct connection” offering for their software that improves the availability and performance of their application service.

The second new category of interconnection partners is other enterprises. Interconnect platforms now cover the globe to connect wholesale data centers in dozens of countries and through hundreds of points-of-presence (PoPs). Any enterprise connected to the interconnect platform becomes a potential interconnection partner. For example, your company may partner with a data provider or analytics service to deliver a solution to your customers. The interconnect platform makes it easy to connect your hybrid cloud environment to your partner’s hybrid cloud environment. You can even use the interconnect portal to invite new partners to join the ecosystem.

What Next? A Hybrid of Hybrids.

It’s clear that the hybrid computing model combining data centers and clouds with a global, seamless, and secured network is the direction that corporate IT is heading. To support hybrid computing, wholesale data centers have evolved beyond space, power, telecommunications, and security. Wholesale data centers have become a critical infrastructure platform for both cloud providers and enterprises. Interconnection now becomes a core element in wholesale data center solutions, bringing together clouds and enterprises into a flexible and scalable hybrid of hybrids.

Data Center Knowledge: Hyperscale Cloud Case Study (webinar and white paper)

The cloud changes everything – the computers we buy (or don’t buy), the way we write applications, how we collect and store data, and the design and location of our data centers.

Selecting the Right West Coast Data Center Solution

RagingWire is home to many top cloud providers. We are working with them to turn their requirements for space, power, cooling, telecommunications, and security into data center designs. You can see these designs deployed across our data center portfolio, including our CA3 Data Center in Sacramento, our TX1 Data Center in Dallas, and our VA3 Data Center in Ashburn, Virginia.

To help us better understand the impact of cloud computing on data centers, we hired Bill Kleyman, Featured Cloud and Data Center Analyst at Data Center Knowledge, one of the largest industry websites, to study how cloud providers and Fortune 1000 enterprises are optimizing their data centers worldwide, and to examine the unique data center requirements of Northern California, one of the top data center markets in the world.

Based on this research, Bill wrote the white paper “Hyperscale Cloud Case Study: Selecting the Right West Coast Data Center Solution” and produced a webinar on the subject, both featuring Groupon, a global leader in local and online commerce.

Click here to download the white paper and watch the webinar.

Here are some of the key findings from the white paper and webinar:

  • Cloud applications require data centers in key internet hub locations in order to manage network latency
  • Having a data center near Silicon Valley and the Bay Area is preferred, but it is best to be outside the earthquake zone in order to reduce risk and lower costs
  • Data center scalability and flexibility are critical to support ongoing cloud capacity
  • Rigid IT architectures are being replaced with hybrids
  • As applications scale, the flexibility of the cloud can be outweighed by escalating costs
  • Multi-megawatt, large footprint deployments are driving the need for wholesale data center colocation
  • Carrier neutrality and direct cloud connectivity are required, improving reliability and performance and reducing costs
  • Using a wholesale colocation provider provides significantly faster time to delivery than deploying a traditional powered shell

VIDEO: 451 Research on the Dallas Data Center Market

With over 100 analysts worldwide, 451 Research is one of the top industry analyst firms covering the competitive dynamics of innovation in technology and digital infrastructure, from edge to core.

We were honored that Kelly Morgan, Research Vice President, and Stefanie Williams, Associate Analyst, both from 451 Research, attended the grand opening of our Dallas TX1 Data Center on April 18, 2017.

Kelly’s team tracks hosting, managed services, and multi-tenant data center providers worldwide. They study providers, do market sizing, analyze supply and demand, and provide insights into the dynamics of the industry. In addition, 451 maintains two critical strategic tools: the Datacenter Knowledgebase, an authoritative database with more than 100 data points on 4,500 global data centers, and the M&A Knowledgebase of 50,000 tech transactions.

In short, 451 Research knows data centers!

After the grand opening celebration, we invited Kelly to spend a day with us to tour our TX1 Data Center and talk with our President and CEO, Doug Adams. This video shares highlights of the tour and conversation as well as Kelly’s insights into the booming Dallas data center market.

According to Kelly, Dallas is the third-largest data center market in the U.S., with 100 leasable data centers measuring 3,000,000 square feet and 300 megawatts – and growing fast!

RagingWire’s Dallas Data Center Campus sits on 42 acres of land and will ultimately have five interconnected buildings totaling 1,000,000 square feet with 80 megawatts of power. Phase 1 of the campus, known as the TX1 Data Center, has 230,000 square feet of space and 16 megawatts of power.

TX1 was designed for scalability, flexibility, and efficiency, ideal for cloud providers and Fortune 1000 enterprises. Vaults from 1 MW to 5 MW are available, as well as private suites and cages, with options for dedicated infrastructure and build-to-suit solutions. TX1 features a highly efficient, waterless cooling system that leverages available outside cool air and does not stress local water supplies. The campus has fiber connectivity to the carrier hotels, providing access to 70 telecommunications providers and direct connectivity to the major cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Would You Drive 90 Miles to Save $1 Million Per Year on Your Data Center?

One of the top data center markets in the world is Northern California, including Silicon Valley and the Bay Area.

According to the most recent report from Data Center Frontier and datacenterHawk, the Silicon Valley area is home to nearly 2.6 million square feet of commissioned data center space, representing 343 megawatts of commissioned power. That makes Silicon Valley the second-largest market for data center space in the U.S., trailing only Northern Virginia.

The problem is that the costs for power, people, and real estate in Northern California are some of the highest in the United States. Plus, data center supply in Northern California can be constrained, and there is the overhanging risk of earthquakes.

What if you could have the benefits of having your data center in Northern California with a lower price point, reduced earthquake risk, and available supply?

According to our recent analysis, you could save nearly $8 million over a 7-year term by having your data center in Sacramento instead of San Francisco. The savings range from about $1 million to $7 million when compared to Phoenix, Reno, and Las Vegas.

So the question is, “Would you be willing to drive the 90 miles from Silicon Valley to Sacramento to save $1 million a year on your data center?”

Data Centers in Northern California

Base Rent – land and construction costs in Silicon Valley are high.

We all know that the cost of land and construction in Silicon Valley is high. Data from the National Association of Realtors, published in August 2016, showed that the median price for a home in the region around San Jose, California was over $1 million — a first for any metro area in the country.

The same factors that make your Silicon Valley home expensive are true for your Silicon Valley data center. Supply of land is scarce. Plus, the expertise to build and operate a data center in Silicon Valley is often hard to find, making these human resources expensive as well.

Power – the largest single line item in your data center Total Cost of Ownership (TCO).

For a large-footprint, hyperscale cloud or enterprise data center deployment, it’s not out of line to spend $2 on power for every $1 you spend on base rent.

The mistake in many data center TCO models is that the cost of power is viewed as a sunk cost, not a variable cost – a value to be plugged in, not managed. The good news is that data center operators tend to negotiate better power rates than you could get yourself due to quantity discounts. In addition, your overall power consumption in a new state-of-the-art colocation facility will probably be lower than in your own data center through the use of more efficient cooling technologies and automation systems.

The even better news is that wholesale data center power pricing through the Sacramento Municipal Utility District (SMUD) is the lowest in the state of California. For example, data center power pricing in San Francisco is about 12 cents per kilowatt-hour. In Sacramento it’s 6.9 cents – almost half the price. For a typical 1 megawatt deployment, the savings in power is about $648,000 per year, for a total of nearly $5 million over seven years!
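
That figure is easy to sanity-check. Here is a rough back-of-the-envelope sketch; the PUE value is our assumption, chosen because applying the rate difference to total facility power (not just IT load) at that overhead roughly reproduces the $648,000 figure.

```python
# Back-of-the-envelope check of the savings quoted above. The PUE
# (facility overhead multiplier) is an assumed value, chosen because
# it roughly reproduces the article's $648,000-per-year figure.
it_load_kw = 1000        # 1 MW IT deployment
hours_per_year = 8760
pue = 1.45               # assumption: total facility power / IT power
rate_sf = 0.120          # $/kWh in San Francisco (from the article)
rate_smud = 0.069        # $/kWh in Sacramento (from the article)

facility_kwh = it_load_kw * hours_per_year * pue
annual_savings = facility_kwh * (rate_sf - rate_smud)
print(f"Annual savings: ${annual_savings:,.0f}")          # about $648,000
print(f"Seven-year savings: ${annual_savings * 7:,.0f}")  # roughly the
# "nearly $5 million" quoted above
```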

Planes, trains, and automobiles – which do you prefer?

How far away from your data center do you want to be, and how do you want to get there? Sacramento is about 90 miles from the Bay Area. Reno is 220 miles, Las Vegas is 570 miles, and Phoenix is 750 miles. Would you rather drive or fly? Driving is probably the most flexible and cost-effective option. A flight might take less effort than driving, but you need to make time for getting to the airport, parking, checking in, security, boarding, etc. Plus you will need a hotel and transportation when you land, and a return flight. Airports can also be more susceptible to weather delays. In an earthquake emergency, airports are often closed.

Networks and the speed of light.

We’re living in the most connected era in history. But even with all the fiber in the ground, network performance is still bounded by the speed of light. Network latency can be a critical variable in the end-user application experience. No one wants to be looking at the hourglass. Roundtrip network latency between Sacramento and the Bay Area is 3 milliseconds (ms), versus 15.3 ms for Las Vegas and 18.1 ms for Phoenix. These numbers make a big difference in application performance.
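
The physics behind those numbers is straightforward. As a rough sketch, light in optical fiber travels at about two-thirds the speed it does in a vacuum, or roughly 200 km per millisecond; the figures below are physical lower bounds, and the quoted real-world latencies are higher because fiber routes run longer than driving distance and network equipment adds delay.

```python
# Lower-bound round-trip propagation delay from driving distance.
# Assumes ~200 km/ms for light in fiber; real routes are longer and
# add equipment delay, so measured latency exceeds this floor.
FIBER_KM_PER_MS = 200.0

def round_trip_floor_ms(miles: float) -> float:
    km = miles * 1.609
    return 2 * km / FIBER_KM_PER_MS

for city, miles in [("Sacramento", 90), ("Reno", 220),
                    ("Las Vegas", 570), ("Phoenix", 750)]:
    print(f"{city}: at least {round_trip_floor_ms(miles):.1f} ms round trip")
```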

Environmental risk – earthquakes and severe weather.

The discussion around environmental risk and data centers in Silicon Valley or the Bay Area usually focuses on earthquakes. According to the U.S. Geological Survey, the Bay Area has the highest density of active faults per square mile of any urban center in the country. There is a 72% chance of a magnitude 6.7 or greater earthquake occurring over the next 30 years on one of these Bay Area faults (6.7 is the same size as the 1994 Northridge earthquake, which caused 57 deaths and property damage estimated at $13-40 billion). The percentage shoots up to 89% for a magnitude 6 or greater quake.

The good news is that once you get outside the Bay Area, the risk of earthquakes drops dramatically. Sacramento, for example, sits well away from the fault zone that runs through the Bay Area and is rated as a “very low risk”. However, not all data center locations outside the Bay Area have a low earthquake risk. For example, even though Reno is 218 miles away from the Bay Area, it has an earthquake risk similar to the Bay Area’s.

Regarding severe weather, the desert locations need to deal with extreme temperatures and drought conditions. From 1981 to 2015, Las Vegas averaged 75 days per year of 100+ degree temperatures, while Phoenix averaged 110. Sacramento averages just 11 days per year of 100+ degree temperatures, with half of those days in July.

Sacramento may experience heavy “El Niño” rains and excessive snowmelt from the Sierra Nevada Mountains, which can cause the rivers to overflow. Fortunately, Sacramento has spent billions of dollars over the last 20 years on a sophisticated system of levees and spillways, and has another $2.4 billion of flood-control projects in development. Record snowfall of 471 inches in Lake Tahoe from January to March 2017 was a good test for these flood-control measures, and all of the Sacramento data centers stayed safe.

Run the numbers yourself

Northern California continues to stand out as a “must have” location as part of a global data center deployment. Sacramento has established itself as a primary spot for data centers in Northern California, offering available supply, lower costs, excellent telecommunications, and room to grow. Click here to use a total cost of ownership (TCO) calculator where you can run the numbers yourself. The business case is compelling.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers. Colocation data centers have 20% of enterprise applications, and cloud providers have 9%.

Webinar: Is Data Center Colocation the Right Approach for the Enterprise?

What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers will get a double boost from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

Kelly Morgan, Vice President Services at 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of the NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs. He has over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?
