Blogs

Failing Up: Stronger Data Centers through Incident Management

In the critical facilities industry, incidents are typically given a bad rap. Executives and operators view incidents – events that affect the redundancy of the data center – as bad business. So winning an award for managing incidents would seem like being recognized for your ability to bail water rather than build a sound boat. But to the right company, incidents aren’t a measure of failure; they’re challenges that improve your business process. The upside to incidents is the ability to learn from them and, more importantly, the opportunity to share those lessons with others, both internally and throughout our industry.

Bob Wichert and TJ Ciccone from RagingWire receiving the 2014 Uptime Institute Incident Management Award

At the Uptime Institute’s Critical Facilities Summit in Charlotte, NC on October 5, 2015, RagingWire Data Centers received the 2014 Uptime Incident Management Award. This award was presented in recognition of achievement in tracking and responding to – not avoiding – incidents in data center infrastructure (as determined by incident contributions to the Uptime Institute Network Abnormal Incident Report (AIR) Database).

In simplest terms, this does not mean that RagingWire experienced the most incidents. It means that, as a company, we successfully capitalized on them, helping to spread knowledge to other members of the organization. So much so, in fact, that we submitted more than three times as many lessons learned as our nearest competitor.

What does it take to win this award? It takes an operational commitment to sharing data regarding incidents at your facility and implementing changes to prevent them in the future. It is a humbling, but rewarding, task. By being active participants in the AIR database, we have been able to collectively gather statistical data that has helped shape our data center world today.

When you need to build a case for 24/7 staffing, you can access the database and track the percentage of incidents that occur during non-peak hours. If you think your site is incurring an abnormal number of faults on a piece of equipment, or a low Mean Time Between Failures (MTBF), you can turn to the database and search for others who may be experiencing the same issue. Wondering if a new type of cooling solution would be a good fit? The shared data can help you make a more informed decision.
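As an illustration of that kind of analysis, here is a minimal Python sketch. It uses an invented incident log rather than the actual AIR database schema, and simply computes the share of incidents falling outside staffed hours along with a basic MTBF figure.

    from datetime import datetime

    # Hypothetical incident log for one piece of equipment (not the AIR schema).
    incidents = [
        datetime(2015, 1, 4, 2, 15),    # 2:15 a.m.
        datetime(2015, 3, 19, 14, 40),  # 2:40 p.m.
        datetime(2015, 6, 2, 23, 5),    # 11:05 p.m.
        datetime(2015, 9, 27, 3, 50),   # 3:50 a.m.
    ]

    # Share of incidents outside assumed staffed "peak" hours (8 a.m. to 6 p.m.).
    off_peak = [t for t in incidents if t.hour < 8 or t.hour >= 18]
    print(f"Off-peak incidents: {len(off_peak) / len(incidents):.0%}")

    # Simple MTBF: observed operating hours divided by the number of failures.
    observation_hours = 365 * 24  # one year of operation
    print(f"MTBF: {observation_hours / len(incidents):.0f} hours")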

We all have incidents; let’s just admit that together. They are an unavoidable side effect of what we do, and certainly a smart data center strives not to make the same mistake twice. But what defines your business is not the ability to never have an incident – which would require some tricky bookkeeping and diligent rug-sweeping – but the ability to learn from them and come out stronger as a company, and ultimately as an industry.

A Hard Thump Followed by Shaking

This morning, a 4.0-magnitude earthquake along the Hayward Fault shook the San Francisco Bay Area a little before 7 a.m. Pacific time. The United States Geological Survey (USGS) reported that the temblor struck less than a mile north of the town of Piedmont, Calif., near Oakland.

A USGS report earlier this year warned that the risk of 'the big one' hitting California has increased dramatically. According to the Los Angeles Times, the quake was felt most strongly in the East Bay, including Oakland, Berkeley, and the surrounding areas. There were no immediate reports of widespread damage.

Last August, the Bay Area was hit by a severe 6.1-magnitude earthquake some six miles southwest of Napa, reported to be the largest earthquake to strike the area in 25 years.

Silicon Valley, Bay Area Earthquake and Data Centers

Natural disasters like earthquakes, hurricanes, and tornadoes can devastate a community, including its homes and enterprises. According to the USGS, California experiences many earthquakes each month. Although most can’t be felt, the fact that they occur at all is a reason to look for a safer place to house your mission-critical computing infrastructure.

As you probably know, the San Francisco Bay Area is a leading hub for high-tech innovation and development, accounting for one-third of all venture capital investment in the United States. But what keeps most Bay Area high-tech leaders up at night is the safety and reliability of their IT infrastructure – the threat of losing connectivity and accessibility to their critical IT systems due to natural disasters like an earthquake.

RagingWire offers highly available, reliable, and disaster-safe data center colocation services in Northern California. Just 90 miles northeast of the San Francisco Bay Area, our Sacramento data center campus, "The ROCK," is far from the region’s earthquake risk zones. That’s why many Silicon Valley and Bay Area Internet and enterprise companies house their computers in this low-risk location, which lies outside the regional earthquake boundaries yet is within driving distance of their offices.

So the easiest and most economical option is to select a reliable data center that is within driving distance and located in a low-risk zone.

View this on-demand webinar, "Data Centers and the Bay Area: Should I Stay or Should I Go?", and listen to a panel of industry experts who discuss:

  • The latest market research on Bay Area data centers
  • Mitigating operational risks: earthquakes, power costs, network latency
  • Strategies for selecting and designing a robust data center platform

Are traditional power protection metrics good enough?

On a rainy Wednesday morning, at the stunning GE building in Washington, DC, I was treated to an in-depth panel discussion on the validity of traditional power protection metrics in the data center. The discussion centered on the total cost of ownership (TCO) of data center equipment. In a classic battle of old versus new, the group was tasked with deciding whether the metrics and standards used today to measure TCO are still valid, given the significant changes in data center topology over the past decade. The five-member panel consisted of some of the heaviest hitters in the regional data center industry.

The panel discussed six specific issues:

  1. A comparison of current versus best practice ownership models
  2. How do you design ‘right size’ data center power protection for the right application?
  3. What exactly needs to be included in life cycle costs, and how do you quantify it?
  4. How does service response time affect TCO?
  5. How can initial design and specification decisions affect downstream costs?
  6. Do alternative financing approaches change TCO metrics?

With each of these noteworthy enough to warrant its own panel discussion, one can only imagine the wealth of information uncovered in the three-hour session. The most revealing part of the discussion was how data gathering has changed the TCO outlook. Obsolete methods put faith in tribal knowledge and manufacturer claims instead of what is actually seen in the field. The data center world is evolving away from asking, "How did we do it before?" toward "How can we do it better in the future?" In my current role, I was extremely interested in how what we do at RagingWire Data Centers fits into the latest trends in the industry. As it turns out, with our N-Matrix DCIM system, we are at the leading edge of what the industry has to offer regarding total cost of ownership.

RagingWire N-Matrix DCIM

Many players in the data center world rely on outdated methods or third-party software to produce data they should be producing in-house. Producing it yourself lets you draw on cross-departmental collaboration to choose the equipment that best suits your facility’s needs. The lesson: by bringing this analysis in-house, you can minimize downtime, reduce total capital expenditures, and provide the best value for clients, present and future.
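To make the life cycle cost discussion concrete, here is a rough Python sketch of the arithmetic behind a lifecycle TCO comparison. The cost categories and numbers are purely hypothetical, not RagingWire’s actual model or N-Matrix output; they simply show how the cheaper purchase can end up costing more over its service life.

    def lifecycle_tco(purchase_price, annual_energy, annual_maintenance,
                      downtime_hours_per_year, downtime_cost_per_hour, years=10):
        """Simple lifecycle TCO: capital cost plus recurring costs over the service life."""
        recurring = years * (annual_energy + annual_maintenance)
        downtime = years * downtime_hours_per_year * downtime_cost_per_hour
        return purchase_price + recurring + downtime

    # Hypothetical comparison of two UPS options over a 10-year life.
    option_a = lifecycle_tco(250_000, 18_000, 6_000, 1.0, 50_000)
    option_b = lifecycle_tco(300_000, 12_000, 5_000, 0.5, 50_000)
    print(f"Option A: ${option_a:,.0f}   Option B: ${option_b:,.0f}")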

Demystifying the Data Center RFP

If you are shopping around for a data center now or in the near future, I am sure that you either have a request for proposal (RFP) document available or one being put together. Several RFP templates and suggestions have been published over the years, and companies put lots of time and effort into developing and reviewing these proposals.

Data center RFP documents come in all different sizes, shapes, and yes, even weights. A typical RFP includes questions about the provider’s corporate information, data center facility specifications, availability, service level agreements (SLAs), electrical and cooling specifications, network and connectivity options, professional and support services, and of course contract and pricing.

As a smart data center buyer, you should properly define your business goals and lay out exactly what your company is looking for in terms of your computing requirements. Be clear and concise and ask the right questions to make sure your specific and unique needs are covered for today and the future.

To develop your request for proposal (RFP) core questions, start by involving everyone within your organization, as well as the partners and customers who will be impacted by the data center. Also, make sure your questions are fully reviewed and organized before they make it into the final document. Rephrase questions as necessary so that they are meaningful, understandable, and yet specific. Don’t be vague, and include any supporting documents or helpful details for the data center providers.

The RFP is a vital part of your data center selection process, and asking the right questions is absolutely critical. Click here to watch an on-demand webinar on “5 RFP Questions Buyers Must Ask a Data Center Provider”. The key takeaways from this webinar can be applied to any data center RFP, whether you are looking for a few kilowatts or several megawatts of IT load. You will also be able to download an RFP question guide with additional questions that can help with your RFP and data center selection process.

Data Center RFP Questions. Watch the webinar.

The request for proposal process should be the start of a great conversation with your data center provider and asking the right questions will help you reach a smarter decision faster. Good Luck!

To Share, or Not to Share? The infrastructure dilemma of a wholesale data center customer

Enterprise customers searching for a data center to host 200 kW or more of critical infrastructure have a wide range of wholesale colocation providers to choose from. Besides deciding on the physical location to house their infrastructure, these customers have important questions to ask a colocation provider about redundancy, power billing options, network connectivity, high-density availability, scalability, and services such as DCIM or remote hands. One of the biggest challenges many of these enterprise customers face is deciding between the infrastructure delivery options available in the industry.

Most colocation providers follow one of two delivery models for providing infrastructure to wholesale customers: shared or dedicated. The traditional wholesale colocation design is based on dedicated infrastructure, where the customer is allocated a fixed block of infrastructure that may be isolated from other customers. Dedicated infrastructure can be difficult and costly to scale beyond the initial allocation and usually comes with lower availability due to the small number of fault domains.

In a shared infrastructure colocation design, the customer is allocated a portion of the facility’s total infrastructure. Often, these shared elements are oversubscribed, relying on multiple customers not to hit their full allocations at the same time. Because of this oversubscription of power, shared facilities can be less expensive, but also more risky.
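A simplified sketch shows why that matters. The capacity figures below are invented for illustration, not drawn from any particular facility; the point is that once the sum of customer allocations exceeds deliverable capacity, availability depends on customers not peaking at the same time.

    # Hypothetical shared-infrastructure example: five wholesale customers,
    # each allocated 400 kW, behind 1,600 kW of deliverable UPS capacity.
    allocations_kw = [400, 400, 400, 400, 400]
    deliverable_kw = 1600

    oversubscription = sum(allocations_kw) / deliverable_kw
    print(f"Oversubscription ratio: {oversubscription:.2f}x")  # 1.25x

    # The facility holds only while simultaneous demand stays under capacity.
    simultaneous_peak_kw = [380, 390, 360, 400, 150]
    headroom_kw = deliverable_kw - sum(simultaneous_peak_kw)
    print(f"Headroom at simultaneous peak: {headroom_kw} kW")  # -80 kW: at risk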

So, which infrastructure delivery model is the best fit for a wholesale customer? Is there a third option?

Data Center - Shared vs. Dedicated Infrastructure

This white paper presents RagingWire’s distributed redundancy model, an enhancement of the shared and dedicated infrastructure models. The load is distributed at the UPS and generator level across the facility, using a patented 2N+2 electrical design. With this scalable system, RagingWire does not oversubscribe its infrastructure, so customers are not at risk from the load or actions of other customers. The model also provides the highest level of provable availability in the industry and allows for a robust SLA for wholesale colocation: 100% availability with no exclusions for maintenance. The authors also compare the benefits and pitfalls of the three power delivery models and offer practical advice to businesses looking for wholesale colocation. Click here to download the white paper.

White Paper: "Cloud + Data Centers: An IT Platform for Internet Applications and Businesses"

We are fortunate to be living in an era of incredible technological innovation. Over the last four decades, we have seen amazing advances in computing, storage, networking, and, most recently, data.

With every innovation, our nature is to view the new technology in light of what is currently available and understood. This “either/or” mindset can limit the value we derive from both the current and the next generation of technology.

History teaches us that one technology platform rarely replaces another completely. Instead, technologies build on each other like intertwined gears that combine to generate more power than they could individually.

For example, some IT experts predicted the death of the mainframe in light of personal computers. In fact, today the mainframe is a vibrant computing platform with demand being driven by the new computing systems. Mainframes, personal computers and a host of other devices work together in a computing fabric that delivers massive processing power.

The same has been said of the data center. On October 22, 2009, Jim Cramer, the host of CNBC’s Mad Money, now famously said, “Get out of the data-center stocks. I see an industry that’s about to be brought low by new technology, so I think you should sell, sell, sell.”

That "new technology" was the cloud, which not only did not kill data centers, but gave data centers a big growth shot in the arm. Consider RagingWire, a pure-play data center colocation company. Some of our biggest and fastest growing customers are in the cloud space. They count on us for 100% availability so their businesses are always running, and world-class customer service so they can focus their resources on their business not on their data center.

Much has been written about the cloud and data centers from a technical and a financial perspective. We decided to research the topic from a new view – the application developer.

Application developers are driving this next wave of technological innovation. They are taking the IT platforms that are available and writing powerful applications that make our lives better… or just more fun. In many cases, these applications have some genesis in the public cloud and then integrate private clouds and data centers.

We call this technology platform "Cloud+" and we partnered with Gigaom Research to write a paper that explores how and when this platform can be leveraged by the application developer.

We are proud of this paper and offer it to you free of charge so that you can include this Cloud+ approach in your application development and deployment strategy.

And The Award Goes To...

It was a cold and rainy evening in New York City, but Gotham Hall, an old Art Deco building on Broadway, was buzzing with dressed-up data center industry professionals. The 2014 Datacenter Dynamics North American awards ceremony was taking place at this remarkable and dramatic venue.

The annual Datacenter Dynamics Awards recognize and celebrate individual and team project excellence at a technical and business level across the data center world. With a heavy focus on innovative practices, awards are handed out in 11 categories from more than 200 submissions. RagingWire submitted entries in two categories this year, “Innovation in IT Optimization” and “Datacenter Special Assignment Team of the Year.” Both entries were selected as finalists by a distinguished panel of judges for the prestigious Datacenter Dynamics Awards, which are also known as the ‘Oscars’ of the datacenter industry.

The swanky gala started out in a large rotunda with a networking reception, followed by the dinner and award ceremony. We waited nervously at our table as the ceremony began and the winners were announced. The award for the ‘Innovation in IT Optimization’ category came first, and we did not win it. The next award was for the ‘Datacenter Special Assignment Team of the Year’ category. The competition was fierce: we were up against Digital Realty Trust and Sabey Data Centers, both of which had already won awards that evening. By that point, we had psyched ourselves up to lose that category as well, just so we wouldn’t be disappointed.

The announcement was made for this category and the nominees were named. After a short pause we heard, “And the award goes to... RagingWire Data Centers”. What a great feeling it was. We all stood up, high-fived each other, shouted with joy and walked up to the stage to receive our ‘Oscar’ of the night! RagingWire’s in-house commissioning taskforce had won the 2014 award for ‘Datacenter Special Assignment Team of the Year’ category. 

Datacenter Dynamics North American Awards 2014

RagingWire’s in-house commissioning taskforce

Commissioning (Cx) is a critical step in the design and construction of any new or existing data center facility, system, or addition. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) asserts that the focus of commissioning is “verifying and documenting that the facility and all of its systems and assemblies are planned, designed, installed, tested, operated and maintained to meet the needs of the owner.”

Most data centers outsource their commissioning process. At RagingWire, we have an in-house commissioning process. Here are some of its advantages:

  • Better communications and process flow between construction and operations
  • Reduced life cycle costs and increased ROI
  • Continuous quality control and process improvements
  • Increased customer support through the sales cycle and occupancy

RagingWire’s in-house commissioning task force was established to improve the quality and speed of the critical commissioning process as we rapidly expand our existing Ashburn, Virginia and Sacramento, California campuses and plan our entry into new markets in the United States. The task force developed a central process and comprehensive method of procedure (MOP) documentation to facilitate communication across functional areas, learn from past experiences to continually improve processes, apply industry best practices to future efforts, and formalize a gold standard for in-house commissioning. The task force has significantly improved the commissioning workflow, contributing to our 100% availability and superior customer experience.

To learn more about the commissioning process, please click here to watch a video, which was also put together by our in-house commissioning task force for our sales team training.

2014 is the second year for the Datacenter Dynamics North American Awards. RagingWire is proud to have been named a finalist in both 2013 and 2014!

Earthquakes and Bay Area Data Centers: It’s Not If, but When

It’s been a long time since we’ve had a severe earthquake in the Bay Area, but today a 6.1-magnitude earthquake struck six miles southwest of Napa. If you’ve never experienced an earthquake, trust me, 6.1 is a big one and scary! We live in Napa and our whole house was shaking at 3:20 a.m.!

As I help friends and family clean up today, I had a few thoughts to share with you. On a personal level, I’m thankful everyone is safe and accounted for. This earthquake had the potential to be much worse. Because the quake hit early in the morning, most people were home and asleep. Fortunately, the older buildings that were damaged were mostly unoccupied. All that we lost was stuff, and in the end, stuff doesn’t matter that much.

Bay Area Data Centers and Earthquake Risks

From a work perspective, it was a good reminder of why RagingWire considers natural disaster risk a primary selection criterion when building our data centers. We call our Sacramento data center campus "The ROCK" for a reason: it’s built on bedrock and is far from the earthquake risk zone of Northern California. Even though we’re within driving distance of San Francisco (90 miles) and San Jose (120 miles), we are a world apart when it comes to natural disaster risk.

The last major earthquake in the Bay Area was the Loma Prieta quake in 1989, a magnitude 6.9 shaker that caused part of the Bay Bridge to collapse and interrupted the World Series. Back then, like today, Sacramento was unaffected, because Sacramento sits far from the region’s active faults and has essentially no earthquake risk.

In the 25 years since Loma Prieta, there have been many data centers built in the Bay Area. Memories are short, especially for IT people who weren’t here at the time. The Bay Area is a great place to live and work, but it isn’t an ideal place to put your critical IT infrastructure.

Remember, even if the data center building survives a major quake, the surrounding infrastructure is not resilient. Bridges, roads, power grids, fiber paths, and fuel suppliers are all vulnerable and have a direct impact on your operations and service availability. And there’s no question, another quake will hit the Bay Area.

It’s not a matter of IF, but WHEN.

Is Customer Service a Dying Art?

At RagingWire, providing superior customer service is part of our DNA. But lately, dealing with a lot of other vendors, I have to ask, “Is customer service a dying art?”

As an example, I recently moved to a new house. The telephone company took three weeks to move my phone. My work order was screwed up in their system (programming glitch). It took me calling them daily for weeks, getting transferred 5 to 13 times per call, and an unimaginable number of “escalations” to resolve. Finally the “Retention Department,” the one department charged with preventing customers from quitting the phone company, figured out the solution. The rep just typed a new work order and everything started working. Needless to say, I switched companies soon after.

In another situation, I contacted a company because they miscalculated sales tax on an order. They charged tax on a service fee, which is a non-taxable item. Rather than resolving the issue and giving me some assurance that they’d fix their problem, the service rep gave me a canned response. “Our system charges tax on everything.” “Even though that’s illegal?” “Yep.” I hope for their sake that a more litigious person doesn’t notice the error.

My point here is not to complain about bad service, but to point out that good service is increasingly rare. The problem is that too many companies are trying to squeeze too many pennies by putting up walls between themselves and the customer. Outsourced call centers, automated phone trees, and refusals to hand out information do not make for a good customer experience.

Our customers hopefully experience something different. Every time you call RagingWire, a real live person answers the phone. Every time – 24x7. And that person’s job is to solve your issue. Or get you to the person who can solve that problem.

And we measure our results. Every time. Every single service ticket is followed up with a survey so our customers can tell us what we did well and where we can improve. And every single survey response is reviewed by management.

Your relationship with your data center is a long term one. It’s expensive and disruptive to switch vendors. Some data center providers take advantage of this, because they believe their customers are trapped. At RagingWire, we value our customers. We consider it an obligation to offer exceptional service so our customers never feel trapped. Ever.

RagingWire Net Promoter Score (NPS) - Aug 2014

Every quarter we also ask every one of our customers how we are doing, via a Net Promoter Score (NPS) survey. NPS is the gold standard for measuring customer experience, used by some of the world’s best service organizations, including USAA, Costco, Apple, and Nordstrom. The average NPS across all companies is +23 on a scale of -100 to +100. A score of +50 is considered outstanding.
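For readers new to the metric: NPS is calculated from 0-10 survey responses as the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 through 6). Here is a minimal sketch with made-up responses:

    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to +100 scale."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    # Hypothetical quarterly survey responses.
    responses = [10, 9, 9, 10, 8, 9, 10, 7, 9, 6]
    print(net_promoter_score(responses))  # 60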

In our most recent quarter, RagingWire earned an NPS of +62, the top score in the data center industry. By the way, we’ve had a 60+ NPS for four quarters in a row, and we are committed to maintaining our leadership position in the industry.

We don’t think customer service is dead yet. At least not for our customers.

Where the Cloud Lives Matters

Cloud computing may be the most disruptive technology of this generation. It introduced a new computing paradigm that makes processing, storage, and networks more accessible, scalable, and flexible than ever before.

Most of the technical discussion around cloud computing has been about virtualization, automation, scalability, security, orchestration, and provisioning. What has been largely overlooked in this discussion is the importance of data centers to cloud computing. 

Data centers for cloud infrastructure

According to RagingWire’s CTO, Bill Dougherty, "The best cloud in the world is useless if customers can’t rely on it. With all the focus on cloud virtualization, we have lost sight of the physical reality of the cloud. The cloud lives in a data center, and where the cloud lives matters. To power the clouds of the future, data centers must deliver 100% availability, high-density power and cooling, full security, massive low-latency telecommunications, and efficient operations."

Gigaom Webinar - Cloud and Data Centers

To learn more about the critical relationship between cloud computing and data centers, you can watch this webinar called "Cloud + Data Centers: The IT Platform for Internet Applications and Businesses." The webinar is hosted by Larry Cornett, Ph.D. of Gigaom Research, and features Bill Dougherty and cloud computing experts David Linthicum and Rich Morrow.

RagingWire is proud that many of the top cloud companies house their server farms and storage arrays in our data centers. These cloud companies tell us that they look for five core elements in a data center:

Power – cloud computing requires reliable, affordable, and high-density power feeds from the data center. The best data centers also have battery backup, diesel generators, and monitoring and automation systems so that if the utility power drops, the power to your servers stays up.

Cooling – cloud systems generate a lot of heat and can run even hotter during usage spikes. Cloud data centers should have high-capacity cooling systems that can target heat points and adjust dynamically. To manage costs and help maintain the environment, these systems need to run efficiently at low or high speeds and take advantage of cool outside air when possible.

Telecommunications – look for a carrier neutral data center that delivers local network points of presence (PoPs) to multiple telecommunications providers and offers interconnectivity to sites in the data center and across data centers.

Security – only authorized individuals should be allowed to approach your cloud system and their activity should be continuously monitored. Multi-factor security systems should be integrated end to end with no gaps, include advanced technologies such as high definition video and iris scanners, and provide detailed access reports when needed. Don’t forget the human element of an onsite security team that you can trust.

Operations – it’s not easy to run a mission critical facility 7x24x365. Cloud data center operators should be experts and be able to show you extensive run books and method of procedure (MOP) documentation that they use to run the data center. Also, maintaining equipment is key to 100% availability. Look for data centers that can maintain equipment during production by using spares and backup. Maintenance windows without live backups can be times of risk for your cloud.
