Resources

Data Center Knowledge: Hyperscale Cloud Case Study (webinar and white paper)

The cloud changes everything – the computers we buy (or don’t buy), the way we write applications, how we collect and store data, and the design and location of our data centers.

RagingWire is home to many top cloud providers. We are working with them to turn their requirements for space, power, cooling, telecommunications, and security into data center designs. You can see these designs deployed across our data center portfolio, including our CA3 Data Center in Sacramento, our TX1 Data Center in Dallas, and our VA3 Data Center in Ashburn, Virginia.

To help us better understand the impact of cloud computing on data centers, we hired Bill Kleyman, Featured Cloud and Data Center Analyst at Data Center Knowledge, one of the largest industry websites, to study how cloud providers and Fortune 1000 enterprises are optimizing their data centers worldwide, and to examine the unique data center requirements of Northern California, one of the top data center markets in the world.

Based on this research, Bill wrote the white paper “Hyperscale Cloud Case Study: Selecting the Right West Coast Data Center Solution” and produced a webinar on the subject, both featuring Groupon, a global leader in local and online commerce.

Click here to download the white paper and watch the webinar.

Here are some of the key findings from the white paper and webinar:

  • Cloud applications require data centers in key internet hub locations in order to manage network latency
  • Having a data center near Silicon Valley and the Bay Area is preferred, but it is best to be outside the earthquake zone in order to reduce risk and lower costs
  • Data center scalability and flexibility are critical to support ongoing cloud capacity
  • Rigid IT architectures are being replaced with hybrids
  • As applications scale, the flexibility of the cloud can be outweighed by escalating costs
  • Multi-megawatt, large footprint deployments are driving the need for wholesale data center colocation
  • Carrier neutrality and direct cloud connectivity are required, improving reliability and performance and reducing costs
  • Using a wholesale colocation provider provides significantly faster time to delivery than deploying a traditional powered shell

VIDEO: 451 Research on the Dallas Data Center Market

With over 100 analysts worldwide, 451 Research is one of the top industry analyst firms covering the competitive dynamics of innovation in technology and digital infrastructure, from edge to core.

We were honored that Kelly Morgan, Research Vice President, and Stefanie Williams, Associate Analyst, both from 451 Research, attended the grand opening of our Dallas TX1 Data Center on April 18, 2017.

Kelly’s team tracks hosting, managed services, and multi-tenant data center providers worldwide. They study providers, do market sizing, analyze supply and demand, and provide insights into the dynamics of the industry. In addition, 451 maintains two critical strategic tools: the Datacenter Knowledgebase, an authoritative database with more than 100 data points on 4,500 global data centers, and the M&A Knowledgebase of 50,000 tech transactions.

In short, 451 Research knows data centers!

After the grand opening celebration, we invited Kelly to spend a day with us to tour our TX1 Data Center and talk with our President and CEO, Doug Adams. This video shares highlights of the tour and conversation as well as Kelly’s insights into the booming Dallas data center market.

According to Kelly, Dallas is the third largest data center market in the U.S. with 100 leasable data centers measuring 3,000,000 square feet and 300 megawatts – and growing fast! 

RagingWire’s Dallas Data Center Campus sits on 42 acres of land and will ultimately have five interconnected buildings totaling 1,000,000 square feet with 80 megawatts of power. Phase 1 of the campus, known as the TX1 Data Center, has 230,000 square feet of space and 16 megawatts of power.

TX1 was designed for scalability, flexibility, and efficiency, ideal for cloud providers and Fortune 1000 enterprises. Vaults from 1 MW to 5 MW are available, as well as private suites and cages, with options for dedicated infrastructure and build-to-suit solutions. TX1 features a highly efficient, waterless cooling system that leverages available outside cool air and does not stress local water supplies. The campus has fiber connectivity to the carrier hotels, providing access to 70 telecommunications providers and direct connectivity to the major cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

According to the Uptime Institute, 70% of enterprise workloads are running in corporate data centers. Colocation data centers have 20% of enterprise applications, and cloud providers have 9%.

What does this data mean? The next wave of demand for colocation and cloud is going to come from the enterprise.

In fact, colocation providers will get a double hit from the enterprise. First, workloads will move directly from enterprise data centers to colocation data centers. Second, enterprise workloads that move to public cloud providers will cause those cloud companies to need more servers, storage, and potentially more colocation data center capacity.

If you are an enterprise with in-house data centers, it’s time to start scenario planning for migrating your apps and data to colocation data centers and the cloud. This webinar will help you get started.

WEBINAR: “Colocation and the Enterprise: A Conversation with 451 Research and the CIO of NCR”

Kelly Morgan, Research Vice President at 451 Research, is one of the leading industry analysts covering the data center space. In the webinar, Kelly presents data from the 451 Voice of the Enterprise Survey that you can use to build the strategy and business case for workload migration.

Bill VanCuren is the CIO of the NCR Corporation, a 130-year-old icon with $6.3 billion in revenue and 30,000 employees that is transforming itself into a nimble, internet-based software and services company. Bill has been consistently recognized as one of the top enterprise CIOs. He has over 30 years of global and corporate IT management experience.

Bill and Kelly discuss NCR’s journey from 50 in-house data centers to a handful of colocation facilities and the cloud. Bill talks about the drivers that led him to consider colocation, the analysis he presented to the Board of Directors, and the critical success factors for his team to execute the migration.

It’s a rare treat to be able to tap into the knowledge, experience, and expertise of these two industry leaders. Many thanks to Kelly and Bill for participating in this exclusive webinar. Click the link to watch the recording: Is Data Center Colocation the Right Approach for the Enterprise?

White Paper and Webinar from Data Center Knowledge: “Strategic, Financial, and Technical Considerations for Wholesale Colocation”

One of the more interesting developments in the data center industry over the last few years has been the emergence of the wholesale data center market.

Think of wholesale data centers in the context of the traditional retail data center market. Wholesale data centers offer dedicated, multi-megawatt deployments spread over large footprints of many thousands of square feet. These deployments are configured as secured vaults, private suites and cages, and entire buildings.

In fact, RagingWire has made a strategic shift into wholesale data center solutions as was reported in Data Center Knowledge in the article, “RagingWire Pursuing Cloud Providers with New Focus on Wholesale.”

While RagingWire has been a leader in wholesale data center solutions, we have not seen very much substantive analysis published on the wholesale space. So we decided to underwrite a research project with Data Center Knowledge to study wholesale colocation and publish a white paper and webinar entitled, “Strategic, Financial, and Technical Considerations for Wholesale Colocation.” Both the white paper and webinar are available free of charge.

You can watch/listen to the webinar by clicking here.

You can download the white paper by clicking here.

What will you learn from the white paper and webinar?

From a strategic perspective, a number of new applications, such as video, social media, mobile, big data, and content delivery, are leading to new computing paradigms where the design, scale, and location of data centers become increasingly important.

The financial considerations point out how sales tax abatement, scale economics, and targeting top data center markets as part of your data center portfolio can be advantageous with wholesale data centers. For example, one customer of ours said that for every dollar they spend on colocation they spend $10 on computing equipment. Say you are spending $1 million on wholesale colocation leading to $10 million in equipment purchases. At 5% sales tax, that’s a savings of $500,000.  And equipment is often refreshed every 3-5 years!
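
For those who like to see the arithmetic spelled out, here is a quick back-of-the-envelope sketch in Python. The $1 million colocation spend, the 10x equipment multiplier, and the 5% tax rate are simply the illustrative figures from the example above, not actual customer data.

```python
# Back-of-the-envelope sales tax abatement math, using the illustrative
# figures from the example above (not actual customer data).

colocation_spend = 1_000_000   # annual wholesale colocation spend ($)
equipment_multiplier = 10      # ~$10 of IT equipment per $1 of colocation
sales_tax_rate = 0.05          # assumed 5% sales tax

equipment_spend = colocation_spend * equipment_multiplier
tax_savings = equipment_spend * sales_tax_rate

print(f"Equipment purchases: ${equipment_spend:,.0f}")   # $10,000,000
print(f"Sales tax abated:    ${tax_savings:,.0f}")       # $500,000 per refresh cycle
```

And because equipment is typically refreshed every 3-5 years, that savings recurs with each refresh cycle.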

Finally, the section on technical considerations studies power density, energy efficiency, PUE and ASHRAE standards, DCIM (Data Center Infrastructure Management), and maintenance. Each of these technical elements can have a significant impact on the performance/cost of your wholesale data center, and ultimately on your business.

RagingWire is proud to support this important research and pleased to share it with the industry.

Is it hot in here?

During a recent RagingWire data center tour, a potential client asked, “Is it hot in here?” Much to everyone’s surprise, the tour director smiled as he answered, “Yes, yes it is.” The reason behind the tour director’s happiness goes much deeper than you might think.

Walking into a RagingWire Data Center, you may notice something unlike most other data centers - it’s warm...in certain spots. By utilizing extensive air flow analysis, employing a top-notch operations team, and adopting the 2011 ASHRAE TC9.9 guidelines for higher end temperatures, RagingWire is leading the way in creating a more energy efficient data center environment. It’s still a comfortable place to work. It’s just more energy efficient than 5-10 years ago.

Though no global data center temperature standard exists, in 2011 ASHRAE published an update to its white paper, “Thermal Guidelines for Data Processing Environments.” This guideline raised the recommended high-end temperature from 77°F to 80.6°F and raised the allowable high end to 89.6°F. Still, many data center operators have failed to embrace the broader, more environmentally friendly guidelines. Why?

Server and other electronic equipment suppliers have embraced the TC9.9 guidelines for years, and most warranty their equipment to meet the new specifications. The problem lies with outdated data centers or vintage computing equipment that require lower temperatures, and with a reluctance to change current operating parameters. According to a 2013 Uptime Institute survey of more than 1,000 data centers globally, nearly half of all data centers reported operating at 71-75°F. The next largest segment, at 65-70°F, accounted for 37% of all data centers surveyed!

Why does RagingWire operate at these higher temperatures? It all comes down to one small, three-letter acronym: PUE. PUE, or Power Usage Effectiveness, is the ratio of a data center’s total power consumption (including mechanical and electrical load) to its IT load.

With cooling accounting for up to 50% of the data center load in some cases, reducing cooling consumption leads directly to an improvement in the facility’s PUE. By some estimates, every 1°F increase in server inlet temperature can lead to a 4-5% savings in energy costs.

But let’s put some money where our math is: if you operate a facility with a PUE of 1.4 and a total IT load of 1 MW, increasing your server inlet temperature just 1°F can lower your annual energy consumption by over 600,000 kWh!
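
To make that math easy to reproduce, here is a minimal Python sketch using the assumptions quoted above: a 1 MW IT load, a PUE of 1.4, and roughly 4-5% facility energy savings per 1°F of inlet temperature increase. Actual savings will vary by facility.

```python
# Minimal sketch of the inlet-temperature savings estimate, using the
# assumptions quoted above: 1 MW IT load, PUE of 1.4, and roughly 4-5%
# facility energy savings per 1°F increase. Actual results vary by site.

HOURS_PER_YEAR = 8760

def annual_savings_kwh(it_load_kw, pue, savings_per_degree_f, degrees_raised):
    total_facility_kw = it_load_kw * pue                   # total load implied by PUE
    annual_energy_kwh = total_facility_kw * HOURS_PER_YEAR
    return annual_energy_kwh * savings_per_degree_f * degrees_raised

# 1,000 kW IT load, PUE 1.4, 5% savings per °F, raised by 1°F
print(f"{annual_savings_kwh(1000, 1.4, 0.05, 1):,.0f} kWh saved per year")
# -> about 613,000 kWh, consistent with the "over 600,000 kWh" figure above
```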

By achieving a lower design PUE, RagingWire Data Centers captures significant savings and is able to pass these savings on to its customers. This gives retail and wholesale data center clients the ability to operate in a world-class facility with a small-world footprint. Lowering operating costs and resource consumption without a reduction in service is the kind of undertaking that makes a Board of Directors stand up and applaud. And it can be as simple as ticking up that thermostat.

Data Center 2014: Top 10 technologies and how they impact you

Welcome to 2014! By now we’ve gone through most, if not all, of our budgets and we are setting plans for the future. As we look back on the past two years we see a direct acceleration in the IT world. Users are connecting in new ways, there is more content to be delivered – and this whole cloud thing just won’t let up. In fact, the recent Cisco Global Cloud Index report goes on to show that while the amount of traffic crossing the Internet and IP WAN networks is projected to reach 1.4 zettabytes per year in 2017, the amount of data center traffic is already 2.6 zettabytes per year – and by 2017 will triple to reach 7.7 zettabytes per year. This represents a 25 percent CAGR. The higher volume of data center traffic is due to the inclusion of traffic inside the data center (typically, definitions of Internet and WAN stop at the boundary of the data center).

Cisco Global Cloud Index

Cisco goes on to state that global cloud traffic crossed the zettabyte threshold in 2012, and by 2017 over two-thirds of all data center traffic will be based in the cloud. Cloud traffic will represent 69 percent of total data center traffic by 2017.
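
As a quick sanity check on the growth rate, here is a short calculation based on the Cisco figures quoted above, assuming a 2012 baseline of 2.6 zettabytes growing to 7.7 zettabytes by 2017.

```python
# Growth rate implied by the Cisco Global Cloud Index figures quoted above,
# assuming a 2012 baseline of 2.6 ZB growing to 7.7 ZB by 2017 (5 years).

start_zb, end_zb, years = 2.6, 7.7, 5
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~24%, in line with the ~25% Cisco cites
```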

Significant promoters of cloud traffic growth are the rapid adoption of and migration to cloud architectures, along with the ability of cloud data centers to handle significantly higher traffic loads. Cloud data centers support increased virtualization, standardization, and automation. These factors lead to increased performance, as well as higher capacity and throughput.

Cloud computing aside, other technologies have promoted this increased usage of the Internet, WAN services, and data center platforms. Greater resource availability, data center optimizations, and of course mobility are all contributing to the boom in data center demand. So, looking ahead at 2014 – what are some of the major technological solutions that will impact the data center? What are the key trends that will change the way a data center delivers content? How can data centers continue to meet the demands of both the users and the business? Finally, how can these top 10 technologies and trends impact your industry vertical and business? Let’s find out!

  1. The Hybrid Cloud.

    What to look for: This technology is getting pretty hot. With so much cloud adoption happening, organizations need a good way to interconnect a wide variety of resources. A hybrid cloud is no longer defined by a simple connection between a private and public data center. Now, data centers can interconnect with resources spanning a variety of environments. This means that pieces can be in a private, public or "as-a-Service" delivery. All of these components can be connected together to bring home a powerful hybrid cloud platform. More services, many more users and a more interconnected world will prove to be the driving force behind a hybrid cloud platform.

    Who it will impact: Any organization looking at the cloud will examine a hybrid cloud infrastructure. This can range from educational institutions to government organizations. Remember, hybrid cloud platforms are now also becoming compliant and regulation-ready.

  2. Apps and APIs.

    What to look for: The application world is completely changing. However, so is the mechanism that allows these applications to interconnect and function in today’s IT world. The amazing thing here is that applications are becoming completely hardware agnostic. They aim to deliver clear usability and data. The future of the application world will revolve around constant connectivity, ease-of-use, and the power of the data center. New types of APIs are also allowing applications to access key resources much faster. This type of granular interconnect creates better and more agile cloud-centric applications. Remember, all of these future cloud applications reside within the data center.

    Who it will impact: Application developers, data center cloud providers, and organizations creating their own apps should take note. In fact, any organization looking to deliver applications via the cloud needs to understand just how powerful new applications and APIs can really be.

  3. The Next-Generation Cloud.

    What to look for: Over the next few years – many of the traditional cloud platforms we’ve come to know will evolve. Already we are seeing new ways to utilize cloud traffic and distribute data. The next-generation cloud will allow for greater interconnectivity, optimized resource utilization, and a more powerful environment for the end-user. As more devices connect and more content is delivered – web-based communication will grow to become even more important.

    Who it will impact: This truly impacts all verticals. Since cloud-based traffic is bound to increase, organizations will need to utilize WAN-based resources much more effectively.

  4. Fog Computing.

    What to look for: Just when you started to make sense of the cloud, now you need to see through the fog! It may just be a short-lived buzz term, but the concept of fog computing is for real – we’re taking our cloud environments to the edge – literally. More environments are working to bring critical data closer to the user. This allows for a lot more data processing and rich content delivery. Creating these micro-clouds and fog networks allows for quite a bit of versatility for organizations. Plus, edge infrastructure doesn’t have to be expensive. Creating a fog computing environment can include branch offices, big data processing points, and even content distribution solutions.

    Who it will impact: Creating an edge network has its direct benefits. However, organizations looking to create big data processing points, or stream content, should look at fog computing. Companies like Netflix get the idea. Bringing the data closer to the user helps with delivery and overall performance.

  5. Everything-as-a-Service.

    What to look for: Service delivery models only continue to grow and evolve. We now have the internet of everything and even the concept of everything-as-a-service. Let me give you an example. By utilizing software development kits (SDKs) and the APIs we discussed in #2, an emerging service platform known as backend-as-a-service (BaaS) can directly integrate various cloud services with both web and mobile applications. Already, open BaaS platforms aim to support every major platform, including iOS, Android, Windows, and BlackBerry. Furthermore, the BaaS platform aims to further enhance the mobile computing experience by integrating with cloud-ready data center providers. These new types of services will continue to grow. Data center platforms will strive to be your one-stop source for all service delivery needs.

    Who it will impact: Service-based data center options are powerful solutions for organizations to take advantage of. For example, private-sector enterprises can deliver key services from distributed data centers and not have to utilize their own resources.

  6. Mobility Management.

    What to look for: If you’re not ready for the mobility revolution – it’s time to get on this very fast-moving train. Users, devices and data are becoming a lot more mobile. But don’t think that this conversation is about devices alone. Users are consuming information in entirely new ways and your organization must be ready to facilitate these demands. This means managing mobile resources, users, and data center resources. Whether these are mobile application servers, virtual hosts, or entire racks dedicated to a mobile user platform – be ready to accommodate the new way that users compute.

    Who it will impact: The stats don’t lie. Users are becoming more mobile and accessing their information in new ways. This impacts pharma, healthcare, government, education – and everything in between. Lawyers, doctors and professionals are connecting to their data from a number of different devices.

  7. Software-Defined Technologies.

    What to look for: Software-defined technologies now incorporate network, storage, and compute. We are able to do brilliant things with hardware systems by introducing a more intelligent logical layer. This layer allows for better configurations, optimized resource utilization, and a more efficient data center infrastructure. SDx will create more resilience on a global scale by allowing complex connections to happen at a simplified level. Single hardware controllers can now create thousands of independent connections spanning numerous networks. No more 1-to-1 mapping. The future of intelligent hardware utilization revolves around multi-tenancy and high-density solutions.

    Who it will impact: It’s hard to identify just one vertical that will benefit from this. Government entities and public sector organizations leverage SDx technologies to accomplish a lot of necessary tasks. Whether it’s logically segmenting a storage array or creating a globally distributed, resilient, data center connection – software-defined technology is making its mark in the industry.

  8. Web-Ready Content and Device Agnosticism.

    What to look for: Much like the mobility revolution of #6 – the content that will be delivered to these devices will have to be optimized as well. On top of it all, maintaining device agnosticism is crucial. Users want access to their applications and data regardless of OS or hardware. This is why new types of applications and rich content will be delivered to a variety of users located all over the world. Intelligent data center automation controls will optimize the user’s connection by creating automated experience orchestration. That is, engines will dynamically define the user experience based on device, connection, location, security, and other factors. This is the future of how users will consume their information.

    Who it will impact: Cloud service providers and organizations based in the cloud will look to leverage this trend heavily. Users care about their apps and data. So, all organizations looking to optimize the user experience must look at web-content delivery. Whether you’re a healthcare shop granting access to a benefits app or a finance firm allowing users to conduct complex trades – mobility and security will be critical.

  9. Converged Infrastructure.

    What to look for: This technological platform will continue to pick up steam. The direct integration of storage, network, compute and pure IOPS (Input/Output Operations Per Second) has created a platform capable of high levels of resource optimization and workload delivery. We’re able to place more users per blade, deliver richer content, and create a data center model that follows the sun. Basically, we’re creating mobility within the data center. These new platforms take up less space and are much easier to manage. Furthermore, converged systems create even more capabilities for edge networks and organizations entering the cloud environment.

    Who it will impact: Although a lot of organizations can benefit from a converged system – some can benefit more than others: call centers, schools, hospitals, data entry organizations, and any other shop with a dense population of users doing similar things. By using virtualization and a converged infrastructure, organizations are able to optimize their resources while still increasing user density.

  10. The Personal Cloud | The Evolution of the User.

    What to look for: A typical user may carry 2-3 devices that connect to the cloud. What if this person is a techie? What if we take into account all of the devices they have at home as well? The reality is that the user is evolving and now maintains a continuous connection to the cloud across multiple devices. This trend will continue to push forward as users connect cars, homes, refrigerators, thermostats, and other devices directly to the Internet. Moving forward, a user’s personal cloud will identify who they are, which devices they utilize, and how to best optimize their experience. This means creating the same experience regardless of device or OS, controlling apps and devices remotely, and introducing even greater levels of personal cloud security. Right now, the personal cloud is just a concept applied to a user’s personal cloud experience. In the future, a personal cloud may identify a user’s overall cloud persona.

    Who it will impact: Not only will this impact the user environment, it will impact all those that interact with it as well. Organizations looking to optimize the user experience and deliver new types of content will need to be aware of how the user evolves their compute process. Service delivery, application development, and workload management will all evolve as the cloud and the user continue to change.

The modern data center has truly become the home of everything. We’re seeing entire businesses born from a cloud model as executives take direct advantage of new data center resources. The next couple of years will certainly be interesting. We’ll see more cloud-centric workloads deployed as the modern user becomes even more mobile.

In our 2013 IT Predictions blog we looked at more consumerization, a lot more data, and a new type of computing platform. Now, all of these technologies are certainly in place and are being evolved. There is more big data and business intelligence, we have a lot more mobility on the user front, and we are certainly seeing a lot more data center convergence take place. At the heart of it all – where so many new technologies and solutions live – sits the all-important data center.

Looking ahead even further, we know that the data center will continue to serve a critical role in the evolution of IT. We’ll see even more data center automation, greater distributed technologies – and even the utilization of intelligent robotics. One thing will always be true – it’ll be up to the IT professional, cloud architect, or technology executive to utilize these powerful tools to align business goals with IT solutions.

When N+1 just isn’t good enough

2006 was a pivotal year for RagingWire. 2006 was the year RagingWire learned that for data centers, N+1 just isn't good enough. 2006 was the year RagingWire went dark. It started normally enough – a beautiful spring day in April. During normal operations, a 4,000-amp breaker failed. Material failures happen, even with the best maintenance programs in place. Our UPSs took the load while the generators started – then the generators overloaded. The data center went dark.

After bringing the data center back online, we performed a detailed post-mortem review and identified the root causes of the outage to be design flaws and human error. Our management team declared that this could never, ever happen again. We knew that we needed to invest heavily in our people, and that we needed to rethink how data centers operate. We started with investing in our people because human error can overwhelm even the best of infrastructure designs. We focused our recruitment efforts on the nuclear energy industry and the Navy's nuclear engineering program – both working environments where downtime is not an option and process control, including operations and maintenance, is second nature. We hired a talented team and asked them to design and operate our data center to run like a nuclear sub.

Our revamped team of engineers determined that the then-current N+1 design did not meet their requirements, so they changed it and implemented the concept of a 2N+2 design. Their work was recognized last week as RagingWire announced the issuance of Patent #8,212,401 for “redundant isolation and bypass of critical power equipment.” This is one of two patents that resulted from RagingWire’s outage in 2006 and our efforts to design a system that would never go down again.

RagingWire’s systems are built to a 2N+2 standard. RagingWire exceeds the Uptime Tier IV standard by providing fault tolerance during maintenance. We call this “fix one, break one” or FOBO. This means that any active component – UPS, generator, chiller, pump, fan, switchboard, etc. – can be removed from service for maintenance, any other active component can fail, AND we can experience a utility outage, all without loss of power or cooling to the server rack. Having this extra level of redundancy allows RagingWire to perform more maintenance, and to do so without worrying about a loss in availability. This enables us to provide a 100% uptime SLA, even during maintenance windows.
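
To make the “fix one, break one” idea concrete, here is a minimal sketch, not RagingWire’s actual engineering tooling, of the capacity check behind FOBO: with one unit out for maintenance and another failed, can the remaining units still carry the critical load? The unit sizes and counts are hypothetical.

```python
# Minimal, hypothetical sketch of the "fix one, break one" (FOBO) capacity
# check: with one component out for maintenance and another failed, can the
# remaining units still carry the critical load? (Not RagingWire's tooling.)

def survives_fobo(unit_capacity_kw, installed_units, critical_load_kw):
    """True if the plant rides through maintenance plus a failure simultaneously."""
    remaining = installed_units - 2            # one in maintenance, one failed
    return remaining * unit_capacity_kw >= critical_load_kw

# Example: 4,000 kW critical load served by 2,000 kW units (N = 2).
print(survives_fobo(2000, installed_units=6, critical_load_kw=4000))  # 2N+2 plant -> True
print(survives_fobo(2000, installed_units=3, critical_load_kw=4000))  # N+1 plant  -> False
```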

Looking at the last year and a half, it’s clear that many data centers are still providing their customers an inferior N+1 design. How do you know? Simply look at the number of providers below who have suffered data center outages over the past 18 months. Since 2006, RagingWire has had 100% availability of its power and cooling infrastructure due to its superior 2N+2 design. If your current provider is still offering N+1, maybe it’s time to ask yourself if N+1 is still good enough for you.

October 22, 2012 – Amazon Web Services suffered an outage in one of its data centers that took down multiple customers in its US-East-1 region. The problem was attributed to a “small number” of storage volumes that were degraded or failed.

October 8, 2012 – A cable cut took down Alaska Airlines’ ticketing and reservation system, causing delays across the airline’s operations and preventing customers from checking in for flights.

August 7, 2012 – A fiber cut took nonprofit Wikipedia offline for an hour.

July 28, 2012 – Hosting.com powered off 1,100 customers due to human error during preventive maintenance on a UPS in its Newark, DE data center.

July 10, 2012 – Level 3’s East London data center was offline for 5 hours after a UPS bus-bar failed.

July 10, 2012 – Salesforce.com suffered a worldwide outage after a power failure in one of Equinix’s Silicon Valley data centers.

June 29, 2012 – Amazon Web Services suffered a power outage in its Northern Virginia data center. Multiple generators failed to start automatically due to synchronization issues and had to be started manually.

June 14, 2012 – Amazon Web Services suffered a power outage in its Northern Virginia data center. The problem was blamed on a defective generator cooling fan and a misconfigured power breaker.

June 13, 2012 – US Airways had a nationwide disruption of its computer system, affecting reservations, check-in, and flight status, due to a power outage at its AT&T data center in Phoenix.

January 20, 2012 – A power failure in Equinix’s SV4 data center took several customers, including Zoho, offline.

October 10, 2011 – Research in Motion cut BlackBerry service to most of Europe for 6 hours due to a power failure in its Slough, UK data center. The outage caused service disruptions worldwide for 3 days.

August 11, 2011 – Colo4 in Dallas, TX suffered an automatic transfer switch failure, resulting in a 6-hour power outage.

August 7, 2011 – Amazon Web Services’ Dublin, Ireland data center lost power due to a generator phase-synchronization error, disrupting service to the EU West region.

Technology is great, but it’s all about the people

Often, when we take potential customers through our data centers and show them our patented technology, they remark on what incredible technology we have designed and implemented. My first response is always this: it is a result of the people we hire to design, build, and operate our data centers. My two priorities in anything we do are availability of the customer application and outstanding customer service. These are enabled by technology, but driven by people. As demonstrated by numerous studies in the data center industry, and from my previous life in the nuclear industry, people remain the leading cause of downtime in data centers (more on that in follow-on posts).

First, hire the right people and then give them the tools to succeed. One of the best things RagingWire has done is give our employees and our clients a clear definition of our data center design: "Fix one, break one, concurrent with a utility outage." In other words, we are designed for concurrent maintainability and fault tolerance during a utility outage, whether of power or water. This philosophy resonates through RagingWire's design, construction, and operations groups, and even in our concurrent engineering sessions with our clients. The philosophy is driven by the people we have at RagingWire.

Many people in the industry have tried to treat the data center as commoditized real estate. It is unequivocally not real estate; it is a product that, at the end of the day, delivers availability of a service and an application to our customers. As people try to commoditize data centers and treat them as real estate, they lose focus on availability and product delivery, and so they outsource design, construction, and operations, driving down service and quality. The data center product we provide, and the availability of the service it delivers, is not a commodity that can simply be swapped from one provider to another. There is an amazing amount of technology and innovation being put into our data centers, and the product is backed up by incredible people dedicated to the availability and uptime of that product.

RagingWire has made a conscious decision to hire and in-source the life cycle of the data center. We design what we build, we build what we design and we operate what we design and build. And we provide these resources to our customers to ensure that when they build out, their IT environment is as hardened and redundant as possible and that their hardware, network and application level architecture is designed in conjunction with our data center design. The people, enabled by the technology, are the cornerstone of how we accomplish this with our clients and provide 100% availability of their applications and services.

Whenever we search for potential technology vendors, RagingWire always interviews the provider’s team and makes an evaluation of the people behind the product. You can take the greatest technology in the world, place it in the wrong hands and end up with a product that no one wants. Similarly, the right people can make all of the difference, especially when given incredible technology and tools.

The next time you go to your data center, evaluate the technology, how the provider does business, and their availability record. Just as important, evaluate who is behind the product: the people who will ultimately be ensuring the availability of your critical applications.

Three Data Center Myths

"Raised Floor vs Slab Floor" - This is a religious dispute, masquerading as a serious engineering issue. There are two sorts of folks who obsess over this point - design engineers with a very narrow worldview, and marketing executives with overactive imaginations and a casual relationship with the facts. Raised floor works great. Its more flexible for operators. Slab works great, too - it has other advantages, including reduced cost, and no heat transfer media running under the data center floor. You can build a well designed data center, with either floor configuration.

VESDA Fire Detection is "Superior" - This is taken as revealed writ by many folks, because there is one obvious fact: VESDA does an incredible job of detecting fires earlier than any other technology. However, data centers very rarely catch fire, and there are many things that will generate false positives. The most important aspect of data center fire safety is reducing false positives and unintended fire suppression, both of which result in Bad Things. VESDA works, if you are careful and can minimize false positives. Laser smoke detection works well too, but you also have to carefully limit anything that could cause a false positive.

Dual Action Drypipe fire suppression, however, is the most important fire safety technology you can have in your data center. It should be really hard to douse your servers with water. Make sure you have thermal imagers for finding hot spots on wiring and a great relationship with your local fire department.

Physical Security "Doesn’t" Matter: Customers spend a lot of effort trying to choose a data center, but sometimes don’t carefully study security issues. That’s a huge mistake. Data Centers have been and are actively targeted for theft. Certain data centers (by no means, all) are very real terrorism targets. Three factor security with double man traps (for both data center and raised floor entry) are absolutely necessary. Also, look at the security staff - can you socially engineer them? At least one major (large) data center theft occurred due to successful social engineering.
