Annie George's blog

Making Every (Inter)Connection Count

It’s been an interesting week or two of data center news! “London Internet Exchange takes space in EvoSwitch.” “Digital Realty announces Open Internet Exchange.” “Open-IX movement goes public.”

So what is happening here? What is the problem that is solved with “open” internet exchanges?

As a frequent participant in North American Network Operators Group (NANOG) meetings, I’ve heard growing angst in the internet peering ranks about the perceived points of failure created when single buildings in major internet hubs (e.g., New York, Ashburn, London, Amsterdam) house commercial internet exchanges. Remember Hurricane Sandy? Beyond geography, questions have been raised over the treatment of telecommunications carriers and the manner in which interconnections are made in the U.S., as compared with the European interconnection model (member-driven, multi-site, public).

The biggest problem Open-IX is trying to solve, however, has nothing to do with geographic diversity or carrier treatment. It’s simple economics. In the United States, the major internet exchanges are concentrated in the hands of a few data center companies, and those companies charge carriers a premium for the right to participate in the exchange. Open-IX lays this case out in its framework document as “The Interconnect Problem.”

RagingWire operates, from an interconnection point of view, in line with open internet exchange principles. All of the company’s data center facilities are carrier neutral.

RagingWire Data Centers - Carrier Meet Me Room

Carriers built into our data centers aren't our customers; they're our partners in bringing highly available connectivity to our customers. Our network engineers are dedicated to building trusted, close relationships with all of our carrier partners to make the ordering and provisioning process as easy and seamless as possible.

Open-IX is still in its infancy, but we look forward to continuing our long relationship with the participants. We share the desire to continually improve service and reduce costs for our customers. RagingWire is the nation’s leading data center colocation provider, focused on delivering 100% availability of power and cooling with easy access to internet connectivity and the industry’s best customer service. It’s all part of our commitment to making every connection count.

Introducing RagingWire Backup Services

I used to be a tape-based data backup manager – a tape labeler – a tape rotator – a tape scheduler – a tape library/spreadsheet manager – a restore-from-tape nightmare wake-upper. My tape-based backup programs generally worked as they should, but they involved a lot of manual labor that could have been devoted to other tasks. Over two years as an IT manager, it became clear to me that there had to be a simpler, more reliable way to back up data.

That’s why I’m excited to introduce RagingWire’s new Backup Services, a disk-based, enterprise-grade data backup platform that automates onsite storage and offsite replication for customers inside RagingWire’s world-class data centers. Best of all – no more tapes!

RagingWire’s New Backup Services – Engineered for the Enterprise

We utilize the same best-of-breed enterprise data protection technology for our Backup Services that many large enterprises use for their own data backups: Symantec NetBackup™. The product supports multiple databases and applications as well as both physical and virtual servers, automating and standardizing your backups onto one unified platform. Secure encryption at the customer device keeps your data safe, and sophisticated deduplication technology shortens your server backup windows. RagingWire’s 24x7 Network Operations Center is just a phone call or email away when you need to restore.
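Deduplication is the piece that does the heavy lifting here. The sketch below is a minimal illustration of hash-based block deduplication, not NetBackup’s actual algorithm: each unique block is stored once and referenced thereafter, so a second backup of mostly unchanged data consumes almost no additional storage or bandwidth.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real products often chunk variably

def dedup_backup(data: bytes, store: dict) -> list:
    """Split data into blocks, store each unique block once, return block references."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:  # only never-before-seen blocks consume storage
            store[digest] = block
        refs.append(digest)
    return refs

def restore(refs: list, store: dict) -> bytes:
    """Reassemble the original data from its block references."""
    return b"".join(store[d] for d in refs)

# A second backup of nearly identical data adds almost nothing to the store.
store = {}
day1 = b"A" * 100_000
day2 = b"A" * 99_000 + b"B" * 1_000  # small change near the end of the data
refs1 = dedup_backup(day1, store)
unique_after_day1 = len(store)
refs2 = dedup_backup(day2, store)
print(f"unique blocks after day 1: {unique_after_day1}, after day 2: {len(store)}")
assert restore(refs1, store) == day1 and restore(refs2, store) == day2
```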

Onsite Storage. Offsite Replication.

As a company, RagingWire focuses on providing the nation’s best data center colocation services. We back up our 100% Service Level Agreement (SLA) with the most advanced data center infrastructure, and we have the happiest customers in the industry. So why is RagingWire introducing Backup Services? Simply put, our customers have asked for disk-based backup services located inside both of our data center campuses in Ashburn and Sacramento. Options include onsite storage in a customer’s current RagingWire data center and offsite replication to a customer’s remote RagingWire data center. We want to give our customers the assurance that, unlike with a cloud backup, they’ll always know where their data is: inside a RagingWire data center, with 24x7 restore access via our expert Network Operations Center staff.
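To make the onsite/offsite idea concrete, here is a minimal sketch of incremental offsite replication for a deduplicated block store like the one sketched above. The site names and in-memory stores are hypothetical stand-ins, not RagingWire’s implementation:

```python
# Only blocks the remote site has never seen need to cross the wire.

def replicate(local_store: dict, remote_store: dict) -> int:
    """Copy blocks missing from the remote site; return how many were sent."""
    missing = [digest for digest in local_store if digest not in remote_store]
    for digest in missing:
        remote_store[digest] = local_store[digest]  # stand-in for a WAN transfer
    return len(missing)

sacramento = {"block-a": b"...", "block-b": b"..."}  # onsite backup store
ashburn = {"block-a": b"..."}                        # offsite replica, one block behind
print(f"replicated {replicate(sacramento, ashburn)} new block(s) offsite")  # 1
```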

Are you a current RagingWire customer? Take a minute to check out the Backup Services data sheet or contact your account manager to see how we can help you get rid of your tapes and simplify your data backup strategy.

Curious as to how you can further optimize your data backup strategy? Head over to Jerry Gilreath’s excellent blog post: 5 Ways to Fool-Proof Your Data Backup Strategy.

Why You Should Know About NANOG

I had the pleasure of attending last week’s 57th meeting of the North American Network Operators’ Group (NANOG) in Orlando, FL as part of the RagingWire team, which included folks from our data center and network engineering groups. Not only was the weather a balmy 70 degrees, but it was one of the most informative events I have been to in my career thus far. The NANOG meeting covered some of the most important and controversial topics on the cutting edge of internet security, connectivity, and governance going into 2013. Some highlights:

World Conference on International Telecommunications (WCIT) Update.

Put simply, NANOG is a community of network operators who exchange technical and operational information in support of a single goal: to make the internet as connected and resilient as possible, ensuring the free flow of information around the world. With this end state in mind, internet governance was a topic of much conversation and consternation among the NANOG members who attended this session of the meeting.

The main thrust of the December 2012 WCIT was to update and ratify a new, 21st-century iteration of the International Telecommunication Regulations (ITRs), which were last ratified in 1988. Sally Wentworth, a public policy manager at the Internet Society, presented a “postmortem” on the effects of, and the way forward from, the ITR treaty that was voted on at the conference. Much of the presentation focused on the dangers posed by nations that wish to regulate or censor the internet on a nationwide scale, curbing its availability or usefulness in order to quell popular sentiment or anti-government organizing. The presentation was a timely reminder that unfettered, low-cost access to the internet is an ideal that must be protected. Ms. Wentworth also called on the NANOG membership, as an expert knowledge base, to be a contributor and party to making that ideal a reality both now and in the future.

The Infrastructure and Internet Impacts of Hurricane Sandy.

Two sessions during the NANOG meeting were dedicated to the effects of Hurricane Sandy on the internet and the infrastructure that supports it. On the first day of the meeting, several data center providers discussed their responses and lessons learned at their facilities in the New Jersey and New York areas. This presentation really highlighted the importance of data center location from a risk management point of view, but it isn’t just about location. Protecting data center infrastructure is also about the pre-planning that must be in place before a natural disaster occurs: diesel refueling contracts, reliable hotel arrangements (with reliable backup power), work-from-home arrangements, food storage at the facility, and staffing arrangements, to name a few.

The second day featured a session on the impacts of Hurricane Sandy on the internet, posing the question: what happens if we turn off power to one of the key traffic exchange cities? One of the most interesting presentations ensued, demonstrating the interconnectedness and flexibility at the core of the internet as traceroutes changed in real time to pass through Ashburn, VA instead of NYC as Sandy made landfall.
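You can reproduce that kind of before/after comparison yourself. The sketch below diffs two network paths hop by hop; the hop names are hypothetical placeholders standing in for two traceroute runs, not data from the presentation:

```python
# Compare two network paths hop by hop to see where routing diverged.
# Hop names are hypothetical, standing in for pre- and post-failure traceroutes.

path_before = ["edge.example.net", "core-nyc.example.net", "ix-nyc.example.net", "dest.example.net"]
path_after  = ["edge.example.net", "core-dc.example.net", "ix-ashburn.example.net", "dest.example.net"]

for hop, (before, after) in enumerate(zip(path_before, path_after)):
    if before != after:
        print(f"hop {hop}: {before} -> {after}")
# hop 1: core-nyc.example.net -> core-dc.example.net
# hop 2: ix-nyc.example.net -> ix-ashburn.example.net
```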

Arbor Networks Infrastructure Security 2012 Report

This meeting session focused on a survey by Arbor Networks that explored the landscape of network threats and attacks (multi-vector, DDoS) over the past year. Top issues for network operators included DDoS attacks (trending toward sustained multi-vector attacks), the vulnerability of enterprise data centers even behind firewalls, increased concern over “shared risk” in migrating applications to the cloud, and the inability of mobile service providers to gain visibility into their own networks in order to detect or combat any kind of attack. Most attack incidents appear to be motivated by ideology, be it politics or revenge. The presentation took a deep dive into the recent, persistent, multi-vector DDoS attacks, nicknamed “Operation Ababil,” against many top Wall Street financial institutions, attacks that are still ongoing. Overall, it was an eye-opening account of the ever-growing threat to network security, not just at the governmental level, but at the industrial/enterprise level as well.

If you work or play on the internet, you should know about NANOG, the Internet Society, and the other groups who discuss topics like the ones presented last week – and who want to keep the internet accessible and reliable for all of us. Congrats to the NANOG members for the community they have built and for the collective expertise they bring to the international internet community.

Why the City of Seattle’s Data Center Outage Last Weekend Matters

There are a lot of things that the City of Seattle did right in their management of last weekend’s data center outage, taken to fix a faulty electrical bus bar. They kept their customers – the public – well informed as to the outage schedule, the applications affected, and the applications that were still online. Their critical facilities team completed the maintenance earlier than expected, and they kept their critical applications (911 services, traffic signals, etc.) online throughout the maintenance period.

Seattle’s mayor, Mike McGinn, acknowledged in a press conference last week on the outage that the city’s current data center facility “is not the most reliable way for us to support the city’s operations.” Are you looking for a data center provider, especially one where you’ll never have to go on record with that statement? If so, here are a few take-aways:

A failure to plan is a plan to fail. While the City of Seattle planned to keep their emergency and safety services online, had they truly planned for the worst? I’m sure they had a backup plan in case the maintenance took a turn for the worse, but did they consider what would happen if a second equipment fault occurred? Traditionally, the “uptime” of an application is defined as the amount of time that a live power feed is provided to the equipment running that application. I would offer a new definition of “uptime” for mission critical applications: the time during which both a live power feed and an online, ready-to-failover redundant source of power are available to ensure zero interruptions. “Maintenance window” shouldn’t be part of your mission critical vocabulary. Which brings me to my next point . . .

Concurrent maintainability and infrastructure redundancy are key. I will go one step further – concurrent maintainability AND fault tolerance are key factors in keeping your IT applications online. The requirement to perform maintenance and sustain an equipment fault at the same time isn’t paranoia – it’s sound planning. Besides, a little paranoia is a good thing when we’re talking about applications like 911 services, payment processing applications, or other business-critical applications.
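The value of that redundant, ready-to-failover power source is easy to put in numbers. Here is a back-of-the-envelope sketch, assuming independent failures and an illustrative 99.99% availability per feed (an assumption, not a measured figure):

```python
# Downtime for a single power feed vs. a 2N (fully redundant) configuration,
# assuming the two feeds fail independently. The 99.99% per-feed availability
# is an illustrative assumption, not a measured value.

HOURS_PER_YEAR = 8766  # average year length, including leap years

def downtime_hours(availability: float) -> float:
    return (1 - availability) * HOURS_PER_YEAR

single_feed = 0.9999                     # one live power feed
redundant = 1 - (1 - single_feed) ** 2   # down only if both feeds fail at once

print(f"single feed: {downtime_hours(single_feed):.2f} hours of downtime per year")
print(f"2N feeds:    {downtime_hours(redundant) * 3600:.1f} seconds of downtime per year")
```

Under these assumptions, a single feed is down almost an hour a year, while the 2N pair is down well under a second, which is why “maintenance window” can drop out of the vocabulary.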

Location. Location. Location. The City of Seattle’s data center is located on the 26th story of a towering office building in downtown Seattle. The fact that they had to take down multiple applications in order to perform this maintenance implies that the electrical feed redundancy to their data center is somewhat limited. There are many competing factors in choosing a data center location: electrical feed, connectivity options, and natural hazard risk profile, to name a few. For mission critical applications, your location choice has to center on factors that will keep your systems online 100% of the time.

Flexibility and scalability give your IT department room to breathe. The City of Seattle leased their single-floor data center space before the information economy really took hold. As a result, their solution is relatively inflexible when it comes to the allowable power density of their equipment. They’re quickly outgrowing their space and already looking for an alternate solution. Look for a data center provider that plans for high-paced increases in rack power draw – do they already have a plan for future cooling capacity? How much power has the facility contracted from the local utility? The quick sketch below shows how those two numbers bound a deployment.
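Here is a minimal capacity check with illustrative numbers; the feed, cooling, and rack figures are assumptions for the sake of the arithmetic, not any facility’s actual specifications:

```python
# Will a planned rack deployment fit within the power a facility has contracted
# and the cooling it has installed? All figures are illustrative assumptions.

utility_feed_kw = 2000.0      # power contracted from the local utility
cooling_capacity_kw = 1800.0  # installed cooling must absorb the IT load as heat
racks = 200

for label, kw_per_rack in [("today", 6.0), ("planned growth", 10.0)]:
    it_load = racks * kw_per_rack
    fits = it_load <= utility_feed_kw and it_load <= cooling_capacity_kw
    print(f"{label}: IT load {it_load:.0f} kW -> {'fits' if fits else 'exceeds capacity'}")
# today: IT load 1200 kW -> fits
# planned growth: IT load 2000 kW -> exceeds capacity (cooling runs out first)
```

Note how the utility feed alone looks sufficient at 2,000 kW; it’s the cooling plan that gives out first, and that is exactly the question to put to a prospective provider.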
