Blogs

Interconnection: A Better Way to Manage Data Demands

Due to the global pandemic, IT managers everywhere are trying to keep up with exponential growth in demand for bandwidth. More people working from home, distance learning, e-Medicine, video streaming, online shopping, and online gaming are creating unprecedented data demands. All of your constituents want high-availability, high-quality service.

Colocation data centers can meet those data demands by providing redundant and diverse paths to transport network traffic across the globe. Whether it is an enterprise running mission-critical applications or someone working or learning from home, everything is expected to run flawlessly and without interruption.

Modernized data center network infrastructure is optimized for a wide range of requirements, including hybrid IT infrastructure, public and private cloud connectivity, multi-cloud, private closed networks, SD-WAN, proximity and density of fiber providers, and the Internet. This infrastructure is especially well-suited for global businesses that want their primary and backup sites in different regions and countries, with standardized services and operations.

How can colocation data centers keep my constituents connected? 

The path to streamlined, reliable connectivity starts with a well-conceived topology, such as what we call “Data Center Interconnect.” This technology uses high-speed links to connect two or more data centers over short, medium, or long distances.

As a global data center company, we deploy a “switched fabric” topology to connect our data center network, cloud exchange, and internet exchange services to our service providers and enterprise customers. This fabric is a network connectivity platform that provides low latency, high availability, and secure connections between hybrid multi-clouds, enterprises, and your own digital assets. 

What are some problems that colocation data centers can solve? 

Data Center Interconnect is geared to help in several scenarios. Traditionally, the main motivation for data center interconnect has been assuring business continuity in a disaster. By setting up mission-critical infrastructure in more than one data center, companies can avoid network breakdowns in the event of a metro-wide catastrophe.

But now, more companies find they constantly need more bandwidth as usage of laptops, smartphones, game consoles, in-vehicle navigation systems, and other devices increases. Slow response times when accessing information are unacceptable. Data Center Interconnect provides connectivity links precisely when needed:

  • Businesses' reliance on remote employees will not be a short-term trend. A geographically distributed workforce requires fast, reliable connectivity with the ability to scale up quickly. VPN usage in the US has grown more than 130% during the global pandemic. Video communications have become a medium of choice for business, family, and online learning applications. Zoom, Microsoft Teams, and WebEx have seen a considerable increase in subscribers. Zoom reported $328 million in revenue during its February–April 2020 quarter.

  • In online gaming, Electronic Arts (EA) reported that tens of millions of new players dove into its online and mobile games during the pandemic. TDK Corporation sees a significant opportunity for its high-performance 6-axis MEMS motion sensor for gaming controllers and AR/VR applications. These millions of gamers need low-latency connectivity, which Data Center Interconnect can facilitate.

  • Online shopping has also exploded. Forbes reported that total online spending in May hit $82.5 billion, up 77% year-over-year. To complete their purchases, customers need smooth, uninterrupted transactions, which require the right amount of available bandwidth.

What are some use cases for Data Center Interconnect? 

Depending on a company’s goals, it could benefit from several different uses of Data Center Interconnect. Here are some examples: 

  1. A company connects to its own network infrastructure in a different data center within NTT Global Data Centers. For instance, a customer with a global presence across NTT Global Data Centers, say in the US (perhaps in California and Virginia), in APAC (perhaps Tokyo, Singapore, and Hong Kong) and Europe (perhaps London and Frankfurt) can interconnect their network assets.

  2. A company connects to Public Cloud (AWS / MSFT Azure / IBM Cloud / Google Cloud) in the same region and across the globe.

  3. A company connects to a vendor or partner over a private connection in the same region and across the globe.

  4. A company connects to its own infrastructure, or to a third party, in a third-party data center.

To sum up, the overarching benefit of the global network fabric found in Data Center Interconnect is that companies avoid delays in connecting with their employees, customers, and business partners. The result is more efficient external interactions with customers and prospects, and more effective internal interactions between employees.

NTT Global Data Centers Americas deploys a “switched fabric” topology to connect our data center network, providing low latency, high availability, and secure connections between hybrid multi-clouds, enterprises, and your own digital assets.

Chicago CH1 Data Center Construction Updates

Stay up to date on the latest news and milestones from our new Chicago data center campus currently under construction.


August 2020

The construction team has used a crane to lift the chillers to the roof of the building. The chillers are used to circulate cool water throughout the building.

July 2020

Installation of the IT Room is under way. The main distribution frame (MDF) connects NTT’s infrastructure in the Chicago data center to the many providers and locations across the globe. The yellow and white track running above the MDF, otherwise known as the cable track, carries the signal from an enclosure to the MDF and out to the world.

June 2020

CH1 Vault Flooring

The team is preparing to pour the concrete flooring for vault one. The design uses slab flooring and a fan-wall cooling technique.

May 2020

Northeast Walls

The northeast and southwest exteriors are in progress. Steel and glass will be placed over the wall studs in the next phase of construction. This is the exterior of the area where the ops team and other offices will be located. The concrete wall to the far right is where the vaults are located.

Partitioned walls

The interior partition framing is in progress. 

The prefabricated exterior walls of the building have been shipped in and installed. By using prefabricated exterior walls, we are able to build out the interior simultaneously, speeding up the construction timeline.

 

April 2020

CH1 Roof Installed

The roof has been set. Next, the precast walls will be shipped to the site and installed on the sides of the building.

 

March 2020

Setting the Beam

The steel was officially topped out at the NTT CH1 project. Honoring a long-standing tradition, the final steel beam was painted white and affixed with the American flag. The beam was then signed by the ironworkers and other tradesmen. They also added the names of project team members from Clune, Verity and Linesight before it was hoisted into place. The topping out ceremony marks a very exciting milestone for this site!

 

November 2019

CH1 Construction has begun

The first 6MW of critical IT load is currently under construction and is available for pre-lease. “The first building on our Chicago Data Center campus is standing up its core and shell now,” said Doug Adams, President and CEO of NTT Global Data Centers Americas (formerly known as RagingWire Data Centers). “By using modular construction techniques, economies of scale, and carefully planned supply chain management, we have lowered construction costs and will be able to affordably offer all the space and critical IT power needed to help companies grow and scale their data center presence as their business needs evolve.”  

 

October 2019

CH1 Exterior

Introducing NTT’s Chicago Data Center campus. The 19-acre campus is located in Itasca, Illinois, a prime area for wholesale data centers. The campus will have two buildings offering a total of 72 megawatts of critical IT load. Each of the two buildings will offer 36 megawatts and 250,000 sq. ft. of space spread over two stories. The first six megawatts of critical IT load will be available in late 2020.

Grow revenue and reduce costs by using NVIDIA-powered AI in our TX and VA data centers

Artificial intelligence is changing the landscape of business and the foundations of companies. Across many verticals, companies are competing not only for market share or revenue, but to survive. Some companies are scaling and innovating faster, creating new markets and business models to drive more business, and offering more customized and personalized services, not just locally, but globally.

We are moving towards “AI-first” companies as businesses rethink their strategy and operating models. Artificial intelligence, interconnection and networks are now the core tenets for businesses to compete and succeed.  

The Power of Artificial Intelligence 

When artificial intelligence experts Dave Copps and Ryan Fontaine spoke as guests on our podcast series, they shared valuable insights about how companies across all industries can use AI to generate revenue or reduce costs.  

“For businesses, if you have access to good AI and good machine learning, you’re going to be all right,” said Fontaine, the CEO of Citadel Analytics. “Data is the new ‘oil’. It is the most valuable commodity in the world, more valuable than silver or gold. But you still have to do something with it. AI helps find the connections in data, and that’s what makes data – and AI – so valuable.” 

Copps, a legend in the AI community who is currently the CEO of Worlds, illustrated the value of AI with several memorable stories. First, he described how a company he was previously with helped the Department of Justice close a case they had been working on for months by using AI to find crucial info in just 30 minutes.  

Another example from early in his career was perhaps even more profound. Copps’ company was helping an intelligence agency in Europe that had been working on a case involving hundreds of millions of documents over 7 years. Mining the data through traditional search engines was not getting the job done. 

So Copps’ company built an AI platform that enabled people to see and navigate information visually, almost like using a Google Earth map. The reaction from the European intelligence agency could be considered … euphoric. 

 “The guy that was working on the data cried – in a good way,” Copps said. “He had been looking at this for so long, and the answers were right there. That points to the power of AI.” 

For more entertaining AI insight from Copps and Fontaine, you can listen to the entire podcast here.  

But what can I do to leverage AI? 

After listening to the podcast, you might think “That’s great, AI really sounds like it could help my company grow in an efficient and profitable way. But what’s my first step? How do I access and use AI technology?” 

Good question. Actually, no … that’s a great question. 

Luckily the answer to that question has just changed.  

Clients at our Dallas TX1 Data Center and our Ashburn VA3 Data Center can talk to us about accessing AI without installing their own infrastructure. That’s because we have become qualified as NVIDIA DGX-Ready data centers. DGX is NVIDIA’s flagship appliance for AI computation. 

This qualification allows us to house NVIDIA’s DGX AI appliances in our data centers, where they can be used “as-a-service” by clients demanding cutting-edge AI infrastructure.  

NVIDIA has plenty of case studies showing how companies across a broad array of industries have already seen significant results from accessing their deep learning AI technology, including:  

  • Baker Hughes has reduced the cost of finding, extracting, processing, and delivering oil. 

  • Accenture Labs is quickly detecting security threats by analyzing anomalies in large-scale network graphs. 

  • Graphistry is protecting some of the largest companies in the world by visually alerting them of attacks and big outages. 

  • Oak Ridge National Laboratory is creating a more accurate picture of the Earth’s population -- including people who have never been accounted for before -- to predict future resource requirements. 

  • Princeton University is predicting disruptions in a tokamak fusion reactor, paving the way to clean energy. 

Other companies (including some startups who you may hear a lot more about soon) have shared their inspiring stories on this page: https://www.nvidia.com/en-us/deep-learning-ai/customer-stories/

What will your story be? There’s only one way to find out – by harnessing the power of AI for your enterprise. With NVIDIA in our data centers, we can help you get there. Contact me at syunus@ragingwire.com to find out more. 
 

 

Cleaner Air Oregon program clears new Hillsboro Data Center for construction

As a new member of Oregon's business community, we're proud to announce that our new data center in Hillsboro, Oregon, has completed its evaluation by the Cleaner Air Oregon program and has been pronounced fit to proceed with construction. 

Oregon began a new era in 2018 by creating the Cleaner Air Oregon program, which ensures that new and existing commercial and industrial facilities cannot emit toxic air contaminants at levels that could potentially harm people. The Oregon Department of Environmental Quality (DEQ) sees this new program as helping to ensure that industrial progress will not cause a regression in health.

"DEQ is excited to see Cleaner Air Oregon meet the ongoing challenge of maintaining clean and healthy air in Oregon communities," said Lauren Wirtis, Public Affairs Specialist for the Oregon DEQ. 

The requirements of the Cleaner Air Oregon program apply to paper mills, steel rolling mills, oil refining companies, microchip manufacturers, lumber companies, glass manufacturers – the list goes on and on – and includes data centers.  

While smaller data centers have also gained permits from the Cleaner Air Oregon program, our Hillsboro data center is the only data center to have completed a Level 3 risk assessment. Level 3 is near the top of a scale that runs from Level 1 to Level 4, with Level 4 being the most complex.

To illustrate the level of examination that takes place during a Level 3 risk assessment, and why it can take up to a year to complete, consider the effort needed to gain Cleaner Air Oregon certification:

To complete a Level 2 or 3 Risk Assessment, facilities need to develop a detailed list of model inputs, including site-specific stack characteristics, building information (to account for building downwash), terrain data, specific exposure locations, and site-specific land use parameters. The quantity and complexity of parameters add up quickly and can easily become overwhelming. 

What also gets complicated fast is the amount of data that needs to be managed.  On average, facilities could be reporting anywhere from 10-50 toxic air contaminants per emissions source.  Multiply that by the number of emissions sources, the number of exposure locations, and 43,824 hours (the number of hours in the required 5-year meteorological dataset), and very quickly your Cleaner Air Oregon Risk Assessment includes over a million data points. 
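To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The per-facility counts are hypothetical placeholders; only the 10-50 contaminants per source and the 43,824-hour meteorological dataset come from the program requirements described above.

```python
# Back-of-the-envelope estimate of risk-assessment data volume.
# The facility counts below are hypothetical; the 43,824-hour
# meteorological dataset (5 years of hourly data) is from the text.

HOURS = 5 * 8_760 + 24             # five years of hourly data, plus one leap day

sources = 4                        # emissions sources (hypothetical)
contaminants_per_source = 25       # the text cites 10-50 per source
exposure_locations = 3             # modeled receptor locations (hypothetical)

data_points = sources * contaminants_per_source * exposure_locations * HOURS
print(f"{data_points:,} modeled data points")   # 13,147,200 -- well over a million
```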

Therefore, it's not only necessary to have a trained air quality modeler involved, but you also need to be able to manage a large amount of data.  This becomes increasingly important when you need to start analyzing the modeling results to determine what sources and what toxic air contaminants may be driving risks and therefore require refinement. 

Why is this level of scrutiny needed? Before the Cleaner Air Oregon rules were adopted, Oregon based its rules on federal law. Those rules allowed industrial facilities to release potentially harmful amounts of air toxics while still operating within legal requirements. The Cleaner Air Oregon rules closed the regulatory gaps left after the implementation of federal air toxics regulations.

Change is hardly ever easy, particularly when it involves new processes and invariably new costs. But this kind of change is well worth it. We applaud the state of Oregon for doing not what is easy, but what is right. And that's why we're proud to help keep Oregon's air clean and healthy for generations to come. 

 

 

Silicon Valley SV1 Data Center Construction Updates

Stay up to date on the latest news and milestones from our new Silicon Valley data center campus currently under construction.


July 2020

The Concrete Masonry Unit (CMU) wall is in place. The gap in the 3-story steel paneling provides access to place the pre-manufactured generators inside; it will be sealed once they are in place.

Concrete placement on levels one and two is complete.

June 2020

SV1 Fireproofing

The team is adding Sprayed Fire-Resistant Material (SFRM), a spray-on layer of fireproofing that contains gypsum and other materials like mineral wool, quartz, perlite, or vermiculite, to the lower level of the building. The spray helps to delay or prevent the failure of steel by thermally insulating the structural members, keeping them below the temperatures that cause failure in the event of a fire.

May 2020

SV1 Topping Off

The team has built the second, third, and fourth floors, topping the building off. 16 MW of IT power will be distributed throughout the 160,000 sq. ft. facility.

 

April 2020

SV1 Steel Walls

The team is in the process of installing the building's steel framing. The steel is anchored to the cement flooring that sits on top of the base isolators.

 

March 2020

The final section of the base isolation system's triple bearing base isolators has been installed. Listen to Anoop Mokah, Vice President of Earthquake Protection Systems, detail how the triple bearing base isolators operate in the event of an earthquake. To learn more about how the base isolation system works, read the article Taking Earthquake Protection to the Next Level in Data Centers by Bob Woolley, our Sr. Vice President of Operations.

 

February 2020 

NTT Silicon Valley Data Center - Earthquake-resistant Base Isolation System Installation

The first section of the base isolation system has been installed at our Silicon Valley SV1 Data Center. The isolators are a critical piece of the state-of-the-art base isolation system: they protect the building during an earthquake by following the movement of the earth, preventing the building itself from moving. The isolators can move 3 meters in any direction to help keep the building in place. A building constructed on isolators has a much greater chance of staying operational after a seismic event.

 

September 2019

Demolition has begun at our Silicon Valley SV1 Data Center; the team has recycled the old building to make way for our newest data center.

 

March 2019

We have purchased land and have begun developing a new world-class, 16 megawatt data center, the “Silicon Valley SV1 Data Center,” in Santa Clara, the heart of the tech capital of the world. With a total of 160,000 sq. ft. and 16MW of critical IT power, SV1 is an ideal choice for companies needing data center capacity in this top market where new inventory sells quickly. This facility is the first in Santa Clara to use an earthquake-resistant design featuring an innovative base isolation system. Our campus will also include 100 percent green energy capabilities.

Hillsboro Data Center Campus Construction Updates

Stay up to date on the latest news and milestones from our new Hillsboro data center campus currently under construction.


July 2020

Installation of the North IT Room is under way. The main distribution frame (MDF) connects NTT’s infrastructure in the Hillsboro data center to the many providers and locations across the globe. The yellow and white track running above the MDF, otherwise known as the cable track, carries the signal from an enclosure to the MDF and out to the world. The patch panel acts as a handoff from the internet provider to the data center.

The fan walls are currently being installed in vault one. Our team uses slab flooring and a fan-wall design to cool the data floor, a more efficient and sustainable alternative.

June 2020


The chillers have been installed. The chillers circulate cool water throughout the building.


The manufactured medium-voltage switchgear has been delivered to the site. The team is tying the switchgear in to the main utility provider, adding power to the building.

April 2020

Modules Installed

The prefabricated electrical modules have been installed on the side of vault one.

 

HI1 Walls of Vault

The data center floor has been completed; the walls of the vault are now being installed.

 

HI1 Vault Walls

The pre-fabricated electrical modules have been installed at HI1, the first building of NTT's Hillsboro, Oregon data center campus. The pre-fabricated electrical modules are an essential part of the construction and design teams' formula for delivering quality data centers more quickly and efficiently. By constructing the electrical modules offsite, the construction team is able to focus on other areas of the build while the modules are manufactured and shipped to the site.

March 2020

Equipment Pads HI1 Construction

The construction team prepares for the prefabricated equipment to arrive on site by laying the foundations on which it will be installed. The foundation pads shown closer to the building are for the generators and electrical modules. The foundation pads shown further from the building are for the chillers. Simultaneously, the team prepares the inside of the building. This modular approach to construction allows us to get capacity online faster for our clients.

February 2020

HI1 Construction Blog- Slab Floor Install Prep

Our construction team prepares the data floor by adding the essential infrastructure below it that our future clients will need. Once all cabling and piping has been installed, the data center floor will be poured; this slab-floor and fan-wall design ensures an efficient way to keep the data floor cool.

January 2020

The first 6MW customizable vault is currently under construction and is available to pre-lease. This vault will be located in the first of five planned buildings on our Hillsboro, OR Data Center campus. Vault 1 will be available this summer. 
 

Vault 1 - General Specifications

  • 6MW at 258.7 watts per square foot
  • 23,000 sq. ft.
  • Single-story structure with a concrete slab design
  • Dedicated electrical infrastructure option at 6MW

To learn more about our Hillsboro, OR Data Center campus and get more details on the first 6MW vault layout and specifications, download the brochure here: NTT Hillsboro Data Center Brochure

November 2019

 

Introducing NTT's Hillsboro, Oregon Data Center campus. The 47-acre campus is located in the Pacific Northwest technology hub, home to one of the richest network infrastructures in the country. The 1,000,000 square foot space will have 144MW of critical IT load. HI1, the first of five buildings, will be opening in the summer of 2020. Our campus will also include 100 percent green energy capabilities.

Why Enterprises Should Use Hyperscale Data Center Techniques

When contemplating what data center customers will need over the next one to three years, several things come to mind.

First, hybrid cloud will continue to be a popular trend, and with good reason. Uncontrollable costs from public cloud service providers are driving people to pull workloads out of the public cloud and into a more economical hybrid cloud environment. Some customers have also reported performance issues when demand on public cloud is high.

Next, many customers are asking for larger footprints and increased power density. In fact, it’s not uncommon to see densities hit 20kW. These higher power densities are a real problem for legacy data center providers that designed their buildings to serve 4-5kW per rack installations, back in the days when a high-density load was considered to be 10kW. We’re long past that now. Data center operators who can build-to-suit can handle these new 20kW and higher requirements, which is really what customers need to run their mission-critical applications.

The bottom line is: to get the most cost-effective, efficient use of a data center, enterprises need to use hyperscale techniques. But how?

Let’s start with utilization rates. Enterprises typically get about a 30 percent utilization rate of their infrastructure when measured on a 24x7x365 basis, whereas hyperscalers get 70-80 percent – more than double that of enterprises. If enterprises can double their utilization rate, it means that they can buy half of what they normally buy and still serve the same workload demand. That will save a lot of money. 
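As a rough sketch of that capacity math (the workload figure below is hypothetical; the utilization rates are the ones cited above):

```python
# Installed capacity needed to serve the same average workload at
# different utilization rates. Workload units are hypothetical.

def servers_needed(avg_workload: float, utilization: float) -> float:
    """Installed servers required for a given average workload."""
    return avg_workload / utilization

workload = 300  # average demand, in "fully busy server" units (hypothetical)

enterprise = servers_needed(workload, 0.30)   # ~30% utilization
hyperscaler = servers_needed(workload, 0.75)  # 70-80% utilization

print(f"enterprise:  {enterprise:.0f} servers")   # 1000
print(f"hyperscaler: {hyperscaler:.0f} servers")  # 400 -- less than half
```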

But to improve their utilization rate, enterprises have a choice. They can do it on their own, or buy a hyperconverged system that essentially does the same thing. That hyperconverged system will give them public cloud economics in a private cloud environment. There are also quite a few composable systems from major OEMs that leverage similar techniques.

A few years ago, I sponsored an infrastructure TCO study that still rings true today. The study highlighted the point that most of the cost of running a server is not the cost of the server itself. The TCO of running a server consists of three major components: 1) the cost of the server, 2) administration and management, and 3) space, power and cooling. The actual server represents about 20% of the total, 70% is administration and management, and the remaining 10% is space, power, and cooling. 
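Here is that split as a quick sketch, using a hypothetical total TCO figure for illustration:

```python
# The three-part server TCO split cited in the study: 20% server hardware,
# 70% administration and management, 10% space, power, and cooling.
# The $10,000 total is a hypothetical figure, not from the study.

TCO_SPLIT = {
    "server hardware": 0.20,
    "administration & management": 0.70,
    "space, power & cooling": 0.10,
}

total_tco = 10_000  # hypothetical TCO per server, in dollars
for component, share in TCO_SPLIT.items():
    print(f"{component}: ${total_tco * share:,.0f}")
```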

So, enterprises that want to reduce costs should look closely at the fact that 70% of their server costs are tied up in administration and management. Hyperscalers have done exactly that. Their investments in software, machine learning, and automation drive utilization rates to more than double that of the average enterprise, creating world-class TCO and programmability of their data center infrastructure.

Can Growing CDN Providers Overcome These 3 Challenges?

As the COVID-19 pandemic swept across the globe, content delivery network (CDN) providers were quickly thrust into the world’s spotlight. People everywhere depended on CDNs to quickly and smoothly connect them to news, entertainment, education, social media, training, virtual events, videoconferencing, telemedicine … the list goes on and on. 
 
That’s why it’s no surprise that the global CDN market, valued at $9.9 billion in 2018, is now expected to reach $57.15 billion by 2025, according to Brand Essence Market Research.
 
But to turn those lofty projections into revenue growth, the smartest CDN providers must find ways to overcome significant challenges, such as the three below.
 
Challenge #1: Deliver high performance with low latency
 
People everywhere are demanding high quality content and video, without any speed bumps due to latency issues. Although software, hardware, networks, and bandwidth all affect the level of latency, the single biggest factor that slows down content is the distance that light has to travel. That’s because for all our mind-blowing achievements in technology, one thing we haven’t yet figured out is how to speed up the speed of light. 
 
Light travels at about 125,000 miles per second through optical fibers, which is roughly two-thirds of the speed of light in a vacuum (186,000 miles per second). So for every 60 miles of distance a packet has to travel, about half a millisecond is added to the one-way latency, and thus 1 millisecond to the round-trip time.
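Here is that arithmetic as a small Python sketch, using the round numbers above. Note that measured latencies, like the city-pair figures later in this post, run higher than this floor because fiber routes are not straight lines and network equipment adds delay.

```python
# Propagation-delay floor for light in optical fiber, per the figures above.

FIBER_SPEED_MPS = 125_000   # miles per second, ~2/3 of c in a vacuum (186,000 mi/s)

def one_way_latency_ms(miles: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return miles / FIBER_SPEED_MPS * 1_000

for miles in (60, 88, 218, 570, 754):
    ow = one_way_latency_ms(miles)
    print(f"{miles:>4} mi: {ow:.2f} ms one-way, {2 * ow:.2f} ms round trip")
```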
 
They say money makes the world go ‘round. So in essence, latency can stop the world from turning, as shown in these examples: 

  • In finance, for years firms have offered premium “ultra-low latency” services to investors who want to receive key data about two seconds before the general public does. What can happen in two seconds? In the stock market, quite a lot. Research by the Tabb Group estimated that if a broker’s platform is even 5 milliseconds behind the competition, it could lose at least 1% of its flow, equating to about $4 million in revenues per millisecond. 
  • In retail, according to research by Akamai, a 100 ms delay in web site load time leads to a decrease in conversion rates of up to 7%. Conversely, Akamai reported that Walmart noticed that every 100 ms of improvement in load time resulted in up to a 1% increase in revenue.
  • In the cloud, latency looms as a major hindrance. Among the findings in a research paper by the University of Cambridge Computer Laboratory are that 10µs (those are microseconds, or one-millionth of a second) latency in each direction is enough to have a noticeable effect, and 50µs latency in each direction is enough to significantly affect performance. For data centers connected by additional hops between servers, latency increases further. This has ramifications for workload placement and physical host sharing when trying to reach performance targets. 

Every CDN wants to provide high-performance, but predicting the performance of CDNs can be an imprecise exercise. CDNs use different methodologies to measure performance, and have various types of underlying architecture. However, one universal truth is that the geographic locations of CDN data centers play a big role in performance measurements. 
 
This is one reason why NTT’s Global Data Centers division has strategically chosen certain locations for their data center campuses. For example, our data centers in Sacramento give companies based in San Francisco a low-latency experience as compared to other locations. Those customers experience round-trip latency of only 3 milliseconds to go out and back the 88 miles from Sacramento to San Francisco. That compares well to round-trip latency of 4.2 milliseconds from San Francisco to Reno (218 miles away), 15.3 milliseconds from San Francisco to Las Vegas (570 miles away), or 18.1 milliseconds from San Francisco to Phoenix (754 miles away). 
 
In Chicago, where NTT is building a new 72MW data center campus, customers at that Central U.S. location will enjoy low latency to both U.S. coasts. According to AT&T, IP network latency from Chicago to New York is 17 milliseconds, and from Chicago to Los Angeles is 43 milliseconds. 
 
Reducing latency is a huge point of emphasis at NTT. At our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center. 
 
Challenge #2: Scale up to meet a growing base 
 
Every business wants more customers, but CDNs need to be careful what they wish for. Huge bursts in Internet traffic can bring an overwhelming amount of peak usage. Videoconferencing historians will long remember the third week of March 2020, when a record 62 million downloads of videoconferencing apps were recorded. Once those apps were downloaded, they were quickly put to use – and usage time has only increased since then.
 
The instant reaction to those stats and trends would be for CDNs to add as much capacity as possible. But building to handle peak demand can be costly, as a CDN also needs to economically account for lower-usage times when huge amounts of capacity will go unused.
 
These massive spikes and valleys bring a significant traffic engineering challenge. A well-prepared CDN will minimize downtime by utilizing load balancing to distribute network traffic evenly across several servers, making it easier to scale up or down for rapid changes in traffic. 
 
Technology such as intelligent failover provides uninterrupted service even if one or more of the CDN servers go offline due to hardware malfunction. The failover can redistribute the traffic to the other operational servers.  
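As an illustration of the load-balancing and failover behavior described above (a minimal sketch, not NTT's or any particular CDN's implementation), traffic can rotate across a pool of servers, skipping any that drop offline:

```python
# Minimal round-robin load balancing with failover: rotate across the
# server pool and skip any server marked unhealthy. Names are hypothetical.

from itertools import cycle

servers = {"edge-a": True, "edge-b": True, "edge-c": True}  # name -> healthy?
rotation = cycle(servers)

def pick_server() -> str:
    """Return the next healthy server in the rotation."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if servers[candidate]:
            return candidate
    raise RuntimeError("no healthy servers available")

servers["edge-b"] = False                  # simulate a hardware malfunction
print([pick_server() for _ in range(4)])   # ['edge-a', 'edge-c', 'edge-a', 'edge-c']
```

Real CDNs do this with health-checked DNS, anycast, or dedicated load balancers rather than application code, but the principle is the same: detect an unhealthy server and route around it.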
 
Appropriate routing protocols will transfer traffic to other available data centers, ensuring that no users lose access to a website. This is what NTT’s Global Data Centers division had in mind when we deployed a fully redundant point-to-point connection, via multiple carriers, between all our U.S. data centers.  
 
We make critical functions such as disaster recovery, load balancing, backup, and replication easy and secure. Our services support any Layer 3 traffic for functions such as web traffic, database calls, and any other TCP/IP based functions. Companies that leverage our coast-to-coast cross connect save significant money over installing and managing their own redundant, firewall-protected, multi-carrier connection.
 
Challenge #3: Plan for ample redundancy, disaster recovery, and risk reduction 
 
Sure, most of the time disasters don’t happen. But I suppose that depends on your definition of a disaster. It doesn’t have to be the type that you see in movies – with earth-cracking gaps opening up in the middle of major cities. Even events as routine as thunderstorms can have ripple effects that could interrupt service provided by a CDN. 
 
That’s why smart CDNs are reducing their risk of outages by not having all their assets and content in one geographic area. The good thing is that by enacting one fairly simple strategy, CDNs can check the boxes for ample redundancy, disaster recovery, and risk reduction. 
 
That strategy is to have a data center presence in multiple geographic locations. The three sections of the U.S. – East, Central, West – make for a logical mix. 
 
In the East region, well, Ashburn is the capital of the world as far as data centers are concerned. No other market on the planet has as much deployed data center space as Ashburn, and construction of new data centers is ongoing in order to keep up with demand. Known as “Data Center Alley”, Ashburn is a perfect home for a data center for many reasons, including its dense fiber infrastructure and low risk of natural disasters. Those factors alone make Ashburn a great location as part of a redundancy and disaster recovery strategy.
 
In the Central region, Dallas has a very low risk of dangerous weather conditions. According to data collected from 1950-2010, no earthquakes of 3.5 magnitude or above have occurred in or near Dallas. No hurricanes within 50 miles of Dallas have been recorded either. And while tornadoes can occur, some data centers such as NTT’s Dallas TX1 Data Center are rated to withstand those conditions. Another appealing aspect of Dallas is that Texas’s independent power grid, managed by ERCOT (the Electric Reliability Council of Texas), is one of three main power grids that feed the continental United States. By maintaining a presence in each of the three grids, companies can make sure that their data centers are as reliable as can be.
 
In the West, several appealing options are located along the Pacific coast. In the Northwest, the Oregon suburb of Hillsboro is a popular choice for an economical disaster recovery location. Hillsboro has a mild climate, which translates to low heating and cooling costs, along with minimal natural disaster risk and strong tax incentives. As a bonus, a variety of submarine cables deliver low-latency connections between Hillsboro and high-growth Asian markets.
 
In Northern California, Sacramento offers a safe data center option as that city is out of the seismic risk area that includes Bay Area cities. Sacramento is also considered preferable to other Western data center markets such as Reno, Nevada. At least 30 seismic faults are in the Reno-Carson City urban corridor, and some studies say that two of those faults in particular appear primed to unleash a moderate to major earthquake. 
 
And then there’s Silicon Valley, which of course is a terrific place to have data center space. However, no one would say that Silicon Valley is a truly seismically stable area. But, that risk can be mitigated if the data center is protected with technology such as a base isolation system, which NTT uses in its new four-story Santa Clara data center. That base isolation system has been proven to enable multi-story buildings to withstand historic earthquakes with no damage to the structure or the IT equipment inside. 

CDN Customer Coverage from NTT U.S. Data Center Locations 

This map shows how NTT’s U.S. data centers can give CDNs the level of low latency, load balancing, and redundancy that they are looking for. 
 
Legend:  

  • White lines indicate NTT customer base outreach per location 
  • Blue lines indicate the interconnection between the locations 

Data at the Center: How data centers can shape the future of AI

In today’s world, we see data anywhere and everywhere. Data comes in different shapes and sizes, such as video, voice, text, photos, objects, maps, charts, and spreadsheets. Can you imagine life without a smartphone, social apps, GPS, ride-hailing, or e-commerce? Data is at the center of how we consume and experience all these services.  
 
Beyond the gathering of data, we need to determine how to use it. That’s where artificial intelligence and machine learning (AI/ML) come in. In services like ride-hailing, navigation/wayfinding, video communications and many others, AI/ML has been designed in.  For example: 
 
    •    Today, a luxury car is built with 100 microprocessors to control various functions 
    •    An autonomous vehicle (AV) may have 200 or more microprocessors and generates 10 GB of data per mile 
    •    Tens of thousands of connected cars will create a massive distributed computing system at the edge of the network 
    •    3D body scanners and cameras generate 80 Gbps of raw data for streaming games 
    •    A Lytro camera, equipped with light field technology, generates 300 GB of data per second 
 
Computer systems now perform tasks requiring human intelligence – from visual perception to speech recognition, from decision-making to pattern recognition. As more data is generated, better algorithms get developed. When better services are offered, usage of those services goes up. Think of it as a flywheel that keeps moving.
 
AI solutions are only as good as the high-quality datasets behind them. Real-world scenarios showing how the technology is used include:

    •    Autonomous Driving: Data storage on vehicle versus in the data center for neuro map analysis and data modeling 
    •    Autonomous Business Planning: Decision and business models for manufacturing and distribution  
    •    Data Disaggregation: Uncover hidden patterns in shifting consumer taste and buying behavior in retail space 
    •    Video Games: Real-time player level adjustment using low-latency data for enhanced experience 
 
Enabling AI infrastructure in data centers  
 
Because data centers sit right in the middle of compute, storage, networking, and AI, they are the hub that those other technologies revolve around. So as a data center company, how can we make AI/ML computation affordable and accessible for enterprises to keep their innovation engines running? 
 
At NTT Global Data Centers, enabling AI infrastructure is an important pillar of our growth plans. GPUs, memory, storage, and the network are the key components of ‘AI Compute’ infrastructure. Our goal is to make AI Infrastructure-as-a-Service accessible and affordable to forward-looking small, medium, and large businesses.  
 

Modern enterprises must have an AI engine at the core of their business decision-making, operations, and P&L to stay competitive in the marketplace. But AI projects can be an expensive undertaking … from hiring talent to hardware procurement to deployment, your budget may skyrocket. However, if the upfront costs of AI infrastructure can be eliminated, the entire value proposition shifts in a positive way. So how does NTT Global Data Centers help our customers build AI solutions?

    1.    We de-construct our customer’s problem statement and design the solution 
    2.    Our AI experts build a tailored model for the computation 
    3.    Our customers have full access to AI-ready infrastructure and talent 
 
AI is transforming the way enterprises conduct their business. With data centers in the middle, GPUs, hardware infrastructure, algorithms, and networks will change the DNA of enterprises.

Tax breaks are perfect topping on Chicago pie

For data centers, Chicago had it all… almost.

Great connectivity. Low latency to both U.S. coasts. Hundreds of temperate days without the need to cool the data floor. Affordable power. Deep dish pizza. Sorry, did that last item make you hungry? Well, keep reading, you’ll be glad you did. 

While Chicago offered a deep, broad list of benefits for data center customers, something was missing. And other data center markets had it. 

So close to perfect 

The missing topping on Chicago’s data center pie? Tax incentives. And that left a noticeable blank spot on an otherwise delicious dish. 

Chicago’s situation came to a somewhat dramatic head on Jan. 27, 2019, when Ally Marotti of the Chicago Tribune wrote an article describing what was at stake if Illinois could not provide the same data center tax incentives that were offered in 30 other states. And of those 30 states, it was nice, quiet Iowa that figuratively sounded the alarm that Illinois could not help but hear. 

Back in 2017, Iowa enticed Apple to start building a 400,000 square-foot data center near Des Moines by offering $20 million in tax incentives. A comprehensive report paid for by the Illinois Chamber of Commerce Foundation found that if Apple had chosen to build its project in Illinois, the state could have added 3,360 jobs, $203.9 million in labor income and $521.7 million in economic output.

Let’s do something about it  

Illinois did not want to watch more major players become drawn to Iowa as a top Midwestern magnet for data centers while forsaking Chicago and all its great benefits.

Tyler Diers, executive director of the Illinois chamber’s technology council, made that point clear in Marotti’s article, saying “We hear the war stories all the time, and [data center operators] do too. We’re increasingly losing our desirability and our competitiveness. Even though we’re still relatively high, we want to stop the bleeding before we no longer [are] a desirable location.”

Those are strong words, and they made a powerful impression. Instead of standing by idly as other states filled their economic coffers with data center-driven cash, Illinois bore down and did something about the situation. After all, this state is personified by Chicago, the city of the big shoulders where “dem Grabowskis” root for “Da Bears”. The Windy City was not about to get blown away by Iowa.

In June 2019, Chicago essentially called dibs on the Midwest data center market when Illinois added data center tax incentives to the state’s capital spending budget. Now qualifying data centers -- and their customers -- are exempt from paying state and local sales taxes on equipment in the data centers, including everything from cooling and heating equipment to servers and storage racks. To qualify, a data center must spend at least $250 million on the facility and employ at least 20 full-time employees over a five-year period. In addition, it must prove it meets a green building standard, such as LEED or Energy Star.

NTT’s Global Data Centers Americas division will meet all of those qualifications as we build two 36MW data centers on our 19-acre campus in Itasca, Illinois, which is about 27 miles outside of the city of Chicago and right near airports, restaurants, hotels, beautiful lakes, and other amenities.

Here’s what’s in it for you 

So how does that tax break benefit data center colocation customers? Well, the sales tax rate in Itasca is 7.5%. Businesses that are new to colocation may have to invest in new equipment. Say they invest $500,000; that’s an extra $37,500 in sales tax they don’t have to pay because of this benefit.
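For those who like to run the numbers themselves, here is the same calculation as a short sketch (the 7.5% rate and the $500,000 spend are the figures from this post):

```python
# Sales tax avoided on exempt data center equipment purchases in Itasca.

ITASCA_SALES_TAX = 0.075  # 7.5% combined state and local rate cited above

def tax_savings(equipment_spend: float) -> float:
    """Sales tax a qualifying colocation customer avoids paying."""
    return equipment_spend * ITASCA_SALES_TAX

print(f"${tax_savings(500_000):,.0f}")  # $37,500 on a $500,000 investment
```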

And this Illinois sales tax exemption is locked in for 20 years, so data center customers will pay no sales tax when they refresh their equipment down the road too.

So that’s the Chicago story, and it’s a darn good one. This is a town that saw it was falling behind, so it lowered its broad shoulders, and charged forward to take the lead again. Now data center customers can enjoy significant tax savings for decades to come in a market that already had everything else they could possibly need. 

Oh, still thinking about that deep dish pizza? I understand. Here’s a great tip: you can get an authentic Chicago deep dish pizza shipped frozen to you from Giordano’s or Lou Malnati’s. Either way you can’t go wrong. You’re welcome -- enjoy!

P.S. You can talk the talk – Chicago-style! To learn more about the flavor of Chicago through the city’s unique jargon, check out this page to find out the meanings of frunchroom, the bean, grachki, sammich and more.
