Looking Back on 2020

We did it. We made it to the end of 2020, a year that took everyone by surprise and forced us to change the way we operate in business, schools, and communities. This year taught us to be resilient, adaptable, and innovative to ensure our business kept running at the highest level for our clients.

When we reminisce about 2020 many years from now, we’ll most likely remember a time of extreme change and challenges, but we should also remember what we were able to achieve amidst the uncertainty. As this year comes to an end, we wanted to highlight some of those milestones.

RagingWire is Now NTT Global Data Centers Americas

This year we joined the new Global Data Centers division, which incorporates e-shelter, Gyron, Netmagic, NTT Indonesia Nexcenter, RagingWire and other data center companies that formerly sat under the NTT Communications brand. Now as one NTT team, the Global Data Centers division is one of the top three leaders in worldwide colocation and interconnection services. All the benefits of our highly skilled in-house team from the former RagingWire remained for our clients, but by joining the NTT family, we expanded our global footprint and network service options. As NTT Global Data Centers Americas, we’re able to give our clients access to new markets and solutions, and help them grow at the pace they need.

A Time of Growth

This was a year of growth for us as we expanded in Ashburn and broke ground on data centers in three new markets – Chicago, Hillsboro, and Silicon Valley – all set to open in 2021. 


Hillsboro

6MW is now available in the first of five planned data centers on our 47-acre campus just outside Portland, Oregon. The campus offers more than one million square feet of data center space at full buildout and a total of 126MW of critical IT load. We’re proud to have earned the Cleaner Air Oregon certification, which ensures commercial and industrial facilities can’t emit toxic air contaminants at harmful levels. Clients will also have access to direct subsea cables offering low-latency connections between the Hillsboro campus and high-growth Asian markets, making this a prime spot in the Pacific Northwest.


Chicago

After a year of hard work, the first data center offering 36MW is now available at our new 19-acre data center campus in Chicago. When complete, the campus will feature two buildings totaling 72MW of critical IT load. Customizable high-density vaults, low latency to both US coasts, and robust connectivity make Chicago an increasingly desirable data center location for deployments of all sizes. We’re excited to celebrate the opening of CH1 and HI1 early next year.



Silicon Valley

We hit significant milestones at SV1, our new data center in Silicon Valley, this year. Construction is nearly complete on the 160,000 square foot, 16MW facility, and its convenient location in Santa Clara makes it an ideal spot for clients looking for space in this prime market. We leaned in on NTT’s experience in Japan and proactively prepared for the challenges of an earthquake. This facility is the first in Santa Clara to use a state-of-the-art base isolation system proven to absorb vibrations and keep IT equipment safe during a seismic event.




Ashburn

Expansion in Ashburn is moving fast and the first 8MW are available in a new data center on our 78-acre campus. VA5 contains a total of 32MW of critical IT load and 140,000 square feet of data floor space. With the new addition, our Ashburn campus now totals 224MW and 1.8 million square feet over seven buildings.

An Emphasis on Health and Safety

Like most of the world, we were forced by COVID-19 into different working environments and had to rethink our health and safety protocols. To keep our mission-critical employees as safe as possible in the data centers so they could keep our clients’ businesses running, we moved all other employees to remote work and implemented extra safety precautions, including:

  • Temperature checks at each data center
  • Health screenings asking data center entrants about recent travel
  • Increased cleaning/sterilizing throughout the day, especially at frequent contact surfaces
  • Hand sanitizer distribution to all occupants



Employees moving to remote work encountered lots of virtual meetings, canceled events, and a change in dynamic among colleagues. We came up with some unique ways to keep our team and clients connected, including a virtual “Craft Cocktails with the Chief of Staff” mixology event and a cookie-decorating party around the holidays.

Designing for Density in the Data Center

If we learned anything from 2020, it was how to adapt to rapidly changing situations and find innovative ways to solve problems. As the growing adoption of artificial intelligence (AI) changes density demands in the data center, we’ve made sure our data center facilities can support accelerated computing operations. Our Dallas TX1 and Ashburn VA3 Data Centers are now qualified as NVIDIA DGX-Ready data centers. Clients can utilize DGX, NVIDIA’s flagship appliance for AI computation, and leverage AI benefits without installing their own infrastructure. It’s a major step toward our goal of making AI infrastructure-as-a-service accessible and economical for businesses of all sizes. 

Looking Ahead

As we look to the new year, we have several exciting milestones coming up. We’re planning a (virtual) launch for both our Chicago and Hillsboro Data Centers, and we’ll break ground on a brand-new campus in Phoenix. Stay tuned for more about that site.

The many obstacles we all faced in 2020 brought data to the forefront of our everyday lives more than ever, and we’re looking forward to the opportunities ahead. We hope 2021 brings back a sense of normalcy, but it might be a new normal. No matter what, we remain committed to giving our clients the best global technology solutions to drive their growth and enable their success.

Silicon Valley SV1 Data Center Construction Updates

Stay up to date on the latest news and milestones from our new Silicon Valley data center campus currently under construction.

November 2020

Electrical gear is currently being installed. It is fed by the electrical modules, which are powered by the utility provider.

October 2020

The generators are being installed on site. They are used to power the facility in the event of an outage. 

The generator is being placed on the pad.

August 2020



July 2020

The Concrete Masonry Unit (CMU) wall is in place. The gap in the 3-story steel paneling provides access to place the pre-manufactured generators inside, and will be sealed once they are in place.

Level one and two concrete placement complete.  

June 2020

SV1 Fireproofing

The team is adding Sprayed Fire-Resistant Material (SFRM), a spray-on layer of fireproofing that contains gypsum and other materials such as mineral wool, quartz, perlite, or vermiculite, to the lower level of the building. The spray helps delay or prevent the failure of steel by thermally insulating the structural members, keeping them below the temperatures that cause failure in the event of a fire.

May 2020

SV1 Topping Off

The team has built the second, third, and fourth floors, topping the building off. 16MW of IT power will be distributed throughout the 160,000 sq. ft. facility.


April 2020

SV1 Steel Walls

The team is in the process of installing the building's steel framing. The steel is anchored to the concrete flooring that sits on top of the base isolators.


March 2020

The final section of the base isolation system's triple bearing base isolators has been installed. Listen to Anoop Mokah, Vice President of Earthquake Protection Systems, detail how the triple bearing base isolators operate in the event of an earthquake. To learn more about how the base isolation system works, read the article “Taking Earthquake Protection to the Next Level in Data Centers” by Bob Woolley, our Senior Vice President of Operations.


February 2020 

NTT Silicon Valley Data Center - Earthquake-resistant Base Isolation System Installation

The first section of the base isolation system has been installed at our Silicon Valley SV1 Data Center. The isolators are a critical piece of the state-of-the-art base isolation system: they protect the building during an earthquake by following the movement of the earth so that the building itself barely moves. The isolators can shift up to 3 meters in any direction to help keep the building in place, giving the facility a much greater chance of staying operational after a seismic event.


September 2019

Demolition has begun at our Silicon Valley SV1 Data Center; the team has recycled the old building to make way for our newest data center.


March 2019

We have purchased land and begun developing a new world-class, 16-megawatt data center, the Silicon Valley SV1 Data Center, in Santa Clara, the heart of the tech capital of the world. With a total of 160,000 sq. ft. and 16MW of critical IT power, SV1 is an ideal choice for companies needing data center capacity in this top market, where new inventory sells quickly. This facility is the first in Santa Clara to use an earthquake-resistant design featuring an innovative base isolation system. Our campus will also include 100 percent green energy capabilities.

Hillsboro Data Center Campus Construction Updates

Stay up to date on the latest news and milestones from our new Hillsboro data center campus currently under construction.

December 2020

The final security features are being added to the facility. The camera feeds throughout the facility are now wired into the visitor control center (pictured right), and the vault entrances (pictured left) are receiving their final badge readers for an extra layer of protection.

November 2020

Now that the electrical room is finished, the team must test the electrical gear individually. Load banks are turned on, acting as stand-in servers to simulate full load. Once each individual component passes, they are all run at the same time. When testing is complete, the load banks are rolled out of the facility and the vault is ready for final completion.

October 2020

The electrical room has been completed. The electrical room is fed by the electrical modules which are powered by the utility provider. 

September 2020

The anti-climb perimeter fence has been installed around the campus, and armed gates keep the data center, clients, and employees safe. The data center's nonessential lighting will be powered by on-campus solar panels (pictured in the far left corner).

August 2020



July 2020

Installation of the North IT Room is under way. The main distribution frame (MDF) connects NTT’s infrastructure in the Hillsboro data center to the many providers and locations across the globe. The yellow and white track running above the MDF, known as the cable track, carries the signal from an enclosure to the MDF and out to the world. The patch panel acts as a handoff from the internet provider to the data center.

The fan walls are currently being installed in vault one. Our team uses slab flooring and a fan wall design to cool the data floor, a more efficient and sustainable alternative.

June 2020

Hillsboro Data Center Campus Construction Update - June 2020

The chillers have finished being installed. The chillers circulate cool water throughout the building.


The manufactured medium-voltage switchgear has been delivered to the site. The team is tying the switchgear into the main utility feed, bringing power to the building.

April 2020

Modules Installed

The prefabricated electrical modules have been installed alongside vault one.


HI1 Walls of Vault

The data center floor has been completed, currently the walls of the vault are being installed. 


HI1 Prefabricated Electrical Modules

The pre-fabricated electrical modules have been installed at HI1, the first building on NTT's Hillsboro, Oregon data center campus. The pre-fabricated electrical modules are an essential part of the construction and design teams' formula for delivering quality data centers more quickly and efficiently. By constructing the electrical modules offsite, the construction team is able to focus on other areas of the build while the modules are manufactured and shipped to the site.

March 2020

Equipment Pads HI1 Construction

The construction team prepares for the prefabricated equipment to arrive on site by laying the foundations on which it will be installed. The foundation pads closer to the building are for the generators and electrical modules; the pads further from the building are for the chillers. Simultaneously, the team prepares the inside of the building. This modular approach to construction allows us to get capacity online faster for our clients.

February 2020

HI1 Construction Blog- Slab Floor Install Prep

Our construction team prepares the data floor by installing the essential infrastructure below it for our future clients. Once all cabling and piping have been installed, the data center floor will be poured. This slab floor and fan wall design is an efficient way to keep the data floor cool.

January 2020

The first 6MW customizable vault is currently under construction and is available to pre-lease. This vault will be located in the first of five planned buildings on our Hillsboro, OR Data Center campus. Vault 1 will be available this summer. 

Vault 1 - General Specifications

  • 6MW at 258.7 watts per square foot
  • 23,000 sq. ft.
  • Single-story structure with a concrete slab design
  • Dedicated electrical infrastructure option at 6MW

To learn more about our Hillsboro, OR Data Center campus and get more details on the first 6MW vault layout and specifications, download the brochure here: NTT Hillsboro Data Center Brochure

November 2019


Introducing NTT's Hillsboro, Oregon Data Center campus. The 47-acre campus is located in the Pacific Northwest technology hub, which has one of the richest network infrastructures in the country. The 1,000,000 square foot space will have 144MW of critical IT load. HI1, the first of five buildings, will open in the summer of 2020. Our campus will also include 100 percent green energy capabilities.

Chicago CH1 Data Center Construction Updates

Stay up to date on the latest news and milestones from our new Chicago data center campus currently under construction.

December 2020


October 2020



August 2020

The construction team has used a crane to lift the chillers to the roof of the building. The chillers are used to circulate cool water throughout the building.

July 2020

Installation of the IT Room is under way. The main distribution frame (MDF) connects NTT’s infrastructure in the Chicago data center to the many providers and locations across the globe. The yellow and white track running above the MDF, known as the cable track, carries the signal from an enclosure to the MDF and out to the world.

June 2020

CH1 Vault Flooring

The team is preparing to pour the concrete flooring for vault one. The design uses a slab floor and fan wall cooling technique.

May 2020

Northeast Walls

The northeast and southwest exteriors are in progress. Steel and glass will be placed over the wall studs in the next phase of construction. This is the exterior where the ops team and other offices will be located. The concrete wall to the far right is where the vaults are located.

Partitioned walls

The interior partition framing is in progress. 

The prefabricated exterior walls of the building have been shipped in and installed. By using prefabricated exterior walls, we are able to build out the interior simultaneously, speeding up the construction timeline.


April 2020

CH1 Roof Installed

The roof has been set; next, the precast walls will be shipped to the site and installed on the sides of the building.


March 2020

Setting the Beam

The steel was officially topped out at the NTT CH1 project. Honoring a long-standing tradition, the final steel beam was painted white and affixed with the American flag. The beam was then signed by the ironworkers and other tradesmen, who also added the names of project team members from Clune, Verity, and Linesight before it was hoisted into place. The topping out ceremony marks a very exciting milestone for this site!


November 2019

CH1 Construction has begun

The first 6MW of critical IT load is currently under construction and is available for pre-lease. “The first building on our Chicago Data Center campus is standing up its core and shell now,” said Doug Adams, President and CEO of NTT Global Data Centers Americas (formerly known as RagingWire Data Centers). “By using modular construction techniques, economies of scale, and carefully planned supply chain management, we have lowered construction costs and will be able to affordably offer all the space and critical IT power needed to help companies grow and scale their data center presence as their business needs evolve.”  


October 2019

CH1 Exterior

Introducing NTT’s Chicago Data Center campus. The 19-acre campus is located in Itasca, Illinois, a prime area for wholesale data centers. The campus will have two buildings offering a total of 72 megawatts of critical IT load. Each of the two buildings will offer 36 megawatts and 250,000 sq. ft. of space spread over two stories. The first six megawatts of critical IT load will be available in late 2020.

Simplifying the Mystery of Tax Incentives for Data Center Clients



In 25 U.S. states today, colocation data center clients can save big money through various tax incentives. However, tax incentives can seem complicated to interpret. In the video above, two tax experts simplify the mysteries of tax incentives for data center clients.  

Stefanie Williams, research analyst at 451 Research (a part of S&P Global Market Intelligence), and Nahom Essaw, director and controller for NTT Global Data Centers Americas, share some straight facts about tax incentives that data center clients can use to save money and improve their ROI. 

Stefanie and Nahom discuss their answers to these two main questions: 

  1. What is the most straightforward tax incentive for data center customers? 
    Sales and use tax exemptions exist in states with prime data center locations such as Virginia, Oregon, and Arizona. Data center clients at colocation facilities in those states can save 6-9% on new equipment purchases by not paying sales tax. 

  2. What tax incentives do data center customers not know about?
    One example is that data center clients can enjoy 100% property tax relief when their colocation operator has negotiated effectively with tax authorities on their behalf.
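To make the sales-and-use-tax exemption concrete, here is a minimal sketch of the arithmetic. The annual equipment spend is a hypothetical figure for illustration; only the 6-9% range comes from the discussion above.

```python
# Sketch: estimated savings from a sales-and-use tax exemption.
# The equipment spend below is a hypothetical example.

def sales_tax_savings(equipment_cost: float, tax_rate: float) -> float:
    """Tax avoided on new equipment purchases in an exempt state."""
    return equipment_cost * tax_rate

annual_refresh = 2_000_000  # hypothetical yearly hardware spend, in dollars
for rate in (0.06, 0.09):   # the 6-9% range cited above
    print(f"At {rate:.0%}: ${sales_tax_savings(annual_refresh, rate):,.0f} saved per year")
```

Since hardware refreshes recur every few years, these savings compound over the life of a deployment.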

Keep in mind that tax incentives for data center clients are continually evolving. Colocation providers are always working with state governments to get new bills introduced or revise existing legislation to help clients save on data center expenses.   

You can hear more about tax incentives in the video, and please feel free to contact us with any questions you may have.

Interconnection: A Better Way to Manage Data Demands

Due to the global pandemic, IT managers everywhere are trying to manage exponential demands for higher bandwidth. Increases in people working from home, distance learning, e-Medicine, video streaming, online shopping, and online gaming are creating unprecedented data demands. All of your constituents want high-availability, high-quality service.  

Colocation data centers can meet those data demands by providing redundant and diverse paths to transport network traffic across the globe. Whether it is an enterprise running mission-critical applications, or someone working or learning from home, business is expected to run flawlessly and without any interruption.  

Modernized data center network infrastructure is optimized for a wide range of complexity, such as hybrid IT infrastructure, public and private cloud connectivity, multi-cloud, private closed network, SD-WAN, the proximity and density of fiber providers, and the Internet. This infrastructure is especially well-suited for global businesses that want to have their primary and backup sites in different regions and countries with standardized services and operations.  

How can colocation data centers keep my constituents connected? 

The path to streamlined, reliable connectivity starts with a well-conceived topology, such as what we call “Data Center Interconnect.” This technology uses high-speed connectivity to connect two or more data centers over short, medium, or long distances.  

As a global data center company, we deploy a “switched fabric” topology to connect our data center network, cloud exchange, and internet exchange services to our service providers and enterprise customers. This fabric is a network connectivity platform that provides low latency, high availability, and secure connections between hybrid multi-clouds, enterprises, and your own digital assets. 

What are some problems that colocation data centers can solve? 

Data Center Interconnect is geared to help in several scenarios. The main motivation for Data Center Interconnect has traditionally been the assurance of business continuity in a disaster: by setting up their mission-critical infrastructure in more than one data center, companies could avoid network breakdowns in the event of a metro-wide catastrophe.

But now, more companies find they are constantly in need of increased bandwidth as usage of laptops, smartphones, game consoles, in-vehicle navigation systems, and other devices grows. Slow response times when accessing information are unacceptable. Data Center Interconnect provides connectivity links precisely when needed:

  • Remote work will not be a short-term trend. A geographically distributed workforce requires fast, reliable connectivity with the ability to scale up quickly. VPN usage has risen more than 130% in the US during the global pandemic. Video communications have become a medium of choice for business, family, and online learning; Zoom, Microsoft Teams, and WebEx have all seen considerable increases in subscribers, and Zoom reported $328 million in revenue during its February–April 2020 quarter.

  • In online gaming, Electronic Arts (EA) reported tens of millions of new players dove into their online and mobile games during the pandemic. TDK Corporation sees a significant opportunity for its high-performance 6-axis MEMS motion sensor for gaming controllers and AR/VR applications. These millions of gamers need low latency connectivity, which Data Center Interconnect can facilitate. 

  • Online shopping has also exploded. Forbes reported that total online spending in May hit $82.5 billion, up 77% year-over-year. To complete their purchases, customers need an uninterrupted, smooth transaction, which comes from having the right amount of bandwidth available.

What are some use cases for Data Center Interconnect? 

Depending on a company’s goals, it could benefit from several different uses of Data Center Interconnect. Here are some examples: 

  1. A company connects to its own network infrastructure in a different data center within NTT Global Data Centers. For instance, a customer with a global presence across NTT Global Data Centers, say in the US (perhaps in California and Virginia), in APAC (perhaps Tokyo, Singapore, and Hong Kong) and Europe (perhaps London and Frankfurt) can interconnect their network assets.

  2. A company connects to Public Cloud (AWS / MSFT Azure / IBM Cloud / Google Cloud) in the same region and across the globe.

  3. A company connects to a vendor or partner over a private connection in the same region and across the globe.

  4. A company connects to its own infrastructure or third-party in a third-party data center. 
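The redundancy these use cases rely on can be sanity-checked by modeling the fabric as a graph and confirming that two sites stay connected even if any single link fails. This is an illustrative sketch only; the site names and links are hypothetical, not NTT's actual topology.

```python
# Sketch: verify a hypothetical interconnect fabric survives any single link failure.

links = {  # hypothetical DCI links between example sites
    ("california", "virginia"), ("virginia", "london"),
    ("california", "tokyo"), ("tokyo", "singapore"),
    ("singapore", "london"), ("london", "frankfurt"),
    ("frankfurt", "virginia"),
}

def connected(a, b, edges):
    """Graph search: is there a path from a to b over edges?"""
    seen, frontier = {a}, [a]
    while frontier:
        node = frontier.pop()
        for u, v in edges:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return b in seen

def survives_single_failure(a, b, edges):
    """True if a and b stay connected after removing any one link."""
    return all(connected(a, b, edges - {dead}) for dead in edges)

print(survives_single_failure("california", "frankfurt", links))  # prints True
```

A real fabric adds latency, capacity, and policy on top of raw reachability, but the same check underlies disaster-recovery planning: no single cut should isolate a site.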

To sum up, the overarching benefits of the global network fabric found in Data Center Interconnect are that companies will avoid delays in connecting with their employees, customers, and business partners. This will result in more efficient external interactions with customers and prospects, and more effective internal interactions between employees. 

NTT Global Data Centers Americas deploys a “switched fabric” topology to connect our data center network, providing low latency, high availability, and secure connections between hybrid multi-clouds, enterprises, and your own digital assets.

Grow revenue and reduce costs by using NVIDIA-powered AI in our TX and VA data centers

Artificial intelligence is changing the landscape of business and the foundational beams of companies. Across many verticals, companies are competing not only for market share or revenue, but to survive. Some companies are scaling and innovating faster, creating new markets and business models to drive more business, and offering more customized and personalized services, not locally, but globally.  

We are moving towards “AI-first” companies as businesses rethink their strategy and operating models. Artificial intelligence, interconnection and networks are now the core tenets for businesses to compete and succeed.  

The Power of Artificial Intelligence 

When artificial intelligence experts Dave Copps and Ryan Fontaine spoke as guests on our podcast series, they shared valuable insights about how companies across all industries can use AI to generate revenue or reduce costs.  

“For businesses, if you have access to good AI and good machine learning, you’re going to be all right,” said Fontaine, the CEO of Citadel Analytics. “Data is the new ‘oil’. It is the most valuable commodity in the world, more valuable than silver or gold. But you still have to do something with it. AI helps find the connections in data, and that’s what makes data – and AI – so valuable.” 

Copps, a legend in the AI community who is currently the CEO of Worlds, illustrated the value of AI with several memorable stories. First, he described how a company he was previously with helped the Department of Justice close a case they had been working on for months by using AI to find crucial info in just 30 minutes.  

Another example from early in his career was perhaps even more profound. Copps’ company was helping an intelligence agency in Europe that had been working on a case involving hundreds of millions of documents over 7 years. Mining the data through traditional search engines was not getting the job done. 

So Copps’ company built an AI platform that enabled people to see and navigate information visually, almost like using a Google Earth map. The reaction from the European intelligence agency could be considered … euphoric. 

 “The guy that was working on the data cried – in a good way,” Copps said. “He had been looking at this for so long, and the answers were right there. That points to the power of AI.” 

For more entertaining AI insight from Copps and Fontaine, you can listen to the entire podcast here.  

But what can I do to leverage AI? 

After listening to the podcast, you might think “That’s great, AI really sounds like it could help my company grow in an efficient and profitable way. But what’s my first step? How do I access and use AI technology?” 

Good question. Actually, no … that’s a great question. 

Luckily the answer to that question has just changed.  

Clients at our Dallas TX1 Data Center and our Ashburn VA3 Data Center can talk to us about accessing AI without installing their own infrastructure. That’s because we have become qualified as NVIDIA DGX-Ready data centers. DGX is NVIDIA’s flagship appliance for AI computation. 

This qualification allows us to house NVIDIA’s DGX AI appliances in our data centers, where they can be used “as-a-service” by clients demanding cutting-edge AI infrastructure.  

NVIDIA has plenty of case studies showing how companies across a broad array of industries have already seen significant results from accessing their deep learning AI technology, including:  

  • Baker Hughes has reduced the cost of finding, extracting, processing, and delivering oil. 

  • Accenture Labs is quickly detecting security threats by analyzing anomalies in large-scale network graphs. 

  • Graphistry is protecting some of the largest companies in the world by visually alerting them of attacks and big outages. 

  • Oak Ridge National Laboratory is creating a more accurate picture of the Earth’s population -- including people who have never been accounted for before -- to predict future resource requirements. 

  • Princeton University is predicting disruptions in a tokamak fusion reactor, paving the way to clean energy. 

Other companies (including some startups who you may hear a lot more about soon) have shared their inspiring stories on this page:

What will your story be? There’s only one way to find out – by harnessing the power of AI for your enterprise. With NVIDIA in our data centers, we can help you get there. Contact me to find out more.


Cleaner Air Oregon program clears new Hillsboro Data Center for construction

As a new member of Oregon's business community, we're proud to announce that our new data center in Hillsboro, Oregon, has completed its evaluation by the Cleaner Air Oregon program and has been pronounced fit to proceed with construction. 

Oregon began a new era in 2018 by creating the Cleaner Air Oregon program, which makes sure that all new and existing commercial and industrial facilities cannot emit toxic air contaminants at a level that could potentially harm people.  The Oregon Department of Environmental Quality (DEQ) sees this new program as helping to ensure that industrial progress will not cause a regression in health.  

"DEQ is excited to see Cleaner Air Oregon meet the ongoing challenge of maintaining clean and healthy air in Oregon communities," said Lauren Wirtis, Public Affairs Specialist for the Oregon DEQ. 

The requirements of the Cleaner Air Oregon program apply to paper mills, steel rolling mills, oil refining companies, microchip manufacturers, lumber companies, glass manufacturers – the list goes on and on – and includes data centers.  

While smaller data centers have also gained permits from the Cleaner Air Oregon program, our Hillsboro data center is the only data center to have completed a Level 3 risk assessment. Level 3 is nearly the most rigorous on a scale that goes from Level 1 to Level 4, with Level 4 being the most complex.  

To illustrate the level of examination that takes place during a Level 3 risk assessment, and why it can take up to a year to complete, take a look at the efforts needed to gain the Cleaner Air Oregon certification.

To complete a Level 2 or 3 Risk Assessment, facilities need to develop a detailed list of model inputs, including site-specific stack characteristics, building information (to account for building downwash), terrain data, specific exposure locations, and site-specific land use parameters. The quantity and complexity of parameters add up quickly and can easily become overwhelming. 

What also gets complicated fast is the amount of data that needs to be managed.  On average, facilities could be reporting anywhere from 10-50 toxic air contaminants per emissions source.  Multiply that by the number of emissions sources, the number of exposure locations, and 43,824 hours (the number of hours in the required 5-year meteorological dataset), and very quickly your Cleaner Air Oregon Risk Assessment includes over a million data points. 
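That multiplication can be checked with a quick back-of-the-envelope script. The contaminant, source, and location counts below are illustrative assumptions; only the 43,824-hour dataset length comes from the text above.

```python
# Back-of-the-envelope count of risk-assessment data points.
# Contaminant/source/location counts are illustrative assumptions;
# 43,824 hours is the required 5-year meteorological dataset.
HOURS_5_YEARS = 43_824

def data_points(contaminants_per_source: int, sources: int, exposure_locations: int) -> int:
    return contaminants_per_source * sources * exposure_locations * HOURS_5_YEARS

# Even small facility-side numbers blow past a million data points:
print(f"{data_points(10, 2, 3):,}")  # prints 2,629,440
```

Even at the low end of the 10-50 contaminant range, a modest facility generates millions of data points, which is why specialized modeling tooling is needed.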

Therefore, it's not only necessary to have a trained air quality modeler involved, but you also need to be able to manage a large amount of data.  This becomes increasingly important when you need to start analyzing the modeling results to determine what sources and what toxic air contaminants may be driving risks and therefore require refinement. 

Why is this level of scrutiny needed? Before the Cleaner Air Oregon rules were adopted, Oregon based its existing rules on federal law. Those rules allowed industrial facilities to release potentially harmful amounts of air toxics while still operating within legal requirements. The Cleaner Air Oregon rules closed the regulatory gaps left after the implementation of federal air toxics regulations.  

Change is hardly ever easy, particularly when it involves new processes and invariably new costs. But this kind of change is well worth it. We applaud the state of Oregon for doing not what is easy, but what is right. And that's why we're proud to help keep Oregon's air clean and healthy for generations to come. 



Why Enterprises Should Use Hyperscale Data Center Techniques

When contemplating what data center customers will need over the next one to three years, several things come to mind.

First, hybrid cloud will continue to be a popular trend, and with good reason. Uncontrollable costs from public cloud service providers are driving people to pull workloads out of the public cloud and into a more economical hybrid cloud environment. Some customers have also reported performance issues when demand on public cloud is high.

Next, many customers are asking for larger footprints and increased power density. In fact, it’s not uncommon to see densities hit 20kW. These higher power densities are a real problem for legacy data center providers that designed their buildings to serve 4-5kW per rack installations, back in the days when a high-density load was considered to be 10kW. We’re long past that now. Data center operators who can build-to-suit can handle these new 20kW and higher requirements, which is really what customers need to run their mission-critical applications.

The bottom line is: to get the most cost-effective, efficient use of a data center, enterprises need to use hyperscale techniques. But how?

Let’s start with utilization rates. Enterprises typically get about a 30 percent utilization rate of their infrastructure when measured on a 24x7x365 basis, whereas hyperscalers get 70-80 percent – more than double that of enterprises. If enterprises can double their utilization rate, it means that they can buy half of what they normally buy and still serve the same workload demand. That will save a lot of money. 
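A quick sketch of that arithmetic (the workload units and utilization rates here are illustrative):

```python
# If the same workload demand is served at a higher utilization rate,
# the infrastructure required shrinks proportionally.
def required_capacity(workload: float, utilization: float) -> float:
    """Capacity needed to serve `workload` at a given utilization rate."""
    return workload / utilization

workload = 100.0                                  # arbitrary units of demand
enterprise = required_capacity(workload, 0.30)    # typical enterprise, 30%
hyperscaler = required_capacity(workload, 0.70)   # hyperscaler, low end of 70-80%
print(f"capacity ratio: {enterprise / hyperscaler:.2f}x")
```

At 30 percent utilization the enterprise needs roughly 2.3 times the infrastructure a hyperscaler needs for the same demand, which is where the "buy half of what they normally buy" savings comes from.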

But to improve their utilization rate, enterprises have a choice. They can do it on their own, or buy a hyperconverged system that essentially does the same thing. That hyperconverged system will give them public cloud economics in a private cloud environment. There are also quite a few composable systems from major OEMs that leverage similar techniques.

A few years ago, I sponsored an infrastructure TCO study that still rings true today. The study highlighted the point that most of the cost of running a server is not the cost of the server itself. The TCO of running a server consists of three major components: 1) the cost of the server, 2) administration and management, and 3) space, power and cooling. The actual server represents about 20% of the total, 70% is administration and management, and the remaining 10% is space, power, and cooling. 
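The split above can be applied to a hypothetical per-server figure (the $10,000 annual total is an assumption for illustration, not a figure from the study):

```python
# TCO split cited above: hardware 20%, administration/management 70%,
# space/power/cooling 10%. The dollar total is a hypothetical example.
TCO_SHARES = {
    "server hardware": 0.20,
    "administration & management": 0.70,
    "space, power & cooling": 0.10,
}

annual_tco_per_server = 10_000   # hypothetical total, in dollars
for component, share in TCO_SHARES.items():
    print(f"{component}: ${annual_tco_per_server * share:,.0f}")
```

Seen this way, even a large discount on the hardware itself moves the total far less than a modest reduction in the administration and management share.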

So, enterprises that want to reduce costs should look closely at the fact that 70% of their server costs are tied up in administration and management. Hyperscalers have done exactly that. Their investments in software, machine learning, and automation drive utilization rates to well over double that of the average enterprise, creating world-class TCO and programmability of their data center infrastructure.  

Can Growing CDN Providers Overcome These 3 Challenges?

As the COVID-19 pandemic swept across the globe, content delivery network (CDN) providers were quickly thrust into the world’s spotlight. People everywhere depended on CDNs to quickly and smoothly connect them to news, entertainment, education, social media, training, virtual events, videoconferencing, telemedicine … the list goes on and on. 
That’s why it’s no surprise that the global CDN market, valued at $9.9 billion in 2018, is expected to reach $57.15 billion by 2025, according to Brand Essence Market Research. 
But to turn those lofty projections into actual revenue growth, the smartest CDN providers must find ways to overcome significant challenges, such as the three below. 
Challenge #1: Deliver high performance with low latency 
People everywhere are demanding high quality content and video, without any speed bumps due to latency issues. Although software, hardware, networks, and bandwidth all affect the level of latency, the single biggest factor that slows down content is the distance that light has to travel. That’s because for all our mind-blowing achievements in technology, one thing we haven’t yet figured out is how to speed up the speed of light. 
Light travels at about 125,000 miles per second through optical fibers, which is roughly two-thirds of the speed of light in a vacuum (186,000 miles per second). So for every 60 miles of distance a packet has to travel, about half a millisecond is added to the one-way latency trip, and thus 1 millisecond to the round-trip time.  
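That rule of thumb is easy to turn into a quick calculation. This is a rough sketch; real fiber routes are longer than straight-line distance, so measured latency will be somewhat higher than this theoretical floor:

```python
# Theoretical fiber latency from distance, using the ~125,000 miles/second
# speed of light in optical fiber cited above.
FIBER_MILES_PER_SECOND = 125_000

def round_trip_ms(distance_miles: float) -> float:
    """Round-trip latency in milliseconds over `distance_miles` of fiber."""
    one_way_seconds = distance_miles / FIBER_MILES_PER_SECOND
    return 2 * one_way_seconds * 1000

print(f"{round_trip_ms(60):.2f} ms")   # ~1 ms round trip per 60 miles
```

Plugging in real route distances shows why data center location dominates the latency budget: the physics sets a floor that no amount of software tuning can lower.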
They say money makes the world go ‘round. So in essence, latency can stop the world from turning, as shown in these examples: 

  • In finance, for years firms have offered premium “ultra-low latency” services to investors who want to receive key data about two seconds before the general public does. What can happen in two seconds? In the stock market, quite a lot. Research by the Tabb Group estimated that if a broker’s platform is even 5 milliseconds behind the competition, it could lose at least 1% of its flow, equating to about $4 million in revenues per millisecond. 
  • In retail, according to research by Akamai, a 100 ms delay in website load time leads to a decrease in conversion rates of up to 7%. Conversely, Akamai reported that Walmart noticed that every 100 ms of improvement in load time resulted in up to a 1% increase in revenue.  
  • In the cloud, latency looms as a major hindrance. Among the findings in a research paper by the University of Cambridge Computer Laboratory are that 10µs (those are microseconds, or one-millionth of a second) latency in each direction is enough to have a noticeable effect, and 50µs latency in each direction is enough to significantly affect performance. For data centers connected by additional hops between servers, latency increases further. This has ramifications for workload placement and physical host sharing when trying to reach performance targets. 

Every CDN wants to provide high performance, but predicting the performance of CDNs can be an imprecise exercise. CDNs use different methodologies to measure performance, and have various types of underlying architecture. However, one universal truth is that the geographic locations of CDN data centers play a big role in performance measurements. 
This is one reason why NTT’s Global Data Centers division has strategically chosen certain locations for their data center campuses. For example, our data centers in Sacramento give companies based in San Francisco a low-latency experience as compared to other locations. Those customers experience round-trip latency of only 3 milliseconds to go out and back the 88 miles from Sacramento to San Francisco. That compares well to round-trip latency of 4.2 milliseconds from San Francisco to Reno (218 miles away), 15.3 milliseconds from San Francisco to Las Vegas (570 miles away), or 18.1 milliseconds from San Francisco to Phoenix (754 miles away). 
In Chicago, where NTT is building a new 72MW data center campus, customers at that Central U.S. location will enjoy low latency to both U.S. coasts. According to AT&T, IP network latency from Chicago to New York is 17 milliseconds, and from Chicago to Los Angeles is 43 milliseconds. 
Reducing latency is a huge point of emphasis at NTT. At our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center. 
Challenge #2: Scale up to meet a growing base 
Every business wants more customers, but CDNs need to be careful what they wish for. Huge bursts in Internet traffic can bring an overwhelming amount of peak usage. Videoconferencing historians will long remember the third week of March 2020, when a record 62 million downloads of videoconferencing apps were recorded. Once those apps were downloaded, they were quickly put to use – and have only increased in usage time since then. 
The instant reaction to those stats and trends would be for CDNs to add as much capacity as possible. But building to handle peak demand can be costly, as a CDN also needs to economically account for lower-usage times when huge amounts of capacity sit idle. 
These massive spikes and valleys bring a significant traffic engineering challenge. A well-prepared CDN will minimize downtime by utilizing load balancing to distribute network traffic evenly across several servers, making it easier to scale up or down for rapid changes in traffic. 
Technology such as intelligent failover provides uninterrupted service even if one or more of the CDN servers go offline due to hardware malfunction. The failover can redistribute the traffic to the other operational servers.  
Appropriate routing protocols will transfer traffic to other available data centers, ensuring that no users lose access to a website. This is what NTT’s Global Data Centers division had in mind when we deployed a fully redundant point-to-point connection, via multiple carriers, between all our U.S. data centers.  
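The load-balancing and failover behavior described above can be sketched in a few lines. This is a minimal illustration with hypothetical server names, not NTT's actual implementation:

```python
# Minimal sketch of round-robin load balancing with failover:
# traffic rotates across healthy servers, and when a server goes
# offline its share is redistributed to the remaining servers.
class LoadBalancer:
    def __init__(self, servers):
        self.healthy = list(servers)

    def mark_down(self, server):
        """Failover: remove an offline server from the rotation."""
        if server in self.healthy:
            self.healthy.remove(server)

    def next_server(self):
        """Round-robin across the currently healthy servers."""
        if not self.healthy:
            raise RuntimeError("no healthy servers available")
        server = self.healthy.pop(0)
        self.healthy.append(server)   # rotate chosen server to the back
        return server

lb = LoadBalancer(["edge-1", "edge-2", "edge-3"])
print([lb.next_server() for _ in range(3)])   # each server takes a turn
lb.mark_down("edge-2")                        # simulate a hardware malfunction
print([lb.next_server() for _ in range(4)])   # traffic redistributed to the rest
```

Production load balancers add health checks, weighting, and session affinity on top of this idea, but the core principle is the same: no single server, or single site, is a point of failure.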
We make critical functions such as disaster recovery, load balancing, backup, and replication easy and secure. Our services support any Layer 3 traffic for functions such as web traffic, database calls, and any other TCP/IP based functions. Companies that leverage our coast-to-coast cross connect save significant money over installing and managing their own redundant, firewall-protected, multi-carrier connection. 
Challenge #3: Plan for ample redundancy, disaster recovery, and risk reduction 
Sure, most of the time disasters don’t happen. But I suppose that depends on your definition of a disaster. It doesn’t have to be the type that you see in movies – with earth-cracking gaps opening up in the middle of major cities. Even events as routine as thunderstorms can have ripple effects that could interrupt service provided by a CDN. 
That’s why smart CDNs are reducing their risk of outages by not having all their assets and content in one geographic area. The good thing is that by enacting one fairly simple strategy, CDNs can check the boxes for ample redundancy, disaster recovery, and risk reduction. 
That strategy is to have a data center presence in multiple geographic locations. The three sections of the U.S. – East, Central, West – make for a logical mix. 
In the East region, well, Ashburn is the capital of the world as far as data centers are concerned. No other market on the planet has as much deployed data center space as Ashburn, and construction of new data centers is ongoing in order to keep up with demand. Known as “Data Center Alley”, Ashburn is a perfect home for a data center for many reasons, including its dense fiber infrastructure and low risk of natural disasters. Those factors alone make Ashburn a great location as part of a redundancy and disaster recovery strategy.  
In the Central region, Dallas has a very low risk of dangerous weather conditions. According to data collected from 1950-2010, no earthquakes of 3.5 magnitude or above have occurred in or near Dallas. No hurricanes within 50 miles of Dallas have been recorded either. And while tornadoes can occur, some data centers such as NTT’s Dallas TX1 Data Center are rated to withstand those conditions. Another appealing aspect of Dallas is that Texas’s independent power grid, managed by ERCOT (the Electric Reliability Council of Texas), is one of three main power grids that feed the continental United States. By maintaining a presence in each of the three grids, companies can make sure that their data centers are as reliable as possible. 
In the West, several appealing options are located along the Pacific coast. In the Northwest, the Oregon suburb of Hillsboro is a popular choice for an economical disaster recovery location. Hillsboro offers a mild climate (which translates to low heating and cooling costs), minimal natural disaster risk, and strong tax incentives. As a bonus, a variety of submarine cables deliver low-latency connections between Hillsboro and high-growth Asian markets.  
In Northern California, Sacramento offers a safe data center option as that city is out of the seismic risk area that includes Bay Area cities. Sacramento is also considered preferable to other Western data center markets such as Reno, Nevada. At least 30 seismic faults are in the Reno-Carson City urban corridor, and some studies say that two of those faults in particular appear primed to unleash a moderate to major earthquake. 
And then there’s Silicon Valley, which of course is a terrific place to have data center space. However, no one would say that Silicon Valley is a truly seismically stable area. But, that risk can be mitigated if the data center is protected with technology such as a base isolation system, which NTT uses in its new four-story Santa Clara data center. That base isolation system has been proven to enable multi-story buildings to withstand historic earthquakes with no damage to the structure or the IT equipment inside. 

CDN Customer Coverage from NTT U.S. Data Center Locations 

This map shows how NTT’s U.S. data centers can give CDNs the level of low latency, load balancing, and redundancy that they are looking for. 

  • White lines indicate NTT customer base outreach per location 
  • Blue lines indicate the interconnection between the locations 

