Blogs

Grow revenue and reduce costs by using NVIDIA-powered AI in our TX and VA data centers

Artificial intelligence is changing the business landscape and the very foundations of companies. Across many verticals, companies are competing not only for market share and revenue, but for survival. Some companies are scaling and innovating faster, creating new markets and business models to drive more business, and offering more customized and personalized services not just locally, but globally.  

We are moving toward “AI-first” companies as businesses rethink their strategies and operating models. Artificial intelligence, interconnection, and networks are now the core pillars businesses need to compete and succeed.  

The Power of Artificial Intelligence 

When artificial intelligence experts Dave Copps and Ryan Fontaine spoke as guests on our podcast series, they shared valuable insights about how companies across all industries can use AI to generate revenue or reduce costs.  

“For businesses, if you have access to good AI and good machine learning, you’re going to be all right,” said Fontaine, the CEO of Citadel Analytics. “Data is the new ‘oil’. It is the most valuable commodity in the world, more valuable than silver or gold. But you still have to do something with it. AI helps find the connections in data, and that’s what makes data – and AI – so valuable.” 

Copps, a legend in the AI community who is currently the CEO of Worlds, illustrated the value of AI with several memorable stories. First, he described how a company he was previously with helped the Department of Justice close a case they had been working on for months by using AI to find crucial info in just 30 minutes.  

Another example from early in his career was perhaps even more profound. Copps’ company was helping an intelligence agency in Europe that had been working on a case involving hundreds of millions of documents over 7 years. Mining the data through traditional search engines was not getting the job done. 

So Copps’ company built an AI platform that enabled people to see and navigate information visually, almost like using a Google Earth map. The reaction from the European intelligence agency could be considered … euphoric. 

 “The guy that was working on the data cried – in a good way,” Copps said. “He had been looking at this for so long, and the answers were right there. That points to the power of AI.” 

For more entertaining AI insight from Copps and Fontaine, you can listen to the entire podcast here.  

But what can I do to leverage AI? 

After listening to the podcast, you might think “That’s great, AI really sounds like it could help my company grow in an efficient and profitable way. But what’s my first step? How do I access and use AI technology?” 

Good question. Actually, no … that’s a great question. 

Luckily the answer to that question has just changed.  

Clients at our Dallas TX1 Data Center and our Ashburn VA3 Data Center can talk to us about accessing AI without installing their own infrastructure. That’s because we have become qualified as NVIDIA DGX-Ready data centers. DGX is NVIDIA’s flagship appliance for AI computation. 

This qualification allows us to house NVIDIA’s DGX AI appliances in our data centers, where they can be used “as-a-service” by clients demanding cutting-edge AI infrastructure.  

NVIDIA has plenty of case studies showing how companies across a broad array of industries have already seen significant results from using its deep learning AI technology, including:  

  • Baker Hughes has reduced the cost of finding, extracting, processing, and delivering oil. 

  • Accenture Labs is quickly detecting security threats by analyzing anomalies in large-scale network graphs. 

  • Graphistry is protecting some of the largest companies in the world by visually alerting them of attacks and big outages. 

  • Oak Ridge National Laboratory is creating a more accurate picture of the Earth’s population -- including people who have never been accounted for before -- to predict future resource requirements. 

  • Princeton University is predicting disruptions in a tokamak fusion reactor, paving the way to clean energy. 

Other companies (including some startups who you may hear a lot more about soon) have shared their inspiring stories on this page: https://www.nvidia.com/en-us/deep-learning-ai/customer-stories/

What will your story be? There’s only one way to find out – by harnessing the power of AI for your enterprise. With NVIDIA in our data centers, we can help you get there. Contact me at syunus@ragingwire.com to find out more. 

Cleaner Air Oregon program clears new Hillsboro Data Center for construction

As a new member of Oregon's business community, we're proud to announce that our new data center in Hillsboro, Oregon, has completed its evaluation by the Cleaner Air Oregon program and has been pronounced fit to proceed with construction. 

Oregon began a new era in 2018 by creating the Cleaner Air Oregon program, which ensures that new and existing commercial and industrial facilities do not emit toxic air contaminants at levels that could harm people. The Oregon Department of Environmental Quality (DEQ) sees this new program as helping to ensure that industrial progress will not cause a regression in public health.  

"DEQ is excited to see Cleaner Air Oregon meet the ongoing challenge of maintaining clean and healthy air in Oregon communities," said Lauren Wirtis, Public Affairs Specialist for the Oregon DEQ. 

The requirements of the Cleaner Air Oregon program apply to paper mills, steel rolling mills, oil refining companies, microchip manufacturers, lumber companies, glass manufacturers – the list goes on and on – and includes data centers.  

While smaller data centers have also gained permits from the Cleaner Air Oregon program, our Hillsboro data center is the only data center to have completed a Level 3 risk assessment. Level 3 is nearly the most rigorous on a scale that goes from Level 1 to Level 4, with Level 4 being the most complex.  

To illustrate the level of examination that takes place during a Level 3 risk assessment, and why it can take up to a year to complete, consider the effort needed to gain Cleaner Air Oregon certification:

To complete a Level 2 or 3 Risk Assessment, facilities need to develop a detailed list of model inputs, including site-specific stack characteristics, building information (to account for building downwash), terrain data, specific exposure locations, and site-specific land use parameters. The quantity and complexity of parameters add up quickly and can easily become overwhelming. 

What also gets complicated fast is the amount of data that needs to be managed.  On average, facilities could be reporting anywhere from 10-50 toxic air contaminants per emissions source.  Multiply that by the number of emissions sources, the number of exposure locations, and 43,824 hours (the number of hours in the required 5-year meteorological dataset), and very quickly your Cleaner Air Oregon Risk Assessment includes over a million data points. 

Therefore, it's not only necessary to have a trained air quality modeler involved, but you also need to be able to manage a large amount of data.  This becomes increasingly important when you need to start analyzing the modeling results to determine what sources and what toxic air contaminants may be driving risks and therefore require refinement. 
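
For a feel of how quickly that data multiplies, here is a back-of-the-envelope sketch in Python. The 43,824-hour meteorological dataset is the figure cited above; the counts of emission sources, contaminants per source, and exposure locations are purely illustrative assumptions.

```python
# Back-of-the-envelope estimate of data points in a Level 2/3 Cleaner Air
# Oregon risk assessment. The 5-year hourly meteorological dataset
# (43,824 hours) comes from the text; the other counts are assumptions.

HOURS_IN_5_YEAR_MET_DATASET = 43_824

emission_sources = 8             # assumption: generators, boilers, etc.
contaminants_per_source = 25     # assumption: within the 10-50 range cited
exposure_locations = 12          # assumption: nearby homes, schools, offices

data_points = (emission_sources
               * contaminants_per_source
               * exposure_locations
               * HOURS_IN_5_YEAR_MET_DATASET)

print(f"Modeled data points: {data_points:,}")   # ~105 million in this scenario
```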

Why is this level of scrutiny needed? Before the Cleaner Air Oregon rules were adopted, Oregon based its existing rules on federal law. Those rules allowed industrial facilities to release potentially harmful amounts of air toxics while still operating within legal requirements. The Cleaner Air Oregon rules closed the regulatory gaps left after the implementation of federal air toxics regulations.  

Change is hardly ever easy, particularly when it involves new processes and invariably new costs. But this kind of change is well worth it. We applaud the state of Oregon for doing not what is easy, but what is right. And that's why we're proud to help keep Oregon's air clean and healthy for generations to come. 

Why Enterprises Should Use Hyperscale Data Center Techniques

When contemplating what data center customers will need over the next one to three years, several things come to mind.

First, hybrid cloud will continue to be a popular trend, and with good reason. Uncontrollable costs from public cloud service providers are driving people to pull workloads off those platforms and into a more economical hybrid cloud environment. Some customers have also reported performance issues when demand on public cloud is high.

Next, many customers are asking for larger footprints and increased power density. In fact, it’s not uncommon to see densities hit 20kW. These higher power densities are a real problem for legacy data center providers that designed their buildings to serve 4-5kW per rack installations, back in the days when a high-density load was considered to be 10kW. We’re long past that now. Data center operators who can build-to-suit can handle these new 20kW and higher requirements, which is really what customers need to run their mission-critical applications.

The bottom line is: to get the most cost-effective, efficient use of a data center, enterprises need to use hyperscale techniques. But how?

Let’s start with utilization rates. Enterprises typically get about a 30 percent utilization rate of their infrastructure when measured on a 24x7x365 basis, whereas hyperscalers get 70-80 percent – more than double that of enterprises. If enterprises can double their utilization rate, it means that they can buy half of what they normally buy and still serve the same workload demand. That will save a lot of money. 
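
To put rough numbers on that claim, here is a minimal sketch; the 30 percent utilization figure comes from the paragraph above, while the fleet size is a hypothetical value used only for illustration.

```python
# Minimal sketch of the utilization math: doubling utilization halves the
# number of servers needed for the same workload. The 30% figure comes
# from the text; the fleet size is an assumption.

servers_today = 1_000              # assumption: current enterprise fleet
enterprise_utilization = 0.30      # typical enterprise, per the text

# Useful work delivered is proportional to servers x utilization.
work_delivered = servers_today * enterprise_utilization

# Double the utilization (hyperscale-style operations) and ask how many
# servers are needed to deliver the same amount of work.
doubled_utilization = 2 * enterprise_utilization
servers_needed = work_delivered / doubled_utilization

print(f"Servers needed at {doubled_utilization:.0%} utilization: {servers_needed:.0f}")
# -> 500, i.e. half the original fleet for the same workload
```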

But to improve their utilization rate, enterprises have a choice. They can do it on their own, or buy a hyperconverged system that essentially does the same thing. That hyperconverged system will give them public cloud economics in a private cloud environment. There are also quite a few composable systems from major OEMs that leverage similar techniques.

A few years ago, I sponsored an infrastructure TCO study that still rings true today. The study highlighted the point that most of the cost of running a server is not the cost of the server itself. The TCO of running a server consists of three major components: 1) the cost of the server, 2) administration and management, and 3) space, power and cooling. The actual server represents about 20% of the total, 70% is administration and management, and the remaining 10% is space, power, and cooling. 
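
A quick sketch of that split, using a hypothetical server purchase price to back out the other cost components:

```python
# Sketch of the server TCO split described above: roughly 20% server
# hardware, 70% administration and management, and 10% space, power, and
# cooling. The purchase price below is a hypothetical figure.

server_price = 10_000                  # assumption: hardware cost per server
total_tco = server_price / 0.20        # hardware is ~20% of lifetime TCO

breakdown = {
    "server hardware (20%)":             0.20 * total_tco,
    "administration & management (70%)": 0.70 * total_tco,
    "space, power & cooling (10%)":      0.10 * total_tco,
}

for component, cost in breakdown.items():
    print(f"{component:<36} ${cost:>8,.0f}")
# In this example the lifetime TCO is $50,000, of which $35,000 is
# administration and management -- the slice hyperscalers attack with
# software and automation.
```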

So, enterprises that want to reduce costs should look closely at the fact that 70% of their server costs are tied up in administration and management. Hyperscalers have done exactly that. Their investments in software, machine learning, and automation drive utilization rates to more than double that of the average enterprise, creating world-class TCO and programmability of their data center infrastructure.  

Can Growing CDN Providers Overcome These 3 Challenges?

As the COVID-19 pandemic swept across the globe, content delivery network (CDN) providers were quickly thrust into the world’s spotlight. People everywhere depended on CDNs to quickly and smoothly connect them to news, entertainment, education, social media, training, virtual events, videoconferencing, telemedicine … the list goes on and on. 
 
That’s why it’s no surprise that the global CDN market, valued at $9.9 billion in 2018, is now expected to reach $57.15 billion by 2025, according to Brand Essence Market Research. 
 
But to turn those lofty projections into actual revenue growth, the smartest CDN providers must find ways to overcome significant challenges, such as the three below. 
 
Challenge #1: Deliver high performance with low latency 
 
People everywhere are demanding high quality content and video, without any speed bumps due to latency issues. Although software, hardware, networks, and bandwidth all affect the level of latency, the single biggest factor that slows down content is the distance that light has to travel. That’s because for all our mind-blowing achievements in technology, one thing we haven’t yet figured out is how to speed up the speed of light. 
 
Light travels at about 125,000 miles per second through optical fiber, which is roughly two-thirds of the speed of light in a vacuum (186,000 miles per second). So for every 60 miles a packet has to travel, about half a millisecond is added to the one-way latency, and thus 1 millisecond to the round-trip time.  
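
As a quick sanity check on that rule of thumb, here is a small sketch based on the ~125,000-miles-per-second figure above. Real round-trip times run higher once routing hops, equipment delay, and non-straight fiber paths are added, so treat these as theoretical minimums.

```python
# Rule-of-thumb propagation delay in optical fiber: light covers roughly
# 125,000 miles per second, so ~60 miles adds about half a millisecond
# one way. Distances below are examples (88 mi ~ Sacramento-San Francisco).

FIBER_MILES_PER_SECOND = 125_000

def one_way_ms(miles: float) -> float:
    """Minimum one-way propagation delay over a fiber path, in milliseconds."""
    return miles / FIBER_MILES_PER_SECOND * 1_000

for miles in (60, 88, 570, 2_000):
    print(f"{miles:>5} miles: {one_way_ms(miles):5.2f} ms one-way, "
          f"{2 * one_way_ms(miles):5.2f} ms round-trip minimum")
```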
 
They say money makes the world go ‘round. So in essence, latency can stop the world from turning, as shown in these examples: 

  • In finance, for years firms have offered premium “ultra-low latency” services to investors who want to receive key data about two seconds before the general public does. What can happen in two seconds? In the stock market, quite a lot. Research by the Tabb Group estimated that if a broker’s platform is even 5 milliseconds behind the competition, it could lose at least 1% of its flow, equating to about $4 million in revenues per millisecond. 
  •  In retail, according to research by Akamai, a 100 ms delay in web site load time leads to a decrease in conversion rates of up to 7%. Conversely, Akamai reported that Walmart noticed that every 100 ms of improvement in load time resulted in up to a 1% increase in revenue.  
  • In the cloud, latency looms as a major hindrance. Among the findings in a research paper by the University of Cambridge Computer Laboratory are that 10µs (those are microseconds, or one-millionth of a second) latency in each direction is enough to have a noticeable effect, and 50µs latency in each direction is enough to significantly affect performance. For data centers connected by additional hops between servers, latency increases further. This has ramifications for workload placement and physical host sharing when trying to reach performance targets. 

Every CDN wants to provide high performance, but predicting the performance of CDNs can be an imprecise exercise. CDNs use different methodologies to measure performance and have various types of underlying architecture. However, one universal truth is that the geographic locations of CDN data centers play a big role in performance measurements. 
 
This is one reason why NTT’s Global Data Centers division has strategically chosen certain locations for their data center campuses. For example, our data centers in Sacramento give companies based in San Francisco a low-latency experience as compared to other locations. Those customers experience round-trip latency of only 3 milliseconds to go out and back the 88 miles from Sacramento to San Francisco. That compares well to round-trip latency of 4.2 milliseconds from San Francisco to Reno (218 miles away), 15.3 milliseconds from San Francisco to Las Vegas (570 miles away), or 18.1 milliseconds from San Francisco to Phoenix (754 miles away). 
 
In Chicago, where NTT is building a new 72MW data center campus, customers at that Central U.S. location will enjoy low latency to both U.S. coasts. According to AT&T, IP network latency from Chicago to New York is 17 milliseconds, and from Chicago to Los Angeles is 43 milliseconds. 
 
Reducing latency is a huge point of emphasis at NTT. At our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center. 
 
Challenge #2: Scale up to meet a growing base 
 
Every business wants more customers, but CDNs need to be careful what they wish for. Huge bursts in Internet traffic can bring an overwhelming amount of peak usage. Videoconferencing historians will long remember the third week of March 2020, when a record 62 million downloads of videoconferencing apps were recorded. Once those apps were downloaded, they were quickly put to use – and usage has only increased since then. 
 
The instant reaction to those stats and trends would be for CDNs to add as much capacity as possible. But building to handle peak demand can be costly, as a CDN also needs to economically account for lower-usage times when huge amounts of capacity would go unutilized. 
 
These massive spikes and valleys bring a significant traffic engineering challenge. A well-prepared CDN will minimize downtime by utilizing load balancing to distribute network traffic evenly across several servers, making it easier to scale up or down for rapid changes in traffic. 
 
Technology such as intelligent failover provides uninterrupted service even if one or more of the CDN servers go offline due to hardware malfunction. The failover can redistribute the traffic to the other operational servers.  
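
Conceptually, load balancing with failover looks something like the simplified sketch below. The server names and health check are illustrative placeholders, not a description of any particular CDN’s implementation.

```python
# Simplified sketch of round-robin load balancing with failover: traffic
# is spread across a pool of servers, and any server that fails its
# health check is skipped until it recovers. Names and health logic are
# illustrative assumptions only.

from itertools import cycle
from typing import Callable

class FailoverPool:
    def __init__(self, servers: list[str], is_healthy: Callable[[str], bool]):
        self._rotation = cycle(servers)
        self._size = len(servers)
        self._is_healthy = is_healthy

    def next_server(self) -> str:
        """Return the next healthy server in round-robin order."""
        for _ in range(self._size):
            server = next(self._rotation)
            if self._is_healthy(server):
                return server
        raise RuntimeError("no healthy servers available")

# Usage: pretend one edge node is offline; traffic fails over to the rest.
offline = {"edge-dallas"}
pool = FailoverPool(
    ["edge-ashburn", "edge-dallas", "edge-sacramento"],
    is_healthy=lambda server: server not in offline,
)
print([pool.next_server() for _ in range(4)])
# ['edge-ashburn', 'edge-sacramento', 'edge-ashburn', 'edge-sacramento']
```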
 
Appropriate routing protocols will transfer traffic to other available data centers, ensuring that no users lose access to a website. This is what NTT’s Global Data Centers division had in mind when we deployed a fully redundant point-to-point connection, via multiple carriers, between all our U.S. data centers.  
 
We make critical functions such as disaster recovery, load balancing, backup, and replication easy and secure. Our services support any Layer 3 traffic for functions such as web traffic, database calls, and any other TCP/IP-based functions. Companies that leverage our coast-to-coast cross connect save significant money over installing and managing their own redundant, firewall-protected, multi-carrier connection. 
 
Challenge #3: Plan for ample redundancy, disaster recovery, and risk reduction 
 
Sure, most of the time disasters don’t happen. But I suppose that depends on your definition of a disaster. It doesn’t have to be the type that you see in movies – with earth-cracking gaps opening up in the middle of major cities. Even events as routine as thunderstorms can have ripple effects that could interrupt service provided by a CDN. 
 
That’s why smart CDNs are reducing their risk of outages by not having all their assets and content in one geographic area. The good thing is that by enacting one fairly simple strategy, CDNs can check the boxes for ample redundancy, disaster recovery, and risk reduction. 
 
That strategy is to have a data center presence in multiple geographic locations. The three sections of the U.S. – East, Central, West – make for a logical mix. 
 
In the East region, well, Ashburn is the capital of the world as far as data centers are concerned. No other market on the planet has as much deployed data center space as Ashburn, and construction of new data centers is ongoing to keep up with demand. Known as “Data Center Alley”, Ashburn is a perfect home for a data center for many reasons, including its dense fiber infrastructure and low risk of natural disasters. Those factors alone make Ashburn a great location as part of a redundancy and disaster recovery strategy.  
 
In the Central region, Dallas has a very low risk of dangerous weather conditions. According to data collected from 1950-2010, no earthquakes of 3.5 magnitude or above have occurred in or near Dallas. No hurricanes within 50 miles of Dallas have been recorded either. And while tornadoes can occur, some data centers such as NTT’s Dallas TX1 Data Center are rated to withstand those conditions. Another appealing aspect of Dallas is that Texas’s independent power grid, managed by ERCOT (the Electric Reliability Council of Texas), is one of three main power grids that feed the continental United States. By maintaining a presence in each of the three grids, companies can make sure that their data centers are as reliable as could be. 
 
In the West, several appealing options are located along the Pacific coast. In the Northwest, the Oregon suburb of Hillsboro is a popular choice for an economical disaster recovery location, offering a mild climate that translates to low heating and cooling costs, minimal natural disaster risk, and strong tax incentives. As a bonus, a variety of submarine cables deliver low-latency connections between Hillsboro and high-growth Asian markets.  
 
In Northern California, Sacramento offers a safe data center option as that city is out of the seismic risk area that includes Bay Area cities. Sacramento is also considered preferable to other Western data center markets such as Reno, Nevada. At least 30 seismic faults are in the Reno-Carson City urban corridor, and some studies say that two of those faults in particular appear primed to unleash a moderate to major earthquake. 
 
And then there’s Silicon Valley, which of course is a terrific place to have data center space. However, no one would say that Silicon Valley is a truly seismically stable area. But, that risk can be mitigated if the data center is protected with technology such as a base isolation system, which NTT uses in its new four-story Santa Clara data center. That base isolation system has been proven to enable multi-story buildings to withstand historic earthquakes with no damage to the structure or the IT equipment inside. 

CDN Customer Coverage from NTT U.S. Data Center Locations 

This map shows how NTT’s U.S. data centers can give CDNs the level of low latency, load balancing, and redundancy that they are looking for. 
 
Legend:  

  • White lines indicate NTT customer base outreach per location 
  • Blue lines indicate the interconnection between the locations 

Data at the Center: How data centers can shape the future of AI

In today’s world, we see data anywhere and everywhere. Data comes in different shapes and sizes, such as video, voice, text, photos, objects, maps, charts, and spreadsheets. Can you imagine life without a smartphone, social apps, GPS, ride-hailing, or e-commerce? Data is at the center of how we consume and experience all these services.  
 
Beyond the gathering of data, we need to determine how to use it. That’s where artificial intelligence and machine learning (AI/ML) come in. In services like ride-hailing, navigation/wayfinding, video communications and many others, AI/ML has been designed in.  For example: 
 
    •    Today, a luxury car is built with 100 microprocessors to control various functions 
    •    An autonomous vehicle (AV) may have 200 or more microprocessors and generates 10 GB of data per mile 
    •    Tens of thousands of connected cars will create a massive distributed computing system at the edge of the network 
    •    3D body scanners and cameras generate 80 Gbps of raw data for streaming games 
    •    A Lytro camera, equipped with light field technology, generates 300 GB of data per second 
 
Computer systems now perform tasks requiring human intelligence – from visual perception to speech recognition, from decision-making to pattern recognition. As more data is generated, better algorithms get developed. When better services are offered, the usage of the services goes up. Think of it as a flywheel that keeps moving.  
 
AI solutions are limited only by the number of high-quality datasets you have. Real-world scenarios showing how the technology is used include:    

    •    Autonomous Driving: Data storage on vehicle versus in the data center for neuro map analysis and data modeling 
    •    Autonomous Business Planning: Decision and business models for manufacturing and distribution  
    •    Data Disaggregation: Uncover hidden patterns in shifting consumer taste and buying behavior in retail space 
    •    Video Games: Real-time player level adjustment using low-latency data for enhanced experience 
 
Enabling AI infrastructure in data centers  
 
Because data centers sit right in the middle of compute, storage, networking, and AI, they are the hub that those other technologies revolve around. So as a data center company, how can we make AI/ML computation affordable and accessible for enterprises to keep their innovation engines running? 
 
At NTT Global Data Centers, enabling AI infrastructure is an important pillar of our growth plans. GPUs, memory, storage, and the network are the key components of ‘AI Compute’ infrastructure. Our goal is to make AI Infrastructure-as-a-Service accessible and affordable to forward-looking small, medium, and large businesses.  
 

Modern enterprises must have an AI engine at the core of their business decision-making, operations, and P&L to stay competitive in the marketplace. But AI projects can be an expensive undertaking … from hiring talent to hardware procurement to deployment, your budget may skyrocket. However, if the upfront costs of AI infrastructure can be eliminated, the entire value proposition shifts in a positive way. So how does NTT Global Data Centers help our customers build AI solutions? 

    1.    We de-construct our customer’s problem statement and design the solution 
    2.    Our AI experts build a tailored model for the computation 
    3.    Our customers have full access to AI-ready infrastructure and talent 
 
AI is transforming the way enterprises conduct their business. With data centers in the middle, GPUs, hardware infrastructure, algorithms, and networks will change the DNA of enterprises.  

Tax breaks are perfect topping on Chicago pie

For data centers, Chicago had it all… almost.

Great connectivity. Low latency to both U.S. coasts. Hundreds of temperate days without the need to cool the data floor. Affordable power. Deep dish pizza. Sorry, did that last item make you hungry? Well, keep reading, you’ll be glad you did. 

While Chicago offered a deep, broad list of benefits for data center customers, something was missing. And other data center markets had it. 

So close to perfect 

The missing topping on Chicago’s data center pie? Tax incentives. And that left a noticeable blank spot on an otherwise delicious dish. 

Chicago’s situation came to a somewhat dramatic head on Jan. 27, 2019, when Ally Marotti of the Chicago Tribune wrote an article describing what was at stake if Illinois could not provide the same data center tax incentives that were offered in 30 other states. And of those 30 states, it was nice, quiet Iowa that figuratively sounded the alarm that Illinois could not help but hear. 

Back in 2017, Iowa enticed Apple to start building a 400,000 square-foot data center near Des Moines by offering $20 million in tax incentives. A comprehensive report paid for by the Illinois Chamber of Commerce Foundation found that if Apple had chosen to build its project in Illinois, the state could have added 3,360 jobs, $203.9 million in labor income and $521.7 million in economic output.

Let’s do something about it  

Illinois did not want to watch more major players become drawn to Iowa as a top Midwestern magnet for data centers while forsaking Chicago and all its great benefits.

Tyler Diers, executive director of the Illinois chamber’s technology council, made that point clear in Marotti’s article, saying “We hear the war stories all the time, and [data center operators] do too. We’re increasingly losing our desirability and our competitiveness. Even though we’re still relatively high, we want to stop the bleeding before we no longer [are] a desirable location.”

Those are strong words, and they made a powerful impression. Instead of standing idly by as other states filled their economic coffers with data center-driven cash, Illinois bore down and did something about the situation. After all, this state is personified by Chicago, the city of the big shoulders where “dem Grabowskis” root for “Da Bears”. The Windy City was not about to get blown away by Iowa.

In June 2019, Chicago essentially called dibs on the Midwest data center market when Illinois added data center tax incentives to the state’s capital spending budget. Now qualifying data centers -- and their customers -- are exempt from paying state and local sales taxes on equipment in the data centers, including everything from cooling and heating equipment to servers and storage racks. To qualify, a data center must spend at least $250 million on the facility and employ at least 20 full-time employees over a five-year period. In addition, it must prove it meets a green building standard, such as LEED or Energy Star.

NTT’s Global Data Centers Americas division will meet all of those qualifications as we build two 36MW data centers on our 19-acre campus in Itasca, Illinois, which is about 27 miles outside of the city of Chicago and right near airports, restaurants, hotels, beautiful lakes, and other amenities.

Here’s what’s in it for you 

So how does that tax break benefit data center colocation customers? Well, the sales tax rate in Itasca is 7.5%. Businesses that are new to colocation may have to invest in new equipment. Say they invest $500,000; that’s an extra $37,500 in sales tax they don’t have to pay because of this benefit.

And this Illinois sales tax exemption is locked in for 20 years, so data center customers will pay no sales tax when they refresh their equipment down the road too.
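
Here’s a rough sketch of how those savings can add up over the exemption window. The 7.5% rate and 20-year term come from the figures above; the equipment spend and refresh cadence are hypothetical assumptions.

```python
# Hypothetical illustration of the Illinois sales tax exemption: a 7.5%
# Itasca sales tax avoided on an initial equipment purchase plus periodic
# refreshes over the 20-year exemption window. Spend and refresh cadence
# are assumptions for illustration only.

SALES_TAX_RATE = 0.075            # combined Itasca sales tax rate
EXEMPTION_YEARS = 20

initial_spend = 500_000           # example purchase from the text
refresh_every_years = 5           # assumption: hardware refresh cadence
refresh_spend = 500_000           # assumption: spend per refresh

refreshes = EXEMPTION_YEARS // refresh_every_years - 1   # refreshes after year 0
total_spend = initial_spend + refreshes * refresh_spend
tax_avoided = total_spend * SALES_TAX_RATE

print(f"Tax avoided on the initial purchase: ${initial_spend * SALES_TAX_RATE:,.0f}")
print(f"Tax avoided over {EXEMPTION_YEARS} years: ${tax_avoided:,.0f}")
```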

So that’s the Chicago story, and it’s a darn good one. This is a town that saw it was falling behind, so it lowered its broad shoulders, and charged forward to take the lead again. Now data center customers can enjoy significant tax savings for decades to come in a market that already had everything else they could possibly need. 

Oh, still thinking about that deep dish pizza? I understand. Here’s a great tip: you can get an authentic Chicago deep dish pizza shipped frozen to you from Giordano’s or Lou Malnati’s. Either way you can’t go wrong. You’re welcome -- enjoy!

P.S. You can talk the talk – Chicago-style! To learn more about the flavor of Chicago through the city’s unique jargon, check out this page to find out the meanings of frunchroom, the bean, grachki, sammich and more.

Earthquake Protection Systems partners with NTT to keep our customers' IT equipment safe.

Tokyo, where NTT is headquartered, is known to experience some of the most powerful earthquakes on the planet. That’s one of the reasons why NTT has dedicated much research and resources toward building seismically stable data centers that can protect clients’ mission-critical, sensitive equipment in the event of a strong earthquake. 

Now, NTT is bringing our proven construction model for earthquake-resistant data centers to Silicon Valley by building a four-story, 16MW data center in Santa Clara. The building will be set on a base isolation system that will be the first of its kind for data centers in the region. 

Opening in early 2021, this new Silicon Valley SV1 data center will provide clients with a state-of-the-art presence in a global tech hub, while also connecting them to NTT's platform of 160 data centers across 20 countries and regions around the world.

The Secret to Data Center Expansion: Modular Construction

It is no surprise that the demand for data centers has increased since March 2020. With so many people turning to digital entertainment, work-from-home lifestyles, and remote schooling, the resulting increase in network traffic is leaving online, technology-based companies looking for data center space beyond the scope of their on-premises resources. 
 
Luckily, colocation data centers can take the burden of running a data center off your shoulders. NTT Ltd.’s Global Data Centers Americas division offers access to data centers all over the world. One of the key strategies that enabled us to expand our platform to 160 data centers across 20 countries and regions is the concept of modular construction, which lowers production costs, speeds up timelines, and provides customizable spaces for customers.  
 
The concept of modular construction is simple. By constructing the data center shell onsite while the mechanical and electrical components are simultaneously manufactured at an offsite factory, our team can compress the production timeline. The modules are shipped to the site and installed for just-in-time commissioning.  
 
Not only does offsite manufacturing offer a more efficient solution, it also increases the integrity of the product. Building large components onsite has limitations, whereas building in a manufacturing warehouse allows for the use of better materials, more thorough testing, and specialized labor.  
 
There are many benefits to a modular construction approach: 
 
    1    Lower costs from buying material in large volumes for quantity discounts. 
    2    Better scheduling of resources from using repeatable processes. 
    3    Reduction of on-site manpower from using prefabricated components. 
    4    Lower labor rates from using factory assembly staff vs. tradesmen on-site. 
    5    Fewer costly weather-related delays by shifting prefabrication work indoors. 
    6    Lower QA/QC costs through factory-controlled, repeatable processes. 
 
Overall, modular construction is a smart and efficient way of constructing data centers, leading to faster delivery at a lower price point for data center customers. 

How to Avoid Falling Through the Cloud

Undoubtedly, cloud computing is on the rise. Enterprises are adopting hybrid multi-cloud strategies to find a balance between what to keep on-premises (or in a data center) versus moving to the public cloud.   

Approximately 70% of enterprise applications have moved to the cloud. We are entering an era where centralized processing is being decentralized. Enterprises are adopting a hybrid model where some functions run on edge nodes. Infrastructure is becoming highly distributed and dynamic in nature. Cloud infrastructure is consumed ‘as a Service’. You can take third-party APIs along with compute, storage, and networking resources, from on-prem to any of the public clouds, stitch them together so they appear seamless, and deploy the result to market. 

Despite all these developments, several myths remain about enterprises running the business in the cloud. Here is a look at three such myths. 

Myth 1: Cloud is more affordable than data centers 

It is true that due to its elastic nature, the cloud can be more cost-efficient. However, in order to fully benefit from those savings, a business may need to upgrade applications and its base computer infrastructure – all of which can be costly. Legacy applications do not seamlessly migrate over to the cloud. Applications need to be architected to be consumed ‘as-a-Service’ and deployed for global consumption. 

To gauge your total cost output, you need to consider your entire IT deployment in public, private and hybrid clouds. Some workloads and processes are more easily shifted to public cloud than others. And regulatory or business requirements may further complicate the financial aspects of cloud migration. Those factors can lead to the conclusion that sometimes leaving applications on-premises is the right decision. For many companies, colo is the new on-prem, as that option enables companies to keep their data on their own servers.  

Don’t fall through the cloud 

Then there are those poor companies that get sticker shock when they see the costs of cloud compute after a few months. To avoid falling through the cloud and plummeting into a land of unplanned expenses, companies must do the arithmetic: analyze compute cycles, the volume of data to be processed, data sizing, network bandwidth, data egress and ingress both locally and globally, and the geographic deployment of applications and data at the edge. Using storage in the cloud may generate a huge bill – if not monitored properly.  
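
A minimal sketch of that kind of arithmetic is shown below; every unit price and usage figure is a placeholder assumption, not any provider’s actual rate.

```python
# Minimal sketch of a cloud cost estimate. Every unit price and usage
# figure here is a placeholder assumption, not real provider pricing.

monthly_usage = {
    "compute_instance_hours": 4_000,   # assumption
    "storage_gb": 50_000,              # assumption
    "egress_gb": 20_000,               # assumption: data leaving the cloud
}

assumed_unit_prices = {
    "compute_instance_hours": 0.10,    # $ per instance-hour
    "storage_gb": 0.023,               # $ per GB-month
    "egress_gb": 0.09,                 # $ per GB (egress is often the surprise)
}

line_items = {item: monthly_usage[item] * assumed_unit_prices[item]
              for item in monthly_usage}
monthly_total = sum(line_items.values())

for item, cost in line_items.items():
    print(f"{item:>25}: ${cost:,.2f}/month")
print(f"{'estimated total':>25}: ${monthly_total:,.2f}/month "
      f"(${monthly_total * 12:,.0f}/year)")
```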

Consider licensing, for example. If you're migrating an application from the data center to the cloud, your operating system licenses probably won't transfer with it. It's great to be able to spin up a Linux server on AWS at the push of a button, but few take the time in advance to find out whether that action includes the hidden cost of having to pay for a license for the operating system on top of the cloud service fees. Even though you've already paid for that Linux license once, you may well find yourself paying for the same software again.  

Understand the fine print. Cloud service fees are rarely all-inclusive, as hidden fees lurk under almost every action you can take. If you spin up virtual servers for compute cycles and increase network bandwidth capacity for a given task, you must remember to tear down the services to avoid unwanted accrued bills. As far as software licensing goes, you might be able to save money by installing software you've already paid for on a cloud platform, rather than using the service's (admittedly more convenient) one-button install.  

When is cloud worth the cost? 

It may be worth an increase in cost to run workloads in the cloud if it enables the realization of a business goal. If business growth depends on the ability to scale up very rapidly, then even if cloud is more expensive than on-prem, it could be a business growth enabler and could be justified as an investment. We believe that the companies that do not exist today but will be created in the next five years will be created on the cloud. It would be prudent for new businesses to have a cloud presence along with their own footprint in data centers. 

Myth 2: Cloud is more secure than data centers 

In the past, cloud computing was perceived as less secure than on-premises capabilities. However, security breaches in the public cloud are rare, and most breaches involve misconfiguration of the cloud service. Today, major cloud service providers invest significantly in security. But this doesn’t mean that security is guaranteed in the cloud. 

Data privacy and data protection policies remain a top concern on the public clouds. Due to the COVID-19 pandemic, videoconferencing applications have experienced a sudden surge. For example, a lot of people have been working from home, students have been using distance learning tools, and people have been using the same tools for group chats.  

You’ve probably heard that the Zoom videoconferencing service experienced security incidents in which uninvited intruders joined and disrupted calls. Such incidents have been dubbed ‘zoombombing’. Various similar incidents have been reported, from classroom sessions to business calls, where intruders disrupted what was supposed to be a closed group call. 

Myth 3: Moving to the cloud means I don’t need a data center 

While cloud is highly suitable for some use cases, such as variable or unpredictable workloads or when self-service provisioning is key, not all applications and workloads are a good fit for cloud. For example, unless clear cost savings can be realized, moving a legacy application to a public cloud is generally not a good use case. There are many different paths to the cloud, ranging from simple rehosting, typically via Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), to a complete changeover to an application implemented by a Software as a Service (SaaS) provider.  

To take advantage of cloud capabilities, it is essential to understand the model and have realistic expectations. Once a workload has been moved, the work is, in many ways, just beginning. Further refactoring or re-architecting is necessary to take advantage of the cloud. Ongoing cost and performance management will also be key for success. CIOs and IT leaders must be sure to include post-migration activities as part of the cloud implementation plan. 

Coronavirus (COVID-19) Response Plan

Our ongoing response to employees, clients and partners during COVID-19.

Updated: May 6, 2020

NTT Global Data Centers Americas is continuing to monitor and respond to the Coronavirus (COVID-19) outbreak that is currently impacting communities across our service areas and the nation, and our hearts go out to all those affected. Our teams are working diligently to maintain mission-critical data center operations while prioritizing the safety and well-being of our employees, clients and partners. As the situation continues to evolve, we want to reassure you that we have plans in place to keep our data center facilities open and serviced, with critical operations continuing at any hazard threat level.

Specific actions

NTT has taken the following actions to protect our employees and to maintain the data centers.

Specific actions we are taking with our employees and teams:

  • Suspension of non-essential business travel for all employees globally.
  • Self-quarantine for employees who recently travelled to a high risk area and/or came into prolonged exposure or close contact with relatives/friends/others who recently returned from one.
  • Deferring face-to-face meetings which are non-essential and/or involve large numbers of employees. Avoiding public events wherever possible.
  • Employees who are feeling unwell are being advised to seek medical advice and work from home until they are fully recovered.
  • Observing good personal hygiene practices including regular hand washing with soap and water, and the use of hand sanitizers when soap and water are not easily accessible.

Specific actions we are taking to maintain the data centers:

  • Implementing new health screening protocols by asking all entrants to the data centers about their recent travel and any illnesses. We ask for compliance from all of our customers and partners, and patience from your staff as we implement these additional steps.
  • New building access protocols in which we are now asking all customers and staff to enter through the main lobby of each building and to be cleared by security before entering internal spaces. We will not allow any personnel to travel between buildings, including our own internal employees, to help avoid cross contamination across physical sites.
  • Added precautionary measures in our shipping & receiving department; for example, all packages are received with care and placed in quarantine for a period of no less than 24 hours.
  • Increased cleaning and sterilization throughout the data center to help limit potential exposure for customers and employees. This includes conducting daily sanitizing activities in our workplaces and providing hand sanitizer throughout the facilities for all building occupants.
  • Requesting that customers, contractors and visitors self-disclose if they have visited a risk area and the incubation period since leaving that area has not yet passed. 
  • Ensuring frequent contact surfaces in our offices are cleaned thoroughly, multiple times daily and providing easily accessible hand sanitizer in our workplaces.
  • Ensuring our next-level Business Continuity Plans and protocols are ready to be implemented if/when necessary in future.

Accessing our data center facilities

While the data centers will not be closed to customers at this time, we ask that you help us by considering the following measures:

  • Health screening protocols: We are implementing new health screening protocols by asking all entrants to the data centers about their recent travel and any illnesses. We ask for compliance from all of our customers and partners, and patience from your staff as we implement these additional steps.
  • New  building  access protocols: We are changing access protocols to help avoid cross contamination across physical sites: 
    • We are now asking all customers and staff to enter through the main lobby of each building and to be cleared by security before entering internal spaces. 
    • We will not allow any personnel to travel between buildings, including our own internal employees. Any client, guest, or contractor that is permitted or authorized to come onsite (essential personnel) will be allowed to enter a building but will be refused entry to another building should they request to, or if they show up at another building.
  • We are asking customers to limit non-essential personnel access to the data centers during this time; additionally, please limit meetings and gatherings of employees within the data center.  Note: the training rooms in VA3 and TX1 will be closed for all use over the next two weeks (as of March 15, 2020).
  • If your staff is feeling unwell, or has traveled internationally in the past two weeks, do not let them come into the data center.
  • While working in the data center, all customers should adhere to the latest guidelines on social distancing and sanitization to limit the spread of the virus among our staff and your teams.
  • Our operations team asks that all customers limit or delay any non-essential work requests that involve our operations and security staff; we want to keep our teams as available as possible to respond quickly if more critical issues arise during this challenging time.

Customer Communications

If you would like to read the latest customer communications in more detail, please click on any of the links below.

Frequently Asked Questions

  1. Will I have access to my data center space during this time? 

    • Yes. Our buildings will remain open for our customers under the current foreseeable circumstances. To minimize the number of people in each building, all of our non-essential employees are required to work from home, effective immediately and continuing through at least April 30, 2020. Our personnel in charge of Critical Facility Operations, Security, and Network Operations Center functions will still be present as needed to perform their job duties.

  2. Have any people who work at your data centers tested positive for Coronavirus? 

    • No. As of March 26, there have been no positive cases of COVID-19 within the company.  By mandating that non-mission critical personnel work remotely, we are making every effort to prevent and slow the spread of the virus out of an abundance of caution and care for both our employees and customers.

  3. How do you ensure that events at service providers or suppliers do not lead to disruptions in data center operations?

    • Critical service providers are part of our Business Continuity Management System and are included in our pandemic preparedness plans, with appropriate communication. This includes querying them about their current situation and the measures they have implemented to protect business processes. In addition, providers have been asked to notify us immediately if there is a risk that their services can no longer be delivered. Should a service nevertheless become unavailable, the mission-critical infrastructure of the data centers is designed to remain operational even through extended downtime at a service provider. 

For more frequently asked questions, please click here

We appreciate your support and flexibility during these difficult times. We will all work together to help minimize the spread of this virus while ensuring the data center environment continues to operate and stay online for our customers. If you have questions or requests, please reach out to your account manager or to the NOC here: 916-286-3000.
