Shoieb Yunus's blog

Grow revenue and reduce costs by using NVIDIA-powered AI in our TX and VA data centers

Artificial intelligence is changing the landscape of business and the very foundations of companies. Across many verticals, companies are competing not only for market share or revenue, but for survival. Some companies are scaling and innovating faster, creating new markets and business models to drive more business, and offering more customized and personalized services, not just locally, but globally.  

We are moving toward “AI-first” companies as businesses rethink their strategies and operating models. Artificial intelligence, interconnection, and networks are now the core pillars on which businesses compete and succeed.  

The Power of Artificial Intelligence 

When artificial intelligence experts Dave Copps and Ryan Fontaine spoke as guests on our podcast series, they shared valuable insights about how companies across all industries can use AI to generate revenue or reduce costs.  

“For businesses, if you have access to good AI and good machine learning, you’re going to be all right,” said Fontaine, the CEO of Citadel Analytics. “Data is the new ‘oil’. It is the most valuable commodity in the world, more valuable than silver or gold. But you still have to do something with it. AI helps find the connections in data, and that’s what makes data – and AI – so valuable.” 

Copps, a legend in the AI community who is currently the CEO of Worlds, illustrated the value of AI with several memorable stories. First, he described how a company he was previously with helped the Department of Justice close a case they had been working on for months by using AI to find crucial info in just 30 minutes.  

Another example from early in his career was perhaps even more profound. Copps’ company was helping an intelligence agency in Europe that had been working on a case involving hundreds of millions of documents over 7 years. Mining the data through traditional search engines was not getting the job done. 

So Copps’ company built an AI platform that enabled people to see and navigate information visually, almost like using a Google Earth map. The reaction from the European intelligence agency could be considered … euphoric. 

 “The guy that was working on the data cried – in a good way,” Copps said. “He had been looking at this for so long, and the answers were right there. That points to the power of AI.” 

For more entertaining AI insight from Copps and Fontaine, you can listen to the entire podcast here.  

But what can I do to leverage AI? 

After listening to the podcast, you might think “That’s great, AI really sounds like it could help my company grow in an efficient and profitable way. But what’s my first step? How do I access and use AI technology?” 

Good question. Actually, no … that’s a great question. 

Luckily the answer to that question has just changed.  

Clients at our Dallas TX1 Data Center and our Ashburn VA3 Data Center can talk to us about accessing AI without installing their own infrastructure. That’s because we have become qualified as NVIDIA DGX-Ready data centers. DGX is NVIDIA’s flagship appliance for AI computation. 

This qualification allows us to house NVIDIA’s DGX AI appliances in our data centers, where they can be used “as-a-service” by clients demanding cutting-edge AI infrastructure.  

NVIDIA has plenty of case studies showing how companies across a broad array of industries have already seen significant results from accessing their deep learning AI technology, including:  

  • Baker Hughes has reduced the cost of finding, extracting, processing, and delivering oil. 

  • Accenture Labs is quickly detecting security threats by analyzing anomalies in large-scale network graphs. 

  • Graphistry is protecting some of the largest companies in the world by visually alerting them of attacks and big outages. 

  • Oak Ridge National Laboratory is creating a more accurate picture of the Earth’s population -- including people who have never been accounted for before -- to predict future resource requirements. 

  • Princeton University is predicting disruptions in a tokamak fusion reactor, paving the way to clean energy. 

Other companies (including some startups who you may hear a lot more about soon) have shared their inspiring stories on this page: https://www.nvidia.com/en-us/deep-learning-ai/customer-stories/

What will your story be? There’s only one way to find out – by harnessing the power of AI for your enterprise. With NVIDIA in our data centers, we can help you get there. Contact me at syunus@ragingwire.com to find out more. 
 

 

Data at the Center: How data centers can shape the future of AI

In today’s world, we see data anywhere and everywhere. Data comes in different shapes and sizes, such as video, voice, text, photos, objects, maps, charts, and spreadsheets. Can you imagine life without a smartphone, social apps, GPS, ride-hailing, or e-commerce? Data is at the center of how we consume and experience all these services.  
 
Beyond the gathering of data, we need to determine how to use it. That’s where artificial intelligence and machine learning (AI/ML) come in. In services like ride-hailing, navigation/wayfinding, video communications and many others, AI/ML has been designed in.  For example: 
 
    •    Today, a luxury car is built with 100 microprocessors to control various functions 
    •    An autonomous vehicle (AV) may have 200 or more microprocessors and generates 10 GB of data per mile 
    •    Tens of thousands of connected cars will create a massive distributed computing system at the edge of the network 
    •    3D body scanners and cameras generate 80 Gbps of raw data for streaming games 
    •    A Lytro camera, equipped with light field technology, generates 300 GB of data per second 
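
To put those numbers in perspective, here is a quick back-of-envelope calculation in Python. The fleet size and daily mileage are illustrative assumptions, not figures from the sources above:

    # Back-of-envelope estimate of the daily data produced by a connected-car fleet.
    GB_PER_MILE = 10          # AV sensor data per mile (figure cited above)
    FLEET_SIZE = 10_000       # assumed number of vehicles (illustrative only)
    MILES_PER_DAY = 100       # assumed daily mileage per vehicle (illustrative only)

    daily_gb = GB_PER_MILE * FLEET_SIZE * MILES_PER_DAY
    print(f"Daily fleet data: {daily_gb:,} GB (~{daily_gb / 1_000_000:.0f} PB)")
    # Daily fleet data: 10,000,000 GB (~10 PB)

At that scale, hauling every byte back to a central site stops being practical, which is why so much of this processing shifts to the edge of the network.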
 
Computer systems now perform tasks requiring human intelligence – from visual perception to speech recognition, from decision-making to pattern recognition. As more data is generated, better algorithms are developed; as better services are offered, usage of those services goes up. Think of it as a flywheel that keeps the cycle moving.  
 
AI solutions are limited only by the number of high-quality datasets you have. Real-world scenarios showing how the technology is used include: 

    •    Autonomous Driving: Data storage on the vehicle versus in the data center for neural map analysis and data modeling 
    •    Autonomous Business Planning: Decision and business models for manufacturing and distribution  
    •    Data Disaggregation: Uncovering hidden patterns in shifting consumer tastes and buying behavior in the retail space 
    •    Video Games: Real-time player level adjustment using low-latency data for enhanced experience 
 
Enabling AI infrastructure in data centers  
 
Data centers sit at the intersection of compute, storage, networking, and AI, making them the hub around which those technologies revolve. So as a data center company, how can we make AI/ML computation affordable and accessible so that enterprises can keep their innovation engines running? 
 
At NTT Global Data Centers, enabling AI infrastructure is an important pillar of our growth plans. GPUs, memory, storage, and the network are the key components of ‘AI Compute’ infrastructure. Our goal is to make AI Infrastructure-as-a-Service accessible and affordable to forward-looking small, medium, and large businesses.  
 

Modern enterprises must have an AI engine at the core of their business decision-making, operations, and P&L to stay competitive in the marketplace. But AI projects can be an expensive undertaking: from hiring talent to hardware procurement to deployment, your budget can skyrocket. However, if the upfront costs of AI infrastructure can be eliminated, the entire value proposition shifts in a positive way. So how does NTT Global Data Centers help our customers build AI solutions? 

    1.    We de-construct our customer’s problem statement and design the solution 
    2.    Our AI experts build a tailored model for the computation 
    3.    Our customers have full access to AI-ready infrastructure and talent 
 
AI is transforming the way enterprises conduct their business. With data centers in the middle, GPUs, hardware infrastructure, algorithms, and networks will change the DNA of the enterprise.  

How to Avoid Falling Through the Cloud

Undoubtedly, cloud computing is on the rise. Enterprises are adopting hybrid multi-cloud strategies to find a balance between what to keep on-premises (or in a data center) versus moving to the public cloud.   

Approximately 70% of enterprise applications have moved to the cloud. We are entering an era in which centralized processing is being decentralized: enterprises are adopting a hybrid model where some functions run on edge nodes, and infrastructure is becoming highly distributed and dynamic. Cloud infrastructure is consumed ‘as a Service’. You can take a third-party API along with compute, storage, and networking resources, spanning on-prem and any of the public clouds, stitch them together so they appear seamless, and deploy the result in the market. 

Despite all these developments, several myths persist about enterprises running their business in the cloud. Here is a look at three of them. 

 

Myth 1: Cloud is more affordable than data centers 

 

It is true that, due to its elastic nature, the cloud can be more cost-efficient. However, to fully realize those savings, a business may need to upgrade its applications and underlying compute infrastructure, all of which can be costly. Legacy applications do not migrate seamlessly to the cloud; they need to be architected to be consumed ‘as-a-Service’ and deployed for global consumption. 

To gauge your total cost, you need to consider your entire IT deployment across public, private, and hybrid clouds. Some workloads and processes shift to the public cloud more easily than others, and regulatory or business requirements may further complicate the financial picture of cloud migration. Those factors can lead to the conclusion that sometimes leaving applications on-premises is the right decision. For many companies, colo is the new on-prem, as that option enables companies to keep their data on their own servers.  

Don’t fall through the cloud 

Then there are those poor companies that get sticker shock when they see the costs of cloud compute after a few months. To avoid falling through the cloud and plummeting into a land of unplanned expenses, companies must do the arithmetic: analyze compute cycles, the volume of data to be processed, data sizing, network bandwidth, data egress and ingress both locally and globally, and the geographic deployment of applications and data at the edge. Cloud storage in particular can generate a huge bill if not monitored properly.  
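
As a minimal sketch of that arithmetic, here is how you might model a monthly cloud bill in Python. Every unit price and volume below is a placeholder assumption, not an actual provider rate:

    # Rough monthly cloud-cost model. All numbers are placeholder
    # assumptions; substitute your provider's published rates.
    compute_hours = 4 * 730        # four VMs running 24/7 for a month
    price_per_vm_hour = 0.20       # assumed $/VM-hour
    storage_gb = 50_000            # data held in cloud storage
    price_per_gb_month = 0.023     # assumed $/GB-month
    egress_gb = 20_000             # data transferred out per month
    price_per_egress_gb = 0.09     # assumed $/GB of egress

    compute = compute_hours * price_per_vm_hour
    storage = storage_gb * price_per_gb_month
    egress = egress_gb * price_per_egress_gb
    print(f"Compute ${compute:,.2f} + storage ${storage:,.2f} "
          f"+ egress ${egress:,.2f} = ${compute + storage + egress:,.2f}/month")

Notice that in this toy example egress, the line item teams most often forget to model, is the single largest charge.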

Consider licensing, for example. If you're migrating an application from the data center to the cloud, your operating system licenses probably won't transfer with it. It's great to be able to spin up a Linux server on AWS at the push of a button, but few take the time in advance to find out whether that action includes the hidden cost of having to pay for a license for the operating system on top of the cloud service fees. Even though you've already paid for that Linux license once, you may well find yourself paying for the same software again.  

Understand the fine print. Cloud service fees are rarely all-inclusive, as hidden fees lurk under almost every action you can take. If you spin up virtual servers for compute cycles and increase network bandwidth capacity for a given task, you must remember to tear down the services to avoid unwanted accrued bills. As far as software licensing goes, you might be able to save money by installing software you've already paid for on a cloud platform, rather than using the service's (admittedly more convenient) one-button install.  
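
As one concrete safeguard against forgotten resources, a small script can flag long-running instances for review. Here is a sketch using AWS's boto3 SDK; the seven-day threshold is an illustrative assumption, and in practice you would review the list before terminating anything:

    import boto3
    from datetime import datetime, timezone

    MAX_AGE_DAYS = 7  # illustrative threshold; tune it to your workloads

    # List running EC2 instances older than the threshold so they can be
    # reviewed and torn down before the bill accrues.
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )

    now = datetime.now(timezone.utc)
    stale = []
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            age_days = (now - inst["LaunchTime"]).days
            if age_days > MAX_AGE_DAYS:
                stale.append((inst["InstanceId"], age_days))

    for instance_id, age_days in stale:
        print(f"{instance_id} has been running for {age_days} days; review it")

    # After review, terminate what is no longer needed:
    # ec2.terminate_instances(InstanceIds=[i for i, _ in stale])

The same discipline applies to storage volumes, snapshots, and provisioned bandwidth: inventory them regularly, and tear down what you can't justify.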

When is cloud worth the cost? 

It may be worth an increase in cost to run workloads in the cloud if doing so enables the realization of a business goal. If business growth depends on the ability to scale up very rapidly, then even if cloud is more expensive than on-prem, it can be a growth enabler and justified as an investment. We believe that the companies that do not exist today but will be created in the next five years will be built on the cloud. It would be prudent for these new businesses to have a cloud presence alongside their own footprint in data centers. 

 

Myth 2: Cloud is more secure than data centers 

 

In the past, cloud computing was perceived as less secure than on-premises capabilities. However, security breaches in the public cloud are rare, and most breaches involve misconfiguration of the cloud service. Today, major cloud service providers invest significantly in security. But this doesn’t mean that security is guaranteed in the cloud. 

Data privacy and data protection remain top concerns in the public cloud. During the COVID-19 pandemic, videoconferencing applications experienced a sudden surge as people worked from home, students used distance-learning tools, and the same tools were pressed into service for group chats.  

You’ve probably heard that the Zoom videoconferencing service experienced security breaches in which intruders broke in and disrupted calls. Such incidents have been dubbed ‘Zoombombing’. Many similar incidents have been reported, from classroom sessions to business calls, in which intruders disrupted what was supposed to be a closed group call. 

Myth 3: Moving to the cloud means I don’t need a data center 
 

While cloud is highly suitable for some use cases, such as variable or unpredictable workloads, or when self-service provisioning is key, not all applications and workloads are a good fit for the cloud. For example, unless clear cost savings can be realized, moving a legacy application to a public cloud is generally not a good use case. There are many different paths to the cloud, ranging from simple rehosting, typically via Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), to a complete changeover to an application implemented by a Software as a Service (SaaS) provider.  

To take advantage of cloud capabilities, it is essential to understand the model and have realistic expectations. Once a workload has been moved, the work is, in many ways, just beginning. Further refactoring or re-architecting is necessary to take advantage of the cloud. Ongoing cost and performance management will also be key for success. CIOs and IT leaders must be sure to include post-migration activities as part of the cloud implementation plan. 

The Power of a Strong Data Center Network Ecosystem

What should you look for in a strong network ecosystem? The most crucial aspect is that data center clients should be able to connect to cloud service providers (CSPs), their vendors and partners, and their own network assets through private connections, without going over the public Internet, on a global level.

According to a Gartner report, “By 2022, 60% of enterprise IT infrastructures will focus on centers of data, rather than traditional data centers.”

We live in an ever-changing world of information technology that touches every aspect of our lives. Gartner writes that the infrastructures of the future will not be architected around existing topologies; rather, they will be deployed on a global scale, driven by business requirements rather than tied to specific IT vendors. The end result will be an environment focused on enabling the rapid deployment of business services (by the business) and on deploying workloads to the right locations, for the right reasons, at the right price. That means you need a data center with scalable infrastructure connected to a robust global ecosystem. 

Strong network fabrics connecting one data center to another, whether locally, nationally, or globally, enable you to connect your IT deployments across disparate data centers. These networks can be short-haul, crossing a metropolitan area, or long-haul, connecting across the country or even overseas. Whether you are planning a data center migration, disaster recovery, or workload distribution, you need a strong network fabric to future-proof your IT strategy. Infrastructure must allow enterprises to do what they need to do, when they need to do it, anywhere in the world. 

Meanwhile, interconnection offers scalability and cost savings for the growing needs of enterprise customers. With an interconnection platform, retail and wholesale colocation environments can be connected to multiple cloud providers (multi-cloud) and multiple cloud locations (availability zones). This design opens the door to unique options for companies to architect their IT environments to optimize resiliency and availability while minimizing cost and complexity. Virtualization via Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and Software-Defined WAN (SD-WAN) enables new services and capabilities to be created in minutes, not days or weeks. As Gartner puts it, we need to create an environment where the role of IT is to deliver the right service, at the right pace, from the right provider, at the right price. And data centers, as the hub of all things critical, become the delivery vehicle for these services. 

As an example of how important a strong data center ecosystem is, let’s take a look at Dallas, the #3 data center location in the world. Dallas is a destination market for data centers, meaning enterprises and cloud companies want to include Dallas as part of their global data center footprint. These companies need to distribute applications around the world for maximum performance and reliability. In a recent market report, Cushman & Wakefield ranked Dallas as #2 for global fiber connectivity, right behind Silicon Valley. 

For this strategy to work, a strong network ecosystem is key. Our Dallas TX1 Data Center is carrier-neutral, with a number of onsite carriers as well as dark fiber connections to the local carrier hotels, providing access to over 70 carriers and global interconnectivity. For global, secure networks, our clients can use NTT’s Arcstar Universal One virtual private network (VPN), which offers high-quality, global network coverage in over 190 countries. In addition, TX1 is connected with our campuses in Sacramento, California; Ashburn, Virginia; and data center campuses under development in Hillsboro, Oregon; Silicon Valley; and Chicago, so workloads can be distributed, balanced, and backed up across the country. Lastly, we offer secure, dedicated connections to the world’s largest cloud providers and a number of SaaS and content providers.

To sum up, it’s clear that the hybrid computing model combining data centers and clouds with a global, seamless, and secure network is the direction that corporate IT is heading. To support hybrid computing, data centers have evolved beyond space, power, telecommunications, and security. Data centers have become a critical infrastructure platform for both cloud providers and enterprises. And locations like Dallas have become integral parts of every data center’s network strategy.
