Amazon today announced a multi-part collaboration focused on building out the world’s most scalable, on-demand artificial intelligence (AI) infrastructure optimized for training increasingly complex large language models (LLMs) and developing generative AI applications.
The joint work features next-generation Amazon Elastic Compute Cloud (Amazon EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs and AWS’s state-of-the-art networking and scalability that will deliver up to 20 exaFLOPS of compute performance for building and training the largest deep learning models.
P5 instances will be the first GPU-based instances to take advantage of AWS’s second-generation Elastic Fabric Adapter (EFA) networking, which provides 3,200 Gbps of low-latency, high-bandwidth networking throughput, enabling customers to scale up to 20,000 H100 GPUs in EC2 UltraClusters for on-demand access to supercomputer-class performance for AI.
“AWS and NVIDIA have collaborated for more than 12 years to deliver large-scale, cost-effective GPU-based solutions on demand for various applications such as AI/ML, graphics, gaming, and HPC,” said Adam Selipsky, CEO at AWS.
“AWS has unmatched experience delivering GPU-based instances that have pushed the scalability envelope with each successive generation, with many customers scaling machine learning training workloads to more than 10,000 GPUs today.
With second-generation EFA, customers will be able to scale their P5 instances to over 20,000 NVIDIA H100 GPUs, bringing supercomputer capabilities on demand to customers ranging from startups to large enterprises.”
Jensen Huang, founder and CEO of NVIDIA, said,
“Accelerated computing and AI have arrived, and just in time. Accelerated computing provides step-function speed-ups while driving down cost and power as enterprises strive to do more with less. Generative AI has awakened companies to reimagine their products and business models and to be the disruptor and not the disrupted.”
“AWS is a long-time partner and was the first cloud service provider to offer NVIDIA GPUs. We are thrilled to combine our expertise, scale, and reach to help customers harness accelerated computing and generative AI to engage the enormous opportunities ahead.”

New Supercomputing Clusters
New P5 instances are built on more than a decade of collaboration between AWS and NVIDIA in delivering AI and HPC infrastructure, following four previous generations of joint work across P2, P3, P3dn, and P4d(e) instances.
P5 instances are the fifth generation of AWS offerings powered by NVIDIA GPUs and come almost 13 years after AWS’s initial deployment of NVIDIA GPUs, beginning with CG1 instances.
P5 instances are ideal for training and running inference for increasingly complex LLMs and computer vision models behind the most-demanding and compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.
Specifically built for both enterprises and startups racing to bring AI-fueled innovation to market in a scalable and secure way, P5 instances feature eight NVIDIA H100 GPUs capable of 16 petaFLOPS of mixed-precision performance, 640 GB of high-bandwidth memory, and 3,200 Gbps networking connectivity (8x more than the previous generation) in a single EC2 instance.
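As a rough sanity check, the per-instance figures above follow from the per-GPU specs. The per-GPU numbers below are assumptions not stated in the announcement (roughly 2 petaFLOPS of mixed-precision compute and 80 GB of HBM per H100):

```python
# Hypothetical sanity check of the published P5 per-instance figures.
# Per-GPU values are assumptions, not taken from the announcement.
GPUS_PER_INSTANCE = 8
PFLOPS_PER_GPU = 2      # assumed mixed-precision petaFLOPS per H100
HBM_GB_PER_GPU = 80     # assumed high-bandwidth memory per H100 (GB)

instance_pflops = GPUS_PER_INSTANCE * PFLOPS_PER_GPU   # aggregate compute
instance_hbm_gb = GPUS_PER_INSTANCE * HBM_GB_PER_GPU   # aggregate HBM

print(instance_pflops, instance_hbm_gb)
```

Under these assumptions the aggregates match the announced 16 petaFLOPS and 640 GB per instance.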
The increased performance of P5 instances accelerates the time-to-train machine learning (ML) models by up to 6x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models.
P5 instances are expected to lower the cost to train ML models by up to 40% over the previous generation, providing customers greater efficiency over less flexible cloud offerings or expensive on-premises systems.
Amazon EC2 P5 instances are deployed in hyper-scale clusters called EC2 UltraClusters that are composed of the highest-performance computing, networking, and storage in the cloud.
Each EC2 UltraCluster is one of the most powerful supercomputers in the world, enabling customers to run their most complex multi-node ML training and distributed HPC workloads.
They feature petabit-scale non-blocking networking, powered by AWS EFA, a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS.
EFA’s custom-built operating system (OS) bypass hardware interface optimizes the performance of ML applications running on P5 instances, allowing customers to take advantage of virtually limitless scalability up to 20,000 H100 GPUs while benefiting from low-latency communications.
By combining AWS services such as Amazon S3 object storage and Amazon FSx high-performance file systems with the computing capabilities enabled by EFA and NVIDIA GPUDirect RDMA technology, organizations get a robust cloud environment that offers both elasticity and flexibility, just as if they had an in-house HPC cluster.
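For a concrete sense of how EFA-enabled capacity is requested, the sketch below shows the rough shape of launch parameters for boto3’s `ec2.run_instances`, attaching an EFA interface and using a cluster placement group. The instance type string, AMI ID, subnet, security group, and placement-group name are all placeholder assumptions, not values from the announcement:

```python
# Sketch only: boto3-style run_instances parameters for EFA-enabled
# instances in a cluster placement group. All IDs are placeholders.
launch_params = {
    "InstanceType": "p5.48xlarge",           # assumed P5 instance size
    "ImageId": "ami-0123456789abcdef0",      # placeholder Deep Learning AMI
    "MinCount": 2,
    "MaxCount": 2,
    "Placement": {"GroupName": "ml-cluster"},   # placeholder placement group
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",          # attach an Elastic Fabric Adapter
            "SubnetId": "subnet-0123456789abcdef0",    # placeholder
            "Groups": ["sg-0123456789abcdef0"],        # placeholder
        }
    ],
}

# With boto3 installed and AWS credentials configured, one would pass
# these to: boto3.client("ec2").run_instances(**launch_params)
```

Requesting the EFA interface at launch is what exposes the OS-bypass networking path to applications inside the instance.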
With the new EC2 P5 instances, customers like Anthropic, Cohere, Hugging Face, Pinterest, and Stability AI will be able to build and train the largest ML models at scale. The collaboration through additional generations of EC2 instances will help startups, enterprises, and researchers seamlessly scale to meet their ML needs.
Anthropic builds reliable, interpretable, and steerable AI systems that will have many opportunities to create value commercially and for public benefit.
Tom Brown, co-founder of Anthropic, said,
“At Anthropic, we are working to build reliable, interpretable, and steerable AI systems. While the large, general AI systems of today can have significant benefits, they can also be unpredictable, unreliable, and opaque. Our goal is to make progress on these issues and deploy systems that people find useful.”
“Our organization is one of the few in the world that is building foundational models in deep learning research. These models are highly complex, and to develop and train these cutting-edge models, we need to distribute them efficiently across large clusters of GPUs.”
“We are using Amazon EC2 P4 instances extensively today, and we are excited about the upcoming launch of P5 instances. We expect them to deliver substantial price-performance benefits over P4d instances, and they’ll be available at the massive scale required for building next-generation large language models and related products.”
Cohere, a leading pioneer in language AI, empowers every developer and enterprise to build incredible products with world-leading natural language processing (NLP) technology while keeping their data private and secure.
“Cohere leads the charge in helping every enterprise harness the power of language AI to explore, generate, search for, and act upon information naturally and intuitively, deploying across multiple cloud platforms in the data environment that works best for each customer,” said Aidan Gomez, CEO at Cohere. “NVIDIA H100-powered Amazon EC2 P5 instances will unleash the ability of businesses to create, grow, and scale faster with its computing power combined with Cohere’s state-of-the-art LLM and generative AI capabilities.”
Hugging Face is on a mission to democratize good machine learning.
Julien Chaumond, CTO and co-founder at Hugging Face said,
“As the fastest growing open source community for machine learning, we now provide over 150,000 pre-trained models and 25,000 datasets on our platform for NLP, computer vision, biology, reinforcement learning, and more.”
“With significant advances in large language models and generative AI, we’re working with AWS to build and contribute the open source models of tomorrow. We’re looking forward to using Amazon EC2 P5 instances via Amazon SageMaker at scale in UltraClusters with EFA to accelerate the delivery of new foundation AI models for everyone.”
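The SageMaker-based workflow Hugging Face describes could look roughly like the configuration below. This is an illustrative sketch only; the instance type string, role ARN, script name, and data path are placeholder assumptions, not details from the announcement:

```python
# Illustrative sketch of a multi-node SageMaker training configuration
# targeting P5 capacity. All identifiers below are placeholders.
training_config = {
    "entry_point": "train.py",             # placeholder user training script
    "instance_type": "ml.p5.48xlarge",     # assumed SageMaker P5 type
    "instance_count": 4,                   # multi-node job in an UltraCluster
    "role": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "distribution": {"torch_distributed": {"enabled": True}},
}

# With the sagemaker SDK installed, usage would be roughly:
#   from sagemaker.huggingface import HuggingFace
#   HuggingFace(transformers_version=..., pytorch_version=...,
#               py_version=..., **training_config).fit(
#       {"train": "s3://my-bucket/training-data"})  # placeholder S3 path
```

Scaling the job is then a matter of raising `instance_count`, with EFA handling the inter-node communication.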
Today, more than 450 million people around the world use Pinterest as a visual inspiration platform to shop for products personalized to their taste, find ideas to do offline, and discover the most inspiring creators.
“We use deep learning extensively across our platform for use-cases such as labeling and categorizing billions of photos that are uploaded to our platform, and visual search that provides our users the ability to go from inspiration to action,” said David Chaiken, Chief Architect at Pinterest.
“We have built and deployed these use cases by leveraging AWS GPU instances such as P3 and the latest P4d instances. We are looking forward to using Amazon EC2 P5 instances featuring H100 GPUs, EFA, and UltraClusters to accelerate our product development and bring new Empathetic AI-based experiences to our customers.”
As the leader in multimodal, open-source AI model development and deployment, Stability AI collaborates with public- and private-sector partners to bring this next-generation infrastructure to a global audience.
Emad Mostaque, CEO of Stability AI, said,
“We initially partnered with AWS in 2021 to build Stable Diffusion, a latent text-to-image diffusion model, using Amazon EC2 P4d instances that we employed at scale to accelerate model training time from months to weeks.
As we work on our next generation of open-source generative AI models and expand into new modalities, we are excited to use Amazon EC2 P5 instances in second-generation EC2 UltraClusters. We expect P5 instances will further improve our model training time by up to 4x, enabling us to deliver breakthrough AI more quickly and at a lower cost.”

New Server Designs for Scalable, Efficient AI
NVIDIA and AWS have come together to advance artificial intelligence (AI) computation with the introduction of the H100.
The collaboration enables a significant boost in energy efficiency for AI workloads, up to 300 times more efficient than CPUs alone, by harnessing the power of GPUs within AWS’s state-of-the-art infrastructure while delivering integrated security through hardware-accelerated hypervisors and custom EFA networking optimized with GPUDirect™ technology.
Building on AWS and NVIDIA’s work focused on server optimization, the companies have begun collaborating on future server designs to increase the scaling efficiency with subsequent-generation system designs, cooling technologies, and network scalability.

About Amazon Web Services
AWS has revolutionized the way organizations access and operate cloud services. Leveraging more than 15 years of experience, Amazon Web Services now offers over 200 fully featured services to businesses from 99 Availability Zones in 31 geographic Regions worldwide.
These span compute, storage, databases, networking, analytics, machine learning, and AI, giving users a comprehensive suite of tools that can scale to meet their most demanding workloads.
With announced plans to expand into five new geographies across Canada, Israel, Malaysia, New Zealand, and Thailand, AWS remains at the forefront of cloud innovation, powering everyone from startups to enterprise-level corporations operating globally.

About NVIDIA
NVIDIA has been a leading force in the computing world since it was established in 1993. The company is widely credited with inventing the GPU in 1999, revolutionizing computer graphics and furthering the development of modern AI capabilities.
NVIDIA’s commitment to data-center-scale advancements has had an outsized impact on the industry and has given NVIDIA the opportunity to reshape what it means to be a computing company.
This is clearly evident in its role as a pioneer of accelerated computing and in igniting the era of metaverse creation, successes attributable to its ambition and sustained dedication to innovation.