OpenAI and AWS partner in US$38 billion AI infrastructure deal
Amazon Web Services and OpenAI have entered into a strategic partnership valued at US$38 billion to support the scaling of OpenAI's advanced artificial intelligence workloads.
Under the agreement, OpenAI will immediately begin utilising AWS infrastructure, including Amazon EC2 UltraServers equipped with hundreds of thousands of NVIDIA GPUs. The multi-year partnership gives OpenAI rapidly expanding compute capacity, with all capacity targeted for deployment by the end of 2026 and scope for further expansion in subsequent years.
Compute scale and deployment
The collaboration centres on AWS providing compute power with the ability to scale to tens of millions of CPUs for OpenAI's expanding agentic workloads. AWS's infrastructure features clusters of NVIDIA GPUs, including GB200 and GB300 models, interconnected via Amazon EC2 UltraServers on the same network. This arrangement is designed to deliver low-latency, high-efficiency processing capabilities crucial to AI applications.
The infrastructure is optimised to accommodate varying workloads ranging from inference tasks, such as those supporting ChatGPT, to the training of new models. It also features flexibility to adapt as OpenAI's needs evolve through the duration of the partnership.
Industry perspectives
"Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone," said Sam Altman, OpenAI Co-Founder and Chief Executive, underlining the importance of expansive, robust infrastructure to the company's operations and ambitions.
"As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads," said Matt Garman, Chief Executive of AWS, emphasising the suitability of AWS's infrastructure to meet OpenAI's requirements as it develops new AI models and solutions.
Broader collaboration
The current agreement builds on previous collaboration between the companies. Earlier this year, OpenAI's open-weight foundation models became available on Amazon Bedrock, AWS's service providing access to a range of AI foundation models, enabling millions of AWS customers worldwide to use OpenAI technology across a range of applications.
OpenAI's models have become widely used on Amazon Bedrock, serving customers including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health. The applications cited encompass agentic workflows, coding, scientific analysis, mathematical problem-solving, and related use cases.
AI demand and infrastructure
The partnership comes as rapid advancements in artificial intelligence increase the demand for high-performance computing infrastructure. Providers of frontier models are seeking scalable, secure, and efficient platforms to support their projects. AWS says its experience in building and managing large-scale clusters, sometimes exceeding 500,000 chips, places it in a strong position to meet these requirements.
With the agreement, AWS anticipates ongoing growth in compute utilisation by OpenAI through the next seven years, indicating an evolving relationship as AI capabilities continue to progress. The companies expect the collaboration to contribute to the further development and delivery of generative AI technology for a broad base of users.