San Francisco: OpenAI and Broadcom have announced a multi-year strategic partnership to jointly design and develop advanced AI chips and systems that can meet the rapidly expanding computational demands of artificial intelligence. The collaboration will focus on developing and deploying 10 gigawatts of custom AI accelerators, marking OpenAI's second official foray into hardware after the still-in-development consumer AI device it is creating with designer Jony Ive.
Under the agreement, OpenAI will lead the design of the chips, drawing on its experience building large language models (LLMs) and AI-driven applications such as ChatGPT and Sora. Broadcom will handle fabrication, deployment, and scaling, including providing Ethernet solutions so the AI infrastructure can operate efficiently across multiple facilities and partner data centers. The chips and systems will be optimized specifically for the high-intensity AI workloads that OpenAI’s models require, allowing the company to better manage its compute needs and offer capacity to third-party enterprises.
“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, CEO of OpenAI. “Developing our own accelerators adds to the broader ecosystem of partners, all building the capacity required to push the frontier of AI and provide benefits to all humanity.”
The financial specifics of the deal were not disclosed. However, Broadcom will hold exclusive rights to provide the AI racks and Ethernet solutions for the custom chips. This partnership represents OpenAI’s fourth major collaboration aimed at scaling AI compute, alongside existing relationships with Oracle, Nvidia, and AMD.
OpenAI highlighted that ChatGPT now boasts more than 800 million weekly active users, with its tools increasingly adopted by enterprises, small businesses, and developers alike. This unprecedented growth has created an urgent need for additional servers and AI infrastructure, a demand the new partnership seeks to address.
By designing chips tailored to its own AI workloads, OpenAI aims not only to improve performance and efficiency but also to accelerate the global deployment of AI, providing scalable, optimized systems to support innovation across industries.
This strategic collaboration signals a growing trend of AI companies moving into hardware design, aiming to control the end-to-end performance of AI systems and ensure that emerging models can operate at unprecedented scales while maintaining efficiency and reliability.