Wednesday, July 24, 2024

ChatGPT Maker OpenAI Explores Making Its Own AI Chips Amid Shortages and Rising Costs


As the creator of ChatGPT, OpenAI has experienced a surge in demand for its artificial intelligence services. However, the organization faces two key challenges – a shortage of advanced AI chips and the high costs of running its infrastructure. According to a Reuters report, OpenAI is now considering moving into the manufacturing of customized AI chips.

OpenAI has not yet made a final decision to proceed with manufacturing its own chips, as per recent internal discussions mentioned in the report. The organization has been actively discussing various options to address the shortage of expensive AI chips that it relies on.

OpenAI has contemplated several options to tackle the chip shortage: building its own AI chip, collaborating more closely with chipmakers such as Nvidia, and diversifying its suppliers beyond Nvidia, which currently dominates the market.

Current Reliance on Limited Suppliers

At present, OpenAI relies heavily on Graphics Processing Units (GPUs) provided by the chipmaker Nvidia. Nvidia holds over 80% of the global market share for GPUs, the specialized processors ideally suited for AI applications. This near-monopoly makes OpenAI vulnerable to supply chain constraints. CEO Sam Altman has publicly voiced concerns regarding the lack of diversity among suppliers of such crucial hardware.

The company currently uses a supercomputer developed by Microsoft incorporating 10,000 Nvidia GPUs. With Nvidia dominating production, OpenAI aims to diversify its sources of advanced AI chips required for projects like ChatGPT.

Surging Costs to Run Operations

According to analysis by Bernstein analyst Stacy Rasgon, each individual ChatGPT query costs OpenAI approximately 4 cents in infrastructure expenses. With OpenAI's services scaling rapidly, operational costs are set to rise sharply.

Extrapolating these costs, Rasgon estimates that if ChatGPT query volumes reached just one-tenth the scale of Google's search traffic, OpenAI would need around $48.1 billion worth of GPUs and roughly $16 billion in new chips every year to support operations.
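As a rough illustration of how such per-query costs compound at scale, the sketch below multiplies Rasgon's ~4-cent estimate by a hypothetical query volume. The Google search-volume figure (8.5 billion queries per day) is an assumption for illustration only, not a number from the report, so the result is an order-of-magnitude estimate rather than Rasgon's own calculation.

```python
# Back-of-envelope estimate of annual serving cost if ChatGPT handled
# one-tenth of Google's search volume. All inputs besides the ~4-cent
# per-query figure are illustrative assumptions.

COST_PER_QUERY_USD = 0.04        # Rasgon's per-query infrastructure estimate
GOOGLE_QUERIES_PER_DAY = 8.5e9   # assumed ballpark for Google search volume
SHARE_OF_GOOGLE = 0.10           # the one-tenth scenario from the article

queries_per_day = GOOGLE_QUERIES_PER_DAY * SHARE_OF_GOOGLE
annual_cost_usd = queries_per_day * COST_PER_QUERY_USD * 365

print(f"Queries per day: {queries_per_day:,.0f}")
print(f"Estimated annual serving cost: ${annual_cost_usd / 1e9:.1f} billion")
```

Even under these rough assumptions the result lands in the tens of billions of dollars per year, which is the same order of magnitude as Rasgon's chip-spend estimates and shows why per-query infrastructure cost dominates the economics.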

Venturing Into Custom Chip Manufacturing

To tackle these dual challenges of chip shortage and prohibitive costs, OpenAI is exploring the complex process of designing and manufacturing customized AI chips tailored to its computational needs.

The company has evaluated potential acquisition targets and collaborations as part of its roadmap. Any efforts would need significant investment, with annual costs likely amounting to hundreds of millions of dollars.

Developing proprietary chips would give OpenAI greater control over chip design and supply. However, the process could take several years given chip-fabrication timelines. In the interim, OpenAI would still depend on suppliers like Nvidia and AMD.

Success is also not guaranteed in chip manufacturing. Meta and Intel have faced setbacks in their attempts at custom chip production. Acquiring an established chip firm could help fast-track OpenAI’s plans.

Wider Implications of OpenAI's Move

OpenAI’s potential entry as a chipmaker has broader implications beyond the organization’s own operations. It represents a pushback against the dominance of incumbents like Nvidia in the specialized AI accelerator market.


With advanced AI chips in short supply even for giants like OpenAI, barriers remain for smaller organizations seeking to deploy cutting-edge AI models. OpenAI's move could spur the rise of new chip suppliers and manufacturers tailored to AI applications.

As one of the leading AI research organizations, OpenAI also influences how others approach product development and supply-chain risk. Its initiatives may set the stage for new paradigms in which tech giants exert direct control over their AI infrastructure stack.

Amid surging demand and supply limitations, OpenAI finds itself at a strategic inflection point with regard to AI chips. By manufacturing its own processors, the company could reshape its cost structure, secure its access to key components, and reduce over-reliance on incumbents like Nvidia. However, the path ahead is complex, capital-intensive, and fraught with uncertainty.
