OpenAI has committed to spending $1.15 trillion on hardware & cloud infrastructure between 2025 & 2035.1
The spending breaks down across seven major vendors: Broadcom ($350B), Oracle ($300B), Microsoft ($250B), Nvidia ($100B), AMD ($90B), Amazon AWS ($38B), & CoreWeave ($22B).2
Using some assumptions, we can generate a basic spending plan through 2030 (all figures in billions of dollars); the balance of the larger contracts falls in 2031-2032, as detailed in the appendix.3
| Year | MSFT | ORCL | AVGO | NVDA | AMD | AWS | CRWE | Annual Total |
|---|---|---|---|---|---|---|---|---|
| 2025 | $2 | $0 | $0 | $0 | $0 | $2 | $2 | $6 |
| 2026 | $3 | $0 | $2 | $2 | $1 | $3 | $3 | $14 |
| 2027 | $5 | $25 | $4 | $6 | $3 | $4 | $3 | $50 |
| 2028 | $10 | $60 | $10 | $12 | $8 | $5 | $7 | $112 |
| 2029 | $20 | $60 | $25 | $31 | $24 | $6 | $7 | $173 |
| 2030 | $60 | $60 | $64 | $49 | $54 | $8 | $0 | $295 |
| TOTAL (full contract) | $250 | $300 | $350 | $100 | $90 | $38 | $22 | $1,150 |
Across these vendors, estimated annual compute spending grows from $6B in 2025 to $173B in 2029, reaching $295B in 2030. We built a constrained allocation model with the boundary conditions defined in the appendix below, but this is just a guess. The implied year-over-year growth rates are 124% (2027→2028), 54% (2028→2029), & 70% (2029→2030).
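A minimal sketch of that allocation check in Python, using the estimated schedules from the table above (all figures in $B; the hard-coded values are our guesses, not disclosed terms):

```python
# Estimated 2025-2030 spending schedules from the table above, in $B (our guesses, not disclosed terms).
schedules = {
    "MSFT": [2, 3, 5, 10, 20, 60],
    "ORCL": [0, 0, 25, 60, 60, 60],
    "AVGO": [0, 2, 4, 10, 25, 64],
    "NVDA": [0, 2, 6, 12, 31, 49],
    "AMD":  [0, 1, 3, 8, 24, 54],
    "AWS":  [2, 3, 4, 5, 6, 8],
    "CRWE": [2, 3, 3, 7, 7, 0],
}

years = list(range(2025, 2031))
annual_totals = [sum(schedule[i] for schedule in schedules.values()) for i in range(len(years))]
print(dict(zip(years, annual_totals)))
# {2025: 6, 2026: 14, 2027: 50, 2028: 112, 2029: 173, 2030: 295}

# Year-over-year growth for 2027->2028, 2028->2029, 2029->2030.
for prev, curr in zip(annual_totals[2:], annual_totals[3:]):
    print(f"{curr / prev - 1:.0%}")
# 124%, 54%, 71% (the ~70% quoted above)
```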
Coincidentally, OpenAI suggested today that revenue could hit $100B in 2027, earlier than previously projected.4 This gives us another data point to help us understand the business’s trajectory.
OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029.5 If we assume all infrastructure spending flows through cost of goods sold (COGS), we can calculate the implied revenue needed to support these spending levels at OpenAI’s target margins.
| Year | Annual Spending (COGS) | Gross Margin | Implied Revenue |
|---|---|---|---|
| 2025 | $6B | 48% | $12B |
| 2026 | $14B | 48% | $27B |
| 2027 | $50B | 55% | $111B |
| 2028 | $112B | 62% | $295B |
| 2029 | $173B | 70% | $577B |
| 2030 | $295B | 70% | $983B |
The calculation holds the 48% gross margin through 2026, then assumes roughly linear improvement to 70% by 2029 & holds 70% in 2030. Revenue is calculated as: Spending / (1 - Gross Margin).
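A minimal sketch of that calculation in Python (spending figures come from the table above; the margin path is our assumption based on OpenAI’s reported 48% & 70% targets):

```python
# Implied revenue = spending / (1 - gross margin); all figures in $B.
spending = {2025: 6, 2026: 14, 2027: 50, 2028: 112, 2029: 173, 2030: 295}
margins = {2025: 0.48, 2026: 0.48, 2027: 0.55, 2028: 0.62, 2029: 0.70, 2030: 0.70}

for year, cogs in spending.items():
    implied_revenue = cogs / (1 - margins[year])
    print(year, round(implied_revenue))
# 2025 12, 2026 27, 2027 111, 2028 295, 2029 577, 2030 983
```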
These implied revenue figures suggest OpenAI would need to grow from ~$10B in 2024 revenue to $577B by 2029, roughly the size of Google’s revenue in the same year (assuming Google grows from $350B in 2024 at ~12% annually).
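As a rough check on that comparison, using the assumptions stated above (a ~$350B 2024 base & ~12% annual growth):

```python
# Google revenue projected to 2029 from a ~$350B 2024 base at ~12% annual growth, in $B.
print(round(350 * 1.12 ** (2029 - 2024)))  # ~617, the same ballpark as the $577B implied above
```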
If nothing else, the estimated annual spending & commitments convey an absolutely enormous level of potential & ambition.
Appendix: Estimation Methodology
Deal Duration Summary
| Vendor | Total Value | Contract Duration | Source |
|---|---|---|---|
| Broadcom | $350B | 7 years (2026-2032, estimated) | 10 GW deployment, financial terms not disclosed |
| Oracle | $300B | 6 years (2027-2032, estimated) | $60B annually for 5 years plus ramp-up |
| Microsoft | $250B | 7 years (2025-2031, estimated) | Based on cloud service contract structure |
| Nvidia | $100B | Not disclosed | Deployment begins H2 2026 |
| AMD | $90B | Not disclosed | Deployment begins H2 2026 |
| Amazon AWS | $38B | 7 years (2025-2031) | Explicitly stated in announcement |
| CoreWeave | $22.4B | ~5 years (2025-2029) | Based on contract expansions |
Average duration across the five deals with stated or estimated terms (7, 6, 7, 7 & 5 years): ~6.4 years, rounded to 6 years for estimation purposes
Deal Structure
Broadcom: OpenAI commits to deploying 10 gigawatts of custom AI accelerators designed by OpenAI & developed in partnership with Broadcom. Estimated value of $350B based on industry benchmarks ($35B per gigawatt). Deployment begins H2 2026. We estimate a 7-year deployment timeline (2026-2032) consistent with the scale & complexity of custom chip manufacturing & data center buildout. The systems will be deployed across OpenAI’s facilities & partner data centers.6
Microsoft: OpenAI commits to purchasing an incremental $250 billion in Azure cloud services over an estimated 7 years (2025-2031).7
Nvidia: Nvidia invests up to $100 billion in OpenAI for non-voting shares. OpenAI commits to spending on Nvidia chips across at least 10 gigawatts of AI data centers. First deployment begins in H2 2026 using the Nvidia Vera Rubin platform.8
AMD: AMD provides OpenAI with warrants to purchase up to 160 million AMD shares (approximately 10% of the company) at one cent per share. In exchange, OpenAI commits to purchasing 6 gigawatts of AMD Instinct GPUs, representing $90 billion in cumulative hardware revenue potential. The first 1 gigawatt deployment starts in H2 2026.9
Oracle: OpenAI commits to paying Oracle $60 billion annually for five years (2027-2031) for cloud infrastructure, totaling $300 billion. The contract is part of Oracle’s $500 billion Stargate data center buildout. Larry Ellison stated on Oracle’s earnings call: “The capability we have is to build these huge AI clusters with technology that actually runs faster & more economically than our competitors.”10
Amazon AWS: OpenAI commits to $38 billion over seven years (2025-2031) for cloud infrastructure & compute capacity on Amazon Web Services. The agreement, signed November 3, 2025, provides immediate access to hundreds of thousands of Nvidia GB200 & GB300 GPUs running on Amazon EC2 UltraServers. All planned capacity is targeted to come online by the end of 2026, with room to expand through 2027 & beyond. Sam Altman stated: “Scaling frontier AI requires massive, reliable compute.”11 This is OpenAI’s first major partnership with AWS, adding to its multi-cloud infrastructure.
CoreWeave: $22.4 billion in committed spending for data center usage rights through 2029, consisting of an $11.9B initial contract, a $4B expansion, & a $6.5B September 2025 expansion.12
Estimation Method
The year-by-year breakdowns above are estimates based on publicly announced deal terms & deployment schedules. Here’s how we calculated them:
It’s hard to model the payments because some of the contracts are hardware spending (Nvidia, AMD, Broadcom) while others are cloud services (Microsoft Azure, Oracle Cloud, AWS), each with different payment structures & deployment timelines. Additionally, some contracts include chip design costs (like Broadcom’s custom AI accelerators), further complicating the spending distribution.
Contract Structures: The estimates reflect accelerating deployment after 2027, with 2025-2027 as the ramp-up period & 2028-2030 as peak deployment, growing 124%, 54%, & 70% year over year. Oracle’s $300B contract is modeled with a 2027 ramp-up ($25B) before reaching its full $60B annual run rate (detailed below): Oracle’s massive data center buildout requires initial site preparation & infrastructure scaling before reaching full capacity. All other vendors follow deployment-based patterns, starting from small initial commitments ($2B-$4B) & accelerating as large-scale infrastructure deployments come online. The spending curves reflect physical & financial realities: you can’t deploy 10 gigawatts of infrastructure overnight.
Microsoft ($250B total): Based on the incremental Azure services commitment announced in October 2025. Contract duration not disclosed. We estimated 7 years (2025-2031), consistent with AWS’s 7-year contract structure. Spending starts at $2B in 2025 & accelerates after 2027: $10B (2028), $20B (2029), $60B (2030), with the remaining spend allocated to 2031 as large-scale deployments peak.7
Nvidia ($100B total): Based on the 10 gigawatt commitment & the H2 2026 deployment start on the Nvidia Vera Rubin platform. Contract duration not disclosed. We model spending ramping from $2B in 2026 to $49B in 2030, completing the $100B within the table window.8
AMD ($90B total): Based on the 6 gigawatt commitment & H2 2026 deployment start. AMD’s partnership announcement explicitly states “$90 billion in cumulative hardware revenue potential” from this agreement.9
Oracle ($300B total): The most concrete commitment, $60B annually for five years, as stated in multiple Oracle earnings calls & confirmed by CEO Safra Catz. We model a ramp-up year in 2027 ($25B) as infrastructure comes online, the full $60B annual rate in 2028-2031, then $35B in 2032 to reach the $300B total. This reflects Oracle’s Stargate data center buildout timeline & realistic deployment constraints.13
Amazon AWS ($38B total): Based on the announced 7-year agreement signed November 3, 2025. OpenAI commits to $38B over seven years for access to hundreds of thousands of Nvidia GB200 & GB300 GPUs on Amazon EC2 UltraServers. Deployment begins immediately, with all capacity targeted for the end of 2026.11 We estimated deployment spending with gradual growth: $2B in 2025 (a partial year starting in November), ramping from $3B in 2026 to $8B in 2030 per the table above, with the remaining ~$10B allocated to 2031.
CoreWeave ($22.4B total): Based on the reported $11.9B initial contract, a $4B expansion in May 2025, plus a $6.5B expansion in September 2025, bringing total contract value to $22.4B.14 Note: CoreWeave also provides compute capacity to Google Cloud, creating an interesting three-way dynamic where Google resells CoreWeave’s Nvidia-powered infrastructure.15
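Because the table above stops at 2030 while several contracts run to 2031-2032, the column sums are smaller than the contract totals. A quick reconciliation using the table values & announced totals (in $B; CoreWeave rounded to $22B as in the table; how the remainder splits across 2031-2032 is not modeled here):

```python
# Announced contract totals vs. modeled 2025-2030 spending, in $B; the difference lands in 2031-2032.
contract_totals = {"MSFT": 250, "ORCL": 300, "AVGO": 350, "NVDA": 100, "AMD": 90, "AWS": 38, "CRWE": 22}
modeled_2025_2030 = {"MSFT": 100, "ORCL": 205, "AVGO": 105, "NVDA": 100, "AMD": 90, "AWS": 28, "CRWE": 22}

for vendor, total in contract_totals.items():
    print(vendor, total - modeled_2025_2030[vendor])
# MSFT 150, ORCL 95, AVGO 245, NVDA 0, AMD 0, AWS 10, CRWE 0
```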
These estimates carry ±30-50% error margins. Actual spending depends on deployment pace, hardware costs, & contract amendments.
Accounting Treatment: Design vs. Manufacturing
A critical complication in estimating OpenAI’s cost structure is determining how much of a chip-maker deal like Broadcom’s represents design services versus manufactured hardware, & how each piece flows through the income statement.
The Broadcom Deal Structure:
OpenAI & Broadcom collaborated for 18 months designing custom AI accelerators optimized for inference. OpenAI designs the chips, Broadcom provides IP licensing & engineering services, & TSMC manufactures using 3nm process technology. The $350B estimated value represents deployment targeted through 2029, though we spread the associated spending through 2032 above; financial terms weren’t disclosed.
Two Different Accounting Treatments:
Phase 1: Design & Development (R&D Expense)
- Chip design costs, IP licensing, & engineering services from Broadcom
- Under US GAAP, all internal R&D must be expensed as incurred
- Industry benchmarks: advanced AI chip design runs $200M-$500M for NRE (non-recurring engineering) at 3nm process nodes
- This hits the income statement immediately as R&D expense, not COGS
- Likely represents at most 5-10% of total deal value
Phase 2: Manufacturing & Deployment (Capitalized → COGS)
- TSMC wafer fabrication, assembly, testing, packaging
- Broadcom networking & rack integration
- Once manufactured, chips are capitalized as Property, Plant & Equipment
- Depreciation over useful life (typically 3-5 years) flows to Cost of Revenues (COGS)
- Likely represents 90-95% of total deal value (~$35B per gigawatt based on industry benchmarks)
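A minimal sketch of how the two treatments hit the income statement over time, using illustrative numbers only (the ~$35B-per-gigawatt benchmark from above & a hypothetical 4-year straight-line depreciation schedule within the 3-5 year range; none of this reflects disclosed contract terms):

```python
# Illustrative only: 1 GW of custom accelerators at the ~$35B-per-GW benchmark cited above.
hardware_cost_b = 35.0       # capitalized as PP&E, then depreciated into COGS
design_nre_b = 0.35          # hypothetical NRE figure, expensed to R&D immediately
useful_life_years = 4        # hypothetical straight-line schedule within the 3-5 year range

annual_depreciation_b = hardware_cost_b / useful_life_years
print(f"Year 1 R&D expense: ${design_nre_b:.2f}B (hits operating income immediately)")
print(f"Years 1-{useful_life_years} COGS: ${annual_depreciation_b:.2f}B/yr (smoothed via depreciation)")
```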
Why This Matters for Gross Margins:
The table showing implied revenue at OpenAI’s target margins assumes all infrastructure spending flows through COGS. This simplification works reasonably well because:
- Hardware manufacturing dominates total costs (90-95%)
- Cloud service deals (AWS, Oracle, Microsoft) are immediate COGS expenses
- Design NRE, while expensive ($200M-500M), is relatively small versus $1T+ in total spending
However, the true accounting is more complex: upfront design costs hit R&D immediately (worsening near-term operating margins), while manufactured chips depreciate over 3-5 years (smoothing COGS impact). Without disclosed contract terms splitting design services from hardware purchases, precise gross margin modeling remains challenging.
Comparison to Cloud Deals:
AWS ($38B/7 years) & Oracle ($60B/year) are cloud services: immediate COGS expenses with no capitalization benefit. The AWS deal alone represents ~$5.4B/year in direct COGS, making it particularly impactful for gross margins despite being a smaller absolute dollar commitment than the hardware contracts.
Footnotes
1. Calculated from announced deals with Broadcom, Microsoft, Nvidia, AMD, Oracle, Amazon AWS & CoreWeave. CNBC, “A guide to the $1 trillion-worth of AI deals between OpenAI, Nvidia & others,” October 15, 2025. https://www.cnbc.com/2025/10/15/a-guide-to-1-trillion-worth-of-ai-deals-between-openai-nvidia.html
2. Deal breakdowns: Broadcom ($350B estimated for 10 GW), Oracle ($300B contract), Microsoft ($250B Azure commitment), Nvidia ($100B commitment), AMD ($90B for 6 GW), Amazon AWS ($38B), CoreWeave ($22B). The American Prospect, “The AI Ouroboros,” October 15, 2025. https://prospect.org/power/2025-10-15-nvidia-openai-ai-oracle-chips/
3. See the Appendix: Estimation Methodology section below for detailed assumptions & methodology.
4. The Information, “OpenAI’s Revenue Could Reach $100 Billion in 2027, Altman Suggests,” November 3, 2025. https://www.theinformation.com/briefings/openais-revenue-reach-100-billion-2027-altman-suggests Sam Altman said on a podcast with Brad Gerstner that OpenAI’s revenue could reach $100B in 2027, earlier than the company’s previous 2028 projection.
5. OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029. The Information, “Investors Float Deal Valuing Anthropic at $100 Billion,” November 2025. https://www.theinformation.com/articles/investors-float-deal-valuing-anthropic-100-billion For comparison, Google Q3 2024 gross margin was 57.7% & Meta Q3 2024 was 81%. https://abc.xyz/assets/94/0e/637c7ab7438fab95911fdc9c2517/2024q3-alphabet-earnings-release.pdf https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-Third-Quarter-2024-Results/default.aspx
6. OpenAI & Broadcom, “OpenAI & Broadcom announce strategic collaboration to deploy 10 gigawatts of OpenAI-designed AI accelerators,” October 13, 2025. https://openai.com/index/openai-and-broadcom-announce-strategic-collaboration/ Financial terms not disclosed; estimated value of $350B based on industry benchmarks of $35B per gigawatt. We estimate a 7-year deployment (2026-2032) based on custom chip manufacturing timelines & data center buildout complexity.
7. OpenAI, “The next chapter of the Microsoft–OpenAI partnership,” October 2025. https://openai.com/index/next-chapter-of-microsoft-openai-partnership/
8. NVIDIA Newsroom, “OpenAI & NVIDIA Announce Strategic Partnership to Deploy 10 Gigawatts of NVIDIA Systems,” October 2025. https://nvidianews.nvidia.com/news/openai-and-nvidia-announce-strategic-partnership-to-deploy-10gw-of-nvidia-systems
9. AMD Press Release, “AMD & OpenAI Announce Strategic Partnership to Deploy 6 Gigawatts of AMD GPUs,” October 6, 2025. https://www.amd.com/en/newsroom/press-releases/2025-10-6-amd-and-openai-announce-strategic-partnership-to-d.html
10. Qz, “Oracle’s massive AI power play,” September 2025. https://qz.com/oracle-earnings-ai-openai-cloud-power-larry-ellison Larry Ellison earnings call quote on technology & economic advantages.
11. Amazon Web Services, “AWS announces new partnership to power OpenAI’s AI workloads,” November 3, 2025. https://www.aboutamazon.com/news/aws/aws-open-ai-workloads-compute-infrastructure OpenAI signs $38 billion deal with Amazon AWS over seven years for hundreds of thousands of Nvidia GB200 & GB300 GPUs.
12. Bloomberg, “CoreWeave Expands OpenAI Deals to as Much as $22.4 Billion,” September 25, 2025. https://www.bloomberg.com/news/articles/2025-09-25/coreweave-expands-deals-with-openai-to-as-much-as-22-4-billion
13. CNBC, “‘We’re all kind of in shock.’ Oracle’s revenue projections leave analysts slack-jawed,” September 9, 2025. https://www.cnbc.com/2025/09/09/were-all-kind-of-in-shock-oracle-projections-analysts-slackjawed.html Oracle CEO Safra Catz confirmed multiple large cloud contracts including $60B annual starting FY2028.
14. CoreWeave expansions with OpenAI: $11.9B initial contract (March 2025), $4B expansion (May 2025), $6.5B expansion (September 2025), totaling $22.4B. Bloomberg, “CoreWeave Expands OpenAI Deals to as Much as $22.4 Billion,” September 2025.
15. Reuters, “CoreWeave to offer compute capacity in Google’s new cloud deal with OpenAI,” June 2025. CoreWeave signed Google as a customer in Q1 2025, creating a three-way infrastructure arrangement.