November 2025 — OpenAI has entered a multi-year cloud partnership with Amazon Web Services valued at roughly $38 billion over seven years. The agreement gives OpenAI access to large fleets of high-end accelerators to train and serve upcoming models, with operations on AWS beginning immediately and more capacity coming online through late 2026.

Why it matters

  • Capacity assurance: Locks in predictable GPU capacity for model training and high-throughput inference.
  • Faster release cadence: More reliable access to infrastructure should shorten cycles between major model updates.
  • Provider diversification: Expands beyond a single-cloud footprint, reducing operational concentration risk.

The timeline

  • Now: Workloads begin running on AWS.
  • By end of 2026: Additional capacity phases go live to support larger training runs and global serving.
  • Beyond 2026: Further expansion planned as demand and model sizes grow.

What to watch

  • Training scale: Signs of bigger context windows, longer-horizon planning, and more robust tool use.
  • Inference latency: Whether added capacity lowers tail latencies for consumer and enterprise traffic.
  • Ecosystem impact: Competitive responses from other clouds and chip vendors as AI infrastructure scales up.

Bottom line: The deal signals multi-year confidence in frontier AI and the infrastructure required to power it. Expect faster iteration on model capabilities, alongside continued investment in safety and reliability.