Anthropic shipped a context-window extension for Claude 4.7 Opus this week, taking the API limit from 200K to 1M tokens — matching Google’s Gemini 2.5 Pro. Pricing is unchanged: $15 per million input tokens, $75 per million output. Cache hits remain at $1.50/M, which is where the real cost story lives for long-context workloads.
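The pricing arithmetic above can be sketched as a small cost estimator. The rates are the Opus figures quoted in this article; the `cached_fraction` parameter is a modeling convenience for this sketch, not an API field.

```python
# Per-call cost arithmetic from the quoted Opus pricing.
INPUT_PER_M = 15.00     # $ per 1M uncached input tokens
OUTPUT_PER_M = 75.00    # $ per 1M output tokens
CACHE_HIT_PER_M = 1.50  # $ per 1M cache-hit input tokens

def call_cost(input_tokens: int, output_tokens: int,
              cached_fraction: float = 0.0) -> float:
    """Estimated dollar cost of one API call.

    cached_fraction is the share of input tokens served from cache
    (a hypothetical knob for estimation, not an API parameter).
    """
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    dollars = (fresh * INPUT_PER_M
               + cached * CACHE_HIT_PER_M
               + output_tokens * OUTPUT_PER_M) / 1_000_000
    return round(dollars, 2)
```

For a ~425K-token document, a fully cached read (`call_cost(425_000, 2_000, cached_fraction=1.0)`) lands under a dollar, while the same call uncached costs several dollars, which is why cache pricing dominates the long-context cost picture.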

What we tested

We loaded a 597-page commercial litigation brief (about 425K tokens after stripping formatting) and asked Opus to identify the three weakest arguments and suggest counter-evidence. Single shot, no chunking, no RAG. Total cost with caching: $0.84.
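A single-shot request of this kind might be shaped roughly as below, loosely following the Anthropic Messages API's prompt-caching convention of marking a large system prefix as cacheable. The model id, prompt wording, and token limit here are placeholders, not values from our test; check the current API docs before relying on the exact field names.

```python
# Hypothetical request body for a single-shot long-document analysis
# with prompt caching. All literals are illustrative placeholders.

def build_request(brief_text: str) -> dict:
    return {
        "model": "claude-opus-placeholder",  # assumed id; check current docs
        "max_tokens": 4096,
        "system": [
            {
                "type": "text",
                "text": brief_text,  # the full ~425K-token brief, unchunked
                # Marks the large prefix as cacheable so repeat calls bill
                # it at the cache-hit rate instead of the full input rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [
            {
                "role": "user",
                "content": "Identify the three weakest arguments in this "
                           "brief and suggest counter-evidence for each.",
            }
        ],
    }
```

The point of the structure is that the expensive part (the brief) sits in the cacheable prefix, while the cheap, changing part (the question) stays in the message turn.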

The model returned a structured response in 47 seconds that correctly identified two of the three arguments our outside counsel had also flagged. The third took a different angle from the one counsel chose, but a defensible one. Two minor hallucinations: both cited real cases but misattributed the rulings.

Comparison with Gemini 2.5 Pro

We ran the same brief through Gemini 2.5 Pro at 1M context. Cost: $0.31 (Gemini is ~3x cheaper per token). Time: 38 seconds. Quality of analysis: comparable, slightly less depth on the third weak-argument angle. Both made factual errors of similar severity.

For long-document analysis where cost matters, Gemini wins on price. Where the prose quality of the response matters (a client-facing memo, say), Opus still edges ahead. For everyone else, the calculus comes down to which API your stack already integrates with.