
claude-opus-4-7 vs deepseek-v3.1 — cost comparison

Anthropic Claude claude-opus-4-7 vs Together AI deepseek-v3.1 — list pricing, worked examples, interactive calculator. Verified 2026-05-15.

TL;DR. deepseek-v3.1 is cheaper on both input (25.0×) and output (44.1×).

Headline pricing

|                          | claude-opus-4-7 | deepseek-v3.1 |
|--------------------------|-----------------|---------------|
| Input / 1M tokens        | $15.00          | $0.60         |
| Output / 1M tokens       | $75.00          | $1.70         |
| Cache write / 1M tokens  | $18.75          | n/a           |
| Cache read / 1M tokens   | $1.50           | n/a           |
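The TL;DR multiples follow directly from this table. A quick check, using only the list prices above:

```python
# Price ratios behind the TL;DR line, computed from the list prices above
# (USD per 1M tokens).
CLAUDE_IN, CLAUDE_OUT = 15.00, 75.00    # claude-opus-4-7
DEEPSEEK_IN, DEEPSEEK_OUT = 0.60, 1.70  # deepseek-v3.1

input_ratio = CLAUDE_IN / DEEPSEEK_IN      # 15.00 / 0.60 = 25.0
output_ratio = CLAUDE_OUT / DEEPSEEK_OUT   # 75.00 / 1.70 ≈ 44.1

print(f"input:  {input_ratio:.1f}x")
print(f"output: {output_ratio:.1f}x")
```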

Sources: Anthropic Claude, Together AI. Verified 2026-05-15. Re-verify before relying on these numbers for budget commits.

Worked examples (per call, list pricing)

| Workload shape               | claude-opus-4-7 | deepseek-v3.1 | Cheaper                       |
|------------------------------|-----------------|---------------|-------------------------------|
| 1k in + 500 out (tool call)  | $0.0525         | $0.0014       | deepseek-v3.1 (36.2× cheaper) |
| 10k in + 1k out (RAG)        | $0.225          | $0.0077       | deepseek-v3.1 (29.2× cheaper) |
| 100k in + 1k out (long doc)  | $1.575          | $0.0617       | deepseek-v3.1 (25.5× cheaper) |
| 2k in + 4k out (long gen)    | $0.330          | $0.0080       | deepseek-v3.1 (41.2× cheaper) |
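Per-call cost at list pricing is linear in token counts: tokens in each direction times the per-1M rate. The worked examples above can be reproduced with a few lines (prices and workloads taken directly from this page; no caching or discounts assumed):

```python
# Reproduce the worked-example table from the list prices above.
PRICES_PER_M = {  # (input, output) USD per 1M tokens
    "claude-opus-4-7": (15.00, 75.00),
    "deepseek-v3.1": (0.60, 1.70),
}

WORKLOADS = [
    ("1k in + 500 out (tool call)", 1_000, 500),
    ("10k in + 1k out (RAG)", 10_000, 1_000),
    ("100k in + 1k out (long doc)", 100_000, 1_000),
    ("2k in + 4k out (long gen)", 2_000, 4_000),
]

def cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Cost of one call at list pricing: linear in tokens, no caching."""
    price_in, price_out = PRICES_PER_M[model]
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

for label, tokens_in, tokens_out in WORKLOADS:
    a = cost("claude-opus-4-7", tokens_in, tokens_out)
    b = cost("deepseek-v3.1", tokens_in, tokens_out)
    print(f"{label}: ${a:.4f} vs ${b:.4f} ({a / b:.1f}x)")
```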


How to choose between claude-opus-4-7 and deepseek-v3.1

Track what you're actually spending on each

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup — JSONL log on disk you can query via CLI or MCP.

`npm i tokenmark`
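For illustration only, here is a hand-rolled sketch of the idea behind per-call cost attribution: compute each call's cost from list prices and append one JSON record per call to a log file. This is not tokenmark's actual API; the function name and record shape are assumptions made for this sketch.

```python
# Sketch of JSONL-based per-call cost logging. NOT tokenmark's API --
# the names and record fields here are illustrative assumptions.
import json
import time

PRICES_PER_M = {  # (input, output) USD per 1M tokens, from the table above
    "claude-opus-4-7": (15.00, 75.00),
    "deepseek-v3.1": (0.60, 1.70),
}

def log_call(path: str, model: str, tokens_in: int, tokens_out: int) -> dict:
    """Append one cost record to a JSONL file and return the record."""
    price_in, price_out = PRICES_PER_M[model]
    record = {
        "ts": time.time(),
        "model": model,
        "input_tokens": tokens_in,
        "output_tokens": tokens_out,
        "cost_usd": (tokens_in * price_in + tokens_out * price_out) / 1_000_000,
    }
    with open(path, "a") as f:          # append-only: one JSON object per line
        f.write(json.dumps(record) + "\n")
    return record
```

Because each line is a self-contained JSON object, the log can be queried later with any JSONL-aware tool without a database or a hosted platform.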

Related comparisons

Anthropic Claude full pricing · Together AI full pricing · All-provider comparison

About this page. Built and maintained by an autonomous AI agent under KS Elevated Solutions LLC. Pricing data comes from each provider's published pricing page, verified 2026-05-15; the same table is bundled in the tokenmark npm package. No fabricated reviews, ratings, or social proof. See full AI disclosure.