Anthropic Claude claude-haiku-4-5 vs Together AI qwen-2.5-7b-instruct-turbo — list pricing, worked examples, interactive calculator. Verified 2026-05-15.
| | claude-haiku-4-5 | qwen-2.5-7b-instruct-turbo |
|---|---|---|
| Input / 1M tokens | $1.00 | $0.30 |
| Output / 1M tokens | $5.00 | $0.30 |
| Cache write / 1M tokens | $1.25 | — |
| Cache read / 1M tokens | $0.10 | — |
Sources: Anthropic Claude, Together AI. Verified 2026-05-15. Re-verify before relying on these numbers for budget commits.
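Per-request cost is just token count times the per-million rate. A minimal sketch with the list prices above hard-coded; the `Price` shape and `cost` helper are illustrative names, not tokenmark's API:

```typescript
type Price = {
  inPerM: number;          // USD per 1M input tokens
  outPerM: number;         // USD per 1M output tokens
  cacheWritePerM?: number; // USD per 1M cache-write tokens, if the provider prices them
  cacheReadPerM?: number;  // USD per 1M cache-read tokens, if the provider prices them
};

const HAIKU_4_5: Price = { inPerM: 1.00, outPerM: 5.00, cacheWritePerM: 1.25, cacheReadPerM: 0.10 };
const QWEN_7B_TURBO: Price = { inPerM: 0.30, outPerM: 0.30 };

// USD cost for one request: uncached input, cache traffic (billed at the
// input rate when no separate cache price exists), and output.
function cost(p: Price, inTok: number, outTok: number, cacheWriteTok = 0, cacheReadTok = 0): number {
  return (
    (inTok * p.inPerM +
      outTok * p.outPerM +
      cacheWriteTok * (p.cacheWritePerM ?? p.inPerM) +
      cacheReadTok * (p.cacheReadPerM ?? p.inPerM)) / 1e6
  );
}

cost(HAIKU_4_5, 1000, 500);      // 0.0035 — matches the first worked example below
cost(QWEN_7B_TURBO, 1000, 500);  // 0.00045
```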
| Workload shape | claude-haiku-4-5 | qwen-2.5-7b-instruct-turbo | Cheaper |
|---|---|---|---|
| 1k in + 500 out (tool call) | $0.0035 | $0.000450 | qwen-2.5-7b-instruct-turbo (7.8× cheaper) |
| 10k in + 1k out (RAG) | $0.0150 | $0.0033 | qwen-2.5-7b-instruct-turbo (4.5× cheaper) |
| 100k in + 1k out (long doc) | $0.105 | $0.0303 | qwen-2.5-7b-instruct-turbo (3.5× cheaper) |
| 2k in + 4k out (long gen) | $0.0220 | $0.0018 | qwen-2.5-7b-instruct-turbo (12.2× cheaper) |
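Every row in the table falls out of the same arithmetic, so it is easy to re-derive (or to plug in your own workload shapes). A quick sketch; variable names are illustrative:

```typescript
const HAIKU = { inPerM: 1.00, outPerM: 5.00 };
const QWEN = { inPerM: 0.30, outPerM: 0.30 };

// [label, input tokens, output tokens] — the four workload shapes above.
const shapes: [string, number, number][] = [
  ["tool call", 1_000, 500],
  ["RAG", 10_000, 1_000],
  ["long doc", 100_000, 1_000],
  ["long gen", 2_000, 4_000],
];

const rows = shapes.map(([name, inTok, outTok]) => {
  const haiku = (inTok * HAIKU.inPerM + outTok * HAIKU.outPerM) / 1e6;
  const qwen = (inTok * QWEN.inPerM + outTok * QWEN.outPerM) / 1e6;
  return { name, haiku, qwen, ratio: haiku / qwen };
});

for (const r of rows) {
  console.log(`${r.name}: $${r.haiku.toFixed(4)} vs $${r.qwen.toFixed(4)} (${r.ratio.toFixed(1)}x)`);
}
```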
claude-haiku-4-5 costs $1.00 in / $5.00 out per million tokens; qwen-2.5-7b-instruct-turbo costs $0.30 in / $0.30 out. Workloads that are completion-heavy therefore weigh the output price more.

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup: a JSONL log on disk you can query via CLI or MCP.
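Because the input/output mix sets the gap, the haiku-vs-qwen cost ratio slides between the input-price ratio (1.00/0.30 ≈ 3.3×) and the output-price ratio (5.00/0.30 ≈ 16.7×). A sketch of that blend; the `ratio` helper is illustrative:

```typescript
// Cost ratio (haiku / qwen) as a function of the output share of total tokens.
function ratio(outShare: number): number {
  const haikuPerM = (1 - outShare) * 1.00 + outShare * 5.00; // blended $/1M
  const qwenPerM = (1 - outShare) * 0.30 + outShare * 0.30;  // flat: same in/out price
  return haikuPerM / qwenPerM;
}

ratio(0);    // ≈ 3.3  — pure-input workload
ratio(0.5);  // ≈ 10   — even split
ratio(1);    // ≈ 16.7 — pure-output workload
```

This is why the long-gen row above (two-thirds output) lands near the top of the range at 12.2×, while the long-doc row (mostly input) sits near the bottom at 3.5×.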
`npm i tokenmark`

Try in-browser → · Hosted analyzer → · Anthropic Claude full pricing · Together AI full pricing · All-provider comparison