Google gemini-2.5-pro vs Together AI llama-3.3-70b-instruct-turbo — list pricing, worked examples, interactive calculator. Verified 2026-05-15.
| List price | gemini-2.5-pro | llama-3.3-70b-instruct-turbo |
|---|---|---|
| Input / 1M tokens | $1.25 | $0.88 |
| Output / 1M tokens | $10.00 | $0.88 |
| Cache write / 1M tokens | — | — |
| Cache read / 1M tokens | — | — |
Sources: Google, Together AI. Verified 2026-05-15. Re-verify before relying on these numbers for budget commits.
| Workload shape | gemini-2.5-pro | llama-3.3-70b-instruct-turbo | Cheaper |
|---|---|---|---|
| 1k in + 500 out (tool call) | $0.0063 | $0.0013 | llama-3.3-70b-instruct-turbo (4.7× cheaper) |
| 10k in + 1k out (RAG) | $0.0225 | $0.0097 | llama-3.3-70b-instruct-turbo (2.3× cheaper) |
| 100k in + 1k out (long doc) | $0.135 | $0.0889 | llama-3.3-70b-instruct-turbo (1.5× cheaper) |
| 2k in + 4k out (long gen) | $0.0425 | $0.0053 | llama-3.3-70b-instruct-turbo (8.0× cheaper) |
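The table follows directly from the list rates: cost = (input tokens ÷ 1M) × input rate + (output tokens ÷ 1M) × output rate. Here is a minimal sketch in TypeScript that reproduces the rows above; the rates are hard-coded from the pricing table and should be re-verified before use.

```typescript
// Per-million-token list rates from the table above (USD). Re-verify before budgeting.
const RATES = {
  "gemini-2.5-pro": { input: 1.25, output: 10.0 },
  "llama-3.3-70b-instruct-turbo": { input: 0.88, output: 0.88 },
} as const;

type Model = keyof typeof RATES;

// cost = (inputTokens / 1e6) * inputRate + (outputTokens / 1e6) * outputRate
function requestCost(model: Model, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  return (inputTokens / 1e6) * r.input + (outputTokens / 1e6) * r.output;
}

// Reproduce the workload-shape rows.
const workloads = [
  { label: "1k in + 500 out (tool call)", in: 1_000, out: 500 },
  { label: "10k in + 1k out (RAG)", in: 10_000, out: 1_000 },
  { label: "100k in + 1k out (long doc)", in: 100_000, out: 1_000 },
  { label: "2k in + 4k out (long gen)", in: 2_000, out: 4_000 },
];

for (const w of workloads) {
  const g = requestCost("gemini-2.5-pro", w.in, w.out);
  const l = requestCost("llama-3.3-70b-instruct-turbo", w.in, w.out);
  console.log(`${w.label}: $${g.toFixed(4)} vs $${l.toFixed(4)} (${(g / l).toFixed(1)}x)`);
}
```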
gemini-2.5-pro costs $1.25 in / $10.00 out; llama-3.3-70b-instruct-turbo costs $0.88 in / $0.88 out. Completion-heavy workloads weight the output price more heavily. Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup: a JSONL log on disk you can query via CLI or MCP.
Install: `npm i tokenmark`
Further reading: Google full pricing · Together AI full pricing · All-provider comparison
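If you want to see what per-call cost attribution looks like before installing anything, the core idea is simple: wrap each completion call, compute its cost from the usage counts the provider returns, and append one JSONL record per call. The sketch below illustrates that pattern generically; it is not tokenmark's API, and the `Usage` shape is an assumption modeled on common provider responses.

```typescript
import { appendFileSync } from "node:fs";

// Assumed usage shape; most provider SDKs return something similar, but check yours.
interface Usage { inputTokens: number; outputTokens: number; }

interface CostRecord {
  ts: string;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

// Append one JSONL record per call so the log can be grepped or loaded later.
function logCost(
  provider: string,
  model: string,
  usage: Usage,
  inputRatePerM: number,
  outputRatePerM: number,
  path = "costs.jsonl",
): CostRecord {
  const record: CostRecord = {
    ts: new Date().toISOString(),
    provider,
    model,
    inputTokens: usage.inputTokens,
    outputTokens: usage.outputTokens,
    costUsd:
      (usage.inputTokens / 1e6) * inputRatePerM +
      (usage.outputTokens / 1e6) * outputRatePerM,
  };
  appendFileSync(path, JSON.stringify(record) + "\n");
  return record;
}

// Example: a RAG-shaped call against llama-3.3-70b-instruct-turbo at $0.88 / $0.88 per 1M tokens.
logCost("together", "llama-3.3-70b-instruct-turbo", { inputTokens: 10_000, outputTokens: 1_000 }, 0.88, 0.88);
```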