Together AI llama-3.3-70b-instruct-turbo vs Groq llama-3.3-70b-versatile: list pricing, worked examples, and an interactive calculator. Verified 2026-05-15.
| Rate | llama-3.3-70b-instruct-turbo | llama-3.3-70b-versatile |
|---|---|---|
| Input / 1M tokens | $0.88 | $0.59 |
| Output / 1M tokens | $0.88 | $0.79 |
| Cache write / 1M tokens | — | — |
| Cache read / 1M tokens | — | — |
Sources: Together AI and Groq list pricing pages. Verified 2026-05-15; re-verify before committing these numbers to a budget.
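Per-request cost is linear: token count divided by one million, times the list rate for that direction. A minimal TypeScript sketch with the rates above hard-coded (the `costUSD` helper is ours, not a provider API):

```ts
// List prices in USD per 1M tokens, from the table above. Verified 2026-05-15.
const PRICES = {
  "llama-3.3-70b-instruct-turbo": { input: 0.88, output: 0.88 }, // Together AI
  "llama-3.3-70b-versatile": { input: 0.59, output: 0.79 },      // Groq
} as const;

type Model = keyof typeof PRICES;

/** Cost in USD for one request: linear in input and output token counts. */
function costUSD(model: Model, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// Sanity check against the first worked example below: 1k in + 500 out.
console.log(costUSD("llama-3.3-70b-instruct-turbo", 1_000, 500)); // ≈ 0.00132
console.log(costUSD("llama-3.3-70b-versatile", 1_000, 500));      // ≈ 0.000985
```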
| Workload shape | llama-3.3-70b-instruct-turbo | llama-3.3-70b-versatile | Cheaper |
|---|---|---|---|
| 1k in + 500 out (tool call) | $0.0013 | $0.000985 | llama-3.3-70b-versatile (1.3× cheaper) |
| 10k in + 1k out (RAG) | $0.0097 | $0.0067 | llama-3.3-70b-versatile (1.4× cheaper) |
| 100k in + 1k out (long doc) | $0.0889 | $0.0598 | llama-3.3-70b-versatile (1.5× cheaper) |
| 2k in + 4k out (long gen) | $0.0053 | $0.0043 | llama-3.3-70b-versatile (1.2× cheaper) |
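Every row follows from the same linear formula. A self-contained loop that reproduces the table (rates hard-coded from the pricing table above; labels match the rows):

```ts
// Rates in USD per 1M tokens, from the pricing table above.
const turbo = { input: 0.88, output: 0.88 };     // Together AI llama-3.3-70b-instruct-turbo
const versatile = { input: 0.59, output: 0.79 }; // Groq llama-3.3-70b-versatile

const workloads = [
  { label: "1k in + 500 out (tool call)", input: 1_000, output: 500 },
  { label: "10k in + 1k out (RAG)", input: 10_000, output: 1_000 },
  { label: "100k in + 1k out (long doc)", input: 100_000, output: 1_000 },
  { label: "2k in + 4k out (long gen)", input: 2_000, output: 4_000 },
];

for (const w of workloads) {
  const a = (w.input * turbo.input + w.output * turbo.output) / 1e6;
  const b = (w.input * versatile.input + w.output * versatile.output) / 1e6;
  console.log(`${w.label}: $${a.toFixed(6)} vs $${b.toFixed(6)} (${(a / b).toFixed(1)}x)`);
}
```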
llama-3.3-70b-instruct-turbo costs $0.88 in / $0.88 out; llama-3.3-70b-versatile costs $0.59 in / $0.79 out. Completion-heavy workloads weight the output price more, and Groq's output rate ($0.79) is much closer to Together's than its input rate ($0.59) is, which is why the long-generation gap (1.2×) is narrower than the long-document gap (1.5×).

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup: a JSONL log on disk you can query via CLI or MCP.
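tokenmark's actual API isn't shown here, so the sketch below is a generic illustration of the technique described, not tokenmark's interface: wrap the completion call, price the usage counts the provider returns, and append one JSON line per call to a local log. The `CompletionResult` shape and `withCostLog` name are assumptions for illustration.

```ts
import { appendFileSync } from "node:fs";

// Illustrative response shape; real providers report usage similarly
// (e.g. OpenAI-compatible `usage.prompt_tokens` / `usage.completion_tokens`).
interface Usage { inputTokens: number; outputTokens: number; }
interface CompletionResult { text: string; usage: Usage; }

// USD per 1M tokens, from the pricing table above.
const RATES: Record<string, { input: number; output: number }> = {
  "together/llama-3.3-70b-instruct-turbo": { input: 0.88, output: 0.88 },
  "groq/llama-3.3-70b-versatile": { input: 0.59, output: 0.79 },
};

/** Wrap any completion function so every call appends a cost record to a JSONL log. */
function withCostLog<A extends unknown[]>(
  model: string,
  call: (...args: A) => Promise<CompletionResult>,
  logPath = "llm-costs.jsonl",
) {
  return async (...args: A): Promise<CompletionResult> => {
    const result = await call(...args);
    const r = RATES[model];
    if (!r) throw new Error(`no rate card for ${model}`);
    const costUSD =
      (result.usage.inputTokens * r.input + result.usage.outputTokens * r.output) / 1e6;
    // One self-describing JSON object per line: easy to grep, tail, or load later.
    appendFileSync(
      logPath,
      JSON.stringify({ ts: new Date().toISOString(), model, ...result.usage, costUSD }) + "\n",
    );
    return result;
  };
}
```

Because each call lands as one line in a plain file on disk, the log can be queried offline with standard tools, which is the "no platform, no signup" property the pitch refers to.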
`npm i tokenmark`

Together AI full pricing · Groq full pricing · All-provider comparison