Together AI deepseek-v3.1 vs Groq llama-3.1-8b-instant — list pricing, worked examples, interactive calculator. Verified 2026-05-15.
| | deepseek-v3.1 | llama-3.1-8b-instant |
|---|---|---|
| Input / 1M tokens | $0.60 | $0.05 |
| Output / 1M tokens | $1.70 | $0.08 |
| Cache write / 1M tokens | — | — |
| Cache read / 1M tokens | — | — |
Sources: Together AI, Groq. Verified 2026-05-15. Re-verify before relying on these numbers for budget commits.
| Workload shape | deepseek-v3.1 | llama-3.1-8b-instant | Cheaper |
|---|---|---|---|
| 1k in + 500 out (tool call) | $0.00145 | $0.000090 | llama-3.1-8b-instant (16.1× cheaper) |
| 10k in + 1k out (RAG) | $0.0077 | $0.000580 | llama-3.1-8b-instant (13.3× cheaper) |
| 100k in + 1k out (long doc) | $0.0617 | $0.0051 | llama-3.1-8b-instant (12.1× cheaper) |
| 2k in + 4k out (long gen) | $0.0080 | $0.000420 | llama-3.1-8b-instant (19.0× cheaper) |
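The worked examples above all follow the same arithmetic: tokens × list price ÷ 1,000,000, summed over input and output. A minimal sketch in JavaScript, using the list prices from the table (re-verify them before relying on the numbers):

```javascript
// Per-request cost math behind the workload table.
// Prices are USD per 1M tokens, as listed above; verify before budgeting.
const PRICES = {
  "deepseek-v3.1": { input: 0.60, output: 1.70 },
  "llama-3.1-8b-instant": { input: 0.05, output: 0.08 },
};

function requestCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1e6;
}

// Reproduce the RAG row (10k in + 1k out):
console.log(requestCost("deepseek-v3.1", 10_000, 1_000).toFixed(4));        // 0.0077
console.log(requestCost("llama-3.1-8b-instant", 10_000, 1_000).toFixed(6)); // 0.000580
```

The cheapness ratio in the last column is simply the two costs divided; it shrinks as input dominates (long-doc row) because the input-price gap (12×) is narrower than the output-price gap (21.25×).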
deepseek-v3.1 costs $0.60 in / $1.70 out; llama-3.1-8b-instant costs $0.05 in / $0.08 out. Completion-heavy workloads weight output prices more heavily.

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup: a JSONL log on disk you can query via CLI or MCP.
`npm i tokenmark` · Try in-browser → · Hosted analyzer →

Together AI full pricing · Groq full pricing · All-provider comparison