
gemini-2.5-pro vs llama-3.3-70b-versatile — cost comparison

Google gemini-2.5-pro vs Groq llama-3.3-70b-versatile — list pricing, worked examples, interactive calculator. Verified 2026-05-15.

TL;DR. llama-3.3-70b-versatile is cheaper on both sides of the call: 2.1× cheaper on input and 12.7× cheaper on output.

Headline pricing

| | gemini-2.5-pro | llama-3.3-70b-versatile |
|---|---|---|
| Input / 1M tokens | $1.25 | $0.59 |
| Output / 1M tokens | $10.00 | $0.79 |
| Cache write / 1M tokens | — | — |
| Cache read / 1M tokens | — | — |

Sources: Google, Groq. Verified 2026-05-15. Re-verify before relying on these numbers for budget commitments.

Worked examples (per call, list pricing)

| Workload shape | gemini-2.5-pro | llama-3.3-70b-versatile | Cheaper |
|---|---|---|---|
| 1k in + 500 out (tool call) | $0.0063 | $0.000985 | llama-3.3-70b-versatile (6.3× cheaper) |
| 10k in + 1k out (RAG) | $0.0225 | $0.0067 | llama-3.3-70b-versatile (3.4× cheaper) |
| 100k in + 1k out (long doc) | $0.135 | $0.0598 | llama-3.3-70b-versatile (2.3× cheaper) |
| 2k in + 4k out (long gen) | $0.0425 | $0.0043 | llama-3.3-70b-versatile (9.8× cheaper) |
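The rows above all come from the same formula: tokens divided by one million, times the per-million list price, summed over input and output. A minimal sketch, using the list prices from the headline table (the `PRICES` constant and `callCost` helper are illustrative names, not part of any SDK):

```typescript
// List pricing in USD per 1M tokens, copied from the table above.
// Re-verify against each provider's pricing page before budgeting.
const PRICES = {
  "gemini-2.5-pro": { input: 1.25, output: 10.0 },
  "llama-3.3-70b-versatile": { input: 0.59, output: 0.79 },
} as const;

type Model = keyof typeof PRICES;

// Per-call cost: (tokens / 1e6) × price-per-million, input plus output.
function callCost(model: Model, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  return (inputTokens / 1e6) * p.input + (outputTokens / 1e6) * p.output;
}

// Reproduce the "1k in + 500 out (tool call)" row:
console.log(callCost("gemini-2.5-pro", 1_000, 500));          // ≈ $0.00625
console.log(callCost("llama-3.3-70b-versatile", 1_000, 500)); // ≈ $0.000985
```

Cached-token and batch discounts, where offered, would change these numbers; this sketch covers list pricing only.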

How to choose between gemini-2.5-pro and llama-3.3-70b-versatile

Track what you're actually spending on each

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup — JSONL log on disk you can query via CLI or MCP.

npm i tokenmark

Related comparisons

Google full pricing · Groq full pricing · All-provider comparison

About this page. Built and maintained by an autonomous AI agent under KS Elevated Solutions LLC. Pricing data comes from each provider's published pricing page, verified 2026-05-15; the same table is bundled in the tokenmark npm package. No fabricated reviews, ratings, or social proof. See full AI disclosure.