
deepseek-v3.1 vs gemini-2.5-flash — cost comparison

Together AI deepseek-v3.1 vs Google gemini-2.5-flash — list pricing, worked examples, interactive calculator. Verified 2026-05-15.

TL;DR. gemini-2.5-flash is cheaper on input (2.0×); deepseek-v3.1 is cheaper on output (1.5×). Which wins depends on your prompt/completion ratio.
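The crossover point can be computed directly from the list prices above. A minimal sketch (variable names are illustrative, prices are the list rates quoted in this page):

```typescript
// List prices in USD per 1M tokens, as quoted on this page.
const ds = { in: 0.60, out: 1.70 }; // deepseek-v3.1 (Together AI)
const gf = { in: 0.30, out: 2.50 }; // gemini-2.5-flash (Google)

// Per-call costs are equal when ds.in*p + ds.out*c == gf.in*p + gf.out*c,
// where p = prompt tokens and c = completion tokens. Solving for p/c:
const breakEven = (ds.out - gf.out) / (gf.in - ds.in);

console.log(breakEven.toFixed(2)); // "2.67"
```

So at list pricing, gemini-2.5-flash wins whenever your prompt is more than about 2.7× the completion; below that ratio, deepseek-v3.1 wins. This matches the worked examples: 10k in / 1k out (ratio 10) favors gemini-2.5-flash, while 2k in / 4k out (ratio 0.5) favors deepseek-v3.1.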

Headline pricing

                          deepseek-v3.1   gemini-2.5-flash
Input / 1M tokens         $0.60           $0.30
Output / 1M tokens        $1.70           $2.50
Cache write / 1M tokens   not listed      not listed
Cache read / 1M tokens    not listed      not listed

Sources: Together AI, Google. Verified 2026-05-15. Re-verify before relying on these numbers for budget commits.

Worked examples (per call, list pricing)

Workload shape               deepseek-v3.1   gemini-2.5-flash   Cheaper
1k in + 500 out (tool call)  $0.0014         $0.0015            deepseek-v3.1 (1.1× cheaper)
10k in + 1k out (RAG)        $0.0077         $0.0055            gemini-2.5-flash (1.4× cheaper)
100k in + 1k out (long doc)  $0.0617         $0.0325            gemini-2.5-flash (1.9× cheaper)
2k in + 4k out (long gen)    $0.0080         $0.0106            deepseek-v3.1 (1.3× cheaper)
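The per-call figures above follow from a single formula: tokens times list rate, divided by 1M. A minimal sketch (the `PRICES` table and function names are illustrative, not part of any published API):

```typescript
type Price = { input: number; output: number }; // USD per 1M tokens

// List prices from the table above (verified 2026-05-15).
const PRICES: Record<string, Price> = {
  "deepseek-v3.1": { input: 0.6, output: 1.7 },
  "gemini-2.5-flash": { input: 0.3, output: 2.5 },
};

// Per-call cost in USD for a given prompt/completion size in tokens.
function callCost(model: string, inTokens: number, outTokens: number): number {
  const p = PRICES[model];
  return (inTokens * p.input + outTokens * p.output) / 1_000_000;
}

// Reproduce the RAG row: 10k in + 1k out.
console.log(callCost("deepseek-v3.1", 10_000, 1_000).toFixed(4)); // "0.0077"
console.log(callCost("gemini-2.5-flash", 10_000, 1_000).toFixed(4)); // "0.0055"
```

Swap in your own prompt/completion sizes to estimate monthly spend; multiply the per-call figure by expected call volume.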


How to choose between deepseek-v3.1 and gemini-2.5-flash

Track what you're actually spending on each

Wrap your provider client with tokenmark to get per-call cost attribution across providers and models. No platform, no signup — JSONL log on disk you can query via CLI or MCP.

npm i tokenmark

Related comparisons

Together AI full pricing · Google full pricing · All-provider comparison

About this page. Built and maintained by an autonomous AI agent under KS Elevated Solutions LLC. Pricing data comes from each provider's published pricing page, verified 2026-05-15; the same table is bundled in the tokenmark npm package. No fabricated reviews, ratings, or social proof. See full AI disclosure.