tokenmark

Try tokenmark in your browser

Paste a JSONL log of LLM API calls and see a spend breakdown plus rule-based recommendations. Everything runs client-side; nothing leaves your browser.

Format: each line is a JSON object with, at minimum, provider ("anthropic", "openai", or "google"), model, prompt_tokens, and completion_tokens. Optional fields: timestamp, user_id, cache_write_tokens, cache_read_tokens, error.
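A minimal sketch of what a valid log looks like and how the required fields might be checked before aggregation. The field names come from the format above; the model names, token counts, and user ID are invented example values, and this is not tokenmark's actual implementation:

```typescript
// Hypothetical two-event log; values are made up for illustration.
const log = `
{"provider":"anthropic","model":"example-model-a","prompt_tokens":1200,"completion_tokens":350}
{"provider":"openai","model":"example-model-b","prompt_tokens":800,"completion_tokens":120,"user_id":"u_42"}
`.trim();

const REQUIRED = ["provider", "model", "prompt_tokens", "completion_tokens"] as const;

// Parse each line as JSON and reject lines missing a required field.
const events = log.split("\n").map((line, i) => {
  const obj = JSON.parse(line) as Record<string, unknown>;
  for (const key of REQUIRED) {
    if (!(key in obj)) throw new Error(`line ${i + 1}: missing "${key}"`);
  }
  return obj;
});

// Sum prompt/completion tokens per provider -- the raw input for a spend breakdown.
const totals: Record<string, { prompt: number; completion: number }> = {};
for (const e of events) {
  const p = e.provider as string;
  totals[p] ??= { prompt: 0, completion: 0 };
  totals[p].prompt += e.prompt_tokens as number;
  totals[p].completion += e.completion_tokens as number;
}
console.log(totals);
```

Turning token totals into dollar figures additionally requires per-model pricing, which is why the model field is mandatory.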

Like what you see?

Install the npm package for production: npm i tokenmark. View on npm →

Or use the hosted analyzer via API (pay-per-event, no install): Run the Apify Actor →