Paste a JSONL log of LLM API calls and see the spend breakdown plus rule-based recommendations. Runs entirely client-side. Nothing leaves your browser.
Required per event: provider ("anthropic", "openai", or "google"), model, prompt_tokens, completion_tokens. Optional: timestamp, user_id, cache_write_tokens, cache_read_tokens, error.
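For example, a two-event log might look like this (model names and token counts are illustrative):

```jsonl
{"provider": "anthropic", "model": "claude-3-5-sonnet-20241022", "prompt_tokens": 1200, "completion_tokens": 350}
{"provider": "openai", "model": "gpt-4o-mini", "prompt_tokens": 800, "completion_tokens": 120, "timestamp": "2025-01-15T09:30:00Z", "user_id": "u_42", "cache_read_tokens": 512}
```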
Install the npm package for production: npm i tokenmark. View on npm →
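The package's exports aren't documented in this section, so the sketch below is an assumption: it supposes tokenmark exposes an `analyze` function that takes an array of parsed event objects and returns a report. Check the npm README for the real interface.

```ts
// Hypothetical usage sketch -- the real tokenmark API may differ.
import { analyze } from "tokenmark"; // assumed export name
import { readFileSync } from "node:fs";

// Parse a JSONL log: one event object per line, blank lines skipped.
const events = readFileSync("llm_calls.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "")
  .map((line) => JSON.parse(line));

// Assumed signature: (events) => spend report with recommendations.
const report = analyze(events);
console.log(report);
```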
Or use the hosted analyzer via API (pay-per-event, no install): Run the Apify Actor →
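A minimal sketch of calling the hosted analyzer through Apify's standard run-sync API. The Actor ID, input field name, and output shape below are placeholders, not the Actor's documented contract; see the Actor's page for the real values.

```ts
import { readFileSync } from "node:fs";

// Placeholder Actor ID -- replace with the real one from the Actor's page.
const ACTOR_ID = "username~tokenmark-analyzer";
const token = process.env.APIFY_TOKEN; // your Apify API token

const jsonlString = readFileSync("llm_calls.jsonl", "utf8");

// Apify's run-sync-get-dataset-items endpoint runs the Actor and
// returns its dataset items in one request.
const res = await fetch(
  `https://api.apify.com/v2/acts/${ACTOR_ID}/run-sync-get-dataset-items?token=${token}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Assumed input shape: the Actor accepts the raw JSONL log as a string field.
    body: JSON.stringify({ log: jsonlString }),
  }
);

const items = await res.json(); // dataset items holding the spend breakdown
console.log(items);
```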