Live token counter, real cost tracking, and smart prompt suggestions for Claude, ChatGPT, Gemini, and Perplexity. Free forever for the basics.
35M people pay for ChatGPT Plus. Millions more pay for Claude, Gemini, or Perplexity. Every one of them gets the exact same view of their usage: nothing.
Claude.ai, ChatGPT, and Gemini all hide token counts from you. You burn quota without ever knowing how much.
A well-written prompt can save 40-60% of your tokens. But nobody teaches you which patterns burn tokens — until you've burned them.
You use Claude for writing, ChatGPT for chat, Gemini for search. Three subscriptions. Zero unified view of where your money goes.
Everything most people don't know they need until they see it.
A small overlay shows tokens for your current draft, estimated cost, and your session running total — in real time, on every keystroke. Color-coded warnings at 4k and 8k.
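A live counter like this can be sketched in a few lines. The ~4-characters-per-token heuristic and both function names are illustrative assumptions, not TokenEyez internals:

```javascript
// Rough client-side token estimate: English text averages roughly
// 4 characters per token under GPT/Claude-style BPE tokenizers.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Map the running count to the overlay's color-coded warnings
// at the 4k and 8k thresholds.
function warningColor(tokens) {
  if (tokens >= 8000) return "red";    // heavy draft: hard warning
  if (tokens >= 4000) return "amber";  // getting long: soft warning
  return "green";                      // comfortably small
}
```

On every keystroke, an overlay would re-run `estimateTokens` on the draft and recolor itself with `warningColor`.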
Not an estimate. We tap the platform's response stream and read the exact usage.input_tokens counts that Anthropic and OpenAI return. 100% accurate.
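One way to read that metadata from inside a page, sketched under assumptions: a plain JSON response body, field names as Anthropic and OpenAI document them, and an illustrative `extractUsage` helper with a console hand-off standing in for the real overlay:

```javascript
// Wrap the page's fetch so responses can be inspected as they arrive.
const originalFetch = globalThis.fetch;

// Pull token counts out of the `usage` object that Anthropic
// (input_tokens/output_tokens) and OpenAI (prompt_tokens/
// completion_tokens) return alongside the answer.
function extractUsage(json) {
  const u = json && json.usage;
  if (!u) return null;
  return {
    inputTokens: u.input_tokens ?? u.prompt_tokens ?? 0,
    outputTokens: u.output_tokens ?? u.completion_tokens ?? 0,
  };
}

globalThis.fetch = async (...args) => {
  const response = await originalFetch(...args);
  // Clone so the page still receives an unread body.
  response.clone().json()
    .then((json) => {
      const usage = extractUsage(json);
      if (usage) console.log("tokens:", usage); // hand off to the overlay
    })
    .catch(() => { /* non-JSON or streaming body: ignore */ });
  return response;
};
```

Only the metadata object is touched; the cloned body is discarded after the `usage` field is read.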
Before you hit send, get an efficiency score and a one-click rewrite that saves 30-50% tokens. Catches "you're using Opus for a yes/no question."
One view of your entire AI spending — Claude + ChatGPT + Gemini + Perplexity. Budget alerts, monthly reports, ROI tracker.
TokenEyez reads response metadata only — token counts, model names. Your messages, your AI's answers, your conversations: never read, never stored, never sent anywhere. All accounting is local to your browser. Code is open source. Privacy audit is on the roadmap.
The token counter is yours regardless. Pro adds coaching and the cross-platform dashboard.
If something's missing, email us at hello@tre-lab.com.
TokenEyez hooks fetch in your browser to read response metadata only — the usage object that platforms return alongside their answer, which contains numbers like input_tokens: 1247. The actual response text is never inspected, stored, or sent anywhere. The first 100 characters of your prompt are stored locally in browser storage so you can see it in your history; that storage never leaves your machine.

When the platform includes a usage object in its response (Anthropic and OpenAI both do for most accounts), we use that — 100% accurate. While you're typing, we run a fast client-side approximation that's within ~10-15% of the real BPE tokenizer. The overlay's dot turns bright teal for ~4 seconds when real numbers from the API arrive.

Join 500+ AI power users on the waitlist.