Real-world compression statistics from 11 days of daily use with Claude Code
Where the 9.9 million saved tokens came from:
| Tool | Calls Compressed | Calls Passed | Before (chars) | After (chars) | Reduction | Avg Latency |
|---|---|---|---|---|---|---|
| browser_take_screenshot | 181 | — | 38.0M | 77k | 99.8% | 735ms |
| browser_click | 604 | 39 | 701k | 557k | 20.6% | 19ms |
| browser_navigate | 458 | 28 | 560k | 422k | 24.6% | 28ms |
| browser_snapshot | 383 | 36 | 1.2M | 863k | 27.1% | 20ms |
| browser_wait_for | 205 | 40 | 638k | 495k | 22.3% | 21ms |
| browser_evaluate | 3 | 297 | 2.6k | 2.6k | 0.3% | 16ms |
| browser_run_code | 39 | 272 | 27k | 27k | 0.5% | 26ms |
| db_query | 36 | — | 534k | 4.9k | 99.1% | 1ms |
| notion-search | 4 | — | 19.5k | 1.5k | 92.3% | 22ms |
| query (D1) | 12 | — | 18.6k | 1.1k | 94.0% | 10ms |
[chart] How much each of the 1,761 DOM snapshots was reduced.
[chart] Quality of extracted text across the 181 screenshots processed.
Zero empty OCR results — every screenshot produced usable text. "Short" results are typically screenshots with minimal text content (dialogs, confirmations, loading states).
[chart] Compressed events per day and chars saved.
Compression isn't just about tokens. It changes how Claude interacts with data.
Instead of a 210k-char base64 blob that Claude processes as an image, it receives semantic text. Claude can search, compare, quote, and reason about page content across multiple screenshots.
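That swap can be sketched as a small hook function. The post doesn't name the OCR engine it uses, so this sketch takes the engine as an injected callable; the function and parameter names are illustrative, not the hook's actual code:

```python
import base64
from typing import Callable

def compress_screenshot(b64_png: str, ocr: Callable[[bytes], str]) -> str:
    """Replace a base64 screenshot payload with the text OCR extracts from it.

    `ocr` is whatever engine the hook is configured with (e.g. Tesseract);
    injecting it keeps the sketch engine-agnostic.
    """
    image_bytes = base64.b64decode(b64_png)
    # A ~210k-char base64 blob typically collapses to a few hundred
    # chars of searchable, quotable text.
    return ocr(image_bytes).strip()
```

Because the result is plain text, it flows into the transcript like any other tool output instead of being billed as an image.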
500-row query results become a schema plus 3 example rows. For understanding structure, writing follow-up queries, or debugging, the schema is more useful than scrolling through raw data.
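That summarization step might look like the following sketch (the function name and output format are illustrative, not the hook's actual code):

```python
def summarize_rows(rows: list[dict], sample: int = 3) -> str:
    """Collapse a large query result into its schema plus a few example rows."""
    if not rows:
        return "(0 rows)"
    # Infer a schema from the first row's keys and value types.
    schema = ", ".join(f"{k}: {type(v).__name__}" for k, v in rows[0].items())
    examples = "\n".join(str(r) for r in rows[:sample])
    return f"{len(rows)} rows\nschema: {schema}\nexamples:\n{examples}"
```

A 500-row result collapses to a handful of lines while keeping everything needed to write the next query.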
Inline [ref=eNN] markers are extracted into a compact mapping table. Claude still clicks elements by ref, but the text reads more naturally.
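A sketch of that extraction, assuming markers take the literal form `[ref=eNN]` appended to each element line (the regex and return shape are assumptions, not the hook's actual code):

```python
import re

REF = re.compile(r"\s*\[ref=(e\d+)\]")

def extract_refs(snapshot: str) -> tuple[str, dict[str, str]]:
    """Strip inline [ref=eNN] markers from a snapshot, returning the
    cleaned text plus a ref -> element-text mapping table."""
    refs: dict[str, str] = {}
    cleaned_lines = []
    for line in snapshot.splitlines():
        for ref in REF.findall(line):
            refs[ref] = REF.sub("", line).strip()
        cleaned_lines.append(REF.sub("", line))
    return "\n".join(cleaned_lines), refs
```

The mapping table preserves clickability while the prose stays readable.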
1,254 small results passed through untouched: browser_evaluate, browser_fill_form, browser_type, and similar tools produce outputs that are already compact and shouldn't be touched.
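The pass-through decision can be sketched as a simple size gate (the threshold below is hypothetical; the post doesn't state the actual cutoff):

```python
PASS_THROUGH_BYTES = 1_000  # hypothetical cutoff, not the hook's real value

def should_compress(output: str) -> bool:
    """Outputs under the threshold pass through untouched; per the stats
    above, most browser_evaluate/browser_type results land in that bucket."""
    return len(output) > PASS_THROUGH_BYTES
```

A gate like this explains the table above: tools with mostly small outputs show near-zero reduction because almost everything passes through.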
The fail-safe design ensures the hook never makes things worse: if compression fails or increases size, the original content is returned unchanged. In 11 days of continuous use, there were zero instances of data loss or corruption.
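That fail-safe can be expressed as a small wrapper around any compressor (a sketch of the design, not the hook's actual code):

```python
from typing import Callable

def fail_safe(compress: Callable[[str], str], original: str) -> str:
    """Never make things worse: on any exception, or if the 'compressed'
    result is not actually smaller, return the original unchanged."""
    try:
        result = compress(original)
    except Exception:
        return original
    return result if len(result) < len(original) else original
```

Wrapping every compressor this way means a bad OCR pass or a failed parse degrades to a no-op rather than corrupting the tool output.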
The ten largest individual saves:

| # | Tool | Before (chars) | After (chars) | Saved | Reduction |
|---|---|---|---|---|---|
| 1 | browser_take_screenshot | 471,037 | 1,405 | 469,632 | 99.7% |
| 2 | browser_take_screenshot | 430,814 | 1,269 | 429,545 | 99.7% |
| 3 | browser_take_screenshot | 430,810 | 1,265 | 429,545 | 99.7% |
| 4 | browser_take_screenshot | 375,522 | 960 | 374,562 | 99.7% |
| 5 | browser_take_screenshot | 375,517 | 955 | 374,562 | 99.7% |
| 6 | browser_take_screenshot | 352,010 | 851 | 351,159 | 99.8% |
| 7 | browser_take_screenshot | 349,034 | 791 | 348,243 | 99.8% |
| 8 | browser_take_screenshot | 337,322 | 1,014 | 336,308 | 99.7% |
| 9 | browser_take_screenshot | 335,167 | 712 | 334,455 | 99.8% |
| 10 | browser_take_screenshot | 303,117 | 760 | 302,357 | 99.7% |
All top saves are screenshots — a single full-page screenshot can consume ~117k tokens. OCR replaces that with ~300 tokens of text, saving the equivalent of a medium-sized conversation.