Meta Employees Are Competing to Burn the Most AI Tokens and Calling the Winners “Token Legends”

Best for: Anyone trying to understand how the largest companies in the world are actually integrating AI into daily work (and what happens when they measure the wrong thing).

Not ideal for: Anyone looking for a technical breakdown of token economics or LLM pricing. This isn’t that. This is a story about corporate incentives gone sideways.

There’s now a leaderboard inside Meta where employees compete to see who can consume the most AI tokens.

It’s called Claudeonomics.

An employee built it on the company intranet. It tracks usage across 85,000+ employees, ranks the top 250 “super users,” and hands out titles like “Token Legend,” “Model Connoisseur,” and “Cache Wizard.” In the last 30 days alone, Meta employees burned through 60 trillion tokens. The top individual user consumed 281 billion tokens.

Let that number sit for a second. 281 billion tokens from one person in one month. That’s enough text to fill Wikipedia 33 times over.
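A quick sanity check on that comparison, using only the article’s own figures. The implied size of “one Wikipedia” (about 8.5 billion tokens) is backed out from the 33× claim, not an official count, and real numbers vary by tokenizer:

```python
# Back-of-envelope check on the Wikipedia comparison.
# Both inputs are the article's figures; tokens-per-Wikipedia is
# backed out from them, not measured, and depends on the tokenizer.
top_user_tokens = 281e9        # top individual user, one month
wikipedia_multiples = 33       # the article's "33 times over" claim

tokens_per_wikipedia = top_user_tokens / wikipedia_multiples
print(f"Implied size of one Wikipedia: ~{tokens_per_wikipedia / 1e9:.1f}B tokens")
```

At a rough 1.3 tokens per English word, that implies a corpus in the mid-single-digit billions of words, which is in the right ballpark for English Wikipedia.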

According to The Information, which broke the story, some employees are gaming the system by leaving AI agents running for hours just to pad their numbers. The tokens aren’t producing anything. They’re just burning compute to climb a leaderboard.

And this is at the same company that laid off 15,000 people six months ago partly because AI was supposed to make everyone more productive.

This Isn’t Just a Meta Thing

The reason this story matters isn’t the leaderboard. It’s what the leaderboard represents.

“Tokenmaxxing” is becoming a real productivity metric across Silicon Valley. NVIDIA CEO Jensen Huang said he’d be “deeply alarmed” if an engineer making $500,000 a year wasn’t consuming at least $250,000 worth of tokens. Meta CTO Andrew Bosworth reportedly said one top engineer spends the equivalent of his salary on tokens and supposedly 10x’d his output.

There’s even a public leaderboard now. A tool called Tokscale tracks AI token usage across Claude Code, Cursor, Codex, Gemini, and 11 other coding assistants. The global leaderboard shows the top user at $151,000 in token costs. 471 developers competing worldwide.

The logic sounds clean on paper: more AI usage means more productivity means better output. Except nobody is measuring the output part. They’re measuring the input. That’s like judging a chef by how many ingredients they go through, not by what they serve.

The $900 Million Question

At public API pricing, 60 trillion tokens would cost roughly $900 million. Meta obviously isn’t paying retail (it runs internal models alongside those from Anthropic, OpenAI, and Google), but even at heavily discounted rates, this is an extraordinary amount of compute being consumed with no clear connection to measurable results.
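The $900 million figure is straightforward arithmetic. The $15-per-million-token rate below is an assumption (roughly in line with frontier-model output pricing at the time); the real blended rate would depend on model mix, input/output split, and cache hits:

```python
# Rough cost check at public API list pricing.
# price_per_million is an assumed blended rate, not Meta's actual cost.
total_tokens = 60e12           # 60 trillion tokens in 30 days (from the article)
price_per_million = 15.0       # USD per 1M tokens, assumption

cost_usd = (total_tokens / 1e6) * price_per_million
print(f"Estimated retail cost: ~${cost_usd / 1e6:,.0f}M")  # ~$900M
```

Halve the assumed rate and it is still a nine-figure monthly bill, which is why the discount question barely changes the conclusion.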

Meta also runs internal tools called MyClaw (their version of OpenClaw) and Manus, which Meta recently acquired. So employees aren’t just chatting with Claude. They’re running autonomous agents, workflows, and multi-step automations that burn tokens in the background whether anyone is watching or not.

The uncomfortable parallel: Meta is simultaneously spending $135 billion on AI infrastructure while running a leaderboard that incentivizes employees to waste compute resources as fast as possible. One team is trying to build the most efficient AI infrastructure on earth. Another team is competing to see who can consume the most of it.

Why Measuring Tokens Is Measuring the Wrong Thing

The argument for tracking token usage isn’t stupid. If your best engineers aren’t using AI tools, something is broken. Either the tools don’t work, the integration is bad, or the culture hasn’t caught up. Measuring adoption makes sense.

But there’s a gap between measuring adoption and measuring consumption. Adoption asks “are people using the tools.” Consumption asks “how many tokens are they burning.” Those aren’t the same thing. An engineer who writes one precise prompt and gets a working solution uses far fewer tokens than an engineer who runs the same broken prompt 40 times because they don’t understand what they’re asking for.

The second engineer looks better on the Claudeonomics leaderboard.

This is the exact problem Steven Kerr described in his 1975 paper “On the Folly of Rewarding A, While Hoping for B.” You hope for productivity. You reward consumption. You get waste dressed up as performance.

Meta isn’t alone in this. The entire industry is moving toward token consumption as a proxy for AI integration. But a proxy is only useful when it correlates with the thing you actually care about. And right now, nobody has proven that it does.

The Part Nobody Wants to Say Out Loud

The real story here isn’t about Meta. It’s about what happens when every company adopts this metric.

If token consumption becomes how companies measure AI productivity, then every employee has an incentive to maximize tokens, not outcomes. That means more agents running in the background doing nothing useful. More prompts fired without thinking. More compute burned for the appearance of productivity rather than the reality of it.

Meanwhile the people who actually lost their jobs to AI adoption aren’t being measured on any leaderboard. They’re just gone. Replaced by a system that now rewards whoever can burn through their replacement the fastest.

The companies spending hundreds of billions on AI infrastructure are simultaneously incentivizing their employees to waste it. And the employees who game the system most effectively get called “Token Legends.”

If that doesn’t perfectly capture where we are in 2026, nothing does.