OpenClaw Banned: Here's What Actually Happened
By Riz Pabani on 25-Feb-2026

If you've been using OpenClaw with your Claude or Google Antigravity subscription, you might want to read this before your next session.
Over the past two weeks, both Anthropic and Google have cracked down on users who connect flat-rate AI subscriptions to third-party agent tools. Accounts restricted. No warnings. In some cases, no refunds. And the timing tells a story all by itself.
Anthropic Moved First
On January 9th, Anthropic quietly deployed server-side blocks that stopped Claude subscription OAuth tokens from working outside their official apps. No announcement. No updated terms. Just a silent cutoff.
Six weeks later, on February 20th, they formalised it. The updated Consumer Terms of Service now explicitly state that OAuth tokens from Claude Free, Pro, or Max accounts can only be used in Claude Code and Claude.ai. Not OpenClaw. Not OpenCode. Not even Anthropic's own Agent SDK.
The language is blunt: using your subscription credentials in any other product "constitutes a violation of the Consumer Terms of Service."
To their credit, Anthropic clarified that existing accounts won't be cancelled. They're calling it a clarification of existing policy, not a new restriction. But the server-side blocks that went live on January 9th tell a different story. That was enforcement first, policy second.
Google Was Worse
Starting around February 12th, Google launched a ban wave against Antigravity users who'd been routing requests through OpenClaw. This one was harsher.
People paying $250 a month for AI Ultra subscriptions woke up to find their accounts suspended. No warning email. No grace period. No refund.
Varun Mohan, who leads Antigravity at Google, posted on X that they'd seen "a massive increase in malicious usage" that had "tremendously degraded the quality of service." He acknowledged that some users "were not aware that this was against our ToS" and promised a path back for eligible accounts.
But "eligible" is doing a lot of work in that sentence. And there's still no clear timeline or appeals process.
Peter Steinberger, OpenClaw's creator, called it "pretty draconian." He's now removing Google Antigravity support from OpenClaw entirely.
Some users reported losing access to linked services beyond just the AI tools. Google says the restrictions only affect Antigravity, Gemini CLI, and the Cloud Code Private API.
But when your entire Google identity is tied to one account, even a partial restriction feels like a threat to everything.
The Economics Behind It
Here's the part that doesn't get enough attention.
A Claude Max subscription costs $200 a month. That same volume of computation through the API would cost north of $1,000. Google's AI Ultra is $250 a month for similar reasons — it's a flat-rate bet that most users won't hit the ceiling.
OpenClaw breaks that bet. Agentic workloads are token-hungry by design. The agent doesn't just answer a question. It thinks, plans, executes, checks its work, revises, and loops. A single "how are you?" can burn through 30,000 tokens because the agent is running its full cognitive architecture on every interaction.
Scale that to users running four or five agents simultaneously and you're consuming millions of tokens in an afternoon. On a flat-rate plan designed for casual use.
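The arithmetic is worth doing explicitly. A rough sketch, where the per-interaction token count comes from the estimate above but the interaction rate and per-token price are illustrative assumptions, not published pricing:

```python
# Back-of-envelope comparison of flat-rate vs pay-per-token costs for an
# agentic workload. All rates below are illustrative assumptions.

TOKENS_PER_INTERACTION = 30_000    # heavy agentic loop, per the estimate above
INTERACTIONS_PER_DAY = 200         # assumption: a few agents running all day
DAYS_PER_MONTH = 30
PRICE_PER_MILLION_TOKENS = 10.0    # assumption: blended API rate, USD
FLAT_RATE_SUBSCRIPTION = 200.0     # e.g. Claude Max, USD per month

monthly_tokens = TOKENS_PER_INTERACTION * INTERACTIONS_PER_DAY * DAYS_PER_MONTH
api_cost = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"Monthly tokens: {monthly_tokens:,}")     # 180,000,000
print(f"API cost at assumed rate: ${api_cost:,.0f}")   # $1,800
print(f"Flat-rate price: ${FLAT_RATE_SUBSCRIPTION:,.0f}")   # $200
```

Even with conservative assumptions, one heavy agent user consumes roughly nine times what the flat rate charges, which is why the provider maths only works if most subscribers stay casual.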
This isn't malice from users. But it isn't sustainable for providers either. The subscription model assumes a bell curve of usage. OpenClaw piles everyone into the heavy tail.
The OpenAI Timing
Here's where it gets interesting.
On February 15th, just days after the Google ban wave began, Sam Altman announced that Peter Steinberger was joining OpenAI to lead their "next generation of personal agents" initiative. OpenClaw would move to a foundation structure. OpenAI would sponsor it.
And here's the kicker: OpenAI isn't banning its own users for running OpenClaw. As of right now, ChatGPT and Codex subscribers can still route through third-party tools without penalty.
Two of the three largest AI providers locked down third-party access within days of each other. The third one hired the guy who built the tool they're all worried about.
I don't think anyone planned this as a coordinated move. But the incentives all point in the same direction: the providers want you inside their walls, using their interfaces, generating their telemetry.
A third-party agent that sits between you and the model is a threat to all of that.
What the Developer Community Is Saying
DHH — creator of Ruby on Rails — called Anthropic's move "very customer hostile." George Hotz published a piece titled "Anthropic is making a huge mistake," arguing the restrictions "will not convert people back to Claude Code, you will convert people to other model providers."
AI engineer Mohan Prakash put it more bluntly: "Users paid for quota, used quota within limits, got banned. That's not malicious, that's using the product you sold them."
He's got a point. If you sell an all-you-can-eat buffet and then kick people out for eating too much, the problem isn't the customers.
But the providers have a point too. These aren't standard usage patterns. They're agentic workloads that were never priced into the subscription. If heavy users make the plan unprofitable, everyone's service degrades.
The honest answer is that both sides are right. The pricing model was broken. The enforcement was heavy-handed. The users caught in the middle deserved better communication.
What This Means for You (And What I Actually Did)
I'll tell you what happened in my own setup, because it's probably similar to yours.
I never bothered with Claude OAuth for OpenClaw. I'm on the Max plan and Claude Code is too useful to risk losing over a ToS grey area. So I started with OpenRouter instead. Ran Gemini 3 Pro for a while. The bill got painful fast. Dropped down to Gemini 3 Flash, which is cheaper but still surprisingly capable for most agentic tasks.
Then all this news hit. And I realised I could actually use OpenAI's Codex OAuth, which they're still allowing. So now GPT 5.2 runs my primary OpenClaw workflows, and when I hit rate limits — which happens regularly — I fall back to Gemini 3 Flash on OpenRouter. It's not elegant, but it works.
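That primary-with-fallback pattern is simple enough to sketch. This is a minimal illustration of the idea, not OpenClaw's actual routing code; the provider functions are stand-ins for real API calls:

```python
# Minimal sketch of a primary-with-fallback routing pattern for model
# providers. The provider callables here are stand-ins, not real API clients.

class RateLimitError(Exception):
    """Raised by a provider call when its quota is exhausted."""

def with_fallback(providers, prompt):
    """Try each (name, call) pair in order, moving on when rate-limited."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimitError as err:
            last_error = err  # this provider is throttled; try the next one
    raise RuntimeError("all providers rate-limited") from last_error

# Stand-ins: the first simulates a rate-limited primary, the second a
# pay-per-token fallback that always answers.
def primary(prompt):
    raise RateLimitError("quota hit")

def fallback(prompt):
    return f"echo: {prompt}"

name, reply = with_fallback([("primary", primary), ("fallback", fallback)], "hi")
print(name, reply)  # fallback echo: hi
```

A real version would map `RateLimitError` onto HTTP 429 responses from each provider and add backoff, but the control flow is the same.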
Here's the provider-by-provider picture if you're figuring out your own setup:
Anthropic: Your subscription OAuth tokens now explicitly can't be used in third-party tools. Switch to API access if you want to keep using OpenClaw with Claude models. Expect to pay significantly more — 5x or higher for heavy agentic workloads.
Google: If you haven't been banned yet, assume you're on borrowed time. Steinberger is pulling Antigravity support from OpenClaw. Move to API keys or a different provider.
OpenAI: Currently the safest bet for third-party agent use. But don't assume this lasts forever. They're being permissive now partly because they just hired Steinberger. Once OpenAI has its own consumer agent product, the calculus changes.
OpenRouter: This is the fallback most people are landing on. You pay per token, you pick your model, and nobody's banning you for using it. It's more expensive than a flat-rate subscription, but it's the honest pricing model for agentic workloads.
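For anyone moving to OpenRouter, the mechanics are straightforward: it exposes an OpenAI-compatible chat completions endpoint, authenticated with a bearer token. A sketch of building such a request — the model slug is illustrative, and the exact model names available will differ:

```python
import json

# Sketch of an OpenRouter chat-completions request. OpenRouter's API follows
# the OpenAI chat schema; the model slug below is illustrative only.

def build_request(api_key, model, user_message):
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, json.dumps(body)

url, headers, body = build_request("YOUR_API_KEY", "google/gemini-flash",
                                   "plan my day")
```

The payload would then be POSTed with any HTTP client; per-token billing means every one of those 30,000-token agentic loops shows up on the invoice, which is the honest-pricing point made above.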
The real takeaway: the era of routing heavy agentic workloads through flat-rate consumer subscriptions is over. If you're building serious agent workflows, budget for API or pay-per-token access. It's more expensive, but it's the only path that won't disappear overnight.
The Bigger Picture
I wrote a few weeks ago about what OpenClaw tells us about the future of AI agents. The short version: whoever controls the agent layer controls the relationship between you and every service you use.
This ban wave is that thesis playing out in real time. Anthropic wants you in Claude Code. Google wants you in Antigravity. OpenAI wants you in whatever they're building with Steinberger. None of them want a third-party agent sitting in the middle, consuming their compute and capturing their user relationship.
It's the same story we've seen with every platform. Open in the growth phase. Closed once the margins matter.
But I think OpenAI is being smart here. People are going to want trillions of tokens to run their agents. Research, CRM, SEO, scheduling, correspondence — the use cases people are building with OpenClaw right now are exactly the workflows that consume enormous amounts of compute. Whoever provides access now, while the others are locking doors, builds loyalty that's hard to undo later. Especially if GPT 5.2 keeps improving at figuring out how to make these agentic workflows actually reliable.
For most people reading this, the practical impact is small. If you're using Claude or Gemini through the official apps, nothing changes. But if you've built workflows around OpenClaw or similar tools, it's time to rethink the architecture. API keys. Usage-based pricing. And the assumption that any flat-rate deal on AI compute is temporary.
If you're not sure what this means for the tools you're using, message me. I'll tell you honestly whether you need to change anything.
Related Articles

5 Lighter OpenClaw Alternatives Worth Trying in 2026

Manus Agents on Telegram: Meta's Playbook Strikes Again

The Lobster That Molted Three Times: What OpenClaw Tells Us About the Future of AI Agents