
Anthropic temporarily banned OpenClaw’s creator from accessing Claude

“Yes folks, it will become more difficult in the future to ensure that OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic stating that his account had been suspended due to “suspicious” activity.

The ban did not last long. A few hours later, after the post went viral, Steinberger said his account had been restored. Among the hundreds of comments – many veering into conspiracy theory, given that Steinberger is now employed by Anthropic rival OpenAI – was one from an Anthropic engineer, who told the famed developer that Anthropic has never banned anyone for using OpenClaw and offered to help.

It’s not clear whether that intervention is what restored the account. (We asked Anthropic about it.) But the entire series of posts was illuminating on many levels.

To recap recent history, this ban followed last week’s news that subscriptions to Anthropic’s Claude would no longer cover “third-party harnesses, including OpenClaw,” according to the AI model company.

OpenClaw users must now pay for that usage separately, on a pay-per-use basis, via Claude’s API. Essentially, Anthropic, which offers its own agent, Cowork, now charges a “claw tax.” Steinberger said he followed the new rule and used the API, but was banned anyway.

Anthropic said it made the price change because its subscription plans weren’t built to handle such agents’ “usage patterns.” That’s plausible: agentic harnesses can be far more computationally intensive than prompts or simple scripts, because they can run continuous reasoning loops, automatically repeat or retry tasks, and connect to many other third-party tools.


Steinberger, however, did not believe that explanation. After Anthropic changed the prices, he posted: “Funny how the timings match up: first they copy some popular features into their closed harness, and then they exclude open source.” Although he didn’t specify, he may have been referring to features added to Claude’s Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks. Dispatch rolled out a few weeks before Anthropic changed its OpenClaw pricing policy.

Steinberger’s frustration with Anthropic was on display again Friday.

One person suggested that some of this is due to him taking a job at OpenAI instead of Anthropic, posting: “You had a choice, but you went to the wrong one.” To which Steinberger responded, “One welcomed me, one sent legal threats.”

Ouch.

When several people asked why he uses Claude rather than his employer’s models, he explained that he uses it only for testing, to ensure that OpenClaw updates don’t cause any problems for Claude users.

He explained: “You have to separate two things. My work at the OpenClaw Foundation, where we want to make OpenClaw work great for *any* model provider, and my job at OpenAI helping them with future product strategies.”

Several people also pointed out that the need to test Claude exists because that model remains a more popular choice among OpenClaw users than ChatGPT. He heard the same point when Anthropic changed its prices, and replied: “We are working on that.” (So that gives some idea of what his job at OpenAI entails.)


Steinberger did not respond to a request for comment.

