
Anthropic temporarily banned OpenClaw developer from accessing Claude


“Yes folks, it will be difficult in the future to ensure that OpenClaw still works with Anthropic models,” OpenClaw developer Peter Steinberger wrote on X on Friday morning, alongside a screenshot of a message from Anthropic saying that his account had been suspended due to “suspicious” activity.

The ban did not last long. A few hours later, after the post went viral, Steinberger said his account had been reinstated. Among the hundreds of comments – many of them conspiratorial, since Steinberger is now employed by Anthropic rival OpenAI – was one from an Anthropic engineer, who told the popular developer that Anthropic was not blocking anyone from using OpenClaw and offered to help.

It is not known whether that intervention is what got the account restored. (We’ve asked Anthropic about it.) But the whole episode was enlightening on several levels.

To recap recent history: the ban followed last week’s news that Anthropic’s Claude subscriptions no longer cover “third-party frameworks including OpenClaw,” as the AI model company put it.

OpenClaw users must now pay for access separately, on a usage basis, through Claude’s API. Essentially, Anthropic, which offers its own assistant, Cowork, is now charging a “claw tax.” Steinberger said he was complying with the new rule and using the API, yet was banned anyway.

Anthropic said it introduced the pricing change because its subscriptions were not designed to accommodate agent-style usage. Tools like OpenClaw can be far heavier than simple prompts or scripts, because they can run continuous reasoning, retry or re-test tasks, and chain in many other third-party tools.

However, Steinberger wasn’t buying that explanation. After Anthropic announced the changes, he wrote: “Funny how times change, first they copy popular stuff into their closed corners, then they shut out open source.” Although he did not name it, he was presumably referring to features that Claude’s Cowork supports, such as Claude Dispatch, which lets users remotely manage and dispatch tasks. Dispatch was released a few weeks before Anthropic changed its pricing policy for OpenClaw.

Steinberger’s frustration with Anthropic was also on display on Friday.

One person suggested that some of this is on him, for working at OpenAI instead of Anthropic, posting: “You had a choice, but you went the wrong way.” To which Steinberger replied: “One welcomed me, the other sent legal threats.”

Wow.

When several people asked him why he was using Claude instead of his employer’s models, he explained that he was using it only for testing, to make sure that OpenClaw updates wouldn’t break things for Claude users.

He explained: “You have to separate the two things. My work at the OpenClaw Foundation, where we want OpenClaw to work well for *every* model, and my work at OpenAI to help them with future business models.”

Several people suggested that the very need to test Claude shows the model remains popular among OpenClaw users relative to ChatGPT. When that point was put to him after Anthropic changed its prices, he replied: “I’m working.” (An indication, perhaps, of what his work at OpenAI entails.)

Steinberger did not respond to a request for comment.
