
Anthropic said this week that it is limiting the release of its new model, called Mythos, because it can find security vulnerabilities in the applications that users around the world trust.

Instead of releasing Mythos to the public, the frontier lab will share it with a group of large companies and organizations that underpin much of the internet, from Amazon Web Services to JPMorgan Chase.

OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The idea is to let these big businesses harden their systems before bad actors can turn advanced LLMs against them.
But the hedged phrasing in that sentence is a hint that there may be more to this release strategy than cybersecurity, or the hyping of model capabilities.
Dan Lahav, CEO of the AI security lab Irregular, told TechCrunch in March, before the release of Mythos, that while AI tools for vulnerability discovery matter, the true value of any vulnerability to an attacker depends on many factors, including how it can be combined with others.

“The question I have in my mind,” Lahav said, “is whether it has found a vulnerability that is actually exploitable, either on its own or as part of a chain.”
Anthropic says that Mythos can find more exploits than its predecessor, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic claims Mythos achieved by using a small, open-weight model. Aisle’s team argues that these results show there is no deep moat in cybersecurity models; rather, performance depends on the scaffolding built around the task at hand.
Since Opus was already seen as a game-changer in cybersecurity, there is another reason frontier labs may want to limit releases to large organizations: it creates a flywheel for large contracts while making it harder for competitors to extract their models using distillation, a technique that lets a frontier model’s outputs be used to train new LLMs at low cost.
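In its simplest form, distillation trains a small “student” model to match the output distribution of a larger “teacher.” The sketch below is illustrative only, not how any particular lab does it (real pipelines typically distill from sampled API outputs rather than raw logits, which is exactly why gating API access slows the practice down): the core objective is just a KL divergence between the two models’ next-token distributions.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution at a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student): how far the student's next-token distribution
    is from the teacher's. Minimizing this over many prompts transfers the
    teacher's behavior to the (cheaper) student."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs (near-)zero loss;
# a mismatched one is penalized.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # clearly > 0
```

The economics follow directly: the expensive part of a frontier model is baked into the teacher’s output distribution, and anyone who can query it freely can approximate that distribution without paying the training bill.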
“This is a commercial moat, because frontier models are now gated behind business partnerships and are no longer available for smaller labs to distill,” David Crawshaw, developer and CEO of exe.dev, argued in a social media post. “By the time you and I can use Mythos, there will be a new high-end model that is business-only. The treadmill keeps the business money flowing (which is a lot of dollars) by keeping the distillation crowd a step behind,” said Crawshaw.
This analysis is consistent with a divide we see across the AI industry: competition between frontier labs building the largest, most capable models, and companies like Aisle that rely on multiple models and see open-weight LLMs (often from China, and often said to be created through distillation) as a way to gain an economic edge.
Frontier labs have been cracking down on distillation this year, with Anthropic publicly disclosing what it says was an attempt by a Chinese company to copy its models, and three leading labs (Anthropic, Google, and OpenAI) teaming up to detect distillation attempts and ban offenders, according to Bloomberg.
Distillation threatens the business model of frontier labs because it erodes the advantage that comes from spending enormous sums on training. Stopping distillation is therefore important in its own right, but this alternative release strategy also gives labs a way to differentiate their offerings for the enterprise customers that are key to profitable deployment.
Whether Mythos or any future model poses a real threat to internet security remains to be seen, and a careful, limited release is a defensible way to find out.
Anthropic did not respond to our questions about whether the decision also reflects distillation concerns by the time of writing, but the company may have found a clever way to protect the internet and its own valuation at the same time.