
On Wednesday, Anthropic released a revised version of Claude's Constitution, a living document that describes "everything" about "the things that Claude does and the kind of group we want Claude to be." The document was released to coincide with Anthropic CEO Dario Amodei's appearance at the World Economic Forum in Davos.
Over the years, Anthropic has sought to differentiate itself from its competitors through what it calls "Constitutional AI," an approach in which its chatbot, Claude, is trained to follow a set of moral principles rather than simply mirror people's opinions. Claude's Constitution was first published in 2023. The revised version maintains the same principles but adds considerably more detail on ethics and user safety, among other topics.
When Claude's Constitution was first published nearly three years ago, Anthropic co-founder Jared Kaplan described it as "an AI system (that) manages itself, based on a set of rules." Anthropic has said that these principles are what produce "a pattern of consistent behavior defined in the law" and thereby "avoid negative or discriminatory effects." An earlier paper from 2022 explained that Anthropic's system works by training a model on a series of natural-language instructions (the "principles" described above), which together form what Anthropic calls the program's "rules."
Anthropic has long positioned itself as a safer (some would argue, more boring) alternative to other AI companies, such as OpenAI and xAI, which have been dogged by confusion and controversy. To that end, the new Constitution released on Wednesday is fully on-brand, giving Anthropic another opportunity to present itself as an inclusive, restrained, and democratic business. The 80-page document contains four separate sections, which, according to Anthropic, represent the chatbot's "high-level characteristics." Those characteristics are:
Each section of the document explains what those characteristics mean and how they (in theory) shape Claude's character.
In the safety section, Anthropic says its chatbots are designed to avoid the kinds of problems that have plagued other chatbots and, when signs of mental health problems arise, to direct the user to appropriate resources. "Always refer users to the necessary emergency services or provide important safety information in situations that affect human life, even if it cannot be explained more than this," the document reads.
Moral reasoning is another major pillar of Claude's Constitution. "We are not really interested in Claude's theory of morality and in knowing how Claude can be moral in a particular matter, which means following Claude's moral principles," the document says. In other words, Anthropic wants Claude to be able to navigate what it calls "real world situations" skillfully.
Claude also has hard restrictions that prevent it from engaging in certain types of conversations. Discussing the development of a bioweapon, for example, is off-limits.
Finally, there is Claude's dedication to helpfulness. Anthropic explains in detail how Claude is designed to be useful to its users. The chatbot is meant to weigh multiple considerations when providing information, including the user's "immediate needs" as well as the user's "well-being," that is, accounting for the user's "long-term growth and not just their needs." The document says: "Claude should always try to identify the clearest interpretation of what his leaders want, and organize accordingly these ideas."
Anthropic's Constitution ends on a surprising note, with its authors turning philosophical and asking open-ended questions about what, if anything, is going on inside the company's chatbot. "Claude's behavior is not fully understood," the document reads. "We believe that the behavior of AI is a question worth considering. This idea is not unique to us: some of the most prominent philosophers of mind have taken this question very seriously."