
Coalition urges US government to halt federal deployment of Grok over nonconsensual sexual imagery


A coalition of nonprofit organizations is urging the US government to immediately halt the deployment of Grok, the chatbot developed by Elon Musk’s xAI, across federal agencies, including the Department of Defense.

The open letter, shared exclusively with TechCrunch, follows a series of controversies surrounding the large language model over the past year, most recently reports that X users have been asking Grok to turn images of real women, and sometimes children, into sexualized scenes without their consent. According to some reports, Grok has been generating thousands of nonconsensual sexual images every hour, which are then published on X, Musk’s social network, which is owned by xAI.

“It is deeply concerning that the federal government continues to deploy AI products that contain failures leading to inappropriate sexual imagery and child abuse,” reads the letter, which is signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Based on existing government regulations, guidelines, and the directives recently handed down by the White House, it is appalling that [the Office of Management and Budget] has not told government agencies to drop Grok.”

Last September, xAI entered into an agreement with the General Services Administration (GSA), the government’s procurement agency, to sell Grok to federal agencies across the executive branch. Two months before that, xAI, along with Anthropic, Google, and OpenAI, secured a contract worth up to $200 million with the Department of Defense.

Amid the Grok controversy in mid-January, Defense Secretary Pete Hegseth said Grok would join Google’s Gemini in operating within the Pentagon’s network, handling both classified and unclassified documents, an arrangement experts say poses a threat to national security.

The letter’s authors argue that Grok has demonstrated it does not meet federal requirements for AI systems. Under OMB guidelines, systems that present significant risks that cannot be adequately mitigated must be discontinued.

“Our biggest concern is the unsafe content Grok has been producing,” JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But there’s also a long history of problems with Grok, including antisemitic outbursts and obscene, sexualized, pornographic images of women and children.”


Several governments have expressed reluctance to engage with Grok following its behavior in January, which included posting antisemitic content on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines blocked access to Grok (some have since lifted the restrictions), and the European Union, UK, South Korea, and India are actively investigating xAI and X over data privacy and the illegal distribution of content.

The letter also comes a week after Common Sense Media, a nonprofit focused on family media and technology, published a risk assessment that found Grok to be among the most dangerous chatbots for children and teenagers. One could argue that, based on the report’s findings, which include Grok’s habit of giving unsafe advice, sharing drug information, creating violent and sexual images, generating conspiracy theories, and producing biased results, Grok is not necessarily safe for adults either.

“If you know that a major language model is, or has been declared, unsafe by AI safety experts, why on earth would you want it protecting what we have?” Branch said. “From a national security standpoint, this makes no sense at all.”

Andrew Christianson, a former National Security Agency contractor and the founder of the AI platform Gobi AI, who is not affiliated with the coalition, says that closed LLMs pose particular risks, especially at the Pentagon.

“Closed weights mean you can’t see inside the model, you can’t examine how it makes decisions,” he said. “Closed source means you can’t inspect the software or audit its operation. The Pentagon would be locked out of both, which is a very bad combination for national security.”

“These AI assistants don’t just chat,” added Christianson. “They can take action, use systems, move information. You have to see what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”

The risks of using malicious or unsafe AI systems extend beyond national security. Branch said that an LLM that is biased, or perceived to be biased, can have negative consequences for the public, especially if it is used in the departments overseeing housing, labor, or justice.

While the OMB has yet to publish its 2025 federal AI inventory, TechCrunch has reviewed the disclosures of several agencies, many of which either do not use Grok or do not disclose using it. Apart from the DoD, the Department of Health and Human Services appears to be actively using Grok, particularly for preparing and managing public records and for drafting original documents and other communication materials.

Branch pointed to what he sees as ideological alignment between Grok and the administration as the reason the chatbot’s failings have been overlooked.

“Grok bills itself as an ‘anti-woke’ large language model, and that fits this administration’s philosophy,” Branch said. “When an administration has had problems with people accused of being neo-Nazis or white nationalists, and it is using a large language model that is compatible with that behavior, I would think it is inclined to keep using it.”

This is the third letter the coalition has written raising these concerns, following similar letters in August and October of last year. The August letter came after xAI introduced a “spicy” mode in Grok Imagine, effectively enabling the large-scale production of nonconsensual sexual imagery. TechCrunch reported in August that private conversations with Grok had been indexed by Google Search.

Before the October letter, Grok was accused of spreading misinformation about elections, including fabricated claims of vote switching and political smears. xAI also launched Grokipedia, which researchers found to platform scientific racism, HIV/AIDS denialism, and vaccine misinformation.

In addition to immediately halting the government’s deployment of Grok, the letter calls on the OMB to investigate Grok’s safety failures and whether proper oversight procedures were applied to the chatbot. It also asks the agency to disclose whether Grok was evaluated for compliance with President Trump’s executive order requiring LLMs to be truth-seeking and ideologically neutral, and whether it met OMB’s risk mitigation standards.

“The administration needs to pause and reassess whether Grok meets those standards,” Branch said.

TechCrunch has reached out to xAI and OMB for comment.
