Fri, 27 Mar 2026
A federal judge told the government it could not immediately enforce a ban on Anthropic’s tools.
A federal judge has temporarily blocked a Pentagon order that would have prohibited government agencies from using AI tools developed by Anthropic, citing concerns over "First Amendment retaliation". The company had been at odds with the US Department of Defense over new contract terms that could have allowed its technology to be used for mass surveillance and autonomous weapons; a $200 million contract was at stake, but Anthropic refused to accept the terms. Judge Rita Lin's ruling means that government agencies can continue using Anthropic's tools, including its popular AI model Claude, until the lawsuit is resolved. The company has stated that it remains committed to working with the government to ensure the safe and reliable use of AI technology.