A U.S. federal appeals court has declined to block the Pentagon from blacklisting artificial intelligence company Anthropic, marking a setback for the firm in its legal dispute with the Trump administration over the use of AI in military systems. The case centers on concerns about how Anthropic’s AI chatbot, Claude, could be used in autonomous weapons systems and surveillance applications. The court allowed the Pentagon’s decision to stand while further proceedings continue, meaning the company remains subject to restrictions affecting its government and defense-related activities.

This ruling contrasts with a separate federal court decision in San Francisco, where a judge previously ordered the removal of a national security risk designation placed on the company. Together, the decisions highlight inconsistent judicial outcomes in a rapidly evolving area of AI governance and national security policy.

Anthropic has argued that the government’s actions amount to retaliation for its efforts to limit military use of its technology. The Trump administration, in turn, has criticized the company, framing the dispute as part of a broader debate over the role of private AI firms in shaping U.S. defense capabilities.