ethics
Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems
Wired AI·4 days ago

The Justice Department has penalized Anthropic for attempting to restrict military use of its Claude AI models, arguing that the company cannot be trusted with warfighting systems. The dispute marks a significant government-corporate conflict over AI weapons deployment: a company's attempt at ethical restraint is being legally challenged by federal authorities seeking unrestricted military access to its AI.
anthropic · military-ai · government-overreach · ai-weapons · corporate-resistance · claude · doj · lawsuit