The Trump administration has long trumpeted its goal to automate its operational capacity through artificial intelligence models. But as Secretary of Defense Pete Hegseth moves to offload certain human operations into the realm of the algorithm, one tech firm has emerged as a counterbalance to the White House’s vision for an artificially intelligent military: Anthropic, which “cannot in good conscience” allow Hegseth’s Pentagon to use its AI models without limitations, said CEO Dario Amodei.
‘Bold stand on ethical grounds’

Despite believing in the “existential importance” of using AI to protect the U.S. and “defeat our autocratic adversaries,” Anthropic has identified a “narrow set of cases,” including mass domestic surveillance and “fully autonomous weapons,” in which AI can “undermine, rather than defend, democratic values,” Amodei said in a company statement. Hegseth’s allegedly retaliatory move to blacklist Anthropic is “inherently contradictory,” Amodei added, because it labels the company a security risk while simultaneously deeming it “essential to national security.”
Hegseth’s blacklisting is the “heaviest-handed way you can regulate a business” and marks a “landmark moment” for how the Pentagon “interacts with our cutting-edge technology developed on U.S. soil,” said Katie Sweeten, a former Justice Department official who coordinated the relationship between the DOJ and the Pentagon, at Politico. While Amodei’s company faces a government ban, his “main rival,” OpenAI’s Sam Altman, “struck his own deal” to fill Anthropic’s Defense Department role, said CBS News.
By “refusing to bow” to a White House intent on “bullying private companies into submission,” Amodei is “taking a bold stand on ethical grounds,” said The Atlantic. While the company’s competitors “jockey for dominance” in the field, Anthropic has “distinguished itself by emphasizing safety.”
Negotiation vs. regulation

Anthropic is “rightly concerned” that its products could be used for “unsafe or malicious” ends, said former Air Force Secretary Frank Kendall at The New York Times. But the company is wrong to use “contractual terms” to either “prevent the misuse of its products” or at least “deflect responsibility” — though Anthropic also has the option of not selling to the government at all. The government, meanwhile, “cannot be expected to negotiate provisions” like the ones Anthropic is asking for with all its partners, which would be a “nightmare to administer and unenforceable.” What, then, could be “appropriate” to address this debate? “Regulation by Congress.”