Hegseth Gives Anthropic an Ultimatum

edited February 24 in Other Investing
Following are excerpts from a current report in The New York Times:

Anthropic insists on limits on how its technology is used and could be labeled a supply chain risk if it fails to accept the military’s demands.

The Pentagon delivered an ultimatum to Anthropic, the only artificial intelligence company currently operating on classified military systems, ordering the firm to bend to its demands by Friday. If the firm fails to agree by 5:01 p.m. on Friday, Defense Secretary Pete Hegseth said the Trump administration would invoke the Defense Production Act, compelling the use of its model by the military and labeling the company a supply chain risk, according to a senior Pentagon official. That step would put Anthropic’s government contracts at risk.

The two threats are fundamentally at odds: One would prevent the government from using the company’s products, while the other would force the company to let the government use the products. Despite the contradiction, the threats reflect the level of anger in the top ranks of the Pentagon toward Anthropic for resisting its demands and how important the company’s model has become to the military.

“The Pentagon knows they are issuing an extreme threat. They are using every button or lever they have,” said Jessica Tillipman, an associate dean at the George Washington University Law School. “The bigger issue here is that it waters down these designations. They are transforming what is designed to be national security tools into a point of leverage for business.”

Mr. Hegseth summoned Dario Amodei, the Anthropic chief executive, to the Pentagon on Tuesday for a morning meeting. The tone of the discussion was civil, but when Anthropic did not agree to Mr. Hegseth’s demands, he leveled the threats against it, according to people briefed on the meeting. The New York Times spoke to people on both sides of the debate over Anthropic’s work with the military, but they spoke on the condition that their names not be used to discuss the sensitive negotiations.

Anthropic has argued that it was asking for reasonable assurances that its model would not be used for surveillance of Americans or in autonomous weapons, such as drone operations, that did not involve human oversight.

Pentagon officials have said that using software and weapons lawfully is their responsibility, one they take seriously. But the officials say they cannot effectively allow all their contractors to specify how the equipment they sell to the Pentagon will be used, and that lawful use must be the only constraint. While the Defense Production Act gives the Pentagon wide-ranging powers, it is usually invoked in manufacturing contexts. It would be unusual for the act to be used on a software company, forcing Anthropic to make its product available for free.

An Anthropic spokesman said that the company had continued good-faith conversations in the meeting at the Pentagon. The spokesman said the company wanted to support the government but needed to ensure that its models were used in line with what they could “reliably and responsibly do.” But the senior Pentagon official rejected those demands and said the debate had nothing to do with those issues. The Pentagon wants all artificial intelligence contracts to stipulate that the military can use the models for any lawful purpose.

The official confirmed that the Pentagon has an agreement with Elon Musk’s company xAI to use its artificial intelligence model, Grok, on the classified system. But it will take time to integrate Grok onto classified cloud servers and into software from Palantir, a data analytics company that the military uses. More important, Anthropic’s Claude is considered a superior product to Grok, regularly yielding more accurate information.

The Pentagon also is close to an agreement with Google to bring its Gemini model onto the classified system, but the senior official said the deal was not complete. A person briefed on the meeting said Anthropic would continue to demand assurances that its models are not used for autonomous weapons programs or mass surveillance.

Pentagon officials took issue with Anthropic after Palantir reported a conversation that one of its employees had had with a counterpart at the artificial intelligence company regarding the U.S. military operation last month to capture President Nicolás Maduro of Venezuela. In the meeting on Tuesday, Mr. Amodei said there had been a misunderstanding and that his company had not reached out to Palantir or the Pentagon about the Maduro operation, according to a person briefed on the meeting. Mr. Amodei insisted his company had never objected to or interfered with legitimate military operations.

Comment:   "Pentagon officials have said that using software and weapons lawfully is their responsibility, one they take seriously".

Also, they have a large lot of somewhat damaged South American boat salvage they would like to sell. Any human residue has been steam cleaned.

Comments

  • "Pentagon officials have said that using software and weapons lawfully is their responsibility, one they take seriously."

    Best joke I've heard today.
  • “...The bigger issue here is that (the Pentagon) waters down these designations. They are transforming what is designed to be national security tools into a point of leverage for business...” Big surprise!!! LOL. Profit and loss is all the Orange Grease Stain knows. Government? Serving the PEOPLE? What's that?

  • "Lawfully" = "whatever our lawyers say we can do under their often-creative interpretations of 'law'"
  • edited February 25
    No need to even check the law. Just do it and see if you get away with it. After all, it can take a year, or many years, to finish taking a case through the courts. If you do it right, when you lose, everyone except you pays the price (e.g., as a consequence of immunity).
  • edited February 25
    Yes. The actual intent is far from what is stated. That may have been discussed in that Pentagon meeting, where the officials' true intent was exposed.

    Note that Anthropic's is the only AI model used for classified military work, not the other three (OpenAI, Gemini, and Grok). Some reported that Claude was used in the operation to seize Maduro.