Retired Army Special Forces officer Mike Nelson criticized Hegseth's rhetoric, stating, 'That's a necessary end to achieve goals through military force - you have to kill people to achieve them. That's not the end. It's a weird obsession with death for the sake of it.'
The current collision between the Department of Defense and Anthropic, over whether Anthropic's AI models should remain bound by ethical constraints or be made available for any use the Pentagon might have in mind, raises significant concerns about the future of AI governance.
I have been working in Ukraine since 2019: first as an active Green Beret advising in an official capacity; then, after leaving the service, directing special operations on the ground; and more recently carrying hard-won lessons back to NATO before they are forgotten or overtaken by the next news cycle.
I was giving these scenarios, these Golden Dome scenarios, and so on. And he's like, 'Just call me if you need another exception.' And I'm like, 'But what if the balloon's going up at that moment and it's like a decisive action we have to take? I'm not going to call you to do something. It's not rational.'
According to Secretary of Defense Pete Hegseth's memorandum on the Strategy, this AI-first status is to be achieved through four broad aims: incentivizing internal DOD experimentation with AI models; identifying and eliminating bureaucratic obstacles in the way of model integration; focusing U.S. military investment to shore up the country's "asymmetric advantages" in areas including AI computing, model innovation, entrepreneurial dynamism, capital markets, and operational data.