When the Trump administration bombed Iran last weekend, it did so with the assistance of generative artificial intelligence. According to reporting from The Wall Street Journal and The Washington Post, the military relied heavily on the Maven Smart System, a joint project that pairs surveillance-analysis AI software from Palantir with Claude, the powerful generative-AI system built by Anthropic. Maven was used throughout the planning of the attack, suggesting hundreds of targets. It offers commanders “video-game-like abilities to oversee battles,” outlining targets, suggesting priorities, drawing up exit strategies (and press releases), and assisting in the overall organization of the mission. Secretary of Defense Pete Hegseth has been open about his desire to use AI throughout the military.
Just a few days before, Anthropic CEO Dario Amodei made headlines for pushing back against Hegseth’s demand that military-grade AI have no safety restrictions whatsoever—including those that would keep a human “in the loop” on weapons systems and prevent widespread surveillance of US citizens. Amodei argued that omitting such precautions would violate Anthropic’s core ethical principles, and that the company “cannot in good conscience accede” to the Pentagon’s demands. Hegseth responded by banning the use of Anthropic’s products across the federal government; the company protested and sued the government. OpenAI, a rival AI company, jumped in to accept Hegseth’s terms and scooped up the government contract, claiming that it had installed internal safeguards that wouldn’t allow the technology to be used unlawfully.
As an ethicist of technology, I am permanently skeptical of corporate claims about the ethics of their products. When ethics and profits are in competition, profits always win. For the briefest moment, I wanted to join the crowd and canonize Anthropic for holding the line. And then we went to war with Iran, and Anthropic was on the front lines anyway—its technology had been embedded in every facet of military planning for years.
Generative-AI systems have become so ubiquitous that many assign a moral neutrality to the emerging technology. It is simply another tool to be used for good or ill, the argument goes, like a brick, a hammer, or a broom. You can plagiarize with it, or you can develop a ten-day workout routine. This presumption of neutrality allows us to ignore the well-documented ethical shortcomings of all AI systems, and positions AI more like a new operating system than a generative engine capable of convincing countless people to, for example, develop romantic relationships with a piece of software.
This presumed neutrality has enabled the Trump administration’s newly aggressive approach to AI. Within weeks of taking office, the administration dismantled President Biden’s attempts to curtail unethical AI use and development. Elon Musk and his followers used AI systems to shrink large portions of the federal government, firing thousands of federal workers. They saved the government little money but seriously curtailed the ability of federal agencies to do their jobs. The administration simultaneously pursued a campaign of massive deregulation of AI and the tech industry more broadly. The Federal Trade Commission was reshaped, its focus turned from enforcing antitrust law to enabling “pro-innovation” and “America first” policies. Tech mergers and acquisitions are allowed as long as the right people get a cut. The Biden-era AI Safety Institute at the National Institute of Standards and Technology (NIST), of which I was a member, has become the AI Consortium. In its initial development, it had the potential to serve as a regulatory arm of the federal government for AI development. Under the Trump administration, it has been stripped of any role in determining AI ethics and responsible use.
These federal policies have encouraged the rapid suffusion of AI products into all parts of life. AI use in military, police, and immigration enforcement has soared. Tech companies have fired thousands of workers, counting on the promise of a more efficient AI-enabled workforce.
In this world of ubiquitous AI usage, violence is always around the corner. We saw it in the crashes of autonomous-vehicle systems, in teen suicides after countless hours chatting with a sycophantic AI, and in the ICE agents who, assisted by AI surveillance and targeting, have brutally attacked thousands and murdered men, women, and children. Before the war in Iran, AI was already used in ongoing military conflicts around the world. And now the Trump administration is waging an AI-driven war upon a country of 90 million people.
This act of war is so morally bankrupt, without provocation or just cause, that the AI usage has barely registered. So many questions remain: How much was AI used in the planning of this military campaign? How integrated are AI systems in the targeting and firing of weapons? How are biases and hallucinations accounted for? Was AI to blame for the bombing of the school that killed over 165 people, most of them children? Did AI suggest this war in the first place? Did it support Hegseth’s visions of public rejoicing in the streets of Iran, and a perfect democratic government emerging from the ashes?
Such questions reveal how AI has made war even more remote than it was only a few years ago. That remoteness makes it easy to start and continue wars without much thought. For his part, President Trump ordered the attack from the plush confines of Mar-a-Lago before leaving the briefing to party with guests at a fundraising event.
In 1981, Harvard professor Roger Fisher proposed a unique protocol for nuclear deterrence: embed the nuclear launch codes in a capsule placed next to the heart of an innocent volunteer who always accompanies the president, carrying a large knife. To launch nuclear weapons, the president must take the knife, kill the innocent man, and extract the launch codes from his chest. Before making the decision to kill thousands of people, the president would have to spill blood on the polished floor of the West Wing. The Pentagon rejected the protocol because it would likely deter any president from ever using a nuclear weapon. Now there is less deterrence than ever: all the leading AI models recommend the use of nuclear weapons in wargame simulations.
It feels right that Donald Trump is the one to bring AI into the mainstream of warmaking. Trump is susceptible to all forms of sycophancy, for which chatbots are infamous. His administration deplores accountability, ethics, and transparency. AI is an out-of-control technological marvel, but not in the way that tech CEOs want the public to fear: we aren’t in the Matrix, and we are not in danger of HAL taking command of the ship. AI is something else: a loaded gun in the hands of children. The children holding the gun happen to be in charge of the most powerful military the world has ever seen.
Welcome to the first AI war.