This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
War is a catalyst for change, an expert in AI and warfare told me in 2022. At the time, the war in Ukraine had just started, and the military AI business was booming. Two years later, things have only ramped up as geopolitical tensions continue to rise.
Silicon Valley players are poised to benefit. One of them is Palmer Luckey, the founder of the virtual-reality headset company Oculus, which he sold to Facebook for $2 billion. After Luckey’s highly public ousting from Facebook, he founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion. My colleague James O’Donnell interviewed Luckey about his new pet project: headsets for the military.
Luckey is increasingly convinced that the military, not consumers, will see the value of mixed-reality hardware first: “You’re going to see an AR headset on every soldier, long before you see it on every civilian,” he says. In the consumer world, any headset company is competing with the ubiquity and ease of the smartphone, but he sees entirely different trade-offs in defense. Read the interview here.
The use of AI for military purposes is controversial. Back in 2018, Google pulled out of the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes, following staff walkouts over the ethics of the technology. (Google has since returned to offering services for the defense sector.) There has been a long-standing campaign to ban autonomous weapons, also known as “killer robots,” which powerful militaries such as the US have refused to agree to.
But the voices that boom even louder belong to influential figures in Silicon Valley, such as former Google CEO Eric Schmidt, who has called for the military to adopt and invest more in AI to gain an edge over adversaries. Militaries all over the world have been very receptive to this message.
That’s good news for the tech sector. Military contracts are long and lucrative, for a start. Most recently, the Pentagon purchased services from Microsoft and OpenAI to do search, natural-language processing, machine learning, and data processing, reports The Intercept. In the interview with James, Palmer Luckey says the military is a perfect testing ground for new technologies. Soldiers do as they are told and aren’t as picky as consumers, he explains. They’re also less price-sensitive: Militaries don’t mind spending a premium to get the latest version of a technology.
But there are serious dangers in adopting powerful technologies prematurely in such high-risk areas. Foundation models pose serious national security and privacy threats by, for example, leaking sensitive information, argue researchers at the AI Now Institute and Meredith Whittaker, president of the communications privacy organization Signal, in a new paper. Whittaker, who was a core organizer of the Project Maven protests, has said that the push to militarize AI is really more about enriching tech companies than improving military operations.
Despite calls for stricter rules around transparency, we are unlikely to see governments restrict their defense sectors in any meaningful way beyond voluntary ethical commitments. We are in the age of AI experimentation, and militaries are playing with the highest stakes of all. And because of the military’s secretive nature, tech companies can experiment with the technology without the need for transparency or even much accountability. That suits Silicon Valley just fine.
Now read the rest of The Algorithm
Deeper Learning
How Wayve’s driverless cars will meet one of their biggest challenges yet
The UK driverless-car startup Wayve is headed west. The firm’s cars learned to drive on the streets of London. But Wayve has announced that it will begin testing its tech in and around San Francisco as well. And that brings a new challenge: Its AI will need to switch from driving on the left to driving on the right.
Full speed ahead: As visitors to or from the UK will know, making that switch is harder than it sounds. Your view of the road, how the vehicle turns—it’s all different. The move to the US will be a test of Wayve’s technology, which the company claims is more general-purpose than what many of its rivals are offering. Across the Atlantic, the company will now go head to head with the heavyweights of the growing autonomous-car industry, including Cruise, Waymo, and Tesla. Join Will Douglas Heaven on a ride in one of its cars to find out more.
Bits and Bytes
Kids are learning how to make their own little language models
Little Language Models is a new application from two PhD researchers at MIT’s Media Lab that helps children understand how AI models work—by getting to build small-scale versions themselves. (MIT Technology Review)
Google DeepMind is making its AI text watermark open source
Google DeepMind has developed a tool for identifying AI-generated text called SynthID, which is part of a larger family of watermarking tools for generative AI outputs. The company is applying the watermark to text generated by its Gemini models and making it available for others to use too. (MIT Technology Review)
Anthropic debuts an AI model that can “use” a computer
The tool enables the company’s Claude AI model to interact with computer interfaces and take actions such as moving a cursor, clicking on things, and typing text. It’s a very cumbersome and error-prone version of what some have said AI agents will be able to do one day. (Anthropic)
Can an AI chatbot be blamed for a teen’s suicide?
A 14-year-old boy died by suicide, and his mother says it was because he was obsessed with an AI chatbot created by Character.AI. She is suing the company. Chatbots have been touted as cures for loneliness, but critics say they actually worsen isolation. (The New York Times)
Google, Microsoft, and Perplexity are promoting scientific racism in search results
The internet’s biggest AI-powered search engines are featuring the widely debunked idea that white people are genetically superior to other races. (Wired)