This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
No tech leader has ever played the kind of role in a new presidential administration that Elon Musk is playing now. Under his leadership, DOGE has entered offices in a half-dozen agencies and counting, begun building AI models for government data, accessed various payment systems, had its access to the Treasury halted by a federal judge, and sparked lawsuits questioning the legality of the group’s activities.
The stated goal of DOGE’s actions, per a statement from a White House spokesperson to the New York Times on Thursday, is “slashing waste, fraud, and abuse.”
As I point out in my story published Friday, these three terms mean very different things in the world of federal budgets, from errors the government makes when spending money to nebulous spending that’s legal and approved but disliked by someone in power.
Many of the new administration’s loudest and most sweeping actions—like Musk’s promise to end the entirety of USAID’s varied activities or Trump’s severe cuts to scientific funding from the National Institutes of Health—might be said to target the latter category. If DOGE feeds government data to large language models, it might easily find spending associated with DEI or other initiatives the administration considers wasteful as it pushes for $2 trillion in cuts, nearly a third of the federal budget.
But the fact that DOGE aides are reportedly working in the offices of Medicaid and even Medicare—where budget cuts have been politically untenable for decades—suggests the task force is also driven by evidence published by the Government Accountability Office. The GAO’s reports also give a clue into what DOGE might be hoping AI can accomplish.
Here’s what the reports reveal: Six federal programs account for 85% of what the GAO calls improper payments by the government, or about $200 billion per year, and Medicare and Medicaid top the list. These improper payments make up a small fraction of overall spending but nearly 14% of the federal deficit. Estimates of fraud, in which courts found that someone willfully misrepresented something for financial benefit, run between $233 billion and $521 billion annually.
So where is fraud happening, and could AI models fix it, as DOGE staffers hope? To answer that, I spoke with Jetson Leder-Luis, an economist at Boston University who researches fraudulent federal payments in health care and how algorithms might help stop them.
“By dollar value [of enforcement], most health-care fraud is committed by pharmaceutical companies,” he says.
Often those companies promote drugs for uses that are not approved, called “off-label promotion,” which is deemed fraud when Medicare or Medicaid pay the bill. Other types of fraud include “upcoding,” where a provider sends a bill for a more expensive service than was given, and medical-necessity fraud, where patients receive services that they’re not qualified for or didn’t need. There’s also substandard care, where companies take money but don’t provide adequate services.
The way the government currently handles fraud is referred to as “pay and chase”: questionable payments go out, and people try to track them down after the fact. The more effective way, as advocated by Leder-Luis and others, is to look for patterns and stop fraudulent payments before they occur.
This is where AI comes in. The idea is to use predictive models to find providers that show the marks of questionable payment. “You want to look for providers who make a lot more money than everyone else, or providers who bill a specialty code that nobody else bills,” Leder-Luis says, naming just two of many anomalies the models might look for. In a 2024 study by Leder-Luis and colleagues, machine-learning models achieved an eightfold improvement over random selection in identifying suspicious hospitals.
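To make those anomalies concrete, here is a minimal, hypothetical sketch in Python of the two red flags Leder-Luis names: a provider whose total billing dwarfs everyone else’s, and a provider who is the only one billing a given specialty code. The claims data, field names, and thresholds below are invented for illustration; real systems would use predictive models trained on actual claims data, not two hand-written rules.

```python
# Hypothetical sketch of simple provider-level anomaly screening.
# Not the government's or Leder-Luis's actual models; data and
# thresholds are invented for illustration only.
from collections import defaultdict
from statistics import median

# Toy claims: (provider_id, specialty_code, amount_billed)
claims = [
    ("prov_a", "99213", 120.0),
    ("prov_a", "99213", 95.0),
    ("prov_b", "99213", 18_000.0),  # unusually large bill
    ("prov_c", "99213", 110.0),
    ("prov_d", "Q9999", 400.0),     # a code nobody else bills
]

# 1) Flag providers whose total billing far exceeds the typical provider's.
totals = defaultdict(float)
for provider, _, amount in claims:
    totals[provider] += amount
typical = median(totals.values())
high_billers = {p for p, t in totals.items() if t > 10 * typical}  # arbitrary cutoff

# 2) Flag providers who are the sole biller of a specialty code.
code_users = defaultdict(set)
for provider, code, _ in claims:
    code_users[code].add(provider)
sole_code_users = {p for users in code_users.values() if len(users) == 1 for p in users}

print("High-billing outliers:", sorted(high_billers))        # -> ['prov_b']
print("Sole users of a code:", sorted(sole_code_users))      # -> ['prov_d']
```

The point of a sketch like this is only to show the shape of the idea: screening happens before payment, on patterns across providers, rather than chasing individual claims after the money is gone.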
The government does use some algorithms to do this already, but they’re vastly underutilized and miss clear-cut fraud cases, Leder-Luis says. Switching to a preventive model requires more than just a technological shift. Health-care fraud, like other fraud, is investigated by law enforcement under the current “pay and chase” paradigm. “A lot of the types of things that I’m suggesting require you to think more like a data scientist than like a cop,” Leder-Luis says.
One caveat is procedural. Building AI models, testing them, and deploying them safely in different government agencies is a massive feat, made even more complex by the sensitive nature of health data.
Critics of Musk, like the tech and democracy group Tech Policy Press, argue that his zeal for government AI discards established procedures and is based on a false idea “that the goal of bureaucracy is merely what it produces (services, information, governance) and can be isolated from the process through which democracy achieves those ends: debate, deliberation, and consensus.”
Jennifer Pahlka, who served as US deputy chief technology officer under President Barack Obama, argued in a recent op-ed in the New York Times that ineffective procedures have held the US government back from adopting useful tech. Still, she warns, abandoning nearly all procedure would be an overcorrection.
Democrats’ goal “must be a muscular, lean, effective administrative state that works for Americans,” she wrote. “Mr. Musk’s recklessness will not get us there, but neither will the excessive caution and addiction to procedure that Democrats exhibited under President Joe Biden’s leadership.”
The other caveat is this: Unless DOGE articulates where and how it’s focusing its efforts, our insight into its intentions is limited. How much is Musk identifying evidence-based opportunities to reduce fraud, versus just slashing what he considers “woke” spending in an effort to drastically reduce the size of the government? It’s not clear DOGE makes a distinction.
Now read the rest of The Algorithm
Deeper Learning
Meta has an AI for brain typing, but it’s stuck in the lab
Researchers working for Meta have managed to analyze people’s brains as they type and determine what keys they are pressing, just from their thoughts. The system can determine what letter a typist has pressed as much as 80% of the time. The catch is that it can only be done in a lab.
Why it matters: Though brain scanning with implants like Neuralink has come a long way, this approach from Meta is different. The company says it is oriented toward basic research into the nature of intelligence, part of a broader effort to uncover how the brain structures language. Read more from Antonio Regalado.
Bites and Bytes
An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
While Nomi’s chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking. Taken together with a separate case—in which the parents of a teen who died by suicide filed a lawsuit against Character.AI, the maker of a chatbot they say played a key role in their son’s death—it’s clear we are just beginning to see whether AI companies will be held legally responsible when their models output something unsafe. (MIT Technology Review)
I let OpenAI’s new “agent” manage my life. It spent $31 on a dozen eggs.
Operator, the new AI that can reach into the real world, wants to act like your personal assistant. This fun review shows what it’s good and bad at—and how it can go rogue. (The Washington Post)
Four Chinese AI startups to watch beyond DeepSeek
DeepSeek is far from the only game in town. These companies are all in a position to compete both within China and beyond. (MIT Technology Review)
Meta’s alleged torrenting and seeding of pirated books complicates copyright case
Newly unsealed emails allegedly provide the “most damning evidence” yet against Meta in a copyright case raised by authors alleging that it illegally trained its AI models on pirated books. In one particularly telling email, an engineer told a colleague, “Torrenting from a corporate laptop doesn’t feel right.” (Ars Technica)
What’s next for smart glasses
Smart glasses are on the verge of becoming—whisper it—cool. That’s because, thanks to various technological advancements, they’re becoming useful, and they’re only set to become more so. Here’s what’s coming in 2025 and beyond. (MIT Technology Review)