Technology · July 22, 2025

Five things you need to know about AI right now

Last month I gave a talk at SXSW London called “Five things you need to know about AI”—my personal picks for the five most important ideas in AI right now. 

I aimed the talk at a general audience, and it serves as a quick tour of how I’m thinking about AI in 2025. I’m sharing it here in case you’re interested. I think the talk has something for everyone. There’s some fun stuff in there. I even make jokes!

The video is now available (thank you, SXSW London). Below is a quick look at my top five. Let me know if you would have picked different ones!

1. Generative AI is now so good it’s scary.

Maybe you think that’s obvious. But I am constantly having to check my assumptions about how fast this technology is progressing—and it’s my job to keep up. 

A few months ago, my colleague—and your regular Algorithm writer—James O’Donnell shared 10 music tracks with the MIT Technology Review editorial team and challenged us to pick which ones had been produced using generative AI and which had been made by people. Pretty much everybody did worse than chance.

What’s happening with music is happening across media, from code to robotics to protein synthesis to video. Just look at what people are doing with new video-generation tools like Google DeepMind’s Veo 3. And this technology is being put into everything.

My point here? Whether you think AI is the best thing to happen to us or the worst, do not underestimate it. It’s good, and it’s getting better.

2. Hallucination is a feature, not a bug.

Let’s not forget the fails. When AI makes up stuff, we call it hallucination. Think of customer service bots offering nonexistent refunds, lawyers submitting briefs filled with nonexistent cases, or RFK Jr.’s government department publishing a report that cites nonexistent academic papers. 

You’ll hear a lot of talk that makes hallucination sound like it’s a problem we need to fix. The more accurate way to think about hallucination is that this is exactly what generative AI does—what it’s meant to do—all the time. Generative models are trained to make things up.
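To make that concrete, here is a toy sketch of what generation boils down to: sampling the next word from a probability distribution. The prompt, the candidate words, and the probabilities below are all invented for illustration, and real models choose among tens of thousands of tokens at every step, but the mechanics are the same whether the output turns out to be true or not.

```python
import random

# Invented next-word probabilities a model might assign after the prompt
# "The capital of Australia is" (illustrative numbers only).
next_word_probs = {
    "Canberra":  0.62,  # happens to match reality
    "Sydney":    0.30,  # plausible-sounding, but wrong
    "Melbourne": 0.07,
    "Paris":     0.01,
}

# Generation is the same operation whether the answer is right or not:
# pick a word at random, weighted by the model's probabilities.
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```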

What’s remarkable is not that they make up nonsense, but that the nonsense they make up so often matches reality. Why does this matter? First, we need to be aware of what this technology can and can’t do. But also: Don’t hold out for a future version that doesn’t hallucinate.

3. AI is power hungry and getting hungrier.

You’ve probably heard that AI is power hungry. But a lot of that reputation comes from the electricity it takes to train these giant models, even though a model only gets trained every so often.

What’s changed is that these models are now being used by hundreds of millions of people every day. And while using a model takes far less energy than training one, the energy costs ramp up massively with those kinds of user numbers. 

ChatGPT, for example, has 400 million weekly users. That makes it the fifth-most-visited website in the world, just after Instagram and ahead of X. Other chatbots are catching up. 
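To see why usage matters so much, here is a back-of-envelope sketch. Every figure in it apart from the user count is an assumption I have made up for illustration, since the real per-query numbers haven’t been disclosed, but it shows how a tiny cost per response multiplies across hundreds of millions of people.

```python
# Back-of-envelope sketch. All figures below except weekly_users are
# assumptions for illustration; the companies involved have not published
# real per-query energy numbers.
weekly_users = 400_000_000          # ChatGPT's weekly user count, from the article
queries_per_user_per_day = 1        # assumption
energy_per_query_kwh = 0.003        # assumption: roughly 3 watt-hours per response
training_run_kwh = 10_000_000       # assumption: one-off cost of a big training run

daily_inference_kwh = weekly_users * queries_per_user_per_day * energy_per_query_kwh
days_to_match_training = training_run_kwh / daily_inference_kwh

print(f"Inference energy per day: {daily_inference_kwh:,.0f} kWh")
print(f"Days of use to match one training run: {days_to_match_training:.0f}")
```

Under those invented numbers, roughly a week of everyday use already matches the energy of an entire training run, which is why the usage side of the ledger is the one to watch.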

So it’s no surprise that tech companies are racing to build new data centers in the desert and revamp power grids.

The truth is we’ve been in the dark about exactly how much energy it takes to fuel this boom because none of the major companies building this technology have shared much information about it. 

That’s starting to change, however. Several of my colleagues spent months working with researchers to crunch the numbers for some open-source versions of this tech. (Do check out what they found.)

4. Nobody knows exactly how large language models work.

Sure, we know how to build them. We know how to make them work really well—see no. 1 on this list.

But how they do what they do is still an unsolved mystery. It’s like these things have arrived from outer space and scientists are poking and prodding them from the outside to figure out what they really are.

It’s incredible to think that never before has a mass-market technology used by billions of people been so little understood.

Why does that matter? Well, until we understand them better we won’t know exactly what they can and can’t do. We won’t know how to control their behavior. We won’t fully understand hallucinations.

5. AGI doesn’t mean anything.

Not long ago, talk of AGI was fringe, and mainstream researchers were embarrassed to bring it up. But as AI has got better and far more lucrative, serious people are happy to insist they’re about to create it. Whatever it is.

AGI—or artificial general intelligence—has come to mean something like: AI that can match the performance of humans on a wide range of cognitive tasks.

But what does that mean? How do we measure performance? Which humans? How wide a range of tasks? And performance on cognitive tasks is just another way of saying intelligence—so the definition is circular anyway.

Essentially, when people refer to AGI they now tend to just mean AI, but better than what we have today.

There’s this absolute faith in the progress of AI. It’s gotten better in the past, so it will continue to get better. But there is zero evidence that this will actually play out. 

So where does that leave us? We are building machines that are getting very good at mimicking some of the things people do, but the technology still has serious flaws. And we’re only just figuring out how it actually works.

Here’s how I think about AI: We have built machines with humanlike behavior, but we haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to exaggerated assumptions about what AI can do and plays into the wider culture wars between techno-optimists and techno-skeptics.

It’s right to be amazed by this technology. It’s also right to be skeptical of many of the things said about it. It’s still very early days, and it’s all up for grabs.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
