Technology · May 30, 2023

How to talk about AI (even if you don’t know much about AI)

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Everyone is talking about AI, it seems. But if you feel overwhelmed or uncertain about what the hell people are talking about, don’t worry. I’ve got you.

I asked some of the best AI journalists in the business to share their top tips on how to talk about AI with confidence. My colleagues and I spend our days obsessing over the tech, listening to AI folks and then translating what they say into clear, relatable language with important context. I’d say we know what we’re talking about.

Here are seven things to pay attention to when talking about AI. 

1. Don’t worry about sounding dumb

“The tech industry is not great at explaining itself clearly, despite insisting that large language models will change the world. If you’re struggling, you aren’t alone,” says Nitasha Tiku, the Washington Post’s tech culture reporter. It doesn’t help that conversations about AI are littered with jargon, she adds. “Hallucination” is a fancy way of saying an AI system makes things up. And “prompt engineers” are just people who know how to talk to the AI to get what they want.

Tiku recommends watching YouTube explainers on concepts and AI models. “Skip the AI influencers for the more subdued hosts, like Computerphile,” she says. “IBM Technology is great if you’re looking for something short and simple. Neither channel is aimed at casual observers, but both can help demystify the process.” 

And however you talk about AI, some people will grumble. “It sometimes feels like the world of AI has splintered into fandoms with everyone talking past each other, clinging to pet definitions and beliefs,” says Will Douglas Heaven, MIT Technology Review’s senior editor for AI. “Figure out what AI means to you, and stick to it.”

2. Be specific about what kind of AI you’re talking about

“‘AI’ is often treated as one thing in public discourse, but AI is really a collection of a hundred different things,” says Karen Hao, the Wall Street Journal’s China tech and society reporter (and the creator of The Algorithm!).

Hao says that it’s helpful to distinguish which function of AI you are talking about so you can have a more nuanced conversation: are you talking about natural-language processing and language models, or computer vision? Or different applications, such as chatbots or cancer detection? If you aren’t sure, here are some good definitions of various practical applications of artificial intelligence. 

Talking about “AI” as a singular thing obscures the reality of the tech, says Billy Perrigo, a staff reporter at Time. 

“There are different models that can do different things, that will respond differently to the same prompts, and that each have their own biases, too,” he says. 

3. Keep it real

“The two most important questions for new AI products and tools are simply: What does it do and how does it do it?” says James Vincent, senior editor at The Verge. 

There is a trend in the AI community right now to talk about the long-term risks and potential of AI. It’s easy to be distracted by hypothetical scenarios and to imagine what the technology might someday do, but discussions about AI are usually better served by being pragmatic and focusing on what it actually does today, not the what-ifs, Vincent adds. 

The tech sector also has a tendency to overstate the capabilities of its products. “Be skeptical; be cynical,” says Douglas Heaven.

This is especially important when talking about AGI, or artificial general intelligence, which is typically used to mean software that is as smart as a person. (Whatever that means in itself.)

“If something sounds like bad science fiction, maybe it is,” he adds. 

4. Adjust your expectations

Language models that power AI chatbots such as ChatGPT often “hallucinate,” or make things up. This can be annoying and surprising to people, but it’s an inherent part of how they work, says Madhumita Murgia, artificial-intelligence editor at the Financial Times. 

It’s important to remember that language models aren’t search engines that are built to find and give the “right” answers, and they don’t have infinite knowledge. They are predictive systems that are generating the most likely words, given your question and everything they’ve been trained on, Murgia adds. 

“This doesn’t mean that they can’t write anything original … but we should always expect them to be inaccurate and fabricate facts. If we do that, then the errors matter less because our usage and their applications can be adjusted accordingly,” she says. 
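To see what “generating the most likely words” means in practice, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 model through the Hugging Face transformers library. (GPT-2 is an illustrative choice on my part, not something Murgia or this newsletter prescribes; the same principle applies to the far larger models behind today’s chatbots.)

```python
# A minimal sketch of next-word prediction with GPT-2, a small open model.
# This illustrates the general mechanism, not how any specific chatbot works.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Turn the scores at the last position into probabilities and show the
# top candidates for the next word.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")
```

The point to notice: the model never consults a database of facts. It ranks possible continuations by probability, and a continuation that merely sounds plausible can outrank the true one. That is a hallucination.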

5. Don’t anthropomorphize

AI chatbots have captured the public’s imagination because they generate text that looks like something a human could have written, and they give users the illusion they are interacting with something other than a computer program. But programs are in fact all they are.

It’s very important not to anthropomorphize the technology, or attribute human characteristics to it, says Chloe Xiang, a reporter at Motherboard. “Don’t give it a [gendered] pronoun, [or] say that it can feel, think, believe, et cetera.”

Doing so feeds the misconception that AI systems are more capable, and more sentient, than they really are. 

I’ve found it’s really easy to slip up with this, because our language has not caught up with ways to describe what AI systems are doing. When in doubt, I replace “AI” with “computer program.” Suddenly you feel really silly saying a computer program told someone to divorce his wife.

6. It’s all about power

While hype and nightmare scenarios may dominate news headlines, when you talk about AI it is crucial to think about the role of power, says Khari Johnson, a senior staff writer at Wired.

“Power is key to raw ingredients for making AI, like compute and data; key to questioning ethical use of AI; and key to understanding who can afford to get an advanced degree in computer science and who is in the room during the AI model design process,” Johnson says. 

Hao agrees. She says it’s also helpful to keep in mind that AI development is very political and involves massive amounts of money and many factions of researchers with competing interests: “Sometimes the conversation around AI is less about the technology and more about the people.”

7. Please, for the love of God, no robots

Don’t picture or describe AI as a scary robot or an all-knowing machine. “Remember that AI is basically computer programming by humans—combining big data sets with lots of compute power and intelligent algorithms,” says Sharon Goldman, a senior writer at VentureBeat.

Deeper Learning

Catching bad content in the age of AI

In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it’s still surprisingly bad at catching, labeling, and removing harmful content. One simply needs to recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.

But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.

Bits and Bytes

Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic to kill a type of bacteria responsible for many drug-resistant infections that are common in hospitals. This is an exciting development that shows how AI can accelerate and support scientific discovery. (MIT News)

Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could “cease operating” in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were “technical limits to what’s possible.” This is likely an empty threat. I’ve heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world’s second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to have a restrained presence, in China. But that’s also a very different situation. (Time)

Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are inadequate and easy to hack, it was only a matter of time before we saw cases like this. (Bloomberg)

Tech layoffs have ravaged AI ethics teams 
This is a nice overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it’s clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)
