Technology · October 22, 2024

Would you trust AI to mediate an argument?

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’ve been feeling heartbroken lately. A very close friend recently cut off contact with me. I don’t really understand why, and my attempts to fix the situation have backfired. Situations like this are hurtful and confusing, so it’s no wonder people are increasingly turning to AI chatbots to help resolve them. And there’s good news: AI might actually be able to help.

Researchers from Google DeepMind recently trained a system of large language models to help people come to agreement over complex but important social or political issues. The AI model was trained to identify and present areas where people’s ideas overlapped. With the help of this AI mediator, small groups of study participants became less divided in their positions on various issues. You can read more from Rhiannon Williams here.   

One of the best uses for AI chatbots is brainstorming. I’ve had success in the past using them to draft more assertive or persuasive emails for awkward situations, such as complaining about services or negotiating bills. This latest research suggests they could help us see things from other people’s perspectives too. So why not use AI to help me patch things up with my friend?

I described the conflict, as I see it, to ChatGPT and asked for advice about what I should do. The response was very validating, because the AI chatbot supported the way I had approached the problem. The advice it gave was along the lines of what I had thought about doing anyway. I found it helpful to chat with the bot and get more ideas about how to deal with my specific situation. But ultimately, I was left dissatisfied, because the advice was still pretty generic and vague (“Set your boundary calmly” and “Communicate your feelings”) and didn’t really offer the kind of insight a therapist might. 

And there’s another problem: every argument has two sides. I started a new chat and described the problem as I believe my friend sees it. The chatbot supported and validated her decisions, just as it had mine. On the one hand, this exercise helped me see things from her perspective; I had, after all, tried to empathize with her, not just win an argument. On the other hand, I can easily imagine how relying too heavily on a chatbot that tells us what we want to hear could cause us to double down, preventing us from seeing things from the other person’s point of view.

This served as a good reminder: an AI chatbot is not a therapist or a friend. While it can parrot the vast reams of internet text it’s been trained on, it doesn’t understand what it’s like to feel sadness, confusion, or joy. That’s why I’d tread carefully when using AI chatbots for things that really matter, and wouldn’t take what they say at face value.

An AI chatbot can never replace a real conversation, where both sides are willing to truly listen and take the other’s point of view into account. So I ditched the AI-assisted therapy talk and reached out to my friend one more time. Wish me luck!


Now read the rest of The Algorithm

Deeper Learning

OpenAI says ChatGPT treats us all the same (most of the time)

Does ChatGPT treat you the same whether you’re a Laurie, Luke, or Lashonda? Almost, but not quite. OpenAI has analyzed millions of conversations with its hit chatbot and found that ChatGPT will produce a harmful gender or racial stereotype based on a user’s name in around one in 1,000 responses on average, and as many as one in 100 responses in the worst case.

Why this matters: Bias in AI is a huge problem. Ethicists have long studied the impact of bias when companies use AI models to screen résumés or loan applications, for example. But the rise of chatbots, which enable individuals to interact with models directly, brings a new spin to the problem. Read more from Will Douglas Heaven.

Bits and Bytes

Intro to AI: a beginner’s guide to artificial intelligence from MIT Technology Review
There is an overwhelming amount of AI news, and it can be hard to keep up. Do you wish someone would just take a step back and explain some of the basics? Look no further. Intro to AI is MIT Technology Review’s first newsletter that also serves as a mini-course. You’ll get one email a week for six weeks, and each edition will walk you through a different topic in AI. Sign up here.

The race to find new materials with AI needs more data. Meta is giving massive amounts away for free.
Meta is releasing a massive data set and a family of models, called Open Materials 2024 (OMat24), that could help scientists use AI to discover new materials much faster. OMat24 tackles one of the biggest bottlenecks in the discovery process: a lack of data. (MIT Technology Review)

Cracks are starting to appear in Microsoft’s “bromance” with OpenAI 
As part of its transition from a research lab to a for-profit company, OpenAI has tried to renegotiate its deal with Microsoft to secure more computing power and funding. Meanwhile, Microsoft has started to invest in other AI projects, such as DeepMind cofounder Mustafa Suleyman’s Inflection AI, to reduce its reliance on OpenAI, much to Sam Altman’s chagrin. (The New York Times)

Millions of people are using abusive AI “nudify” bots on Telegram 
The messaging app is a hotbed for popular AI bots that “remove clothes” from photos of people to create nonconsensual deepfake images. (Wired)
