OpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the results of an investigation that looked at how often ChatGPT produced a harmful gender or racial stereotype based on a user’s name. Now it has put out two papers describing how it stress-tests its powerful…
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A new public database lists all the ways AI could go wrong

What’s new:
Adopting AI can be fraught with danger. Systems could be biased, or parrot falsehoods, or even become addictive. And that’s before you consider the possibility that AI could be used to create new biological or chemical weapons, or even one day somehow spin out of our control. To manage these potential risks, we first need to…