Over the last few decades, we’ve seen how the internet and the smartphone rapidly transformed our lives. Artificial intelligence is now poised to do the same, but some experts are worried that the current pace of its development will cause harm.
An open letter signed by hundreds of leaders in the tech world, including Steve Wozniak, who co-founded Apple, and Elon Musk, has proposed pausing the development of any AI past the capabilities of OpenAI’s GPT-4.
They expressed concern that AI labs were locked in an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
The Guardian’s UK technology editor, Alex Hern, tells Hannah Moore where these concerns come from.
“You will have already seen the rapid, rapid increase in the capabilities of these cutting-edge artificial intelligences, and that’s with external humans having to do the actual work,” Hern tells Moore. “One fear is that once you end up with an AI that can meaningfully improve the speed of AI research … you go from something that is GPT-4 to super-intelligent in a matter of years or maybe even months.”