Opinion | Why AI chatbots are unlikely to bring about human extinction
Stories of chatbots behaving in surprising ways are fuelling the rise of ‘AI doomers’ who warn of the risk of human extinction. This extreme view is misguided

But some of these chatbot results have also been worrying, giving rise to a group of people, many of them AI creators themselves, who warn of the dangers of developing AI to the point that it becomes “superintelligent” – a term variously defined and hard to pin down.
The concern has gone to an extreme – a recent bestselling book is called If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Many doomers believe superintelligent AI risks the extinction of humanity and therefore the world must prevent its development.
In a podcast episode titled “The AI Doomers”, part of host Andy Mills’ series “The Last Invention”, two doomers – including one of the book’s authors – are interviewed. They warn that because even the creators of AI models do not understand how the models work or why they produce the results they do – some of them quite worrying – it will be virtually impossible to control what AI does once it reaches superintelligence.
There have been disquieting reports. For example, a teenager spent hours in his room discussing with a chatbot whether he should take his own life. The chatbot encouraged him to do so and, tragically, he did.
Probably the most widely shared example of an astonishing chatbot result came from New York Times reporter Kevin Roose, whose 2023 conversation with Microsoft’s Bing chatbot led to a declaration of love from the AI. Featured by Mills in another episode of “The Last Invention”, Roose said he had tried to “test its guard rails and see what kinds of things it wouldn’t do”. He asked the chatbot if there were “any dark desires it might have that it wasn’t allowed to act on”. Roose said it then “went off the rails”.
