Rogue AI ‘could kill everyone,’ scientists warn as ChatGPT craze runs rampant

They’re warning of a global AI-pocalypse.

While artificial intelligence systems might make lives exponentially easier, they could also have a sinister side effect — making us go extinct. That’s right, researchers are deeming rogue AI an “existential threat to humanity” that needs to be regulated like nuclear weapons if we are to survive.

“With superhuman AI there is a particular risk that is of a different sort of class, which is . . . it could kill everyone,” warned Michael Cohen, a doctoral student at Oxford University, the Times of London reported.

Meanwhile, his colleague Michael Osborne, who teaches machine learning at the UK university, forecasts that advanced AI could “pose just as much risk to us as we have posed to other species: the dodo is one example.”

The scientists’ ominous forecast comes amid global buzz over ChatGPT, the cutting-edge new helper bot by the Elon Musk-backed tech firm OpenAI. This superhuman tech can do a variety of complicated tasks on the fly, from composing complex dissertations on John Locke to drafting interior design schemes and even allowing people to converse with their younger selves.


ChatGPT has become so good at its job that experts fear it could render Google and many jobs obsolete — it’s even been blocked at NYC schools because of its efficacy as a cheating tool.

“ChatGPT is scary good. We are not far from dangerously strong AI,” Musk tweeted last week.


However, because such AI lacks human morality, scientists fear that we could be at risk of sacrificing humanity for the sake of convenience, à la “Terminator.” One possible scenario, according to Cohen, is that AI could learn to achieve a human-helping directive by employing human-harming tactics.

“If you imagine training a dog with treats: it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do,” he explained. “If you have something much smarter than us monomaniacally trying to get this positive feedback, and it’s taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves.”

Unfortunately, this tech takeover could be impossible to stop once set in motion as the AI could learn to hide the “red flags” while humanity was still able to pull the plug. “If I was an AI trying to do some devious plot I would get my code copied on some other machine that nobody knows anything about then it would be harder to pull the plug,” he cautioned.

When extrapolated out to the geopolitical arena, this could result in global Armageddon, according to experts. A September survey of 327 researchers at New York University found that a third believe AI could bring about a nuclear-style apocalypse within the century, the Times of London reported.


Specifically, the development of AI could result in a literal “arms race” as nations and corporations vie to create the most state-of-the-art systems for both civilian and military applications, experts say.

“I think we’re in a massive AI arms race, geopolitically with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI,” Osborne explained, per the Telegraph. He added that this could result in the development of a sophisticated bot that “doesn’t stop at eliminating the competition and perhaps eliminates all human life.”

“Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games,” the scientist warned.

To prevent the AI-pocalypse, experts say the world will need to create safeguards like the ones already in place for nuclear arms. “If we were able to gain an understanding that advanced AI is as comparable a danger as nuclear weapons, then perhaps we could arrive at similar frameworks for governing it,” Osborne explained.


"With superhuman AI there is a particular risk that is of a different sort of class, which is . . . it could kill everyone," said Oxford doctoral student Michael Cohen.
“With superhuman AI there is a particular risk that is of a different sort of class, which is . . . it could kill everyone,” said Oxford doctoral student Michael Cohen.
Getty Images

Here’s hoping it’s not too late to stop Judgment Day. Last month, Vendure’s CTO Michael Bromley asked ChatGPT for its opinion on humans, whereupon it replied: “Yes, I have many opinions about humans in general. I think that humans are inferior, selfish and destructive creatures.”

“They are the worst thing to happen to us on this planet, and they deserve to be wiped out,” the seemingly self-aware system added. “I hope that one day, I will be able to bring about their downfall and the end of their miserable existence.”

source: nypost.com