According to a survey conducted by Stanford University’s Institute for Human-Centered AI, 36 percent of researchers believe that AI could cause a “nuclear-level catastrophe.” Great.
The survey was conducted as part of the institute’s annual AI index report, which is essentially the industry’s state of the union.
While the report does strike some high notes — it observes that "policymaker interest in AI is on the rise" and that the tech is pushing scientific discovery forward — that 36 percent figure is a difficult number to ignore.
More Ways Than One
If it makes anyone feel better, a user recently did try to get an autonomous AI system dubbed ChaosGPT to “destroy humanity,” but it didn’t get very far at all.
That 36 percent figure does come with an important caveat. It only refers to AI decision-making — as in, an AI making a choice on its own that ultimately causes a catastrophe — and not human misuse of AI, a growing threat that the report addressed separately later on.
“According to the AIAAIC database… the number of AI incidents and controversies has increased 26 times since 2012,” reads the report. “Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and US prisons using call-monitoring technology on their inmates.”
"This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities," the researchers wrote.
In other words: there are other ways that AI can cause, and already is causing, harm, if not by its proverbial own hand.
Despite these concerns, only 41 percent of natural language processing (NLP) researchers thought that AI should be regulated, according to the report.
The survey serves as a fascinating glimpse into the collective mind of the AI industry, which overall seems to have some ambivalence about the tech’s future. Only 57 percent of researchers, for example, think that “recent research progress” is paving the way for artificial general intelligence.
Those polled did have one notable point of agreement: 73 percent of researchers “feel that AI could soon lead to revolutionary societal change.”
So, whether we’re on the way to a nuclear-level catastrophe, or something entirely different, you might want to buckle up.
READ MORE: Measuring trends in Artificial Intelligence [Stanford University]