36% of Researchers Fear Nuclear-Level AI Catastrophe, Stanford Study Finds

Data presented by Atlas VPN shows that more than a third of credible AI experts believe that AI will cause a nuclear-level catastrophe within this century.

These findings are part of Stanford's 2023 Artificial Intelligence Index Report, released in April 2023.

During May and June 2022, a team of American researchers polled the natural language processing (NLP) community on a range of topics, including the state of the artificial general intelligence (AGI), NLP, and ethics fields.

NLP is a branch of artificial intelligence concerned with giving computers the capacity to comprehend written and spoken language in a manner similar to that of humans.

The poll was completed by 480 people, 68% of whom had written at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022.

The poll offers one of the most complete perspectives on how AI experts feel about AI development.

More than a third (36%) of respondents agreed or weakly agreed with the statement: "It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."

Despite these concerns, only 41% of NLP researchers thought AI should be regulated.

One significant area of agreement among those surveyed was that "AI could soon lead to revolutionary societal change," with 73% of respondents agreeing with the statement.

One month ago, Geoffrey Hinton, considered the "godfather of artificial intelligence," told CBS News' Brook Silva-Braga that the rapidly advancing technology's potential impacts are comparable to "the Industrial Revolution, or electricity, or maybe the wheel."

Asked about the chances of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."

Calls for a Moratorium on Advanced AI Systems

In February, OpenAI CEO Sam Altman wrote in a company blog post: "The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world."

Elon Musk, the CEO of Tesla and Twitter and a signatory of the open letter calling for a pause on advanced AI development, was said to be "developing plans to launch a new artificial intelligence start-up to compete with OpenAI," according to a recent article in the Financial Times.

The same Stanford research also found that 77% of AI experts either agreed or weakly agreed that private AI firms have too much influence.

To read the full article, head over to: https://atlasvpn.com/blog/36-of-researchers-fear-nuclear-level-ai-catastrophe-stanford-study-finds
