Updated 23/07/2024


36% of researchers fear nuclear-level AI catastrophe, Stanford study finds

Written by Redacción TNI on 16/05/2023 at 13:19:41

These findings are part of Stanford's 2023 Artificial Intelligence Index Report, released in April 2023.
In May and June 2022, a team of American researchers polled the natural language processing (NLP) community on a range of topics, including the state of artificial general intelligence (AGI), NLP, and AI ethics.
NLP is a branch of artificial intelligence concerned with giving computers the ability to understand written and spoken language much as humans do.
The poll was completed by 480 people, 68% of whom had written at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022.
The poll offers one of the most complete perspectives on how AI experts feel about AI development.
More than a third (36%) of respondents agreed or weakly agreed with the statement: "It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."
Despite these concerns, only 41% of NLP researchers thought AI should be regulated.
One significant area of consensus among those surveyed was the statement that "AI could soon lead to revolutionary societal change," with which 73% of respondents agreed.
One month ago, Geoffrey Hinton, considered the "godfather of artificial intelligence," told CBS News' Brook Silva-Braga that the rapidly advancing technology's potential impacts are comparable to "the Industrial Revolution, or electricity, or maybe the wheel."
Asked about the chances of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."

The same Stanford research also found that 77% of AI experts either agreed or weakly agreed that private AI firms have too much influence.