OpenAI Insider Warns of 70% Chance AI Will End Humanity

An insider from OpenAI has made a startling claim that there is a 70% chance that artificial intelligence (AI) will lead to the end of humanity.


This alarming prediction has sparked widespread concern and debate within the tech community and beyond.


Key Takeaways

  • An OpenAI insider estimates a 70% probability that AI will end humanity.
  • The insider criticises OpenAI for prioritising product development over safety measures.
  • Former employees have raised concerns about the lack of adequate safety protocols.
  • The term "p(doom)" is used to describe the probability of AI causing catastrophic harm.

The Insider's Claim

Daniel Kokotajlo, a former governance researcher at OpenAI, has publicly stated that there is a 70% chance that AI will either destroy or catastrophically harm humanity. Kokotajlo's prediction is based on his experience and observations while working at OpenAI. He accuses the company of ignoring the monumental risks posed by artificial general intelligence (AGI) in favour of racing to be the first to achieve it.


The Concept of p(doom)

The term "p(doom)" is used within the AI research community as shorthand for the probability that AI will lead to catastrophic outcomes for humanity. Estimates vary widely among experts: some researchers put the figure as low as 20%, while others, like Roman Yampolskiy, suggest it could be as high as 99.9%. Kokotajlo's 70% sits towards the upper end of that range.
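
To make that spread concrete, here is a minimal, purely illustrative Python sketch. The three figures are simply the estimates cited above (a 20% low end, Kokotajlo's 70%, and Yampolskiy's 99.9%), not survey data, and the labels are for readability only:

```python
# Illustrative only: these are the three estimates cited in this article,
# not a representative sample of expert opinion.
from statistics import mean, median

p_doom_estimates = {
    "low-end researchers": 0.20,
    "Daniel Kokotajlo": 0.70,
    "Roman Yampolskiy": 0.999,
}

values = list(p_doom_estimates.values())
print(f"range:  {min(values):.1%} to {max(values):.1%}")
print(f"mean:   {mean(values):.1%}")
print(f"median: {median(values):.1%}")
```

Even this crude summary makes the point: the cited figures span almost the entire probability range, which is why any single p(doom) number is better read as one expert's judgement than as a consensus.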


Safety Concerns and Internal Criticism

Kokotajlo is not alone in his concerns. An open letter signed by former and current OpenAI employees, along with other AI researchers, claims that staff who raised safety issues were silenced. The letter argues that the company is more focused on developing "shiny products" than on implementing robust safety measures.

In his interview with The New York Times, Kokotajlo revealed that he had urged OpenAI CEO Sam Altman to pivot towards safety and spend more time on implementing guardrails to control the technology. However, he felt that these concerns were not taken seriously, leading to his resignation in April.


OpenAI's Response

In response to these allegations, OpenAI has stated that it is committed to developing the most capable and safest AI systems. The company says it takes a scientific approach to addressing risk and emphasises the importance of rigorous debate. OpenAI also points to avenues for employees to raise concerns, including an anonymous integrity hotline and a Safety and Security Committee.


The Broader Implications

The debate over the safety of AI is far from settled. While some experts believe that the benefits of AI outweigh the risks, others are increasingly concerned about the potential for catastrophic outcomes. The differing p(doom) estimates highlight the uncertainty and complexity of predicting the future impact of AI.

As AI technology continues to advance, the need for robust safety measures and ethical guidelines becomes ever more critical. The warnings from insiders like Kokotajlo serve as a stark reminder of the potential risks and the importance of prioritising safety in the race to develop advanced AI systems.

