A prominent AI expert has revised his predictions for when artificial intelligence might pose an existential threat to humanity, significantly extending his timeline for AI's potential to code autonomously and achieve superintelligence.
Key Takeaways
- Daniel Kokotajlo, formerly of OpenAI, has pushed back his projected timeline for AI to code autonomously.
- The revised forecast suggests fully autonomous coding may occur in the early 2030s, pushing the projected arrival of superintelligence to 2034.
- Experts note the complexities of real-world integration and the diminishing relevance of the term 'AGI' as AI systems become more general.
Revisiting the AI 2027 Scenario
Daniel Kokotajlo, who previously outlined a scenario dubbed 'AI 2027' envisioning unchecked AI development leading to humanity's destruction by mid-2030, has now adjusted his outlook. The original scenario posited that AI systems would achieve fully autonomous coding by 2027, triggering an intelligence explosion. In an update, however, Kokotajlo and his co-authors expect this milestone in the early 2030s, with superintelligence now projected for 2034. The revised forecast omits a specific prediction for when AI might cause human extinction.
Expert Opinions and Real-World Complexities
The debate surrounding AI timelines has intensified following the rapid advancements seen with systems like ChatGPT. While some experts initially predicted the arrival of Artificial General Intelligence (AGI) within years or decades, recent developments have led to a recalibration. Malcolm Murray, an AI risk management expert, noted that "a lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is." He highlighted the significant "inertia in the real world that will delay complete societal change."
Furthermore, the very definition of AGI is being questioned. Henry Papadatos, executive director of SaferAI, commented that "The term AGI made sense from far away, when AI systems were very narrow... Now we have systems that are quite general already and the term does not mean as much."
Challenges in AI Integration
Andrea Castagna, an AI policy researcher, pointed out that the path from a superintelligent AI to real-world impact is not straightforward. "The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years," she stated. As AI development progresses, it is becoming clear that "the world is a lot more complicated than that," suggesting that practical implementation and societal integration present substantial hurdles beyond theoretical AI capability.
Internal Goals and Future Outlook
Leading AI companies continue to pursue ambitious goals. OpenAI CEO Sam Altman has stated that having an automated AI researcher by March 2028 is an "internal goal," while acknowledging the possibility of failure. The ongoing debate and repeatedly revised timelines underscore how unsettled forecasts of AI development, and of its long-term implications, remain.
