Ilya Sutskever, co-founder and former Chief Scientist of OpenAI, has secured $1 billion in funding for his new AI startup, Safe Superintelligence Inc. (SSI).
The company, which aims to develop 'safe' AI systems that surpass human capabilities, has attracted investment from top venture capital firms despite being only three months old.
Key Takeaways
- SSI has raised $1 billion in funding within three months of its founding.
- The company aims to develop 'safe' superintelligent AI systems.
- Investors include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
- SSI is valued at $5 billion despite having no publicly known products yet.
- The company plans to focus on research and development for the next few years.
Formation and Mission
SSI was co-founded by Ilya Sutskever, Daniel Gross, and Daniel Levy in June 2024. Sutskever left OpenAI following a period of discontent, particularly over the resources allocated to his 'superalignment' research team. SSI's mission is to develop AI systems that are not only superintelligent but also safe and aligned with human values.
Funding and Valuation
The $1 billion funding round saw participation from prominent venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. Despite being a nascent company with only 10 employees and no products, SSI has been valued at $5 billion. The funds will be used to acquire computing power and attract top talent, with teams based in Palo Alto, California, and Tel Aviv, Israel.
Focus on AI Safety
SSI's emphasis on AI safety stems from the belief that powerful AI systems could pose existential risks to humanity. The company plans to spend the next few years on research and development before bringing any products to market. This approach contrasts with other AI startups that prioritise rapid productisation.
Industry Impact and Controversies
The topic of AI safety has sparked debate within the tech industry. Companies and AI experts have differing views on proposed safety regulations, such as California's controversial SB-1047 bill. SSI's focus on safety aligns it with other companies like Anthropic, which was also founded by former OpenAI employees concerned about AI safety.
Future Prospects
SSI's successful funding round indicates continued investor confidence in foundational AI research led by top talent. The company aims to build a trusted, skilled team and advance AI capabilities while ensuring safety. As the industry grapples with balancing innovation and safety, SSI's approach could set a new standard for responsible AI development.
Sources
- Sutskever strikes AI gold with billion-dollar backing for superintelligent AI, Ars Technica.
- Ex-OpenAI co-founder's Safe Superintelligence raises $1B, VentureBeat.
- Ilya Sutskever’s AI Startup SSI Secures $1 billion, National Crowdfunding & Fintech Association of Canada.
- Exclusive-OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion, MarketScreener.