Former OpenAI Chief Scientist Ilya Sutskever has co-founded a new company, “Safe Superintelligence” (SSI), focused on developing AI systems that surpass human capabilities while prioritizing safety. The venture has already raised USD 1 billion in funding, SSI’s executives told Reuters.
SSI currently employs just 10 people and plans to use the funding to invest in computing hardware and attract more exceptional talent. The company is building a small, highly trusted group of engineers and researchers, split between offices in Palo Alto, California, and Tel Aviv, Israel.
Sutskever is committed to putting AI safety at the forefront of SSI’s mission. AI safety refers to preventing AI systems from causing harm, whether through catastrophic scenarios such as a rogue AI acting against human interests or through more everyday problems such as misinformation.
A California bill, SB 1047, which would impose new safety requirements on developers of powerful AI models, has divided opinion among industry giants. Companies like OpenAI and Google have opposed the bill, while others such as Anthropic and Elon Musk’s xAI have supported it.
Despite investors’ general wariness of funding ventures that may remain unprofitable for years, SSI’s ability to secure significant investment suggests that venture capitalists are still willing to back projects led by industry experts like Sutskever.
From Board Member to Founder:
Last year, Sutskever sat on the board of OpenAI, the company behind ChatGPT, which controversially fired CEO Sam Altman for not being “consistently candid in his communications with the board.” The decision was quickly reversed, and nearly all OpenAI employees, including Sutskever, signed a letter demanding the board’s resignation and Altman’s return. Although Sutskever supported Altman’s reinstatement, he was removed from the board and left OpenAI in May.
Unlike OpenAI’s unconventional, nonprofit-controlled structure, SSI follows a traditional for-profit model with a strong focus on hiring people who share its principles.
Sutskever remains a strong believer in the scaling hypothesis: the idea that AI models improve when given vast amounts of computing power and data. This idea has driven massive investment in AI chips, data centers, and energy, paving the way for advances like ChatGPT.
“Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?” Sutskever remarked. “Some people can work really long hours and they’ll just go down the same path faster. It’s not so much our style. But if you do something different, then it becomes possible for you to do something special.”