April 29, 2025
An unprecedented fundraising feat has captivated the tech world.
Recently, Safe Superintelligence (SSI), an AI research lab, secured a monumental $2 billion (approximately RMB 14.6 billion) in funding, valuing the company at $32 billion. This round was spearheaded by Greenoaks Capital with a $500 million investment, joined by heavyweights like Google and NVIDIA.
This marks the second significant funding round for SSI in just seven months. Last September, the company raised $1 billion from a prestigious group of investors, including a16z and Sequoia Capital.
Founded in June 2024, SSI is headquartered in Palo Alto, California, and Tel Aviv, Israel, and is dedicated to building safe superintelligence. As of March this year, the company had a lean team of approximately 20 employees.
At the helm of SSI is AI luminary Ilya Sutskever, formerly a cornerstone of OpenAI. During OpenAI's boardroom upheaval in late 2023, Ilya helped lead the board's brief ouster of CEO Sam Altman.
Remarkably, SSI has attracted substantial investment and become one of the world's most highly valued AI startups despite not having launched a single product. This success rests entirely on the formidable reputation of its founding team and its clear, compelling mission.
Investors are willing to take a leap of faith for two primary reasons: they are investing in Ilya Sutskever himself and in the mission of developing safe super AI.
Ilya Sutskever's journey is nothing short of extraordinary. Born in 1985 in Tashkent, in the Soviet Union (present-day Uzbekistan), Ilya emigrated with his Jewish family to Israel shortly after the Soviet collapse. He spent his formative years there, laying a solid educational foundation and developing a passion for mathematics and computing. The family later relocated to Canada, where Ilya completed his entire education, from high school through his bachelor's, master's, and PhD at the University of Toronto.
From his master's degree onward, Ilya collaborated with Geoffrey Hinton, the "godfather of deep learning," on groundbreaking research into deep neural networks. During his PhD, he helped pioneer the use of GPUs to accelerate the training of deep convolutional neural networks. The resulting network's entry in the renowned ImageNet competition not only cut the top-5 error rate from roughly 26% to about 15% but also clinched the 2012 championship. This achievement marked a pivotal moment, propelling deep learning from the academic fringes into mainstream industrial application.
After graduation, Ilya joined Google and became a cornerstone of the "Google Brain" team. In 2015, he co-founded OpenAI with Elon Musk, Sam Altman, and others, serving as Chief Scientist. At OpenAI, Ilya was instrumental in shaping the GPT technology roadmap, vigorously advocating for the pre-training + fine-tuning paradigm.
However, irreconcilable differences with Altman over the pace of AI development, methods of control, and ultimate goals led to a shocking power struggle in 2023, and Ilya ultimately left OpenAI in May 2024.
Today, Ilya stands as one of the preeminent figures in the AI community. Now that he has ventured into entrepreneurship, investment institutions are lining up to fund him, often without hesitation.
As for the mission of developing safe superintelligence, it envisions an AI system that surpasses human intelligence yet safeguards human interests under all circumstances: one that accurately understands and follows humans' true intentions across scenarios, avoids unexpected behaviors, and even actively protects humanity's long-term survival and value systems in extreme situations.
SSI does not dabble in chatbots, SaaS services, or public APIs. Its sole ambition is to realize a fully controllable superintelligent AI system that benefits humanity. SSI underscores that safety trumps commercialization.
Investors seek returns, so why back something seemingly so far from commercialization? Because investing in SSI is not a traditional "invest and exit" play but a "strategic bet plus value-anchored investment."
SSI aims to be the first entity to safely build and control superintelligence. If successful, it has the potential to become the global "AI governance hub," with future AI ecosystems potentially revolving around it. Its "architecture, security protocols, and computing resources" could become foundational infrastructure. Investors are wagering on "shaping the future AI constitution of humanity," prioritizing long-term equity over short-term payouts, aiming to become original shareholders of the rule-makers.
Although SSI maintains that it is "not commercialized," this does not preclude exit pathways. Possible forms of return include:
· High-premium strategic mergers and acquisitions, akin to Microsoft's multibillion-dollar strategic stake in OpenAI's for-profit arm;
· Government cooperation or capital injection: once superintelligence is realized, governments are likely to step in to safeguard it and could facilitate strategic buybacks;
· Fund share transfers: early-stage funds can resell their shares to later-stage investors, sovereign capital, or strategic acquirers.
Investing in SSI is not about replicating the next OpenAI but about securing a commanding position in the "new world order of AI."
While SSI maintains a high degree of confidentiality, public reports indicate that the latest funding will primarily be allocated to:
· accelerating research and development of "safe superintelligence" systems;
· building supercomputing infrastructure to support large-scale model training and experimentation, which may include constructing dedicated data centers, procuring high-performance GPU clusters, and optimizing computing architectures;
· attracting top AI researchers and engineers to join the team.
Are you optimistic about SSI and its aspirations?
This article does not constitute investment advice.