
New focus on safe superintelligence

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced the creation of a new AI company called Safe Superintelligence Inc. (SSI). The company has the ambitious goal of developing a superintelligence that is safe and aligned with human values.

Background and motivation

SSI was founded together with Daniel Gross, a former AI lead at Apple, and Daniel Levy, the head of OpenAI’s optimization team. The founders describe the development of safe superintelligence as the most important technical challenge of our time. They plan to focus exclusively on this goal and to build a superintelligence that can operate safely within human society.

Goals and methods

The company will employ a small, highly qualified team of researchers and engineers to tackle this challenge through ground-breaking innovations. Sutskever describes the approach as a “straight shot” to safe superintelligence: one focus, one goal, and one product.

Comparison with OpenAI’s superalignment initiative

OpenAI, for its part, has launched its own superintelligence initiative. OpenAI’s Superalignment group aims to solve the technical challenges of superintelligence alignment within four years. This includes developing scalable training methods, validating the resulting models, and performing stress tests in which intentionally misaligned models are trained in order to detect and correct the worst kinds of misalignment.

Collaboration and community

Sutskever’s decision to leave OpenAI and found SSI underscores the growing importance the research community places on the safe development and deployment of high-performance AI systems. SSI plans to work closely with other experts and institutions to ensure that its technical solutions also address broader human and societal concerns.

For more detailed information, you can visit the official SSI website.

Conclusion

The founding of Safe Superintelligence Inc. by Ilya Sutskever marks a significant step towards the safe and responsible development of AI. By focusing on superintelligence alignment and collaborating with leading experts, SSI aims to maximize the benefits of AI technology while minimizing its risks.

Sources

https://www.theverge.com/2024/6/19/24181870/openai-former-chief-scientist-ilya-sutskever-ssi-safe-superintelligence
https://www.enterpriseai.news/2023/07/06/openai-launches-alignment-initiative-aimed-at-mitigating-superintelligent-ai
https://techcrunch.com/2024/06/19/ilya-sutskever-openais-former-chief-scientist-launches-new-ai-company/

Justus Becker

I have a passion for storytelling. AI enthusiast and addicted to midjourney.