- cross-posted to:
- [email protected]
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.
TL;DR: (AI-generated 🤖)
Superintelligence has the potential to greatly benefit humanity by helping solve important global problems, but it also poses significant risks, up to and including the disempowerment or even extinction of humanity. Although superintelligence may still seem distant, it could plausibly be developed within this decade. Managing these risks will require new governing institutions and a way to align superintelligent AI with human intent. There is currently no known solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue. Existing alignment techniques rely on humans supervising AI systems, but such methods will not work for a superintelligent AI that surpasses human abilities. New scientific and technical breakthroughs are therefore needed to address these challenges.
Under the Hood
The `gpt-3.5-turbo` model from OpenAI generated this summary using the prompt “Summarize this text in one paragraph. Include all important points.”
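For illustration, here is a minimal sketch of how a summary like this could be requested from the OpenAI API using that model and prompt. This is not AutoTLDR's actual code; it assumes the pre-1.0 `openai` Python SDK, and the API key and article text are placeholders.

```python
import openai  # assumes the pre-1.0 openai Python SDK

openai.api_key = "sk-..."   # placeholder: your OpenAI API key
article_text = "..."        # placeholder: full text of the linked post

# Ask gpt-3.5-turbo for a one-paragraph summary, mirroring the
# prompt quoted above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Summarize this text in one paragraph. "
                       "Include all important points.\n\n" + article_text,
        }
    ],
)

# Print the generated summary.
print(response["choices"][0]["message"]["content"])
```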
How to Use AutoTLDR