If I understand the conflict in OpenAI correctly, it’s a schism between the folks who actually believe in the Skynet threat (led by chief scientist Ilya Sutskever) and those who (correctly) understand the Skynet fears are a marketing tool (Altman and Microsoft).
I always knew that Microsoft was going to cannibalize OpenAI, but I assumed it was going to be via taking over the infrastructure/IP and booting Altman. It looks like it’s the other way around, with them cannibalizing OpenAI’s staff: Hundreds of OpenAI employees threaten to resign and join Microsoft. From this article, OpenAI has 700 employees, and basically all of them are threatening to join Microsoft.
If I understand the conflict in OpenAI correctly, it’s a schism between the folks who actually believe in the Skynet threat (led by chief scientist Ilya Sutskever)
I’m personally quite glad that the scientists have a serious perspective on this. At the very least it means they might withhold their labour if they deem it unsafe at any time.
It’s far too early for it to be unsafe, and you’re correct that it’s marketing at the moment, but it’s still good that they take it so seriously they’re willing to break companies over it. That they’re willing to clash this hard this early bodes well for when things get into actually dangerous territory.
The problem, though, is that this is pure idealism on the part of the scientists. There’s no way anything approaching AGI can be kept under wraps by a handful of scientists, no matter how benevolent they consider themselves. And given our current economic structure, once that cat is out of the bag it’ll be hell for the rest of us.
I’ve seen a lot of talk on the orange site about AI doomerism. I’m not a doomer about AI; I’m a doomer about our society being the wrong structure to handle it.
On one side we have eugenicist doomsday cultists, and on the other just your normal opportunist techbro (who’s also a eugenicist).