No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?
While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.
Oh wait, I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.
too damn lesswrong