Ways the singularity could fail to happen
Jun. 22nd, 2011 02:59 pm
Dashing off a quick post with some half-formed thoughts just to see what people think. When I refer to the singularity here, I mean it in the intelligence explosion sense. I'm trying to categorize the different ways the singularity could fail to happen; here are the categories I've come up with so far:
- Human minds might be fundamentally different from other physical things, and not amenable to being reasoned about the way an engineer would
- The idea of one mind being greatly more efficient than another might not be meaningful
- Human minds might be within a few orders of magnitude of the most efficient minds possible in principle in our Universe
- ... as above in our corner of the Universe
- ... as above given the limits of our physical knowledge
- ... as above given the limits of our manufacturing ability
- ... as above given the limits of our design ability
- We might not continue to study the problem, or the fields necessary to solve it
- We might hit an existential risk before reaching the heights of our potential
- We might have the ability to build superintelligent minds, but choose not to do so
- We might build superintelligent minds and find it makes no great difference to the world
What have I missed out?
EDITED TO ADD 15:20: just added "The idea of one mind being greatly more efficient than another might not be meaningful", which is a popular one.
no subject
Date: 2011-06-23 07:31 pm (UTC)
Either:
Cognition that we can recognise as human turns out to be dependent on the substrate it's running on being a human mind; when deprived of essential stimulus, such as socialization, the cognition diverges (e.g. goes catatonic, insane, etc). This might be a special case of your point #1.
We're able either to build systems more intelligent than we are, or to understand their workings, but not both. Relying on more-than-trivially intelligent cognitions that are usefully general (i.e. not just super Jeopardy players) would then require unacceptable leaps of faith, whether because of prejudices that can't be overcome, or because of reasonable distrust (since we can demonstrate that any cognition sufficiently more intelligent than us to be useful is also capable of understanding how we think on a level deep enough to manipulate us to our own disadvantage). This might be a special case of your point #5.