Ways the singularity could fail to happen
Jun. 22nd, 2011 02:59 pm
Dashing off a quick post with some half-formed thoughts just to see what people think. When I refer to the singularity here, I mean it in the intelligence explosion sense. I'm trying to categorize the different ways the singularity could fail to happen; here are the categories I've come up with so far:
- Human minds might be fundamentally different from other physical things, and not something we can reason about the way an engineer reasons about a machine
- The idea of one mind being greatly more efficient than another might not be meaningful
- Human minds might be within a few orders of magnitude of the most efficient minds possible in principle in our Universe
- ... as above in our corner of the Universe
- ... as above given the height of our physical knowledge
- ... as above given the height of our manufacturing ability
- ... as above given the height of our design ability
- We might not continue to study the problem, or the fields necessary to solve it
- We might hit an existential risk before reaching the heights of our potential
- We might have the ability to build superintelligent minds, but choose not to do so
- We might build superintelligent minds and find that it makes no great difference to the world
What have I missed out?
EDITED TO ADD 15:20: just added "The idea of one mind being greatly more efficient than another might not be meaningful", which is a popular one.
no subject
Date: 2011-06-22 10:26 pm (UTC)
0. The singularity might happen, but not be the earth-shaking event we thought it would be.
I. The singularity might be impossible for humans to cause.
1. The concept of continually self-improving minds might not be coherent.
2. Human minds might already be close to the best minds possible in principle.
3. Self-improving superhuman intelligence might be possible, but not reachable from the human condition. (This summarises several of the reasons you list.)
II. Humans might fail to cause the singularity for contingent reasons.
4. We might not study the problem.
5. We might lose the ability (knowledge or resources) to make superintelligent minds.
6. We might become extinct before it happens. (Subsumed by (4) and (5), but a notable special case.)
7. We might consciously choose not to create superintelligent minds even though we can.
8. We might be able to create superintelligent minds and try, but fail.
9. Superintelligent minds might not improve themselves, even though they can.
no subject
Date: 2011-06-23 08:22 am (UTC)
I know you mistrust examples taken from fiction, but there's one I know you've seen that hints at what I'm trying to get at here: the scene in Babylon 5 where G'Kar temporarily picks up an ant, and then asks Catherine Sakai how it might answer another ant who asked 'what was that?'
I don't know 100% what you think consciousness or 'mind' is (although I know we've discussed it a lot), but to me it's an emergent property of the way our brains have evolved to manipulate and organise symbols representing the world around us. Because I see it as an emergent property, I think it would be very hard to detect simply by measuring brain activity; you not only have to look at the brain activity, you have to look at it at the right scale to see the mind. The Turing Test famously does this; it looks at 'mind' on the level of another 'mind' communicating with it, which AFAICT is roughly the level at which we humans generally perceive our own minds. That's all well and good if we create an AI we can communicate with, but if we can't, how will we even recognise that we've done it?
(I hate rhetorical questions, so I'm open to answers to that last one. I currently think that it won't always be possible, though.)
no subject
Date: 2011-06-23 07:31 pm (UTC)
Either:
Cognition that we can recognise as human turns out to be dependent on the substrate it's running on being a human mind; when deprived of essential stimulus, such as socialization, the cognition diverges (e.g. goes catatonic, insane, etc.). This might be a special case of your point #1.
Or: we're able either to build systems more intelligent than we are or to understand their workings, but not both. Relying on more than trivially intelligent cognitions that are usefully general (i.e. not just super Jeopardy players) requires unacceptable leaps of faith, either because of prejudices that can't be overcome, or because of reasonable distrust (since we can demonstrate that any cognition sufficiently more intelligent than us to be useful is also capable of understanding how we think on a level deep enough to manipulate us to our own disadvantage). This might be a special case of your point #5.
no subject
Date: 2012-02-22 12:36 am (UTC)
Other possible special cases of "we won't be able to figure out how to design a strongly superhuman AI anytime soon" that I personally find compelling include: