Ways the singularity could fail to happen
Jun. 22nd, 2011 02:59 pm
Dashing off a quick post with some half-formed thoughts just to see what people think. When I refer to the singularity here, I mean it in the intelligence explosion sense. I'm trying to categorize the different ways the singularity could fail to happen; here are the categories I've come up with so far:
- Human minds might be fundamentally different from other physical things, and not amenable to being reasoned about the way an engineer reasons about other systems
- The idea of one mind being greatly more efficient than another might not be meaningful
- Human minds might be within a few orders of magnitude of the most efficient minds possible in principle in our Universe
- ... as above in our corner of the Universe
- ... as above given the height of our physical knowledge
- ... as above given the limits of our manufacturing ability
- ... as above given the limits of our design ability
- We might not continue to study the problem, or the fields necessary to solve it
- We might hit an existential risk before reaching the heights of our potential
- We might have the ability to build superintelligent minds, but choose not to do so
- We might build superintelligent minds and find that it makes no great difference to the world
What have I missed out?
EDITED TO ADD 15:20: just added "The idea of one mind being greatly more efficient than another might not be meaningful", which is a popular one.
no subject
Date: 2011-06-23 08:22 am (UTC)
I know you mistrust examples taken from fiction, but there's one that hints at what I'm trying to get at here that I know you've seen. It's the scene in Babylon 5 where G'Kar temporarily picks up an ant, and then asks Catherine Sakai how it might answer another ant who asked 'what was that?'
I don't know 100% what you think consciousness or 'mind' is (although I know we've discussed it a lot), but to me it's an emergent property of the way our brains have evolved to manipulate and organise symbols representing the world around us. Because I see it as an emergent property, I think it would be very hard to detect simply by measuring brain activity; you not only have to look at the brain activity, you have to look at it at the right scale to see the mind. The Turing Test famously does this; it looks at 'mind' on the level of another 'mind' communicating with it, which AFAICT is roughly the level at which we humans generally perceive our own minds. That's all well and good if we create an AI we can communicate with, but if we can't, how will we even recognise that we've done it?
(I hate rhetorical questions, so I'm open to answers to that last one. I currently think that it won't always be possible, though.)