[personal profile] ciphergoth
Dashing off a quick post with some half-formed thoughts just to see what people think. When I refer to the singularity here, I mean it in the intelligence explosion sense. I'm trying to categorize the different ways the singularity could fail to happen; here are the categories I've come up with so far:
  • Human minds might be fundamentally different to other physical things, and not amenable to being reasoned about in engineering terms
  • The idea of one mind being greatly more efficient than another might not be meaningful
  • Human minds might be within a few orders of magnitude of the most efficient minds possible in principle in our Universe
  • ... as above in our corner of the Universe
  • ... as above given the height of our physical knowledge
  • ... as above given the limitations of the height of our manufacturing ability
  • ... as above given the limitations of the height of our design ability
  • We might not continue to study the problem, or the fields relevant to solving it
  • We might hit an existential risk before reaching the heights of our potential
  • We might have the ability to build superintelligent minds, but choose not to do so
  • We might build superintelligent minds and find that they don't make a great difference to the world

What have I missed out?

EDITED TO ADD 15:20: just added "The idea of one mind being greatly more efficient than another might not be meaningful", which is a popular one.

Date: 2011-06-22 10:26 pm (UTC)
From: [identity profile] http://sucs.org/~pwb/wordpress/
You can classify these reasons to some extent. I think the problems I perceived before (and attempted to express on Twitter) are due to the detail making me think too much :) Consider this the rewrite you asked for.

0. The singularity might happen, but not be the earth-shaking event we thought it would be.

I. The singularity might be impossible for humans to cause.
1. The concept of continually self-improving minds might not be coherent.
2. Human minds might already be close to the best minds possible in principle.
3. Self-improving superhuman intelligence might be possible, but not reachable from the human condition. (This summarises several of the reasons you list.)

II. Humans might fail to cause the singularity for contingent reasons.
4. We might not study the problem.
5. We might lose the ability (knowledge or resources) to make superintelligent minds.
6. We might become extinct before it happens. (Subsumed by (4) and (5), but a notable special case.)
7. We might consciously choose not to create superintelligent minds even though we can.
8. We might be able to create superintelligent minds and try, but fail.
9. Superintelligent minds might not improve themselves, even though they can.

Date: 2011-06-23 08:04 am (UTC)
From: [personal profile] djm4
"We might in principle be able to create superintelligent minds and try, repeatedly, but always fail."

It's a tricky one, because we'd probably never know that it was possible, so it would only be apparent to an entity with a hypothetical external perspective. After all, if we consistently fail to do something, in what sense are we 'able' to do it? It's the free will vs determinism discussion all over again.

So it might not count for that reason. But I think it's valid, myself.

Date: 2011-06-23 08:22 am (UTC)
From: [personal profile] djm4
I'm not sure if it's covered by any of your points, but 'we might build superior intelligences, and not realise we've done it/be able to interact with them in any meaningful way'. Of course, that doesn't matter if we're also able to build superior intelligences that we can and do interact with, but I'm considering the possibility that the 'consciousness' of anything with a significantly higher intelligence might operate at a level we can't perceive.

I know you mistrust examples taken from fiction, but there's one that hints at what I'm trying to get at here that I know you've seen. It's the scene in Babylon 5 where G'Kar temporarily picks up an ant, and then asks Catherine Sakai how it might answer another ant who asked 'what was that?'

I don't know 100% what you think consciousness or 'mind' is (although I know we've discussed it a lot), but to me it's an emergent property of the way our brains have evolved to manipulate and organise symbols representing the world around us. Because I see it as an emergent property, I think it would be very hard to detect simply by measuring brain activity; you not only have to look at the brain activity, you have to look at it at the right scale to see the mind. The Turing Test famously does this; it looks at 'mind' on the level of another 'mind' communicating with it, which AFAICT is roughly the level at which we humans generally perceive our own minds. That's all well and good if we create an AI we can communicate with, but if we can't, how will we even recognise that we've done it?

(I hate rhetorical questions, so I'm open to answers to that last one. I currently think that it won't always be possible, though.)

Date: 2011-06-23 07:31 pm (UTC)
From: [personal profile] hythloday
I'm assuming here that it's likely we don't increase our intelligence 'from scratch', but instead attempt self-enhancement of some kind.

Either:

Cognition that we can recognise as human turns out to be dependent on the substrate it's running on being a human mind; when deprived of essential stimulus, such as socialization, the cognition diverges (e.g. goes catatonic, insane, etc). This might be a special case of your point #1.

We're able either to build systems more intelligent than we are or to understand how they work, but not both; relying on more than trivially intelligent cognitions that are usefully general (i.e. not just super Jeopardy players) then requires unacceptable leaps of faith, either because of prejudices that can't be overcome, or because of reasonable distrust (we can demonstrate that any cognition sufficiently more intelligent than us to be useful is also capable of understanding how we think on a level deep enough to manipulate us to our own disadvantage). This might be a special case of your point #5.

Date: 2012-02-22 12:36 am (UTC)
From: [personal profile] zwol
[personal profile] rysmiel has thrown out the possibility that enhancing intelligence might become harder rather than easier as your starting point becomes more intelligent -- so weakly superhuman AI or augmented humans will eventually happen, but we won't get the recursive intellectual runaway that characterizes the classical singularity. I think this is a special case of your "as above given the limitations of the height of our design ability", but I think it's a valuable special case to consider.

Other possible special cases of "we won't be able to figure out how to design a strongly superhuman AI anytime soon" that I personally find compelling include:

  • We might be far more ignorant of the actual mechanism of human intelligence than futurists would like to think. (This one has substantial empirical evidence in its favor: human-equivalent AI has been estimated to be 20+ years in the future by actual researchers in the field since the 1960s, and is still so today.)
  • Human intelligence might be close to a local maximum; strongly superhuman intelligence might require a totally different architecture, which we will have great difficulty conceiving, let alone designing.
  • It might turn out that NP ⊈ P and enhancing intelligence much beyond human capabilities is equivalent to an NP-hard problem (so a strongly superhuman AI cannot operate in real time).
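
Spelled out, the logic of that last point would run roughly as follows, with ENHANCE a purely hypothetical decision problem standing in for "find a mind design substantially smarter than the current one" (the label is illustrative, not anything formally defined):

\[
\mathrm{P} \subseteq \mathrm{NP} \;\Longrightarrow\; \bigl(\mathrm{NP} \not\subseteq \mathrm{P} \iff \mathrm{P} \neq \mathrm{NP}\bigr)
\]
\[
\text{ENHANCE is NP-hard} \;\wedge\; \mathrm{P} \neq \mathrm{NP} \;\Longrightarrow\; \text{ENHANCE} \notin \mathrm{P}
\]

If that were the situation, each round of self-improvement could require time super-polynomial in the size of the mind being improved, which is one way of cashing out "cannot operate in real time".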
