ciphergoth: (Default)
[personal profile] ciphergoth
Dashing off a quick post with some half-formed thoughts just to see what people think. When I refer to the singularity here, I mean it in the intelligence-explosion sense. I'm trying to categorize the different ways the singularity could fail to happen; here are the categories I've come up with so far:
  • Human minds might be fundamentally different from other physical things, and not amenable to being reasoned about like an engineering problem
  • The idea of one mind being greatly more efficient than another might not be meaningful
  • Human minds might be within a few orders of magnitude of the most efficient minds possible in principle in our Universe
  • ... as above in our corner of the Universe
  • ... as above given the height of our physical knowledge
  • ... as above given the height of our manufacturing ability
  • ... as above given the height of our design ability
  • We might not continue to study the problem, or the fields necessary to solve it
  • We might hit an existential risk before reaching the heights of our potential
  • We might have the ability to build superintelligent minds, but choose not to do so
  • We might build superintelligent minds, and it might not make a great difference to the world

What have I missed out?

EDITED TO ADD 15:20: just added "The idea of one mind being greatly more efficient than another might not be meaningful", which is a popular one.

Date: 2011-06-23 08:22 am (UTC)
djm4: (Default)
From: [personal profile] djm4
I'm not sure if it's covered by any of your points, but: 'we might build superior intelligences, and not realise we've done it, or not be able to interact with them in any meaningful way'. Of course, that doesn't matter if we're also able to build superior intelligences that we can and do interact with, but I'm considering the possibility that the 'consciousness' of anything with a significantly higher intelligence might operate at a level we can't perceive.

I know you mistrust examples taken from fiction, but there's one that hints at what I'm trying to get at here that I know you've seen. It's the scene in Babylon 5 where G'Kar temporarily picks up an ant, and then asks Catherine Sakai how it might answer another ant who asked 'what was that?'

I don't know 100% what you think consciousness or 'mind' is (although I know we've discussed it a lot), but to me it's an emergent property of the way our brains have evolved to manipulate and organise symbols representing the world around us. Because I see it as an emergent property, I think it would be very hard to detect simply by measuring brain activity; you not only have to look at the brain activity, you have to look at it at the right scale to see 'mind'. The Turing Test famously does this: it looks at 'mind' on the level of another 'mind' communicating with it, which AFAICT is roughly the level at which we humans generally perceive our own minds. That's all well and good if we create an AI we can communicate with, but if we can't, how will we even recognise that we've done it?

(I hate rhetorical questions, so I'm open to answers to that last one. I currently think that it won't always be possible, though.)

Paul Crowley