Rant I wrote in IM
Feb. 21st, 2012 01:26 pm
I never post, so here's a rant written in IM I want to preserve. Edited somewhat.
here's how it seems to me
there's an argument for the singularity that goes like this
"A, B, C, D, and E all seem likely"
"E says that A + B + C + D = Singularity"
and then people say "No, the singularity is rubbish"
and we say "do you disagree with A, B, C, D, or E?"
and they say "You're all a bunch of wild-eyed dreamers"
and we say "Err, so is that C you disagree with?"
and they say "It's religion for geeks, man!"
and we say "Err, but..."
and ... they just DON'T FUCKING ENGAGE AT ALL.
That's why I keep pointing at http://ciphergoth.dreamwidth.org/357313.html
it takes the contrapositive, and says "If not singularity, then either ¬A or ¬B or ¬C or ¬D or ¬E"
No-one said "oh wait, you forgot F"
but none of ¬A or ¬B or ¬C or ¬D or ¬E got a lot of support.
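in symbols (a minimal sketch of the bare logic, with S standing for "Singularity"):

```latex
% E is itself the claim that the other premises suffice:
E \equiv (A \land B \land C \land D \implies S)
% so given A-E together, modus ponens yields the singularity:
(A \land B \land C \land D \land E) \implies S
% and the contrapositive: no singularity means at least one premise fails:
\lnot S \implies (\lnot A \lor \lnot B \lor \lnot C \lor \lnot D \lor \lnot E)
```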
I am willing to accept that this misrepresents singularity critics horribly - you certainly don't all call us names, for example! But I hope the broad form of my frustration is clear, and if I'm confused I hope this makes it easier for you to clear up my confusion :-)
no subject
Date: 2012-02-21 05:23 pm (UTC)

(The analogy which suggested my original comment was that of a maths teacher faced with a complicated and fiddly 'proof' by a student of a result the teacher knows to be false for some unarguable reason, like having a clear counterexample. The teacher can be confident of the falsity of the result without actually having to find the faulty step in the student's proof – but of course they probably have the annoying job of finding the flaw anyway, and will be aided in this by actually using the counterexample they have in mind and seeing where the proof stops making true statements about it.)
no subject
Date: 2012-02-21 01:53 pm (UTC)

* We might in principle be able to create superintelligent minds and try, repeatedly, but always fail.
* We might build superior intelligences, and not realise we've done it, or not be able to interact with them in any meaningful way.
Both from me (although the first was a modification of an earlier suggestion, based on your comments), neither with a reply from you. I don't know whether this is because you thought I was too trivially missing the point to engage with, but I do feel a bit misrepresented there. If either of us was refusing to engage in that thread - and I'm not saying we were - it visibly wasn't me.
Also, it's possible that none of ¬A or ¬B or ¬C or ¬D or ¬E got a lot of support because you didn't actually ask for any of us to support them. You were explicitly drawing up a list and asking what you missed, not asking for critique on the ones you'd already got on the list.
no subject
Date: 2012-02-21 11:24 pm (UTC)

Personally I go for a hefty dose of global warming with a side order of "AI is really, REALLY hard". I doubt if it's impossible, but I have hopes that before we solve it we may realise what a silly idea it could be.
no subject
Date: 2012-02-21 04:43 pm (UTC)

Is it to be uploads or AI? If the former, capturing brain states that will run in a satisfactory fashion might be very hard (or impossible). If the latter, AI might be very hard; the evidence so far suggests it is. Designing an AI meaningfully cleverer than yourself might be very hard or impossible, in a way that is not tractable to thinking about it for longer.
Climate change and the energy crisis may well put human society in a state where - at the very least - computers stop getting faster.
"Your superior intelligence is no match for our puny weapons"; when it gets started, the real world might recognise what's going on and take extremely drastic steps.
no subject
Date: 2012-02-21 05:19 pm (UTC)

Climate change and such could kill us all, which would definitely put the kibosh on the whole thing. If it doesn't kill us all, it could delay it a great deal; it's not impossible that we could survive but never get back to making progress again, but I don't think that's likely.
"Meaningfully" cleverer than us isn't a strict requirement - if it is "merely" a million times faster than us it can compress the time from Socrates to today into less than a day, and if it's "merely" the equivalent of a billion human-level intelligences each working at this 1E6 accelerated rate, it will have a significant intellectual advantage.
Any given AGI might successfully be stopped by violence; this broadly comes under the heading of humanity deciding not to build one. However, we may eventually - wisely or otherwise, deliberately or otherwise - allow one to run for long enough that it makes a big difference to our future.
no subject
Date: 2012-02-22 06:59 am (UTC)

Engaging the hidden initial assumptions of the world model is a lot trickier than engaging with the argument itself, since that would involve rooting out the implicit world model, figuring out where it gets oversimplified, and working out how to bring in the additional complexity needed to illustrate the problems with the simplistic solution - all of which is really hard work, and not likely to get much help from the interlocutor.
This strikes me as a reasonably good approach to most overreaching first-principles social ideologies, like libertarianism or communism, but these also share the failure mode of being intended to run on top of human society while probably not being prepared to deal with all the messy incidental complexity present in humans. Singularity ideologies are different in that the end result is not intended to run on top of a human society, but they probably still get pattern-matched into the category of too-simple solutions to the very complex problem of human society.
no subject
Date: 2012-02-22 08:04 am (UTC)

I basically think the conversation pattern results from intuiting that a similar critique could be made of the singularity. It's obviously not practical to start composing such a critique during casual discussion, and if the interlocutor doesn't share the intuition, you suspect they might be stuck in some reality-ignoring first-principles mode of thinking and would just ignore a line of argument questioning their premises, so it's more expedient not to engage with them to begin with.