ciphergoth: (Default)
[personal profile] ciphergoth
I never post, so here's a rant written in IM I want to preserve. Edited somewhat.

here's how it seems to me
there's an argument for the singularity that goes like this
"A, B, C, D, and E all seem likely"
"E says that A + B + C + D = Singularity"
and then people say "No, the singularity is rubbish"
and we say "do you disagree with A, B, C, D, or E?"
and they say "You're all a bunch of wild-eyed dreamers"
and we say "Err, so is that C you disagree with?"
and they say "It's religion for geeks, man!"
and we say "Err, but..."
and ... they just DON'T FUCKING ENGAGE AT ALL.
That's why I keep pointing at that post:
it takes the contrapositive, and says "If not singularity, then either ¬A or ¬B or ¬C or ¬D or ¬E"
No-one said "oh wait, you forgot F"
but none of ¬A or ¬B or ¬C or ¬D or ¬E got a lot of support.

I am willing to accept that this misrepresents singularity critics horribly - you certainly don't all call us names for example! But I hope the broad form of my frustration is clear and if I'm confused I hope it makes it easier for you to clear up my confusion :-)

Date: 2012-02-21 01:43 pm (UTC)
simont: (Default)
From: [personal profile] simont
In fairness, it isn't actually a logically incoherent position for someone to think "I have reasons unrelated to A,B,C,D,E for thinking the singularity is disproved or at least very unlikely. Supposing I'm right, logic says that at least one of A,B,C,D,E must also be wrong in spite of all of them looking plausible, but I haven't worked out which yet."

Date: 2012-02-21 05:23 pm (UTC)
simont: (Default)
From: [personal profile] simont
Indeed. And you'd also hope that the argument and counterargument can be played off against each other to find the place where (the latter claims that) the former fails.

(The analogy which suggested my original comment was that of a maths teacher faced with a complicated and fiddly 'proof' by a student of a result the teacher knows to be false for some unarguable reason like having a clear counterexample. The teacher can be confident of the falsity of the result without actually having to find the faulty step in the student's proof – but of course they probably have the annoying job of finding the flaw anyway, and will be aided in this by actually using the counterexample they have in mind and seeing where the proof stops making true statements about it.)

Date: 2012-03-30 09:32 am (UTC)
reddragdiva: (Default)
From: [personal profile] reddragdiva
Yes: it's easy to make bad arguments that are hard to take apart well enough to convince the person who made the bad argument. (I'm working on this Gish gallop this week, for example. I'm only bothering because there are ex-YECs who have said the reply article would have actually helped them.)

Date: 2012-02-21 01:53 pm (UTC)
djm4: (Default)
From: [personal profile] djm4
In that post you link to, you got at least two 'what about F?'s:

* We might in principle be able to create superintelligent minds and try, repeatedly, but always fail.
* We might build superior intelligences, and not realise we've done it/be able to interact with them in any meaningful way.

Both from me (although the first was a modification of an earlier suggestion, based on your comments), neither with a reply from you. I don't know whether this is because you thought I was too trivially missing the point to engage with, but I do feel a bit misrepresented there. If either of us was refusing to engage in that thread - and I'm not saying we were - it visibly wasn't me.

Also, it's possible that none of ¬A or ¬B or ¬C or ¬D or ¬E got a lot of support because you didn't actually ask for any of us to support them. You were explicitly drawing up a list and asking what you missed, not asking for critique on the ones you'd already got on the list.

Date: 2012-02-21 11:24 pm (UTC)
purplerabbits: (Default)
From: [personal profile] purplerabbits
What David said, especially the latter point. Perhaps if you wanted people to vote for one you should have provided ticky boxes :-)

Personally I go for a hefty dose of global warming with a side order of AI is really REALLY hard. I doubt if it's impossible, but I have hopes that before we solve it we may realise what a silly idea it could be.

Date: 2012-02-21 04:43 pm (UTC)
damerell: (brains)
From: [personal profile] damerell
I remember writing some of these down, but:

Is it to be uploads or AI? If the former, capturing brain states that will run in a satisfactory fashion might be very hard (or impossible). If the latter, AI might be very hard; evidence so far suggests it is. Designing an AI meaningfully cleverer than yourself might be very hard or impossible, in a way that isn't made tractable by thinking about it for longer.

Climate change and the energy crisis may well put human society in a state where - at the very least - computers stop getting faster.

"Your superior intelligence is no match for our puny weapons"; when it gets started, the real world might recognise what's going on and take extremely drastic steps.

Date: 2012-02-22 06:59 am (UTC)
From: [personal profile] rsaarelm
I see this pattern a lot when the argument is for libertarianism or objectivism instead of the singularity, for example. I figure the implicit reasoning is something along these lines: the problem, coming up with a great way to run all human societies or an excellent all-encompassing philosophy, appears most likely to be both very hard and very complex, yet the advocates propose a very simple-sounding solution. Their reasoning might well be sound starting from the initial model of reality they picked up, but the proposed solution sounds so all-encompassing and simplistic that there is most likely something wrong with the initial assumptions.

Engaging the hidden initial assumptions of the world model is a lot trickier than engaging with the argument, since it involves rooting out the implicit world model, figuring out where it's getting oversimplified, and working out how to bring in the necessary additional complexity to illustrate the problems with the simplistic solution, all of which is really hard work and not likely to get much help from the interlocutor.

This strikes me as a reasonably good approach to most overreaching first-principles social ideologies like libertarianism or communism, but these also have the shared failure mode of being intended to run on top of human society and probably not being prepared to deal with all the messy incidental complexity present in humans. Singularity ideologies are different in that the end result is not intended to run on top of a human society, but they probably still get pattern-matched into the category of too-simple solutions to the very-complex problem of human society.

Date: 2012-02-22 08:04 am (UTC)
From: [personal profile] rsaarelm
Yes, that's part of what I was thinking. It looks like it was a lot of work to come up with, and it does a lot of its work by not just engaging with the argument, but instead going out and digging up stuff about the presuppositions and the actual observed effects of the proposed stuff.

I basically think the conversation pattern results from intuiting that a similar critique could be made for singularity. It's obviously not practical to start composing the thing during casual discussion, and if the interlocutor doesn't share the intuition, you suspect they might be stuck in some reality-ignoring first principles mode of thinking and just ignore the line of argument questioning their premises, so it's more expedient to just not engage with them to begin with.

