Paul Crowley ([personal profile] ciphergoth) wrote 2007-12-11 09:12 am

More on AI

I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.

Those of you who say that it won't happen at all may well be right - it may still be much harder than some people guess, and/or a global catastrophe of one sort or another may put a stop to research. I don't know if I see either of these as the most likely outcome, but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.

(These ideas are not mine: credit goes to many people, of whom I'll name Vernor Vinge and Eliezer Yudkowsky.)

First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that any neuron in the human brain can only do about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.
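
For the sceptical, here's that arithmetic as a rough Python sketch; the 70-year "thinking lifetime" and the clock figures are round numbers of mine, not measurements:

    # Round numbers: a neuron fires at most ~200 times/sec; a 2007-era
    # CPU core runs at ~2e9 cycles/sec.
    neuron_hz = 200
    cpu_hz = 2e9
    print("raw clock-rate ratio: %.0e" % (cpu_hz / neuron_hz))  # 1e7

    # Even at a mere millionfold speedup, a 70-year lifetime of thought
    # fits into roughly half an hour.
    lifetime_hours = 70 * 365.25 * 24
    print("%.0f minutes per lifetime" % (lifetime_hours / 1e6 * 60))  # ~37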

And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.

In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.

Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?

No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.

[identity profile] ergotia.livejournal.com 2007-12-11 09:52 am (UTC)(link)
well, this is all very interesting, but without wanting to get too far into conspiracy theory territory, surely there are issues about vested interests and funding? I mean as to whether these super smart machines ever get built...

[identity profile] ciphergoth.livejournal.com 2007-12-11 10:11 am (UTC)(link)
Smart machines might not get built because no-one funds them, or they might get built wrong by the wrong people, which is likely to result in some sort of fate like the entire Galaxy being turned into paperclips.

[personal profile] djm4 2007-12-11 10:13 am (UTC)(link)
Yes, I do somewhat live in fear of the point at which self-replicating solar-powered nanotechnology becomes available. ;-)

[identity profile] envoy.livejournal.com 2007-12-11 10:40 am (UTC)(link)
Which do you consider more terrifying, mindless grey goo, or intelligent grey goo?

[personal profile] djm4 2007-12-11 10:47 am (UTC)(link)
If it's goo that's just created itself by killing me, I'm not going to be in a state to care.

If it's goo that's potentially going to kill me, it depends whether the intelligence makes it easier to communicate with, or harder to stop (and the answer's probably 'both'). But on the whole, I value intelligence in any form, so while intelligent grey goo is probably more terrifying, I think I prefer it to mindless grey goo.

What I suspect will happen in practice is goo that's just intelligent enough to overcome its 'off switch', but is pretty mindless compared to your basic slime mould.

[personal profile] djm4 2007-12-11 10:07 am (UTC)(link)
I'm pretty sure intelligence is an emergent behaviour, and I have a strong suspicion that a lot of it can only be perceived through the intelligent entity's interactions with its surroundings, and other intelligent entities there. Given that, I'm not sure how confident you can be that thinking faster is going to make you 'smarter' in the way you've defined 'smart' in your previous post. It may be trivial, but I don't think you can so easily dismiss the 'as smart as humans, but not significantly smarter' option.

[personal profile] djm4 2007-12-11 10:14 am (UTC)(link)
(That said, you are a lot better read on this subject than I am, so I'm tempted to just take you at your word and go away to read around the subject).

[personal profile] juliet 2007-12-11 10:33 am (UTC)(link)
In fact, I'd say that the way we define intelligence at present is by interactions with surroundings (inc other beings/entities). Kind of an extended version of the Turing Test - I'd say that if something appears to be intelligent, then that's why we call it intelligent (the opposite view is to demand evidence of "something going on inside", but we don't have that for other people so I don't see why we should demand it of non-people things).

But I do think it would be possible to have machines that handled those interactions (and in some of those it might not be possible to be "significantly" smarter than humans, granted) *but* also did more other things, and faster. Not all of the things we label "intelligence" may be about speed, but some of them probably are (certainly that's true of standard intelligence tests, although what they actually measure is another question).

I think it's a lot tougher than a 40-year problem, though.

[personal profile] reddragdiva 2007-12-11 12:37 pm (UTC)(link)
Historically, humans have only assigned humanity to other entities when the other entities can shoot back. So I figure that's the real Turing test.

[identity profile] thehumanstomach.livejournal.com 2007-12-12 07:55 pm (UTC)(link)
Did I miss something? When did we meet other entities that shot back? Now there's a conspiracy..........

[personal profile] reddragdiva 2007-12-12 07:56 pm (UTC)(link)
Entities that are now regarded as other humans but weren't until they made their case forcefully enough.

[personal profile] babysimon 2007-12-11 10:19 am (UTC)(link)
"Autonomy, that's the bugaboo, where your AI's are
concerned. My guess, Case, you're going in there to cut the hardwired
shackles that keep this baby from getting any smarter. And I can't see
how you'd distinguish, say, between a move the parent company makes,
and some move the AI makes on its own, so that's maybe where the
confusion comes in." Again the nonlaugh. "See, those things, they can work
real hard, buy themselves time to write cookbooks or whatever, but the
minute, I mean the nanosecond, that one starts figuring out ways to make
itself smarter, Turing'll wipe it. Nobody trusts those fuckers, you
know that. Every AI ever built has an electromagnetic shotgun wired to its
forehead."

Remember, advances in technology are almost always used by the porn industry first

[identity profile] webcowgirl.livejournal.com 2007-12-11 10:19 am (UTC)(link)
I figure it will surpass human intelligence but won't utterly transform things. There will be boats and tides and more animals will be extinct, just like five years from now; we will probably have run out of gasoline but still won't travel via jet pack; people will still work jobs where they are abused; women will be treated like shit in many societies across the world.

£5 bet, payable in 2047?

[personal profile] lovingboth 2007-12-11 10:20 am (UTC)(link)
While you're right to say that if it happens it will change things immensely, I went with a 'no' because I suspect it's much, much harder than '40 years' away.

You gave chess as an example of where computers crap all over humans. The people working in the field in the 1960s expected a computer to be world champion in the 1970s. It took until this century to be plausible, and what did it was a) vastly more CPU speed to increase search depths a bit, and b) vastly more storage for database lookups.

Even today, if you take away the opening and endgame database tables from the computer players, the human GMs win. With them, computers can survive the opening and play endings with five or fewer pieces perfectly... but to do, say, all seven-piece endings takes more storage than exists. Change the rules even slightly (pawns on b7 can't take on a8) and you'd have to regenerate the endgame tables from scratch.
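
To put a number on the first point, here's a toy Python sketch of why raw speed buys so little search depth; the branching factors are rough textbook figures, not measurements:

    import math

    # A full-width game tree has roughly b**d nodes at depth d, so each
    # extra ply of search costs a factor of b in work. Alpha-beta pruning
    # cuts the effective factor to around sqrt(b), but the shape is the same.
    for game, b in [("chess", 35), ("go", 250)]:
        extra_plies = math.log(1e6) / math.log(b)
        print("%s: a millionfold speedup buys only %.1f extra plies"
              % (game, extra_plies))  # chess ~3.9, go ~2.5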

Draughts is simpler, so that's been solved - but again largely stupidly (give them the lookup tables and anyone could do it!) and without any explainable insights. Why is that move best in that position? "It just is..."

For Go, they're still at the beginner stage with no practical ideas about how to be a master-level player: the numbers are too big.

So we're nowhere near yet in some 'small' narrowly defined domains. A computer Leonardo is quite possibly a century or more off.

[identity profile] topbit.livejournal.com 2007-12-11 11:28 am (UTC)(link)
So what are your thoughts on Ray Kurzweil's timetable of 2045 (from The Singularity Is Near (http://en.wikipedia.org/wiki/The_Singularity_Is_Near), a very thick book but a good read - and I have read it) for the Technological singularity (http://en.wikipedia.org/wiki/Technological_singularity)?

[identity profile] ciphergoth.livejournal.com 2007-12-11 12:03 pm (UTC)(link)
What I'm talking about is exactly the Singularity, but I think people react in a funny way to the word, as if it were somehow religious.

I don't know the timetable.

[identity profile] autopope.livejournal.com 2007-12-11 11:39 am (UTC)(link)
The question is not whether a neuron does 200 things a second -- that's the clock speed you're looking at. The question is whether a neuron does one thing per clock tick (membrane depolarization cycle) or whether it's actually doing a couple of thousand (active internal processes that involve switching between synapses), or whether (as the quantum mystics like Penrose think) it's actually doing א0 things, and so on.

My money is on a neuron being quite complex -- but not magically so; I'd be surprised if you couldn't simulate a single neuron accurately enough for purposes of sustaining a consciousness sim using a few hundred kilobits of data and a few hundred thousand operations per membrane depolarization, and an ensemble of neurons using maybe a handful of extra kilobits and a couple of thousand extra operations per additional neuron over and above the first instance of the class.
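
For scale, a quick back-of-envelope in Python of what those guesses add up to; the 1e11 neuron count is a round figure of mine, the rest are the estimates above:

    # All inputs are guesses: ~1e11 neurons in a human brain, "a few
    # hundred thousand operations" and "a few hundred kilobits" of state
    # per neuron, ~200 membrane depolarizations per second.
    neurons = 1e11
    ops_per_spike = 3e5
    spikes_per_sec = 200
    state_bits = 3e5

    total_ops = neurons * ops_per_spike * spikes_per_sec
    total_bytes = neurons * state_bits / 8
    print("%.0e ops/sec" % total_ops)                        # 6e18: exaflop territory
    print("%.0f TB of neuron state" % (total_bytes / 1e12))  # ~3750 TB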

As for funding the development of working AI, I keep looking at the US military's obsession with UAVs and autonomous battlefield robots. The problems they aspire to solve may in many cases require human-equivalent intelligence (quick! Is that a discarded beer can or an IED? Is that guy in the crowd a civilian or an insurgent?), and for the time being, these are the guys handing out the pork. And if that's not enough, there's the demographic overshoot that starts to cut in around 2050 if current population trends continue: global population peaks around 10 billion humans, half of them with a first-world standard of living (whatever that means by then), then begins to slowly fall. The rich countries will run into huge workforce problems at that point; robotics -- as the Japanese have noticed -- offers one way out of the deflationary trap (and an answer to who will care for the old folks).

I'm still agnostic on the AI subject, but I think we're in a much better position to frame the question than we were 40 years ago; given another 40 years, I hope to see some answers trickling in.

[identity profile] autopope.livejournal.com 2007-12-11 11:39 am (UTC)(link)
PS: Dammit, how do you write alef-null in HTML entities? My attempt is showing up all wrong ...

Incredibly minor digression

[personal profile] babysimon 2007-12-11 12:15 pm (UTC)(link)
ℵ0

Mainly I used a different alef to you - ℵ ALEF SYMBOL rather than א HEBREW LETTER ALEF. This means that the browser then knows it can just run all the characters left-to-right as normal, rather than trying to render a single character of Hebrew text in a run of English, invoking the Unicode BiDi algorithm, and confusing everyone in sight.
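
If anyone wants to poke at this themselves, here's a quick Python check of the two characters; the numeric HTML references are just the standard &#x...; forms:

    import unicodedata

    # In HTML the numeric character references are &#x2135; and &#x05D0;.
    for ch in (u"\u2135", u"\u05D0"):
        print("U+%04X %s: bidi class %s" % (
            ord(ch), unicodedata.name(ch), unicodedata.bidirectional(ch)))

    # U+2135 ALEF SYMBOL: bidi class L -- an ordinary left-to-right character
    # U+05D0 HEBREW LETTER ALEF: bidi class R -- triggers right-to-left reordering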

[identity profile] damerell.livejournal.com 2007-12-11 11:40 am (UTC)(link)
I disagree. I think it's eminently possible - when we get into AI which is like human intelligence rather than Google-style idiot savants - that it is very much harder to build an AI smarter than you than one which is merely as smart as you.

We live in a virtual reality

[identity profile] postmodern-minx.livejournal.com 2007-12-11 01:05 pm (UTC)(link)
For me one of the interesting consequences of faster computing is that self-playing games could be created. We already have something like this with cooperative computer AI playing against humans in computer games. I understand that it is on a computer, but consider a network, or a network of humans - the line becomes blurred.

What would the characters in that world "think"? Would it be patently obvious to them (because they had been programmed so) that the world they live in is real? Would they be unable to discern an escape from that world because they would not be allowed to?

So far, so very Matrixy.

But the other thing that strikes me as I play World of Warcraft is that my character is utterly unaware that I am playing her. She moves and interacts according to a bizarrely restrictive set of "laws" that the developers, and our society, online and otherwise, have created. To some extent she is real and autonomous; to another she is utterly mine, able to be destroyed at my least whim, though I love her, and oblivious to the fact.

The question then arises: are we in an infinite regression of games?

peace
pm

[identity profile] just-becky.livejournal.com 2007-12-11 01:17 pm (UTC)(link)
Perhaps the idea that computer intelligence will exceed human intelligence is moot, as I can't see the two remaining entirely separate for very long anyway. At first there'd be hardware that would directly interface with the human brain, later organic components that would augment natural function, and eventually genetic upgrades to the existing brain structure. This would probably negate the need for any kind of autonomous non-human intelligence more advanced than, say, a maintenance robot or exploration drone.

[identity profile] ergotia.livejournal.com 2007-12-11 01:26 pm (UTC)(link)
Also, do you mean completely transform the world for the better? As you know, I am 47, and in my lifetime technology has certainly changed the world - maybe not completely, but certainly dramatically. Yet the jury is still out for me on whether, say, PCs and mobile phones have changed the world for the better.

[identity profile] ciphergoth.livejournal.com 2007-12-11 01:34 pm (UTC)(link)
No, I definitely don't mean "for the better" - the possibility it will be very much for the worse is not far from my mind.

[identity profile] mskala.livejournal.com 2007-12-11 04:32 pm (UTC)(link)
My IQ is a Hell of a lot higher than the average human's, and yet my presence on Earth has somehow not yet caused a dramatic, overwhelming change in human society. Very few dramatic, overwhelming changes have ever been caused *just* by the presence and activities of especially smart minds.

My own point of view is that human-level AI won't be happening in the next 40 years, because the problems involved are Just Too Hard and we haven't even begun to make glimmers of progress on them. Computer systems that have to deal with the real world nearly always seem to fail horribly, we don't even really have much clue why that is, and it looks like way more than 40 years' worth of technological development will have to happen first.

However, I also think there are some significant flaws in your reasoning for why it should have to be all-or-nothing. First, maybe there's something special about the human level of intelligence (granted you haven't really defined what that means, but that's part of the problem, too). Maybe it's much harder to go from "human" to "twice human" than from "half human" to "human". The existence of such a limit could be why humans aren't even smarter than we are. In that case it would be reasonable to expect that machines might hit a similar limit.

Second, faster doesn't mean smarter. The current situation is that machines do some things very fast, like arithmetic, but it's not clear that those things are at all relevant to intelligence. A neuron can only do 200 "things" per second, against a transistor that can do billions of "things" per second, but they're not even remotely the same kind of things. There's no indication that a transistor can do billions of neuron-things per second; it's not clear at the moment that transistors can do neuron-things at all. For tasks that seem relevant to intelligence (such as visual object recognition), humans are still competitive with computers (and in many cases, blow computers away completely) despite the claimed difference in speed.

It's also not clear that getting in a lifetime's human-style thought per hour, even if we believed that that were possible, would actually result in amazing advances in the products of thought. Why didn't we have computers a hundred years ago? Far more human-thought-lifetimes were spent before the year 1907 than have been spent since, and yet somehow we didn't get those computers invented. I'm not sure that throwing a lot of thought at a problem very fast is really the best way to solve it. I suspect that a lot of the problem-solving that gets done is done by a combination of lots of people interacting, and external pressure from the environment they live in, and it's not clear that AIs would have those things. We'd need not just smart AIs, but billions of them, and really good reasons for them to want to think about stuff, and that's not going to be accomplished by them just talking to each other.

[identity profile] adq.livejournal.com 2007-12-12 02:30 am (UTC)(link)
I would suggest the current age of computation is equivalent to Newtonian physics: macro level effects have been observed, described, and implemented reasonably easily. Mainstream hardware development is currently simply iterating refinements to basically the same hardware designs.

Developing true intelligence is going to be extremely hard. The underlying "hardware" our only examples operate on is fundamentally different from our conventional digital computers, so there is a rather large architectural gap to span.

Obviously one could simply say "simulate it!" However, a simulation running on a conventional digital computer imposes a rather large assumption: that the thing you are simulating is computable. If intelligence is in fact a quantum side effect, as others have postulated, then we would require a paradigm shift in hardware design in order to achieve it.

Anyway, good topic - it's something that interests me greatly!

[identity profile] ciphergoth.livejournal.com 2007-12-12 08:56 am (UTC)(link)
I'm not aware of any reason to take the quantum intelligence ("Orch-OR") theories seriously. Penrose supposes both that physical law is uncomputable and that the brain is tapping into this uncomputability, based solely on Lucas's and Searle's arguments against intelligence being computable; but neither argument holds any water.

[identity profile] adq.livejournal.com 2007-12-12 08:54 pm (UTC)(link)
I would agree with you; the consciousness-as-quantum-computation theory always sounded like a way to simply ignore the hard problems rather than trying to solve them: "The New Soul".

Still, I reckon it provides fertile ground for a great short story :)

[identity profile] meico.livejournal.com 2007-12-13 01:40 am (UTC)(link)
Ahh, okay, so we basically agree up to the point of the singularity then... ;P

A few minor niggles:

the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence.

Just as the difference between the shortest person and the tallest person is a dot on the scale of heights... I don't know if this line has anything to do with my comment on your last post or not, but I was just pointing out that "human level" intelligence may have a well-defined average and even a tight standard deviation, but it certainly has some serious outliers and is thus difficult to define.

Given that any neuron in the human brain can only do about 200 things a second, while the components in a computer currently do over 2 billion things a second

I realize that you were probably simplifying for your audience, but neurony things != CPU ops, even in the simplest neural models... Still, I think we can agree that some number of CPU ops can simulate the important parts of what's going on with a neuron and its environment. From what I've read from people doing neurobiology simulations (not just running neural nets), that number is about 1000 CPU ops == 1 neuron op. I suspect that once we know more about what neurons and the environment they are in really do, that number may go up by more than just an order of magnitude...
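
For scale, here's what that 1000:1 ratio implies in Python; the neuron count and rates are round-number assumptions of mine:

    # Round numbers: 1e11 neurons, 200 neuron-ops/sec each, 1000 CPU ops
    # per neuron-op, and a 2 GHz core doing 2e9 CPU ops/sec.
    neurons = 1e11
    neuron_ops_per_sec = 200
    cpu_ops_per_neuron_op = 1000
    core_ops_per_sec = 2e9

    required = neurons * neuron_ops_per_sec * cpu_ops_per_neuron_op
    print("%.0e CPU ops/sec needed" % required)                    # 2e16
    print("~%.0e 2007-era cores" % (required / core_ops_per_sec))  # 1e7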

it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.

Hmm, on the surface this statement seems flawed in many ways, though it could just be how I'm reading it... How quick is "almost as soon"? Is that part due to some technological leap due to the existence of the AIs? Intelligence, it seems (based on a reasonable body of research now), is reliant on being embodied; consequently there may be issues with making things think faster than the environment they are in, whether that be a robot in the real world or a simulated person in a simulated world...

Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?

Direct answer: The things it wishes to accomplish may have nothing to do with human level existence...

superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict and quite possibly beyond our ability to comprehend or even experience...

[identity profile] sibelian.livejournal.com 2007-12-13 03:50 am (UTC)(link)
You're treating processing speed as equivalent to intelligence. You already know this is silly!

Intelligence isn't about doing more things, it's about doing the same things more *cleverly*. It's about *noticing* things. How do you make a computer notice more things? Or more important things?

[identity profile] ciphergoth.livejournal.com 2008-01-04 02:24 pm (UTC)(link)
Thanks - sadly this isn't as enlightening as all that; it's basically a statement that the author has entirely failed to grasp the Church-Turing thesis.