More on AI
Dec. 11th, 2007 09:12 am

I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.
Those of you who say that it won't happen at all may well be right - it may still be much harder than some people guess, and/or global catastrophe of some sort or another may put a stop to research. I don't know if I see either of these as the most likely outcome but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.
(These ideas are not mine: credit goes to many people of whom I'll name Vernor Vinge and Eliezer Yudkowsky)
First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that a neuron in the human brain can do only about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.
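As a back-of-envelope sketch of that claim (the numbers are the rough figures quoted above, not measurements):

```python
NEURON_OPS_PER_SEC = 200              # rough ceiling on neuron firing rate
CHIP_OPS_PER_SEC = 2_000_000_000      # ~2 GHz components, circa 2007

raw_speedup = CHIP_OPS_PER_SEC // NEURON_OPS_PER_SEC   # 10,000,000x

# Even at a conservative millionfold speedup, half an hour of wall-clock
# time buys roughly a human lifetime of subjective thought:
SPEEDUP = 1_000_000
subjective_minutes = 30 * SPEEDUP
subjective_years = subjective_minutes / (60 * 24 * 365)

print(f"{raw_speedup:,}x raw speedup; 30 min = {subjective_years:.0f} subjective years")
```

This is why "a lifetime's thought into every half hour" is not hyperbole: even discounting the raw ratio by a factor of ten, thirty minutes at a millionfold speedup comes out to about 57 subjective years.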
And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.
In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.
Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?
No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.
no subject
Date: 2007-12-11 11:39 am (UTC)

My money is on a neuron being quite complex -- but not magically so; I'd be surprised if you couldn't simulate a single neuron accurately enough for purposes of sustaining a consciousness sim using a few hundred kilobits of data and a few hundred thousand operations per membrane depolarization, and an ensemble of neurons using maybe a handful of extra kilobits and a couple of thousand extra operations per additional neuron over and above the first instance of the class.
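Scaling the commenter's per-neuron figures up to a whole brain gives a rough sense of the hardware bill. The neuron count and average firing rate below are my own order-of-magnitude assumptions, not from the comment:

```python
neurons = 10**11           # assumed human brain neuron count (order of magnitude)
bits_per_neuron = 300_000  # "a few hundred kilobits" of state per neuron
spikes_per_sec = 10        # assumed average firing rate across the brain
ops_per_spike = 300_000    # "a few hundred thousand operations" per depolarization

storage_bytes = neurons * bits_per_neuron / 8
ops_per_second = neurons * spikes_per_sec * ops_per_spike

print(f"~{storage_bytes / 1e15:.2f} PB of state, "
      f"~{ops_per_second / 1e15:.0f} petaops/sec sustained")
```

Under those assumptions the estimate lands around a few petabytes of state and a few hundred petaops per second: enormous by 2007 standards, but nothing that looks physically forbidden.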
As for funding the development of working AI, I keep looking at the US military's obsession with UAVs and autonomous battlefield robots. The problems they aspire to solve may in many cases require human-equivalent intelligence (quick! Is that a discarded beer can or an IED? Is that guy in the crowd a civilian or an insurgent?), and for the time being, these are the guys handing out the pork. And if that's not enough, there's the demographic overshoot that starts to cut in around 2050 if current population trends continue: global population peaks around 10.0 billion humans, half of them with a first world standard of living (whatever that means by then), then begins to slowly fall. The rich countries will run into huge work force problems at that point; robotics -- as the Japanese have noticed -- offers one way out of the deflationary trap (and an answer to who will care for the old folks).
I'm still agnostic on the AI subject, but I think we're in a much better position to frame the question than we were 40 years ago; given another 40 years, I hope to see some answers trickling in.
Incredibly minor digression
Date: 2007-12-11 12:15 pm (UTC)

Mainly I used a different alef to you - ℵ ALEF SYMBOL rather than א HEBREW LETTER ALEF. This means that the browser then knows it can just run all the characters left-to-right as normal, rather than trying to render a single character of Hebrew text in a run of English, invoking the Unicode BiDi algorithm, and confusing everyone in sight.
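The difference between the two characters is visible in their Unicode bidirectional classes; a quick check with Python's standard unicodedata module (my illustration, not from the comment):

```python
import unicodedata

# The two alefs: nearly identical glyphs, different bidi behaviour.
math_alef = "\u2135"    # ALEF SYMBOL - bidi class "L" (left-to-right)
hebrew_alef = "\u05D0"  # HEBREW LETTER ALEF - bidi class "R" (right-to-left)

for ch in (math_alef, hebrew_alef):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: "
          f"bidi class {unicodedata.bidirectional(ch)}")
```

Because ALEF SYMBOL carries bidi class "L", a renderer treats it like any Latin letter and never has to open a right-to-left run, which is exactly the behaviour the commenter describes.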