More on AI
Dec. 11th, 2007 09:12 am

I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.
Those of you who say that it won't happen at all may well be right - it may still be much harder than some people guess, and/or a global catastrophe of some sort may put a stop to research. I don't know that I'd call either of these the most likely outcome, but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.
(These ideas are not mine: credit goes to many people, of whom I'll name Vernor Vinge and Eliezer Yudkowsky.)
First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that a neuron in the human brain can do at most about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs at all, we will be able to build, at the very least, machines that think millions of times faster than we do - machines that can put a lifetime's thought into every half hour.
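To make that concrete, here's a back-of-envelope sketch of the arithmetic in Python. The 200-operations-a-second and 2-billion-operations-a-second figures are the ones above; everything else is just unit conversion, and the whole thing is a crude lower bound, not a claim about how real AIs would work.

```python
# Crude speedup arithmetic from the figures in the paragraph above.

NEURON_OPS_PER_SEC = 200            # rough ceiling for a biological neuron
CHIP_OPS_PER_SEC = 2_000_000_000    # ~2 GHz, a current (2007) clock rate

speedup = CHIP_OPS_PER_SEC / NEURON_OPS_PER_SEC   # 10,000,000x

half_hour_sec = 30 * 60
subjective_sec = half_hour_sec * speedup
subjective_years = subjective_sec / (365.25 * 24 * 3600)

print(f"speedup: {speedup:,.0f}x")                                     # 10,000,000x
print(f"thinking time per half hour: ~{subjective_years:,.0f} years")  # ~570
```

By this count, half an hour of wall-clock time holds several subjective centuries, so "a lifetime's thought per half hour" is if anything conservative.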
And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.
In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.
Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?
No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.
no subject
Date: 2007-12-11 10:20 am (UTC)

You gave chess as an example of where computers crap over humans. The people working in the field in the 1960s expected a computer to be world champion by the 1970s. It took until this century to be plausible, and what did it was a) vastly more CPU speed to increase search depths a bit and b) vastly more storage for database lookups.
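To see why even vastly more CPU only increases search depth "a bit": a game tree with branching factor b has about b^d positions at depth d, so an S-times speedup buys only about log_b(S) extra plies. A quick sketch (the branching factor of ~35 for chess is a standard rough estimate, not a measured figure):

```python
import math

# Extra search depth bought by raw speed, for a tree with ~b**d
# positions at depth d: an S-times-faster machine gains ~log_b(S) plies.

BRANCHING_FACTOR = 35   # common rough estimate for chess

for speedup in (10, 1_000, 1_000_000):
    extra_plies = math.log(speedup, BRANCHING_FACTOR)
    print(f"{speedup:>9,}x faster -> ~{extra_plies:.1f} extra plies")
```

Alpha-beta pruning roughly halves the effective exponent, doubling those figures, but the shape of the curve is the same: a million-fold speedup is worth only a handful of plies.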
Even today, if you take away the opening and endgame database tables from the computer players, the human GMs win. With them, computers can survive the opening and play endings with five or fewer pieces perfectly... but to do, say, all seven-piece endings takes more storage than exists. Change the rules even slightly (pawns on b7 can't take on a8) and you'd have to regenerate the endgame tables from scratch.
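A crude sense of why the storage blows up: ignoring piece types entirely, just placing k distinct pieces on 64 squares gives 64 x 63 x ... x (64 - k + 1) arrangements, and that count then multiplies across every material combination. Here's a deliberately naive sketch - real tablebases shrink this with symmetries and legality checks, so treat it as an order-of-magnitude gesture:

```python
# Naive placement count for k distinct pieces on a 64-square board:
# 64 * 63 * ... * (64 - k + 1). Real endgame tables are smaller
# (symmetry, illegal positions) but must also be multiplied across
# hundreds of different material combinations.

def placements(k: int) -> int:
    total = 1
    for i in range(k):
        total *= 64 - i
    return total

for k in (5, 6, 7):
    print(f"{k} pieces: ~{placements(k):,} placements per material combination")
```

Each extra piece multiplies the table by roughly a factor of sixty, which is the combinatorial wall seven-piece endings run into.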
Draughts is simpler, so that's been solved but again largely stupidly (give them the lookup tables and anyone could do it!) and without any explainable insights. Why is that move best in that position? "It just is..."
For Go, they're still at the beginner stage with no practical ideas about how to be a master-level player: the numbers are too big.
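To put rough numbers on "too big", here are the standard textbook game-tree estimates - branching factor ~250 and game length ~150 plies for 19x19 Go, versus ~35 and ~80 for chess. (These are conventional ballpark figures, not anything measured here.)

```python
import math

# Rough game-tree sizes: ~b**d positions for branching factor b
# and typical game length d plies, using standard ballpark estimates.

games = {
    "chess": (35, 80),    # conventional rough estimates
    "go":    (250, 150),  # 19x19 board, also rough estimates
}

for name, (b, d) in games.items():
    exponent = d * math.log10(b)
    print(f"{name}: ~{b}**{d} is about 10^{exponent:.0f}")
```

Brute search that's merely a thousand times faster doesn't dent a 10^360 tree, which is why nobody has a practical route to master-level play yet.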
So even in some "small", narrowly defined domains we're nowhere near yet. A computer Leonardo is quite possibly a century or more off.