Paul Crowley (ciphergoth) wrote, 2007-12-11 09:12 am
More on AI
I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.
Those of you who say that it won't happen at all may well be right - it may still be much harder than some people guess, and/or global catastrophe of some sort or another may put a stop to research. I don't know if I see either of these as the most likely outcome but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.
(These ideas are not mine: credit goes to many people, of whom I'll name Vernor Vinge and Eliezer Yudkowsky.)
First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that any neuron in the human brain can only do about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.
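(To see roughly where "every half hour" comes from, here is a back-of-envelope sketch in Python; the 60-year thinking lifetime and the million-fold overall speedup are illustrative assumptions, not claims from the post.)

neuron_ops_per_sec = 200              # "about 200 things a second" per neuron
component_ops_per_sec = 2_000_000_000 # "over 2 billion things a second"

# Raw per-component speed ratio: ten million to one.
print(component_ops_per_sec / neuron_ops_per_sec)

# Assume the machine mind as a whole only manages a million-fold speedup.
speedup = 1_000_000
lifetime_years = 60                   # assumed span of a thinking lifetime
lifetime_minutes = lifetime_years * 365.25 * 24 * 60

# A lifetime of thought, compressed: roughly half an hour of wall-clock time.
print(round(lifetime_minutes / speedup))   # ~32 minutes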
And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.
In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.
Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?
No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.
If it's goo that's potentially going to kill me, it depends whether the intelligence makes it easier to communicate with, or harder to stop (and the answer's probably 'both'). But on the whole, I value intelligence in any form, so while intelligent grey goo is probably more terrifying, I think I prefer it to mindless grey goo.
What I suspect will happen in practice is goo that's just intelligent enough to overcome its 'off switch', but is pretty mindless compared to your basic slime mould.
But I do think it would be possible to have machines that handled those interactions (and in some of those, granted, it might not be possible to be "significantly" smarter than humans) *but* also did more other things, and faster. Not all of the things we label "intelligence" may be about speed, but some of them probably are (certainly that's true of standard intelligence tests, although what they actually measure is another question).
I think it's a lot tougher than a 40-year problem, though.
concerned. My guess, Case, you're going in there to cut the hardwired
shackles that keep this baby from getting any smarter. And I can't see
how you'd distinguish, say, between a move the parent company makes,
and some move the AI makes on its own, so that's maybe where the
confusion comes in." Again the nonlaugh. "See, those things, they can work
real hard, buy themselves time to write cookbooks or whatever, but the
minute, I mean the nanosecond, that one starts figuring out ways to make
itself smarter, Turing'll wipe it. Nobody trusts those fuckers, you
know that. Every AI ever built has an electromagnetic shotgun wired to its
forehead."
Remember, advances in technology are almost always used by the porn industry first
£5 bet, payable in 2047?
You gave chess as an example of where computers crap over humans. The people working in the field in the 1960s expected computers to be world champion by the 1970s. It took until this century to be plausible, and what did it was a) vastly more CPU speed to increase search depths a bit and b) vastly more storage for database lookups.
Even today, if you take away the opening and endgame database tables from the computer players, the human GMs win. With them, computers can survive the opening and play endings with five or fewer pieces perfectly... but to do, say, all seven-piece endings would take more storage than exists. Change the rules even slightly (pawns on b7 can't take on a8) and you'd have to regenerate the endgame tables from scratch.
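(To put rough numbers on the storage claim, a naive Python sketch of an upper bound on tablebase sizes, ignoring symmetry, illegal positions and compression; the ten piece types and one byte per position are illustrative assumptions, so real tables come out far smaller.)

from math import perm

def naive_positions(pieces):
    # Two kings plus (pieces - 2) other men; allow each non-king
    # to be any of 10 piece types (5 types x 2 colours).
    squares = perm(64, pieces)       # ordered placements on distinct squares
    types = 10 ** (pieces - 2)
    return squares * types

for n in (5, 6, 7):
    positions = naive_positions(n)
    # Assume roughly one byte of table per position.
    print(f"{n} pieces: ~{positions:.1e} positions, ~{positions / 1e12:,.0f} TB")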
Draughts is simpler, so that's been solved but again largely stupidly (give them the lookup tables and anyone could do it!) and without any explainable insights. Why is that move best in that position? "It just is..."
For Go, they're still at the beginner stage with no practical ideas about how to be a master-level player: the numbers are too big.
So we're nowhere near yet in some 'small' narrowly defined domains. A computer Leonardo is quite possibly a century or more off.
I don't know the timetable.
My money is on a neuron being quite complex -- but not magically so; I'd be surprised if you couldn't simulate a single neuron accurately enough for purposes of sustaining a consciousness sim using a few hundred kilobits of data and a few hundred thousand operations per membrane depolarization, and an ensemble of neurons using maybe a handful of extra kilobits and a couple of thousand extra operations per additional neuron over and above the first instance of the class.
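(Scaling those per-neuron figures up to a whole brain gives a feel for the totals; in the sketch below the 10^11 neuron count and the ~1 Hz mean firing rate are added assumptions, not the commenter's.)

neurons = 1e11               # assumed ~100 billion neurons
state_bits_per_neuron = 3e5  # "a few hundred kilobits of data"
ops_per_spike = 3e5          # "a few hundred thousand operations" per depolarization
mean_firing_rate_hz = 1.0    # assumed brain-wide average spike rate

total_state_petabytes = neurons * state_bits_per_neuron / 8 / 1e15
total_ops_per_sec = neurons * ops_per_spike * mean_firing_rate_hz

print(f"state: ~{total_state_petabytes:.1f} PB")   # a few petabytes
print(f"compute: ~{total_ops_per_sec:.0e} ops/s")  # tens of petaops per second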
As for funding the development of working AI, I keep looking at the US military's obsession with UAVs and autonomous battlefield robots. The problems they aspire to solve may in many cases require human-equivalent intelligence (quick! Is that a discarded beer can or an IED? Is that guy in the crowd a civilian or an insurgent?), and for the time being, these are the guys handing out the pork. And if that's not enough, there's the demographic overshoot that starts to cut in around 2050 if current population trends continue: global population peaks around 10.0 billion humans, half of them with a first world standard of living (whatever that means by then), then begins to slowly fall. The rich countries will run into huge workforce problems at that point; robotics -- as the Japanese have noticed -- offers one way out of the deflationary trap (and an answer to who will care for the old folks).
I'm still agnostic on the AI subject, but I think we're in a much better position to frame the question than we were 40 years ago; given another 40 years, I hope to see some answers trickling in.
Incredibly minor digression
Mainly I used a different alef to you - ℵ ALEF SYMBOL rather than א HEBREW LETTER ALEF. This means that the browser then knows it can just run all the characters left-to-right as normal, rather than trying to render a single character of Hebrew text in a run of English, invoking the Unicode BiDi algorithm, and confusing everyone in sight.
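(A quick way to see the distinction, using Python's standard unicodedata module: the two alefs are different code points with different bidirectional classes, and only the Hebrew one triggers the BiDi algorithm.)

import unicodedata

for ch in ("\u2135", "\u05d0"):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: "
          f"bidi class {unicodedata.bidirectional(ch)}")

# U+2135 ALEF SYMBOL: bidi class L (laid out left-to-right like the rest)
# U+05D0 HEBREW LETTER ALEF: bidi class R (starts a right-to-left run)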
We live in a virtual reality
What would the characters in that world "think"? Would it be patently obvious to them (because they had been programmed so) that the world they live in is real? Would they be unable to discern an escape from that world because they would not be allowed to?
So far, so very Matrixy.
But the other thing that strikes me as I play World of Warcraft is that my character is utterly unaware that I am playing her. She moves and interacts according to a bizarrely restrictive set of "laws" that the developers, and our society, online and otherwise, have created. To some extent she is real and autonomous; to another she is utterly mine, able to be destroyed at my least whim, though I love her, and oblivious to the fact.
The question then arises: are we in an infinite regression of games?
peace
pm
My own point of view is that human-level AI won't be happening in the next 40 years, because the problems involved are Just Too Hard and we haven't even begun to make glimmers of progress on them - computer systems that have to deal with the real world nearly always seem to fail horribly, and we don't even really have much clue why that is. It looks like way more than 40 years' worth of technological development will have to happen before we do.
However, I also think there are some significant flaws in your reasoning for why it should have to be all-or-nothing. First, maybe there's something special about the human level of intelligence (granted you haven't really defined what that means, but that's part of the problem, too). Maybe it's much harder to go from "human" to "twice human" than from "half human" to "human". The existence of such a limit could be why humans aren't even smarter than we are. In that case it would be reasonable to expect that machines might hit a similar limit.
Second, faster doesn't mean smarter. The current situation is that machines do some things very fast, like arithmetic, but it's not clear that those things are at all relevant to intelligence. A neuron can only do 200 "things" per second, against a transistor that can do billions of "things" per second, but they're not even remotely the same kind of things. There's no indication that a transistor can do billions of neuron-things per second; it's not clear at the moment that transistors can do neuron-things at all. For tasks that seem relevant to intelligence (such as visual object recognition), humans are still competitive with computers (and in many cases, blow computers away completely) despite the claimed difference in speed.
It's also not clear that getting in a lifetime's human-style thought per hour, even if we believed that that were possible, would actually result in amazing advances in the products of thought. Why didn't we have computers a hundred years ago? Far more human-thought-lifetimes were spent before the year 1907 than after it, and yet somehow we didn't get those computers invented. I'm not sure that throwing a lot of thought at a problem very fast is really the best way to solve a problem. I suspect that a lot of the problem-solving that gets done is done by a combination of lots of people interacting, and external pressure from the environment they live in, and it's not clear that AIs would have those things. We'd need not just smart AIs, but billions of them, and really good reasons for them to want to think about stuff, and that's not going to be accomplished by them just talking to each other.
Developing true intelligence is going to be extremely hard. The underlying "hardware" our only examples operate on is fundamentally different from our conventional digital computers, so there is a rather large architectural gap to span.
Obviously one could simply say "simulate it"! However a simulation running on a conventional digital computer imposes a rather large assumption: that the thing you are simulating is computable. If intelligence is in fact a quantum side effect, as others have postulated, then we would require a paradigm shift in hardware design in order to achieve it.
Anyway, good topic, it's something that interests me greatly!
Still, I reckon it provides fertile ground for a great short story :)
A few minor niggles:
the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence.
Just as the difference between the shortest person and the tallest person is a dot on the scale of heights... I don't know if this line has anything to do with my comment to your last post or not, but I was just pointing out that "human level" intelligence may have a well-defined average and even a tight standard deviation, but it certainly has some serious outliers and is thus difficult to define.
Given that any neuron in the human brain can only do about 200 things a second, while the components in a computer currently do over 2 billion things a second
I realize that you were probably simplifying for your audience, but neurony things != CPU ops, even in the simplest neural models... Still, I think we can agree that some number of CPU ops can simulate the important parts of what's going on with a neuron and its environment. From what I've read from people doing neurobiology simulations (not just running neural nets), that number is about 1000 CPU ops == 1 neuron op. I suspect that once we know more about what neurons and the environment they are in really do, that number may go up by more than just an order of magnitude...
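(Taking that ~1000:1 figure at face value and reusing the post's 200/s and 2-billion/s numbers, a rough Python sketch of what it implies; the 10^11 neuron count is an added assumption.)

neuron_ops_per_sec = 200
cpu_ops_per_neuron_op = 1000
cpu_ops_per_sec = 2e9

cpu_load_per_neuron = neuron_ops_per_sec * cpu_ops_per_neuron_op  # 200,000 ops/s
neurons_per_cpu = cpu_ops_per_sec / cpu_load_per_neuron           # ~10,000

brain_neurons = 1e11                                              # assumed
cpus_needed = brain_neurons / neurons_per_cpu                     # ~10 million

print(f"{neurons_per_cpu:,.0f} neurons per CPU in real time; "
      f"{cpus_needed:,.0f} CPUs for a whole brain")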
it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.
Hmm, on the surface this statement seems flawed in many ways, though it could just be how I'm reading it... How quick is "almost as soon"? Is that part due to some technological leap arising from the existence of the AIs? Intelligence, it seems (based on a reasonable body of research now), is reliant on being embodied; consequently there may be issues with making things think faster than the environment they are in, whether that be a robot in the real world or a simulated person in a simulated world...
Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?
Direct answer: the things it wishes to accomplish may have nothing to do with human-level existence...
superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict and quite possibly beyond our ability to comprehend or even experience...
Intelligence isn't about doing more things; it's about doing the same things more *cleverly*. It's about *noticing* things. How do you make a computer notice more things? Or more important things?
Thought this might interest you:
http://kenmacleod.blogspot.com/2008/01/ai-skeptic-writes.html