More on AI

Dec. 11th, 2007 09:12 am
ciphergoth: (Default)
[personal profile] ciphergoth
I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.

Those of you who say that it won't happen at all may well be right: it may still be much harder than some people guess, and/or a global catastrophe of some sort may put a stop to research. I don't know if I see either of these as the most likely outcome, but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.

(These ideas are not mine: credit goes to many people, of whom I'll name Vernor Vinge and Eliezer Yudkowsky.)

First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that any neuron in the human brain can only do about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour.
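
To make the arithmetic in that paragraph explicit, here is a rough back-of-envelope sketch in Python; the 200 operations per second, 2 billion operations per second, and 70-year lifetime are the illustrative assumptions above (the lifetime figure is my own), not measured values:

```python
# Back-of-envelope check of the speedup claim (illustrative assumptions, not measurements).
NEURON_OPS_PER_SEC = 200                 # roughly how many "things" a neuron does per second
COMPONENT_OPS_PER_SEC = 2_000_000_000    # roughly how many "things" a computer component does per second
LIFETIME_YEARS = 70                      # assumed length of "a lifetime's thought"

speedup = COMPONENT_OPS_PER_SEC / NEURON_OPS_PER_SEC   # ~10 million

seconds_per_year = 365.25 * 24 * 3600
lifetime_seconds = LIFETIME_YEARS * seconds_per_year
minutes_for_a_lifetime = lifetime_seconds / speedup / 60

print(f"speedup factor: {speedup:,.0f}x")               # 10,000,000x
print(f"a {LIFETIME_YEARS}-year lifetime of thought takes "
      f"about {minutes_for_a_lifetime:.1f} minutes")     # ~3.7 minutes
```

On those numbers a whole lifetime of thought fits into a few minutes, so "every half hour" is if anything a conservative way of putting it.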

And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.

In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.

Second, if it is built, it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?

No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.

Date: 2007-12-11 04:32 pm (UTC)
From: [identity profile] mskala.livejournal.com
My IQ is a Hell of a lot higher than the average human's, and yet my presence on Earth has somehow not yet caused a dramatic, overwhelming change in human society. Very few dramatic, overwhelming changes have ever been caused *just* by the presence and activities of especially smart minds.

My own point of view is that human-level AI won't be happening in the next 40 years, because the problems involved are Just Too Hard and we haven't even begun to make glimmers of progress on them. Computer systems that have to deal with the real world nearly always seem to fail horribly, we don't even really have much clue why that is, and it looks like far more than 40 years' worth of technological development will have to happen before we get there.

However, I also think there are some significant flaws in your reasoning for why it should have to be all-or-nothing. First, maybe there's something special about the human level of intelligence (granted you haven't really defined what that means, but that's part of the problem, too). Maybe it's much harder to go from "human" to "twice human" than from "half human" to "human". The existence of such a limit could be why humans aren't even smarter than we are. In that case it would be reasonable to expect that machines might hit a similar limit.

Second, faster doesn't mean smarter. The current situation is that machines do some things very fast, like arithmetic, but it's not clear that those things are at all relevant to intelligence. A neuron can only do 200 "things" per second, against a transistor that can do billions of "things" per second, but they're not even remotely the same kind of things. There's no indication that a transistor can do billions of neuron-things per second; it's not clear at the moment that transistors can do neuron-things at all. For tasks that seem relevant to intelligence (such as visual object recognition), humans are still competitive with computers (and in many cases, blow computers away completely) despite the claimed difference in speed.

It's also not clear that getting in a lifetime's human-style thought per hour, even if we believed that that were possible, would actually result in amazing advances in the products of thought. Why didn't we have computers a hundred years ago? Far more human-thought-lifetimes were spent before the year 1907 than after it, and yet somehow we didn't get those computers invented. I'm not sure that throwing a lot of thought at a problem very fast is really the best way to solve it. I suspect that a lot of the problem-solving that gets done comes from a combination of lots of people interacting and external pressure from the environment they live in, and it's not clear that AIs would have those things. We'd need not just smart AIs, but billions of them, and really good reasons for them to want to think about stuff, and that's not going to be accomplished by them just talking to each other.
