More on AI

Dec. 11th, 2007 09:12 am
ciphergoth: (Default)
[personal profile] ciphergoth
I don't have a crystal ball, but I'm pretty sure that over half of you are mistaken.

Those of you who say it won't happen at all may well be right - it may still be much harder than some people guess, and/or a global catastrophe of one sort or another may put a stop to research. I don't know whether I see either of these as the most likely outcome, but they are certainly very reasonable possibilities. However, the two "middle" options won't happen.

(These ideas are not mine: credit goes to many people, of whom I'll name Vernor Vinge and Eliezer Yudkowsky.)

First, if AI is developed, there's no way we'll exactly hit the target of human performance and then fail to push right past it; the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence. Given that a neuron in the human brain can do only about 200 things a second, while the components in a computer currently do over 2 billion things a second, it seems certain that almost as soon as we can build AIs, we will be able to build, at the very least, machines that think millions of times faster than we do, machines that can put a lifetime's thought into every half hour.
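To make the back-of-the-envelope arithmetic explicit, here is a quick sketch (in Python, purely for illustration). The 200-per-second and 2-billion-per-second figures are the rough ones quoted above, not a serious model of the brain; they give a raw ratio of about ten million, and even a more conservative millionfold speedup crams roughly 57 subjective years of thought into each half hour of wall-clock time.

```python
# Back-of-the-envelope arithmetic for the "lifetime's thought per half hour" claim.
# The 200 and 2,000,000,000 figures are the rough ones quoted in the post,
# used purely as illustrative stand-ins, not as a real model of the brain.

neuron_ops_per_sec = 200            # "about 200 things a second"
cpu_ops_per_sec = 2_000_000_000     # "over 2 billion things a second"

raw_ratio = cpu_ops_per_sec / neuron_ops_per_sec
print(f"raw speed ratio: {raw_ratio:,.0f}x")    # 10,000,000x

# Even a more conservative millionfold speedup packs decades of subjective
# thought into every half hour of wall-clock time.
speedup = 1_000_000
hours_per_year = 24 * 365
subjective_years = (0.5 * speedup) / hours_per_year
print(f"subjective years per half hour at {speedup:,}x: {subjective_years:.0f}")  # ~57
```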

And of course one of the things you can use that immense thinking power for is working out how to build much faster computers. The equivalent of millions of new researchers in the fields of chip design, optical computing, or nanotechnological rod logic can't help but speed things up some.

In practice I strongly suspect that speedup will be just one small aspect of the gulf between human and machine intelligence, but it's an aspect that's pretty much guaranteed.

Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?

No, of those four options, only two are plausible: machine intelligence will not be developed in the next forty years, or machine superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict.

Date: 2007-12-13 01:40 am (UTC)
From: [identity profile] meico.livejournal.com
Ahh, okay, so we basically agree up to the point of the singularity then... ;P

A few minor niggles:

"the difference between the dumbest normal person and the greatest genius is a dot on the scale of intelligence."

Just as the difference between the shortest person and the tallest person is a dot on the scale of heights... I don't know if this line has anything to do with my comment on your last post or not, but I was just pointing out that "human level" intelligence may have a well-defined average and even a tight standard deviation, but it certainly has some serious outliers and is thus difficult to define.

"Given that a neuron in the human brain can do only about 200 things a second, while the components in a computer currently do over 2 billion things a second"

I realize that you were probably simplifying for your audience, but neurony things != CPU ops, even in the simplest neural models... Still, I think we can agree that some number of CPU ops can simulate the important parts of what's going on with a neuron and its environment. From what I've read from people doing neurobiology simulations (not just running neural nets), that number is about 1000 CPU ops == 1 neuron op. I suspect that once we know more about what neurons and the environment they are in really do, that number may go up by more than just an order of magnitude...
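To put a rough number on that niggle: if one "neuron op" costs on the order of 1000 CPU ops to simulate, the post's ten-million-fold raw speed ratio shrinks to about ten thousand. A quick sketch, using the same illustrative figures as above:

```python
# Effect of the ~1000-CPU-ops-per-neuron-op estimate on the post's speed ratio.
# All three numbers are rough, illustrative guesses, not measurements.

neuron_ops_per_sec = 200             # rough neuron rate quoted in the post
cpu_ops_per_sec = 2_000_000_000      # ~2 GHz component, as in the post
cpu_ops_per_neuron_op = 1_000        # commenter's simulation-based estimate

effective_ratio = cpu_ops_per_sec / (neuron_ops_per_sec * cpu_ops_per_neuron_op)
print(f"effective speed ratio: {effective_ratio:,.0f}x")   # 10,000x, not 10,000,000x
```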

"it seems certain that almost as soon as we can build AIs we will be able to build, at the very least, machines that think millions of times faster than we do, which can put a lifetime's thought into every half hour."

Hmm, on the surface this statement seems flawed in many ways, though it could just be how I'm reading it... How quick is "almost as soon"? Is that partly due to some technological leap enabled by the existence of the AIs? Intelligence, it seems (based on a reasonable body of research now), relies on being embodied; consequently there may be issues with making things think faster than the environment they are in, whether that be a robot in the real world or a simulated person in a simulated world...

"Second, if it is built it will certainly transform the world. Look at how human intelligence has done so; how could the presence of an intelligence vastly greater than our own fail to do vastly more?"

Direct answer: the things it wishes to accomplish may have nothing to do with human-level existence...

"superintelligence will utterly transform everything in the world in ways that are thoroughly beyond our ability to predict", and quite possibly beyond our ability to comprehend or even experience...
