Artificial Intelligence and the year 2047
Dec. 10th, 2007 11:25 am

Clarification: By "smart" I mean general smarts: the sort of smarts that allow you to do things like pass a Turing test or solve open problems in nanotechnology. Obviously computers are ahead of humans in narrow domains like playing chess.
NB: your guess as to what will happen should also be one of your guesses about what might happen - thanks! This applies to wriggler, ablueskyboy, thekumquat, redcountess, thehalibutkid, henry_the_cow and cillygirl. If you tick only one option (which is not the last) in the first poll, it means you think it's the only possible outcome.
[Poll #1103617]
And of course, I'm fascinated to know why you make those guesses. In particular - I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it, and I'd love to hear more about that position.
no subject
Date: 2007-12-10 11:38 am (UTC)
Even if you don't feel Google's smarter than a human yet (just faster), I suspect in the next ten or so years it will become so.
no subject
Date: 2007-12-10 11:39 am (UTC)
Also depends on the humans, though, I suppose.
no subject
Date: 2007-12-10 11:48 am (UTC)
Cynical much? Me?? :)
no subject
Date: 2007-12-10 11:53 am (UTC)
On the other hand, that could just be my failure of imagination: our brains somehow had to evolve from pattern recognition and response to their current creative, synthetic abilities, so I guess there's no reason why silicon 'neurons' couldn't do the same - but I doubt it in the next 40 years.
no subject
Date: 2007-12-10 12:03 pm (UTC)
It's a given that some machines are already vastly better at performing some tasks previously performed by human 'computers', and this has transformed the human experience of much of the world.
If by "machines vastly smarter than humans" you mean machines capable of exhibiting characteristics indistinguishable from that of a human intellect and yet surpassing any "smart" human intellect, then I doubt it will happen without several considerable shifts in what is currently considered AI. These shifts may occur, but I suspect they may be as elusive as sustainable fusion power.
no subject
Date: 2007-12-10 12:24 pm (UTC)
Yes! Absolutely my position. And we could argue about what it means for a human or a machine to be smart, but one thing I do feel confident in predicting is that people in 40 years' time will come to a very different set of answers. 40 years ago takes us back to perhaps the heyday of AI, and our ideas now about artificial and human intelligence are quite different.
My main problem with the options: I'd say that machines are already smarter than people, and this has already transformed everything in the world.
Leaving aside the abstract stuff, none of the physical developments that underpin our current world - agriculture, production, travel, communications - would be possible at anything like the current scale without smart machines to do the required thinking for us. The C19th clerical revolution enabled the industrial revolution; the C20th IT revolution has enabled an even more profound transformation of society.
The rate of change is increasing. (I'd guess it's still positive down to third differentials.) I'm not a Singularity person, but any trend that can't continue won't. That can't. At least, I can't think it can - but maybe we can invent machines that will enable us to do that thinking.
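Unpacking that parenthesis (my own shorthand, nothing more rigorous than the claim itself): if x(t) measures cumulative change at time t, then roughly

    x'(t) > 0,    x''(t) > 0,    x'''(t) > 0

- the shape of an exponential like x(t) = e^{kt} with k > 0, at least over the stretch we can actually observe.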
I didn't predict how profoundly the world would change over the last twenty years. I wasn't a lot better at predicting what was going to happen over the course of the last year. (I wasn't quite so wrong, but only because things can't change so substantially in a single year.) I don't think I'll do any better over a longer timescale.
Except ... ask me again in 39 years, and machines will have got smarter so that I can give a much better answer!
no subject
Date: 2007-12-10 12:31 pm (UTC)
I don't think that's very likely, but it's just about conceivable. I can imagine an argument along the lines that you can't build something smarter than you yourself are (for some definitions of smart). It'd be a bit of a woolly argument to my mind, but it might have some force. I also think it's possible (though not at all likely) that there turns out to be some fundamental limit to how general-purpose-smart things can be made, and that humans are already at it.
I suspect, though, that most people holding that position a) don't hold it very explicitly or consideredly (if that's a real adverb), and b) are somewhat influenced by fear of the consequences if the process doesn't stop at the point they claim it will.
Not me! I, for one, welcome our new machine overlords.
no subject
Date: 2007-12-10 01:01 pm (UTC)
I'm not saying it's not possible, but rather that development will tend towards particular purposes, so we may end up with machines that are vastly superior in some areas but have massive gaps in others. Thus I'm not convinced that a general comparison between machines and humans will be truly meaningful within the specified time period.
no subject
Date: 2007-12-10 01:01 pm (UTC)
For that matter, how do you fully separate the two? If Google is smarter than me, but I'm smart enough to use Google, does that make me smarter than before?
no subject
Date: 2007-12-10 01:05 pm (UTC)
See also 'control systems engineering: how to make a system unstable by making it too sensitive to small changes, and too fast to respond'. ;-)
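To make that concrete - a toy sketch in Python, with a made-up one-line plant model and made-up gain values, not anything from a real control text:

    # Toy proportional controller: x is nudged toward a setpoint each step.
    # With gain < 2 the remaining error shrinks every step; with gain > 2
    # each correction overshoots by more than the error it was fixing, so
    # the system oscillates with growing amplitude - unstable precisely
    # because it is too sensitive and too quick to react.
    def simulate(gain, steps=12, setpoint=1.0):
        x = 0.0
        for _ in range(steps):
            x += gain * (setpoint - x)  # react fully to the current error
        return x

    print(simulate(0.5))  # ~0.9998: gentle corrections settle on the setpoint
    print(simulate(2.5))  # ~-128.7: aggressive corrections blow up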
Separation anxiety.
Date: 2007-12-10 01:10 pm (UTC)
We have machine-aided smarts, which brings an interesting problem in demarcation - the tool is not smart, we have the smarts. How complex does a tool have to be for it to be considered separately smart from its user(s)?
no subject
Date: 2007-12-10 01:27 pm (UTC)
(moved to correct comment, sorry!)
no subject
Date: 2007-12-10 01:31 pm (UTC)
I'm thinking of, for example, the WHO's global polio eradication programme, which has been running for about 20 years at this point and still isn't there. And that's a problem where the solution is very clear.
no subject
Date: 2007-12-10 01:31 pm (UTC)
I think it likely that independent, self-supporting robots of some sort will be developed; I strongly doubt that they'll be capable of passing a Turing test, i.e. of holding a 'normal' conversation with you.
no subject
Date: 2007-12-10 01:33 pm (UTC)
I don't think that's going to happen in 40 years, just based on looking at where AI has got in the last 50. Raw computing power has not, for the most part, been the problem, and developments in everything other than raw power are usually slow.
However, I fully expect to see far more of the many things a human mind can do - though not all of them - performed equally well by machines within this time. And that may ultimately be enough in itself to bring on what you're alluding to, in particular because such a huge and well-funded sector of research is already focused on machines that help us design the hardware and software of other machines.