Artificial Intelligence and the year 2047
Dec. 10th, 2007 11:25 am
Clarification: By "smart" I mean general smarts: the sort of smarts that allow you to do things like pass a Turing test or solve open problems in nanotechnology. Obviously computers are ahead of humans in narrow domains like playing chess.
NB: your guess as to what will happen should also be one of your guesses about what might happen - thanks! This applies to wriggler, ablueskyboy, thekumquat, redcountess, thehalibutkid, henry_the_cow and cillygirl. If you tick only one option (which is not the last) in the first poll, it means you think it's the only possible outcome.
[Poll #1103617]
And of course, I'm fascinated to know why you make those guesses. In particular - I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it, and I'd love to hear more about that position.
no subject
Date: 2007-12-10 11:38 am (UTC)
Even if you don't feel Google's smarter than a human yet (just faster), I suspect in the next ten or so years it will become so.
no subject
Date: 2007-12-10 11:39 am (UTC)
Also depends on the humans, though, I suppose.
no subject
Date: 2007-12-10 11:48 am (UTC)
Cynical much? Me?? :)
no subject
Date: 2007-12-10 01:48 pm (UTC)
And I think it _is_ a very difficult project; so difficult I would have picked the answer I did not even if I didn't expect a catastrophe of unprecedented proportions to overtake us.
no subject
Date: 2007-12-10 11:53 am (UTC)
On the other hand, that could just be my failure of imagination: our brains had to evolve somehow from pattern recognition and response to their current creative, synthetic abilities, so I guess there's no reason why silicon 'neurons' couldn't do the same - but I doubt it in the next 40 years.
no subject
Date: 2007-12-10 12:24 pm (UTC)
Yes! Absolutely my position. And we could argue about what it means for a human or a machine to be smart, but one thing I do feel confident in predicting is that people in 40 years' time will come to a very different set of answers. 40 years ago takes us back to perhaps the heyday of AI, and our ideas now about artificial and human intelligence are quite different.
My main problem with the options: I'd say that machines are already smarter than people, and this has already transformed everything in the world.
Leaving aside the abstract stuff, none of the physical developments that enable our current world to happen - agriculture, production, travel, communications - are possible at anything like the current scale without smart machines to do the required thinking for us. The C19th clerical revolution enabled the industrial revolution; the C20th IT revolution has enabled an even more profound transformation of society.
The rate of change is increasing. (I'd guess it's still positive down to third differentials.) I'm not a Singularity person, but any trend that can't continue won't. That can't. At least, I can't think it can - but maybe we can invent machines that will enable us to do that thinking.
I didn't predict how profoundly the world has changed in the last twenty years. I wasn't a lot better at predicting what was going to happen over the course of the last year. (I wasn't quite so wrong, but only because things can't change so substantially over just a single year.) I don't think I'll do any better over a longer timescale.
Except ... ask me again in 39 years, and machines will have got smarter so that I can give a much better answer!
no subject
no subject
Date: 2007-12-10 12:03 pm (UTC)
It's a given that some machines are already vastly better at performing some tasks previously performed by human 'computers', and this has transformed the human experience of much of the world.
If by "machines vastly smarter than humans" you mean machines capable of exhibiting characteristics indistinguishable from those of a human intellect and yet surpassing any "smart" human intellect, then I doubt it will happen without several considerable shifts in what is currently considered AI. These shifts may occur, but I suspect they may be as elusive as sustainable fusion power.
no subject
Date: 2007-12-10 12:31 pm (UTC)
I don't think that's very likely but it's just about conceivable. I can imagine an argument along the lines that you can't build something that's smarter than you yourself are (for some definitions of smart). It'd be a bit of a woolly argument to my mind, but it might have some force. I also think it's possible (but not at all likely) that there turns out to be some fundamental limit to how general-purpose-smart things can be made, and humans are already at it.
I suspect, though, that most people holding that position a) don't hold it very explicitly or consideredly (if that's a real adverb), and b) are somewhat influenced by fear of the consequences if the process doesn't stop at the point they claim it will.
Not me! I, for one, welcome our new machine overlords.
no subject
Date: 2007-12-10 01:01 pm (UTC)
I'm not saying that it's not possible, but rather that developments will tend towards a particular purpose and thus we may have machines that are vastly superior in some areas, but which will have massive gaps in others. Thus, I'm not convinced that a general comparison between machines & humans will truly be meaningful within the specified time period.
no subject
Date: 2007-12-10 01:01 pm (UTC)
For that matter, how do you fully separate the two? If Google is smarter than me, but I'm smart enough to use Google, does that make me smarter than before?
Separation anxiety.
Date: 2007-12-10 01:10 pm (UTC)
We have machine-aided smarts, which brings an interesting problem in demarcation - the tool is not smart; we have the smarts. How complex does a tool have to be for it to be considered separately smart from its user(s)?
Re: IMHO
Date: 2007-12-11 12:51 pm (UTC)
I have no doubt that, given a vast repository of potential solutions and amazing processing speed, a computer could identify a potential requirement for something and come up with a very efficient solution using modified versions of existing items. But actual invention, innovation or inspiration: could it ever do those?
Now, I am no neuroscientist, mathematician or computer programmer (art is more my thing), and in this field at least I know that faster/bigger/brighter is not necessarily better, so I will draw an example from there. Could a computer, when faced with a sunrise over a landscape which fulfils all its established criteria for "Beauty", ever think:
"You know what, instead of striving to represent this image as accurately as I can, like every other visual representation that has been done for the last 50 years or so, I am going to render it in simple swathes of colour instead! Giving a mere 'impression' of the scene if you will!"
no subject
Date: 2007-12-10 04:44 pm (UTC)
i.e. we're already a hell of a lot smarter than the decisions made on our behalf.
no subject
Date: 2007-12-10 01:31 pm (UTC)
I'm thinking of, for example, the WHO's global polio eradication programme, which has been running for about 20 years at this point and still isn't there. And that's a problem where the solution is very clear.
no subject
Date: 2007-12-10 01:31 pm (UTC)
I think it likely that independent, self-supporting robots of some sort will be developed; I strongly doubt that they'll be capable of passing a Turing Test, i.e. of holding a 'normal' conversation with you.
no subject
Date: 2007-12-10 01:33 pm (UTC)
I don't think that's going to happen in 40 years, just based on looking at where AI has got in the last 50. Raw computing power has not for the most part been the problem, and developments in everything other than raw power are usually slow.
However, I fully expect to see far more of the many things a human mind can do - though not all - performed equally well by machines within this time. And that may ultimately be enough in itself to bring on what you're alluding to - in particular because such a huge and well-funded sector of research is already focused on machines to help us design the hardware and software of other machines.
no subject
Date: 2007-12-10 06:04 pm (UTC)
I suspect that the transformation of society by machine intelligences will be like the subtle change in the nature of money: now the coins and paper notes are just tokens for the electronic reality in computerised accounts. But who could've told you when it was the other way around, and 'printing banknotes' really did mean inflating the economy?
What we're going to see is the increasing use of systems like present-day rules-discovery 'AI' systems learning differential diagnosis from medical records: they are moving from a teaching tool to being a back-up source of useful guesses in ambiguous cases, and will soon be mandatory 'cover-your-ass' diagnosis that justifies additional testing in litigation-prone jurisdictions. It is only a matter of time before they become a clinician's first reference... And, after a period of expensive mistakes wherein lazy doctors use them like foreign lorry drivers following bridleways marked as trucking routes, these systems will become the primary source of clinical judgement and diagnosis.
Just as the days of the London Cabbie are numbered by the continuing improvement in SatNav systems, so the days of the 'prop' trader speculating on the currency, derivatives and commodity markets are passing. Algorithmic trading systems aren't quite there, but the writing's on the wall. The list is expanding.
So spread that out to every profession, and to every forecasting and resourcing decision in your place of work. Just as management accountants have a natural career progression from being juniors who prepare the cashflow projections to being the managers who make the investment decision, so too will decision-support AIs make the transition to decision-making.
Did I say "we'll see"?
While this is going on, we will pretend not to notice. I mean, the planes that landed in the fog today at Heathrow had human pilots, didn't they? Someone supervised the landing, anyway, and would've intervened if anything went wrong. Yeah, right.
Yes, the systems need to be smarter - medicine and mapping being cases in point - but not infinitely so. It's foreseeable technology: ask anyone in natural-language programming. And the leap to cognitive intelligence - self-awareness - might be an entirely unexpected and unpredictable thing: among other things, such an individual - or colony organism - will want to augment its processing ability, and it will sequester resources.
I have no idea what will happen then. It will definitely assimilate all the rules-based 'dumb' AI out there, and will therefore have more 'working' knowledge than any individual human - in addition to possessing all the factual resources of all the world's libraries and a nifty capability at natural-language searching.
no subject
Date: 2007-12-10 07:38 pm (UTC)
I think intelligence is deeply intertwingled with intention, which is even more slippery than intelligence. I know that's a bit biocentric and wibbly...
I see no particular selection pressure driving any rapid advance of artificial intelligence, at least, not in the way that could be related to as intelligence by an ordinary human being. And I think the next 40 years could well revolve around rather more immediate concerns...
Might be wrong, of course. The planned economies (which might very well start to look really quite attractive in the not-too-very-distant-at-all-future), despite their spectacular failures in the recent past, might well do far better under the watchful, benevolent eye of a Silicon Overlord. SOMEONE's going to have to handle all those "Freecycle - XTREME!!!" spreadsheets... with sensitivity and tact...
no subject
Date: 2007-12-10 07:44 pm (UTC)
I'm currently in a field that came out of, among other things, frustration at the failures of classical AI, and I have friends doing work at the cutting edge of machine translation and decision support. This makes me more pessimistic about human-equivalent AI than I might be otherwise, because I can see the road from here to there, and not very far from where we are now there's a cliff face: the road goes straight up, and I don't see any way to climb it.
Science and engineering method is to take intractable problems apart into small pieces. Often those small pieces are tractable, and often, when you put the solved problems back together you find you've got an acceptable solution to the original. That has worked spectacularly well for us to date, but it didn't work in classical AI and it's not working in modern AI either. Every decomposition of human-level intelligence that's so far been tried produces a bunch of small problems that we can solve but don't go back together into the original.
To put it another way, to make much more forward progress on any of the things that are generally thought of as subcomponents of intelligence — natural language, speech recognition, spatial reasoning, object identification, decision making — we will have to put them back together and solve it all at once. We don't have any idea how to do that.
It gets worse. Confronted with this problem (it has been foreseeable since the 1970s if not earlier), my field decided to go study real brains for a while. We have a pretty good empirical understanding at this point of how a human child develops, learns stuff, becomes a functioning adult. And it is dependent in detail on the child's brain being part of the child's body. You can cause horrible developmental problems by, for instance, raising a kitten for the first six weeks of its life in a box with no visible detail other than vertical stripes. The cat is ever after unable to recognize horizontal lines.
Thus, the science fiction trope of the disembodied intellect in the computer is never going to happen. (In particular, contra suggestions above, Google is not going to turn into Skynet.) If we want a true AI it's going to have to be in the form of a biomimetic robot. Furthermore, the easiest way to implement the thinking part is going to be with a detailed mimic of the human brain, not necessarily in meat, but including all its limitations. In particular the robot will not be able to learn faster or via a qualitatively different method than human children do. (Biological brains do a whole bunch of stuff, especially to do with memory, with resonance loops at 5-20Hz, and if you mess with the timing you get a fascinating variety of cognitive disorders.)
I think the construction of such a robot is feasible and might even happen in the 40-year timeframe, but I wouldn't call it a sure thing. And I think we will be able to make them very smart, but not in any qualitatively different way from very smart humans.
no subject
Date: 2007-12-11 03:06 am (UTC)
Embodying it there makes so many things much easier. For example, simulated skin that detects touch becomes a simple by-product of your physics contact and penetration solver, and not some intractable materials engineering problem...
Anyway, I'm working on the cutting edge of reality simulations (games) and have no doubt that they'll soon be good enough to place embodied intelligences into and have them develop real world knowledge and skills (actually some people are already doing exactly this).
no subject
Date: 2007-12-10 11:30 pm (UTC)
will get* Internet access* 'already have'
What do you mean by "might happen"?
Date: 2007-12-10 10:07 pm (UTC)
I also liked the comments by xquiq (developments will tend towards particular purposes), mistdog (changing the world is hard, at least in some ways), martling (machines will do many of the things humans do now), and zwol (it's a hard problem). Then again, back in the '70s I didn't predict where we are now, and some increases in power are happening exponentially rather than linearly - which, as Kurzweil points out, most humans aren't good at reasoning about.
Here's a follow-up: if more intelligent machines make more of us unneeded by the ruling elite (the financiers and their colleagues), what will those elites do to stay in power?
Re: What do you mean by "might happen"?
Date: 2007-12-11 10:02 am (UTC)
If vastly more intelligent machines are developed, and they are inclined to be friendly towards the interests of all humanity, then any ruling elite that wants to execute such an evil plan will find it has a formidable opponent. If the machines are not inclined to be friendly towards humanity, there will be no ruling elite - and no "rest of us"!
no subject
Date: 2007-12-11 02:55 am (UTC)
We (humans) cover a pretty broad range of intelligence - everything from drooling meat bags with mere traces of brain activity to polymaths living hundreds of years ago who gave us insights that today we are only beginning to understand with the full might of a massive, technologically enhanced, computer-powered civilization...
So how smart are we? I find it amusing that not even humans always pass Turing tests. :) I also find it interesting that for _most_ humans an impressively consistent basic level of intelligence does exist - one surprisingly afforded to us across a wide assortment of brain sizes, shapes, and variations.
[wild-speculation]
I suspect that in the near future (about 25 years) we will be able to simulate a full human brain inside a single computer in real time. Last I checked, "fully" accurate brain tissue simulations with over 10,000 neurons were being run in real time. Taking that and Moore's law is how I got the figure of ~25 years.
At that time I suspect that we as humans will still not know that much about how to make ourselves vastly smarter and at first neither will any simulated human- intelligence itself will still be a mystery even if it is one we can replicate. Quickly after that though I think things will get interesting... "Singularity" levels of interesting.
Since simulated people can be replicated and work on problems in parallel, societies of them can do interesting things that human societies can't (or at least not as effectively). Slicing and dicing parts of their living brains in ways that would be considered crimes against humanity could become routine, and pretty quickly they (not us) will have a real notion of what intelligence is, the minimum requirements to generate it, and how to optimize it. Then the copying and replication parts really kick in. Voilà: a "singularity".
[/wild-speculation]
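The Moore's-law arithmetic behind the ~25-year guess can be made explicit. A minimal back-of-envelope sketch, where the neuron counts and doubling periods are my assumed figures rather than the commenter's:

```python
from math import log2

# Assumed figures (not from the original comment): a human brain has
# roughly 86 billion neurons; the comment cites real-time simulations
# of ~10,000 neurons circa 2007.
HUMAN_NEURONS = 86e9
SIMULATED_NEURONS = 1e4

# Number of capacity doublings needed to close the gap (~23).
doublings = log2(HUMAN_NEURONS / SIMULATED_NEURONS)

# Moore's law is variously quoted at 12-24 months per doubling;
# try the range rather than committing to a single value.
for months_per_doubling in (12, 18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling} months/doubling -> ~{years:.0f} years")
```

The answer is dominated by the assumed doubling period: 12 to 24 months per doubling spreads the estimate from roughly 23 to 46 years, so the ~25-year figure sits at the optimistic end of that range.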
[super-wild-speculation]
But what would a singularity do? I suspect such an entity would be well beyond almost all human concerns. It would regard us with about as much importance as we grant the bacteria living inside our keyboards. It would, however, need to survive and perpetuate itself, and the only real way to do that is by making sure no other singularities come into being.
It would make any further research into super-intelligences downright impossible - and don't think we could outsmart it and do the work secretly... Remember, it would be infinitely intelligent compared to us and would probably see through any plans we had well before they were even formed.
[/super-wild-speculation]
Ugh. Sorry for the ramble - it's late and I should be in bed.
In short, without whinging about definitions, I suspect "Machines vastly smarter than humans will be developed, but the impact will stop short of transforming everything in the world".