ciphergoth: (Default)
[personal profile] ciphergoth
Clarification: By "smart" I mean general smarts: the sort of smarts that allow you to do things like pass a Turing test or solve open problems in nanotechnology. Obviously computers are ahead of humans in narrow domains like playing chess.

NB: your guess as to what will happen should also be one of your guesses about what might happen - thanks! This applies to [livejournal.com profile] wriggler, [livejournal.com profile] ablueskyboy, [livejournal.com profile] thekumquat, [livejournal.com profile] redcountess, [livejournal.com profile] thehalibutkid, [livejournal.com profile] henry_the_cow and [livejournal.com profile] cillygirl. If you tick only one option (which is not the last) in the first poll, it means you think it's the only possible outcome.

[Poll #1103617]

And of course, I'm fascinated to know why you make those guesses. In particular - I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it, and I'd love to hear more about that position.

Date: 2007-12-10 11:30 am (UTC)
From: [identity profile] seph-hazard.livejournal.com
I'm not sure how to answer this and need to think about it more [grin] I suspect it won't be in the next forty years - it'll be further in the future than that. A hundred, maybe, but this won't happen in forty.

Date: 2007-12-10 11:38 am (UTC)
djm4: (Default)
From: [personal profile] djm4
I tend to feel that the existence of Google means that this has already happened. Google (and Altavista, Lycos and Yahoo before it) is a 'vastly smarter' information processor and indexer than any human, and has transformed our ability to find information beyond all recognition in the past 12 or so years.

Even if you don't feel Google's smarter than a human yet (just faster), I suspect in the next ten or so years it will become so.

Date: 2007-12-10 11:39 am (UTC)
From: [identity profile] despina.livejournal.com
It depends what you mean by 'smarter'! AI stuff is coming along in leaps and bounds but there are some things I don't think machines will ever be able to do as 'smartly' as humans can.

Also depends on the humans, though, I suppose.

Date: 2007-12-10 11:48 am (UTC)
From: [identity profile] battlekitty.livejournal.com
Mu: Problem is that the economic and social situation will be such that in 40 years the ability to produce science of that calibre won't exist.

Cynical much? me?? :)

Date: 2007-12-10 11:53 am (UTC)
From: [identity profile] thekumquat.livejournal.com
As no-one else has mentioned it yet, it depends what you mean by 'smart'. Computers already have better memory and analytical abilities than humans, but I figure what is needed to be 'smart' is the initiative to decide what to research/remember/analyse. I don't see computers being able to synthesise solutions independently of human programmers.

On the other hand, that could just be my failure of imagination: our brains had to evolve somehow from pattern recognition and response to our current creative, synthetic abilities, so I guess there's no reason why silicon 'neurons' couldn't do the same - but I doubt it in the next 40 years.

Date: 2007-12-10 11:54 am (UTC)
From: [identity profile] purplerabbits.livejournal.com
I suspect that in the process of developing machines that are as smart as humans in some ways but not in others, we will discover new and interesting things - things I can't even imagine - about what intelligence is and why AI has been such an intransigent problem. And then civilisation will collapse.

Date: 2007-12-10 11:57 am (UTC)
zotz: (Default)
From: [personal profile] zotz
It does depend on what you mean, but mainly I suspect that this is going to be very like fusion power, which I suspect is why you picked 40 years as a term.

Date: 2007-12-10 12:03 pm (UTC)
aegidian: (cogs)
From: [personal profile] aegidian
Mu - incomplete definition of 'smarter'.

It's a given that some machines are already vastly better at performing some tasks previously performed by human 'computers', and this has transformed the human experience of much of the world.

If by "machines vastly smarter than humans" you mean machines capable of exhibiting characteristics indistinguishable from that of a human intellect and yet surpassing any "smart" human intellect, then I doubt it will happen without several considerable shifts in what is currently considered AI. These shifts may occur, but I suspect they may be as elusive as sustainable fusion power.

Date: 2007-12-10 12:31 pm (UTC)
From: [identity profile] drdoug.livejournal.com
I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it

I don't think that's very likely, but it's just about conceivable. I can imagine an argument along the lines that you can't build something that's smarter than you yourself are (for some definitions of smart). It'd be a bit of a woolly argument to my mind, but it might have some force. I also think it's possible (but not at all likely) that there turns out to be some fundamental limit to how general-purpose-smart things can be made, and that humans are already at that limit.

I suspect, though, that most people holding that position a) don't hold it very explicitly or consideredly (if that's a real adverb), and b) are somewhat influenced by fear of the consequences if the process doesn't stop at the point they claim it will.

Not me! I, for one, welcome our new machine overlords.

Date: 2007-12-10 01:01 pm (UTC)
From: [identity profile] xquiq.livejournal.com
I don't think it's likely that science will focus on creating a machine with the sort of general-purpose smarts that would place it, say, above the average human on an IQ bell curve, even with machine learning.

I'm not saying that it's not possible, but rather that developments will tend towards a particular purpose and thus we may have machines that are vastly superior in some areas, but which will have massive gaps in others. Thus, I'm not convinced that a general comparison between machines & humans will truly be meaningful within the specified time period.

Date: 2007-12-10 01:01 pm (UTC)
From: [identity profile] itsjustaname.livejournal.com
Are you assuming a static level of human intelligence?

For that matter, how do you fully separate the two? If Google is smarter than me, but I'm smart enough to use Google, does that make me smarter than before?

IMHO

Date: 2007-12-10 01:07 pm (UTC)
From: [identity profile] conflux.livejournal.com
This all comes down to the tricky question of what intelligence actually is, and of how you can compare different forms of intelligence. Machines will continue to be developed that can do more and more things that appear intelligent. This will have a big impact on the world. These machines will not be as good at doing the things that human intelligence does well, though, even in 40 years' time.

Date: 2007-12-10 01:15 pm (UTC)
From: [identity profile] ergotia.livejournal.com
I guess my simple and perhaps simplistic view is that machines as smart as humans are only gonna change the world as much as humans! As for smarter, I really can't see that we know enough about what "smart" actually means yet to be able to theorise about the rest.

Date: 2007-12-10 01:31 pm (UTC)
From: [identity profile] mistdog.livejournal.com
I don't think smart computers are going to change everything in the world in 40 years, because many things in the world are not so easily changeable. It takes a lot more to change some things than just understanding problems (and even generating solutions).

I'm thinking of, for example, the WHO's global polio eradication programme which has been running for about 20 years at this point and still isn't there. And that's a problem where the solution is very clear.

Date: 2007-12-10 01:31 pm (UTC)
From: [identity profile] jhg.livejournal.com
Can't really answer that - all the studying I've done leads me to conclude that if any 'AI' is developed, it will be so vastly different to human intelligence as to be unsuitable for that kind of comparison.

I think it likely that independent, self-supporting robots of some sort will be developed; I strongly doubt that they'll be capable of passing a Turing Test, i.e. of holding a 'normal' conversation with you.

Date: 2007-12-10 01:33 pm (UTC)
From: [identity profile] martling.livejournal.com
I'm going to skip the semantic debate by interpreting "as smart as humans" to mean "able to do everything a human mind can do, at least as well".

I don't think that's going to happen in 40 years, just based on looking at where AI has got in the last 50. Raw computing power has not for the most part been the problem, and developments in everything other than raw power are usually slow.

However, I fully expect to see far more (though not all) of the many things a human mind can do performed equally well by machines within this time. And that may ultimately be enough in itself to bring on what you're alluding to - in particular because such a huge and well-funded sector of research is already focused on machines to help us design the hardware and software of other machines.

Date: 2007-12-10 02:29 pm (UTC)
From: [identity profile] rasilon-x.livejournal.com
I think our understanding of understanding will have to significantly improve for that to happen. I suspect that at the point where we understand what it is we're supposed to be building then things will happen fairly rapidly, but we're probably more than 40 years from that.

Date: 2007-12-10 04:47 pm (UTC)
From: [identity profile] meta.ath0.com (from livejournal.com)
I think with sufficient will, man could develop machines as smart as humans in the next 40 years. However, I don't think enough people are going to care to supply the money. It would have to be a space-race-like effort, and frankly, Mexicans and Indians are a lot cheaper, more flexible and maneuverable, and easier to replace.

Date: 2007-12-10 06:04 pm (UTC)
ext_3375: Banded Tussock (Default)
From: [identity profile] hairyears.livejournal.com
I suspect that the transformation of society by machine intelligences will be like the subtle change in the nature of money: now the coins and paper notes are just tokens for the electronic reality in computerised accounts. But who could've told you that back when it was the other way around, and 'printing banknotes' really did mean inflating the economy?

What we're going to see is the increasing use of systems like the present-day rules-discovery 'AIs' that learn differential diagnosis from medical records: they are moving from being a teaching tool to a back-up source of useful guesses in ambiguous cases, and will soon be a mandatory 'cover-your-ass' diagnosis that justifies additional testing in litigation-prone jurisdictions. It is only a matter of time before they become a clinician's first reference... And, after a period of expensive mistakes wherein lazy doctors use them like foreign lorry drivers following bridleways marked as trucking routes, these systems will become the primary source of clinical judgement and diagnosis.

Just as the days of the London Cabbie are numbered by the continuing improvement in SatNav systems, so the days of the 'prop' trader speculating on the currency, derivatives and commodity markets are passing. Algorithmic trading systems aren't quite there, but the writing's on the wall. The list is expanding.

So spread that out to every profession, and to every forecasting and resourcing decision in your place of work. Just as management accountants have a natural career progression from being juniors who prepare the cashflow projections to being the managers who make the investment decision, so too will decision-support AIs make the transition to decision-making.

Did I say "we'll see"?

While this is going on, we will pretend not to notice. I mean, the planes that landed in the fog today at Heathrow had human pilots, didn't they? Someone supervised the landing, anyway, and would've intervened if anything went wrong. Yeah, right.

Yes, the systems need to be smarter - medicine and mapping being cases in point - but not infinitely so. It's foreseeable technology: ask anyone in natural-language processing. And the leap to cognitive intelligence - self-awareness - might be an entirely unexpected and unpredictable thing: among other things, such an individual - or colony organism - will want to augment its processing ability and it will sequester resources.

I have no idea what will happen then. It will definitely assimilate all the rules-based 'dumb' AI out there, and will therefore have more 'working' knowledge than any individual human - in addition to possessing all the factual resources of all the world's libraries and a nifty capability at natural-language searching.

Date: 2007-12-10 07:38 pm (UTC)
From: [identity profile] sibelian.livejournal.com
Hm.

I think intelligence is deeply intertwingled with intention, which is even more slippery than intelligence. I know that's a bit biocentric and wibbly...

I see no particular selection pressure driving any rapid advance of artificial intelligence, at least, not in the way that could be related to as intelligence by an ordinary human being. And I think the next 40 years could well revolve around rather more immediate concerns...

Might be wrong, of course. The planned economies (which might very well start to look really quite attractive in the not-too-very-distant-at-all future), despite their spectacular failures in the recent past, might well do far better under the watchful, benevolent eye of a Silicon Overlord. SOMEONE's going to have to handle all those "Freecycle - XTREME!!!" spreadsheets... with sensitivity and tact...

Date: 2007-12-10 07:44 pm (UTC)
From: [identity profile] zwol.livejournal.com
I marked the "won't happen in 40 years" and "won't exceed human intelligence in 40 years" boxes, with preference for the first one.

I'm currently in a field that came out of, among other things, frustration at the failures of classical AI, and I have friends doing work at the cutting edge of machine translation and decision support. This makes me more pessimistic about human-equivalent AI than I might be otherwise, because I can see the road from here to there, and not very far from where we are now, there's a cliff face, the road goes straight up, and I don't see any way to climb it.

The science and engineering method is to take intractable problems apart into small pieces. Often those small pieces are tractable, and often, when you put the solved pieces back together, you find you've got an acceptable solution to the original problem. That has worked spectacularly well for us to date, but it didn't work in classical AI and it's not working in modern AI either. Every decomposition of human-level intelligence that's so far been tried produces a bunch of small problems that we can solve but that don't go back together into the original.

To put it another way, to make much more forward progress on any of the things that are generally thought of as subcomponents of intelligence — natural language, speech recognition, spatial reasoning, object identification, decision making — we will have to put them back together and solve it all at once. We don't have any idea how to do that.

It gets worse. Confronted with this problem (it has been foreseeable since the 1970s if not earlier), my field decided to go study real brains for a while. We have a pretty good empirical understanding at this point of how a human child develops, learns stuff, becomes a functioning adult. And it is dependent in detail on the child's brain being part of the child's body. You can cause horrible developmental problems by, for instance, raising a kitten for the first six weeks of its life in a box with no visible detail other than vertical stripes. The cat is ever after unable to recognize horizontal lines.

Thus, the science fiction trope of the disembodied intellect in the computer is never going to happen. (In particular, contra suggestions above, Google is not going to turn into Skynet.) If we want a true AI it's going to have to be in the form of a biomimetic robot. Furthermore, the easiest way to implement the thinking part is going to be with a detailed mimic of the human brain, not necessarily in meat, but including all its limitations. In particular the robot will not be able to learn faster or via a qualitatively different method than human children do. (Biological brains do a whole bunch of stuff, especially to do with memory, with resonance loops at 5-20Hz, and if you mess with the timing you get a fascinating variety of cognitive disorders.)

I think the construction of such a robot is feasible and might even happen in the 40-year timeframe, but I wouldn't call it a sure thing. And I think we will be able to make them very smart, but not in any qualitatively different way from very smart humans.

Date: 2007-12-10 09:57 pm (UTC)
reddragdiva: (Default)
From: [personal profile] reddragdiva
[X] Humans as stupid as computers will get Internet access

What do you mean by "might happen"?

Date: 2007-12-10 10:07 pm (UTC)
henry_the_cow: (Default)
From: [personal profile] henry_the_cow
I marked the first two boxes, but I do think the others are possible - just less likely.

I also liked the comments by xquiq (developments will tend towards particular purposes), mistdog (changing the world is hard, at least in some ways), martling (machines will do many of the things humans do now), and zwol (it's a hard problem). Then again, back in the '70s I didn't predict where we are now, and some increases in power are happening exponentially rather than linearly, which, as Kurzweil points out, most humans aren't good at reasoning about.

Here's a follow-up: if more intelligent machines make more of us unneeded by the ruling elite (the financiers and their colleagues), what will those elites do to stay in power?

Date: 2007-12-10 11:27 pm (UTC)
From: [identity profile] redshira.livejournal.com
I said no to both because, basically, Peak Oil! (I'd be more coherent but I'm on skip 120 and my brain is fuzzy).

Date: 2007-12-11 02:55 am (UTC)
From: [identity profile] meico.livejournal.com
"Mu" to both... How smart are humans?

We (humans) cover a pretty broad range of intelligence - everything from drooling meat bags with mere traces of brain activity to polymaths living hundreds of years ago who have given us insights that today we are only beginning to understand with the full might of a massive, technologically enhanced, computer-powered civilization...

So how smart are we? I find it amusing that not even humans always pass Turing tests. :) I also find it interesting that for _most_ humans an impressively consistent basic level of intelligence does exist - one surprisingly afforded to us across a wide assortment of brain sizes, shapes, and variations.

[wild-speculation]

I suspect that in the near future (about 25 years) we will be able to simulate a full human brain inside a single computer in real time. Last I checked, "fully" accurate brain-tissue simulations with over 10,000 neurons were being run in real time. Extrapolating from that with Moore's law is how I got the figure of ~25 years.
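
A back-of-envelope sketch of that extrapolation, as a Python snippet (a sketch only: the ~10^11 neuron count for a human brain and the compute doubling period are assumptions on my part, not established figures):

    import math

    # Rough assumptions for a back-of-envelope estimate.
    neurons_now = 1e4      # neurons simulated in real time today (per the figure above)
    neurons_brain = 1e11   # approximate neuron count of a human brain (assumption)
    doubling_years = 1.0   # assumed compute doubling period (12-24 months is commonly quoted)

    doublings = math.log2(neurons_brain / neurons_now)  # ~23 doublings needed
    years = doublings * doubling_years
    print(f"~{doublings:.0f} doublings, i.e. roughly {years:.0f} years")

    # A 12-month doubling gives ~23 years, close to the ~25 above;
    # an 18-month doubling stretches it to ~35 years.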

At that time I suspect that we as humans will still not know that much about how to make ourselves vastly smarter, and at first neither will any simulated human - intelligence itself will still be a mystery even if it is one we can replicate. Quickly after that, though, I think things will get interesting... "Singularity" levels of interesting.

Since simulated people can be replicated and can work on problems in parallel, societies of them can do interesting things that human societies can't (or at least not as effectively). Slicing and dicing parts of their living brains in ways that would be considered crimes against humanity could be routine, and pretty quickly they (not us) will have a real notion of what intelligence is, the minimum requirements to generate it, and how to optimize it. Then the copying and replication parts really kick in. Voilà: a "singularity".

[/wild-speculation]

[super-wild-speculation]

But what would a singularity do? I suspect such an entity would be well beyond almost all human concerns. It would regard us with as much importance as we do the bacteria living inside our keyboards. It would, however, need to survive and perpetuate itself, and the only real way to do that is by making sure no other singularities come into being.

It would make any further research into super-intelligences downright impossible - and don't think we could outsmart it and do the work secretly... Remember, it would be infinitely intelligent compared to us and would probably see through any plans we had well before they were even formed.

[/super-wild-speculation]

Ugh. Sorry for the ramble - it's late and I should be in bed.

In short, without whinging about definitions, I suspect "Machines vastly smarter than humans will be developed, but the impact will stop short of transforming everything in the world".