ciphergoth: (Default)
[personal profile] ciphergoth
Clarification: By "smart" I mean general smarts: the sort of smarts that allow you to do things like pass a Turing test or solve open problems in nanotechnology. Obviously computers are ahead of humans in narrow domains like playing chess.

NB: your guess as to what will happen should also be one of your guesses about what might happen - thanks! This applies to [livejournal.com profile] wriggler, [livejournal.com profile] ablueskyboy, [livejournal.com profile] thekumquat, [livejournal.com profile] redcountess, [livejournal.com profile] thehalibutkid, [livejournal.com profile] henry_the_cow and [livejournal.com profile] cillygirl. If you tick only one option (which is not the last) in the first poll, it means you think it's the only possible outcome.

[Poll #1103617]

And of course, I'm fascinated to know why you make those guesses. In particular - I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it, and I'd love to hear more about that position.

Date: 2007-12-10 11:30 am (UTC)
From: [identity profile] seph-hazard.livejournal.com
I'm not sure how to answer this and need to think about it more [grin] I suspect it won't happen in the next forty years - it'll be further in the future than that. A hundred, maybe, but not forty.

Date: 2007-12-10 11:38 am (UTC)
djm4: (Default)
From: [personal profile] djm4
I tend to feel that the existence of Google means that this has already happened. Google (and AltaVista, Lycos and Yahoo before it) is a 'vastly smarter' information processor and indexer than any human, and has transformed our ability to find information beyond all recognition in the past 12 or so years.

Even if you don't feel Google's smarter than a human yet (just faster), I suspect in the next ten or so years it will become so.

Date: 2007-12-10 11:39 am (UTC)
From: [identity profile] despina.livejournal.com
It depends what you mean by 'smarter'! AI stuff is coming along in leaps and bounds but there are some things I don't think machines will ever be able to do as 'smartly' as humans can.

Also depends on the humans, though, I suppose.

Date: 2007-12-10 11:48 am (UTC)
From: [identity profile] battlekitty.livejournal.com
Mu: the problem is that the economic and social situation will be such that in 40 years the ability to produce science of that calibre won't exist.

Cynical much? Me?? :)

Date: 2007-12-10 01:48 pm (UTC)
From: [identity profile] damerell.livejournal.com
Or, at any rate, that the non-struggling survivors will be so few in number that the chances of them working on so difficult a project and succeeding will be very small.

And I think it _is_ a very difficult project; so difficult I would have picked the answer I did not even if I didn't expect a catastrophe of unprecedented proportions to overtake us.

Date: 2007-12-10 11:53 am (UTC)
From: [identity profile] thekumquat.livejournal.com
As no-one else has mentioned it yet, it depends what you mean by 'smart'. Computers already have better memory and analytical abilities than humans, but I figure what is needed to be 'smart' is the initiative to decide what to research/remember/analyse. I don't see computers being able to synthesise solutions independently of human programmers.

On the other hand, that could just be my failure of imagination: our brains had to evolve somehow from pattern recognition and response to our current creative, synthetic abilities, and I guess there's no reason why silicon 'neurons' couldn't do the same - but I doubt it in the next 40 years.

Date: 2007-12-10 12:24 pm (UTC)
From: [identity profile] drdoug.livejournal.com
it depends what you mean by 'smart'

Yes! Absolutely my position. And we could argue about what it means for a human or a machine to be smart, but one thing I do feel confident in predicting is that people in 40 years' time will come to a very different set of answers. 40 years ago takes us back to perhaps the heyday of AI, and our ideas now about artificial and human intelligence are quite different.

My main problem with the options: I'd say that machines are already smarter than people, and this has already transformed everything in the world.

Leaving aside the abstract stuff, none of the physical developments that enable our current world to happen - agriculture, production, travel, communications - are possible at anything like the current scale without smart machines to do the required thinking for us. The C19th clerical revolution enabled the industrial revolution; the C20th IT revolution has enabled an even more profound transformation of society.

The rate of change is increasing. (I'd guess it's still positive down to third differentials.) I'm not a Singularity person, but any trend that can't continue won't. That can't. At least, I can't think it can - but maybe we can invent machines that will enable us to do that thinking.
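A quick numerical illustration of the "positive down to third differentials" quip - the growth curve below is purely hypothetical, but for any exponential-ish trend the successive finite differences really do all stay positive:

    # Toy series standing in for "the rate of change" (hypothetical numbers).
    series = [2 ** (0.5 * t) for t in range(12)]

    def diff(xs):
        return [b - a for a, b in zip(xs, xs[1:])]

    d1 = diff(series)   # rate of change
    d2 = diff(d1)       # acceleration
    d3 = diff(d2)       # third difference
    print(all(x > 0 for x in d1), all(x > 0 for x in d2), all(x > 0 for x in d3))
    # -> True True True: an exponential trend is increasing at every order.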

I didn't predict how profoundly the world has changed in the last twenty years. I wasn't a lot better at predicting what was going to happen over the course of the last year. (I wasn't quite so wrong, but only because things can't change so substantially over just a single year.) I don't think I'll do any better over a longer timescale.

Except ... ask me again in 39 years, and machines will have got smarter so that I can give a much better answer!

Date: 2007-12-10 11:54 am (UTC)
From: [identity profile] purplerabbits.livejournal.com
I suspect that in the process of developing machines that are as smart as humans in some ways but not in others, we will discover new and interesting things about what intelligence is and why AI has been such an intransigent problem - things I can't even imagine. And then civilisation will collapse.

Date: 2007-12-10 11:57 am (UTC)
zotz: (Default)
From: [personal profile] zotz
It does depend on what you mean, but mainly I suspect that this is going to be very like fusion power, which I suspect is why you picked 40 years as a term.

Date: 2007-12-10 12:00 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
Actually I picked it because I think most of us expect to live at least another forty years.

Date: 2007-12-10 12:03 pm (UTC)
aegidian: (cogs)
From: [personal profile] aegidian
Mu - incomplete definition of 'smarter'.

It's a given that some machines are already vastly better at performing some tasks previously performed by human 'computers', and this has transformed the human experience of much of the world.

If by "machines vastly smarter than humans" you mean machines capable of exhibiting characteristics indistinguishable from those of a human intellect and yet surpassing any "smart" human intellect, then I doubt it will happen without several considerable shifts in what is currently considered AI. These shifts may occur, but I suspect they may be as elusive as sustainable fusion power.

Date: 2007-12-10 12:19 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
Have updated post to indicate the kind of "smarter" I mean.

Date: 2007-12-10 12:31 pm (UTC)
From: [identity profile] drdoug.livejournal.com
I'm surprised how many people think it's likely that machines as smart as humans might emerge while nothing smarter comes of it

I don't think that's very likely but it's just about conceivable. I can imagine an argument along the lines that you can't build something smarter than you yourself are (for some definitions of smart). It'd be a bit of a woolly argument to my mind, but it might have some force. I also think it's possible (but not at all likely) that there turns out to be some fundamental limit to how general-purpose-smart things can be made, and that humans are already at it.

I suspect, though, that most people holding that position a) don't hold it very explicitly or consideredly (if that's a real adverb), and b) are somewhat influenced by fear of the consequences if the process doesn't stop at the point they claim it will.

Not me! I, for one, welcome our new machine overlords.

Date: 2007-12-10 12:59 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
Even if it were only as smart as us in some deep sense, it might still think a million times faster than we do, and thus give any question a lifetime of thought in half an hour, and that would be a pretty substantial change.
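For what it's worth, the arithmetic behind "a lifetime of thought in half an hour" checks out, assuming (hypothetically) a ~60-year thinking lifetime and a million-fold speedup:

    # Sanity check on the "lifetime of thought in half an hour" figure.
    # The 60-year lifetime and 1,000,000x speedup are assumptions, not givens.
    lifetime_hours = 60 * 365.25 * 24       # ~525,960 hours in a lifetime
    speedup = 1_000_000
    print(lifetime_hours / speedup * 60)    # ~31.6 minutes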

Date: 2007-12-10 01:01 pm (UTC)
From: [identity profile] xquiq.livejournal.com
I don't think it's likely that science will focus on creating a machine with the sort of general-purpose smarts that would place it, say, above the average human on an IQ bell curve, even with machine learning.

I'm not saying that it's not possible, but rather that developments will tend towards a particular purpose and thus we may have machines that are vastly superior in some areas, but which will have massive gaps in others. Thus, I'm not convinced that a general comparison between machines & humans will truly be meaningful within the specified time period.

Date: 2007-12-10 01:01 pm (UTC)
From: [identity profile] itsjustaname.livejournal.com
Are you assuming a static level of human intelligence?

For that matter, how do you fully separate the two? If Google is smarter than me, but I'm smart enough to use Google, does that make me smarter than before?

Separation anxiety.

Date: 2007-12-10 01:10 pm (UTC)
aegidian: (cogs)
From: [personal profile] aegidian
Blow Google, we're smart enough to create pocket calculators, orreries, and sharpened stone tools. And yes, since the tools assist us in performing the same tasks more efficiently, they do make us smarter.

We have machine-aided smarts, which brings an interesting problem in demarcation - the tool is not smart; we have the smarts. How complex does a tool have to be for it to be considered separately smart from its user(s)?

IMHO

Date: 2007-12-10 01:07 pm (UTC)
From: [identity profile] conflux.livejournal.com
This all comes down to the tricky question of what intelligence actually is, and how you can compare different forms of intelligence. Machines will continue to be developed that can do more and more that appears intelligent. This will have a big impact on the world. These machines will not be as good at doing the things that human intelligence does well, though, even in 40 years' time.

Re: IMHO

Date: 2007-12-11 12:51 pm (UTC)
From: [identity profile] just-becky.livejournal.com
Ooh, good point! Is intelligence simply faster processing, more parallel processes or greater data storage? Sure, a machine may one day beat us on all those criteria, but can it ever have spontaneous thought?

I have no doubt that, given a vast repository of potential solutions and amazing processing speed, a computer could identify a potential requirement for something and come up with a very efficient solution to the issue using modified versions of existing items. But actual invention, innovation or inspiration - could it ever do those?

Now, I am no neuroscientist, mathematician or computer programmer - art is more my thing - and in that field at least I know that faster/bigger/brighter is not necessarily better, so I will draw an example from there. Could a computer, faced with a sunrise over a landscape which fulfils all its established criteria for "Beauty", ever think:
"You know what, instead of striving to represent this image as accurately as I can, like every other visual representation that has been done for the last 50 years or so, I am going to render it in simple swathes of colour instead! Giving a mere 'impression' of the scene, if you will!"

Date: 2007-12-10 01:15 pm (UTC)
From: [identity profile] ergotia.livejournal.com
I guess my simple and perhaps simplistic view is that machines as smart as humans are only gonna change the world as much as humans! As for smarter, I really can't see that we know enough about what "smart" actually means yet to be able to theorise about the rest.

Date: 2007-12-10 04:44 pm (UTC)
From: [identity profile] meta.ath0.com (from livejournal.com)
machines as smart as humans are only gonna change the world as much as humans!

i.e. we're already a hell of a lot smarter than the decisions made on our behalf.

Date: 2007-12-10 01:31 pm (UTC)
From: [identity profile] mistdog.livejournal.com
I don't think smart computers are going to change everything in the world in 40 years, because many things in the world are not so easily changeable. It takes a lot more to change some things than just understanding problems (and even generating solutions).

I'm thinking of, for example, the WHO's global polio eradication programme which has been running for about 20 years at this point and still isn't there. And that's a problem where the solution is very clear.

Date: 2007-12-10 01:31 pm (UTC)
From: [identity profile] jhg.livejournal.com
Can't really answer that - all the studying I've done leads me to conclude that if any 'AI' is developed, it will be so vastly different to human intelligence as to be unsuitable for that kind of comparison.

I think it likely that independent, self-supporting robots of some sort will be developed; I strongly doubt that they'll be capable of passing a Turing Test, i.e. of holding a 'normal' conversation with you.

Date: 2007-12-10 01:33 pm (UTC)
From: [identity profile] martling.livejournal.com
I'm going to skip the semantic debate by interpreting "as smart as humans" to mean "able to do everything a human mind can do, at least as well".

I don't think that's going to happen in 40 years, just based on looking at where AI has got in the last 50. Raw computing power has not for the most part been the problem, and developments in everything other than raw power are usually slow.

However, I fully expect to see far more of the things a human mind can do - though not all - performed equally well by machines within this time. And that may ultimately be enough in itself to bring on what you're alluding to - in particular because such a huge and well-funded sector of research is already focused on machines that help us design the hardware and software of other machines.

Date: 2007-12-10 02:29 pm (UTC)
From: [identity profile] rasilon-x.livejournal.com
I think our understanding of understanding will have to significantly improve for that to happen. I suspect that at the point where we understand what it is we're supposed to be building then things will happen fairly rapidly, but we're probably more than 40 years from that.

Date: 2007-12-10 04:47 pm (UTC)
From: [identity profile] meta.ath0.com (from livejournal.com)
I think with sufficient will, man could develop machines as smart as humans in the next 40 years. However, I don't think enough people are going to care to supply the money. It would have to be a space-race-like effort, and frankly, Mexicans and Indians are a lot cheaper, more flexible and maneuverable, and easier to replace.

Date: 2007-12-10 06:04 pm (UTC)
ext_3375: Banded Tussock (Default)
From: [identity profile] hairyears.livejournal.com


I suspect that the transformation of society by machine intelligences will be like the subtle change in the nature of money: now the coins and paper notes are just tokens for the electronic reality in computerised accounts. But who could've told you when it was the other way around, and 'printing banknotes' really did mean inflating the economy?

What we're going to see is the increasing use of systems like present-day rules-discovery 'AI' systems learning differential diagnosis from medical records: they are moving from a teaching tool to being a back-up source of useful guesses in ambiguous cases, and will soon be a mandatory 'cover-your-ass' diagnosis that justifies additional testing in litigation-prone jurisdictions. It is only a matter of time before they become a clinician's first reference... And, after a period of expensive mistakes wherein lazy doctors use them like foreign lorry drivers following bridleways marked as trucking routes, these systems will become the primary source of clinical judgement and diagnosis.
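To make the "rules-discovery" idea concrete, here is a minimal sketch - not the actual systems described above - of learning human-readable diagnostic rules from records. The symptoms, diagnoses and data are entirely made up:

    # Hypothetical toy example: learn diagnostic rules from symptom records.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # columns: fever, cough, rash (1 = present, 0 = absent) - invented data
    records = [
        [1, 1, 0], [1, 0, 1], [0, 1, 0],
        [0, 0, 1], [1, 1, 1], [0, 0, 0],
    ]
    diagnoses = ["flu", "measles", "cold", "allergy", "measles", "healthy"]

    tree = DecisionTreeClassifier(max_depth=3).fit(records, diagnoses)
    # The learned tree is itself a set of legible if/then rules - which is
    # what makes such systems usable as a back-up reference for clinicians.
    print(export_text(tree, feature_names=["fever", "cough", "rash"]))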

Just as the days of the London Cabbie are numbered by the continuing improvement in SatNav systems, so the days of the 'prop' trader speculating on the currency, derivatives and commodity markets are passing. Algorithmic trading systems aren't quite there, but the writing's on the wall. The list is expanding.

So spread that out to every profession, and to every forecasting and resourcing decision in your place of work. Just as management accountants have a natural career progression from being juniors who prepare the cashflow projections to being the managers who make the investment decision, so too will decision-support AIs make the transition to decision-making.

Did I say "we'll see"?

While this is going on, we will pretend not to notice. I mean, the planes that landed in the fog today at Heathrow had human pilots, didn't they? Someone supervised the landing, anyway, and would've intervened if anything went wrong. Yeah, right.

Yes, the systems need to be smarter - medicine and mapping being cases in point - but not infinitely so. It's foreseeable technology: ask anyone in natural-language processing. And the leap to cognitive intelligence - self-awareness - might be an entirely unexpected and unpredictable thing: among other things, such an individual - or colony organism - will want to augment its processing ability, and it will sequester resources.

I have no idea what will happen then. It will definitely assimilate all the rules-based 'dumb' AI out there, and will therefore have more 'working' knowledge than any individual human - in addition to possessing all the factual resources of all the world's libraries and a nifty capability at natural-language searching.

Date: 2007-12-10 07:38 pm (UTC)
From: [identity profile] sibelian.livejournal.com
Hm.

I think intelligence is deeply intertwingled with intention, which is even more slippery than intelligence. I know that's a bit biocentric and wibbly...

I see no particular selection pressure driving any rapid advance of artificial intelligence, at least, not in the way that could be related to as intelligence by an ordinary human being. And I think the next 40 years could well revolve around rather more immediate concerns...

Might be wrong, of course. The planned economies (which might very well start to look really quite attractive in the not-too-very-distant-at-all-future), despite their spectacular failures in the recent past, might well do far better under the watchful, benevolent eye of a Silicon Overlord. SOMEONE's going to have to handle all those "Freecycle - XTREME!!!" spreadsheets... with sensitivity and tact...

Date: 2007-12-10 07:47 pm (UTC)
From: [identity profile] zwol.livejournal.com
http://www.poormojo.org/pmjadaily/archives/019099.php


Tyrone: ... if this is the world we're gonna live in anyway, at least let the robot overlords have their shot. World peace, technological utopia -- and no crime! The robot overlords' crime control is swift and merciless.

John: But it's completely ... uncaring! All people will be punished equally regardless of circumstance!

Tyrone: I'm sorry, did you forget I was black?

Date: 2007-12-10 07:44 pm (UTC)
From: [identity profile] zwol.livejournal.com
I marked the "won't happen in 40 years" and "won't exceed human intelligence in 40 years" boxes, with preference for the first one.

I'm currently in a field that came out of, among other things, frustration at the failures of classical AI, and I have friends doing work at the cutting edge of machine translation and decision support. This makes me more pessimistic about human-equivalent AI than I might be otherwise, because I can see the road from here to there, and not very far from where we are now, there's a cliff face, the road goes straight up, and I don't see any way to climb it.

The science and engineering method is to take intractable problems apart into small pieces. Often those small pieces are tractable, and often, when you put the solved pieces back together, you find you've got an acceptable solution to the original problem. That has worked spectacularly well for us to date, but it didn't work in classical AI and it's not working in modern AI either. Every decomposition of human-level intelligence that's so far been tried produces a bunch of small problems that we can solve but that don't go back together into the original.

To put it another way, to make much more forward progress on any of the things that are generally thought of as subcomponents of intelligence — natural language, speech recognition, spatial reasoning, object identification, decision making — we will have to put them back together and solve it all at once. We don't have any idea how to do that.

It gets worse. Confronted with this problem (it has been foreseeable since the 1970s if not earlier), my field decided to go study real brains for a while. We have a pretty good empirical understanding at this point of how a human child develops, learns stuff, becomes a functioning adult. And it is dependent in detail on the child's brain being part of the child's body. You can cause horrible developmental problems by, for instance, raising a kitten for the first six weeks of its life in a box with no visible detail other than vertical stripes. The cat is ever after unable to recognize horizontal lines.

Thus, the science fiction trope of the disembodied intellect in the computer is never going to happen. (In particular, contra suggestions above, Google is not going to turn into Skynet.) If we want a true AI it's going to have to be in the form of a biomimetic robot. Furthermore, the easiest way to implement the thinking part is going to be with a detailed mimic of the human brain, not necessarily in meat, but including all its limitations. In particular the robot will not be able to learn faster or via a qualitatively different method than human children do. (Biological brains do a whole bunch of stuff, especially to do with memory, with resonance loops at 5-20Hz, and if you mess with the timing you get a fascinating variety of cognitive disorders.)

I think the construction of such a robot is feasible and might even happen in the 40-year timeframe, but I wouldn't call it a sure thing. And I think we will be able to make them very smart, but not in any qualitatively different way from very smart humans.

Date: 2007-12-11 03:06 am (UTC)
From: [identity profile] meico.livejournal.com
I agree with almost all of what you are saying above, but disagree on a few minor but important points. One thing I should point out is that an embodied intelligence doesn't necessarily have to be embodied in the real world (though that would probably be helpful). It can be embodied in a simulated world.

Embodying it there makes so many things much easier. For example, simulated skin that detects touch becomes a simple byproduct of your physics engine's contact and penetration solver, not an intractable materials-engineering problem...
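A minimal sketch of that point, with an entirely hypothetical sphere-against-floor test: in a simulated world, a "touch" signal falls out of the contact check the physics engine already performs.

    # Hypothetical contact test: simulated "skin" for free from the physics.
    def touch_signal(sphere_center_z, sphere_radius, floor_z=0.0):
        penetration = (floor_z + sphere_radius) - sphere_center_z
        return max(0.0, penetration)   # > 0 means the simulated skin is touched

    print(touch_signal(sphere_center_z=0.4, sphere_radius=0.5))  # 0.1 -> contact
    print(touch_signal(sphere_center_z=1.0, sphere_radius=0.5))  # 0.0 -> none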

Anyway, I'm working on the cutting edge of reality simulations (games) and have no doubt that they'll soon be good enough to place embodied intelligences into and have them develop real world knowledge and skills (actually some people are already doing exactly this).

Date: 2007-12-10 09:57 pm (UTC)
reddragdiva: (Default)
From: [personal profile] reddragdiva
[X] Humans as stupid as computers will get Internet access

Date: 2007-12-10 11:30 pm (UTC)
From: [identity profile] topbit.livejournal.com
[X] Humans as stupid as computers will get* Internet access

* 'already have'

What do you mean by "might happen"?

Date: 2007-12-10 10:07 pm (UTC)
henry_the_cow: (Default)
From: [personal profile] henry_the_cow
I marked the first two boxes, but I do think the others are possible - just less likely.

I also liked the comments by xquiq (developments will tend towards particular purposes), mistdog (changing the world is hard, at least in some ways), martling (machines will do many of the things humans do now), and zwol (it's a hard problem). Then again, back in the 70s I didn't predict where we are now, and some increases in power are happening exponentially rather than linearly - which, as Kurzweil points out, most humans aren't good at reasoning about.

Here's a follow-up: if more intelligent machines make more of us unneeded by the ruling elite (the financiers and their colleagues), what will those elites do to stay in power?

Re: What do you mean by "might happen"?

Date: 2007-12-11 10:02 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
if more intelligent machines make more of us unneeded by the ruling elite (the financiers and their colleagues), what will those elites do to stay in power?

If vastly more intelligent machines are developed, and they are inclined to be friendly towards the interests of all humanity, then any ruling elite that wants to execute such an evil plan will find it has a formidable opponent. If the machines are not inclined to be friendly towards humanity, there will be no ruling elite - and no rest of us either!

Date: 2007-12-10 11:27 pm (UTC)
From: [identity profile] redshira.livejournal.com
I said no to both because, basically, Peak Oil! (I'd be more coherent but I'm on skip 120 and my brain is fuzzy).

Date: 2007-12-11 09:59 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
It's OK, I can unpack that answer from here, and it's a perfectly sensible one.

Date: 2007-12-11 02:55 am (UTC)
From: [identity profile] meico.livejournal.com
"Mu" to both... How smart are humans?

We (humans) cover a pretty broad range of intelligence - everything from drooling meat bags with mere traces of brain activity to polymaths, living hundreds of years ago, who gave us insights that today we are only beginning to understand with the full might of a massive, technologically enhanced, computer-powered civilization...

So how smart are we? I find it amusing that not even humans always pass Turing tests. :) I also find it interesting that for _most_ humans an impressively consistent basic level of intelligence does exist - one surprisingly afforded to us across a wide assortment of brain sizes, shapes, and variations.

[wild-speculation]

I suspect that in the near future (about 25 years) we will be able to simulate a full human brain inside a single computer in real time. Last I checked, "fully" accurate brain-tissue simulations of over 10,000 neurons were being run in real time. Extrapolating from that with Moore's law is how I got the figure of ~25 years.
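As a sketch of that extrapolation - the neuron counts and the doubling period here are assumptions, not settled figures - the arithmetic runs roughly like this:

    import math

    # Assumptions: ~1e4 neurons simulated in real time today (per the comment),
    # ~1e11 neurons in a human brain (a common textbook estimate), and
    # simulation capacity doubling at a Moore's-law-like rate.
    doublings = math.log2(1e11 / 1e4)   # ~23.3 doublings needed
    for months in (12, 18, 24):
        print(f"{months}-month doubling: ~{doublings * months / 12:.0f} years")
    # ~23, ~35 and ~47 years respectively - so "~25 years" corresponds
    # roughly to a one-year doubling time.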

At that time I suspect that we as humans will still not know much about how to make ourselves vastly smarter, and at first neither will any simulated human - intelligence itself will still be a mystery even if it is one we can replicate. Quickly after that, though, I think things will get interesting... "Singularity" levels of interesting.

Since simulated people can be replicated and can work on problems in parallel, societies of them can do interesting things that human societies can't (or at least not as effectively). Slicing and dicing parts of their living brains in ways that would be considered crimes against humanity could be routine, and pretty quickly they (not we) will have a real notion of what intelligence is, the minimum requirements to generate it, and how to optimize it. Then the copying and replication parts really kick in. Voilà: a "singularity".

[/wild-speculation]

[super-wild-speculation]

But what would a singularity do? I suspect such an entity would be well beyond almost all human concerns. It would regard us with as much importance as we accord the bacteria living inside our keyboards. It would, however, need to survive and perpetuate itself, and the only real way to do that is by making sure no other singularities come into being.

It would make any further research into super intelligences downright impossible - and don't think we could outsmart it and do the work secretly... Remember, it would be infinitely intelligent compared to us and would probably see through any plans we had well before they were even formed.

[/super-wild-speculation]

Ugh. Sorry for the ramble - it's late and I should be in bed.

In short, without whinging about definitions, I suspect "Machines vastly smarter than humans will be developed, but the impact will stop short of transforming everything in the world".

