Story 2014-05-04 3JX Stephen Hawking on the dangers of advanced AI

Stephen Hawking on the dangers of advanced AI

by Anonymous Coward in science on (#3JX)
Noted and well-respected theoretical physicist Stephen Hawking discusses the potential of advanced artificial intelligence in a recent article published in The Independent. He frames the discussion in terms of "incalculable benefits and risks." Although even the original article is fairly superficial, it raises a good point for discussion: how can we learn to understand and prepare for the implications of this technology today? And who are the thought leaders asking (and answering) the right questions about this powerful science?

Denying the undeniable (Score: 3, Insightful)

by ploling@pipedot.org on 2014-05-04 23:26 (#1C9)

We can't.

It's that simple, and it ought to be obvious: by definition, we are unable to predict the behavior of any general intelligence that is significantly improved over our own. It's not as if we're particularly good at predicting ourselves, either, or far "simpler" things like Langton's Ant. But we're extremely good at pretending we can "predict" outcomes after we've done the same thing over and over again (which, of course, has nothing at all to do with real prediction).
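The Langton's Ant example is apt: the ant follows just two deterministic rules, yet its long-run behavior has only ever been discovered by running it, not derived from the rules. A minimal sketch (assuming standard math coordinates, with y increasing upward):

```python
# Langton's Ant: two deterministic rules on an infinite grid, yet its
# long-run behavior is effectively unpredictable without simulation.
def langtons_ant(steps):
    black = set()        # coordinates of black cells; all cells start white
    x, y = 0, 0          # ant position
    dx, dy = 0, 1        # facing "up" (y increases upward)
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx      # black cell: turn left...
            black.remove((x, y))  # ...and flip it to white
        else:
            dx, dy = dy, -dx      # white cell: turn right...
            black.add((x, y))     # ...and flip it to black
        x, y = x + dx, y + dy     # step forward
    return black, (x, y)
```

Run it for a few thousand steps and the trail looks chaotic; empirically, after roughly 10,000 steps the ant settles into a repeating 104-step "highway" pattern, a regularity that has only been observed, never predicted from the rules alone.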

99.999999% of humanity has no clue about the severe limits of determinism in complex, uninhibited systems, i.e. the real world. At most a few will cry out "but science!" without realizing that most hard science, as it applies outside of laboratory environments, is based on generalized empiricism rather than an imagined (because no such thing exists) form of hypothetical deterministic super-accounting.

Anyway, back to the "we can't" answer: it is not an "acceptable" answer. In particular, it is completely unacceptable to anyone with an interest in hard AI, be it academic, financial, megalomaniacal, tangential, or anything else. Thus the denial.

As for "weak AI," the same might apply, but because (as with humans) any upsets would rely on unintended emergent properties and/or chaotic or orderly confluences, it becomes a far harder argument, one that easily obfuscates and obscures; and this makes it superb for derailing any and all discussions about hard AI, so that one can even pretend one isn't in denial!

In addition to the above, those involved seem fixated on viewing their work as hyper-deterministic "machines" instead of beings. To me, that sounds like an incredibly efficient one-step recipe for disaster (for both humans and the AI).

Re: Denying the undeniable (Score: 3, Interesting)

by computermachine@pipedot.org on 2014-05-05 01:36 (#1CA)

In addition to the above, those involved seem fixated on viewing their work as hyper-deterministic "machines" instead of beings.
Yes; since any advanced AI must be a self-learning system, only limited control can be had over its evolution.

Re: Denying the undeniable (Score: 0)

by Anonymous Coward on 2014-05-06 20:21 (#1EC)

99.999999% of humanity has no clue about the severe limits of determinism in complex, uninhibited systems, i.e. the real world.
And 98.765432% of all statistics are pulled out of someone's ass.

internet law? (Score: 2, Interesting)

by danieldvorkin@pipedot.org on 2014-05-05 04:31 (#1CB)

Isn't there a Somebody-or-other's Law that says that all eminent physicists will eventually embarrass themselves and everyone else by making silly pronouncements about things wildly outside their area of expertise? Sadly, Hawking seems to have reached that point in his career.

Re: internet law? (Score: 4, Insightful)

by rocks@pipedot.org on 2014-05-05 12:31 (#1CR)

I've heard this meme a lot and, for better or worse, I've often liked it. There is something refreshing about viewing the genius in our society as normal in other respects.

On the other hand, it takes amazing skill, luck, or fortune to become someone whose voice has a platform in society; it's hard to judge someone for using that platform once they've reached it. I figure we are all pretty opinionated about lots of topics within our own circles of influence.

Re: internet law? (Score: 4, Interesting)

by danieldvorkin@pipedot.org on 2014-05-05 14:44 (#1CY)

Oh, you're probably right--I make pronouncements about all kinds of things all the time, and if I had the kind of platform Hawking does I'm sure my dumber statements would be blown up to cringe-inducing proportions. OTOH, a lot of people will give such statements by Hawking, or any eminent scientist, far more credit than they deserve, and that's a problem. And it does seem that physicists are particularly prone to this kind of thing, although scientists in other fields certainly aren't immune.

Re: internet law? (Score: 3, Interesting)

by rocks@pipedot.org on 2014-05-05 16:40 (#1D0)

You have touched on two of my pet peeves as well: (1) when the messenger becomes more important than the message, and (2) when arrogance breeds the offering of opinions without appropriate pause for reflection. However, these are such eminently human traits that they are hard to criticize in an absolute sense. My latest approach is simply to ignore the bias implicit in listening to others based on perceived eminence and to concentrate on their content or message. I suppose in the present context that means trying to say something sensible about the future of artificial intelligence, or about our fear of losing control over the AI we create... hmm...

I guess I would say we need to consider what purposes intelligences, whether natural or artificial, serve, because -- presumably -- intelligence will evolve to support those purposes. And there isn't enough conversation in society, at any level, about the reasons for our moral (or purposeful) choices. Thus, I suppose I could be almost as afraid of the very rich and powerful making decisions that adversely affect my personhood as I am of any future artificial intelligence. Maybe this could change if we could demonstrate how choices for the shared good outperform choices for personal good? Maybe an AI superior to our own natural intelligence could help us discover this?