

Stephen Hawking on the dangers of advanced AI


Denying the undeniable (Score: 3, Interesting)

by on 2014-05-04 23:26 (#1C9)

We can't.

It's that simple, and it ought to be obvious: by definition, we are unable to predict the behavior of any general intelligence that is significantly improved over our own. It's not as if we're particularly good at predicting ourselves, either, or far "simpler" things like Langton's Ant. We're just extremely good at pretending we can "predict" outcomes after we've done the same thing over and over again (which of course has nothing at all to do with real prediction).
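Langton's Ant makes the point concretely: two trivial rules, and yet its long-term behavior was found only by running it, not by analysis. A minimal sketch (the sparse-dict representation and function name here are my own, not from any particular library):

```python
# Langton's Ant on an unbounded grid, stored sparsely:
# keys present in `grid` are black cells; everything else is white.
def langtons_ant(steps):
    grid = {}
    x = y = 0
    dx, dy = 0, -1                    # ant starts facing "up"
    for _ in range(steps):
        if (x, y) in grid:            # black cell: turn left, flip to white
            dx, dy = dy, -dx
            del grid[(x, y)]
        else:                         # white cell: turn right, flip to black
            dx, dy = -dy, dx
            grid[(x, y)] = True
        x, y = x + dx, y + dy         # step forward
    return grid

# Famously, after roughly 10,000 apparently chaotic steps the ant locks
# into a repeating 104-step "highway" -- nobody predicted that from the rules.
black_cells = langtons_ant(11000)
print(len(black_cells))
```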

99.999999% of humanity has no clue about the severe limits of determinism in complex, uninhibited systems, i.e. the real world. At most a few will cry out "but science!" without realizing that most hard science, as applied outside of laboratory environments, is based on generalized empiricism rather than on some imagined form of hypothetical deterministic super-accounting (because no such thing exists).
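Those limits aren't hand-waving; even a one-line deterministic system defeats practical prediction. A sketch using the logistic map at r = 4 (a standard textbook example of chaos, not something from the comment itself): two starting points a billionth apart soon disagree completely, so any real-world measurement error swamps the "deterministic" forecast.

```python
# Logistic map x -> r * x * (1 - x): fully deterministic, yet at r = 4
# it is chaotic -- nearby trajectories diverge exponentially.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.300000000, 50)   # "measured" initial condition
b = trajectory(0.300000001, 50)   # same, off by one part in a billion
# The tiny initial gap is amplified by many orders of magnitude,
# so the two runs end up effectively unrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```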

Anyway, back to the "we can't" answer: it is not an "acceptable" answer. In particular, it is completely unacceptable to anyone with an interest in hard AI, be that interest academic, financial, megalomaniacal, tangential, or anything else. Thus, denial.

As for "weak AI", the same might apply, but because (as with humans) any upsets would rely on unintended emergent properties and/or chaotic or orderly confluences, the argument becomes far harder to make and easily obfuscates and conceals. That makes it superb for derailing any and all discussions about hard AI, so that one can even pretend one isn't in denial!

In addition to the above, those involved seem fixated on viewing their work as hyper-deterministic "machines" instead of beings. To me that sounds like an incredibly efficient one-step recipe for disaster (for both humans and the AI).


Time Reason Points Voter
2014-05-05 03:24 Interesting +1
2014-05-05 01:02 Insightful +1
