Stephen Hawking on the dangers of advanced AI
Noted and well-respected theoretical physicist Stephen Hawking discusses the potential of advanced artificial intelligence in a recent article published in The Independent, framing the discussion in terms of "incalculable benefits and risks." Although the original article is fairly superficial, it raises a good point for discussion: how can we learn to understand and prepare for the implications of this technology today? And who are the thought leaders asking (and answering) the right questions about this powerful science?
The honest answer is that we can't, and it ought to be obvious why: by definition, we are unable to predict the behavior of any general intelligence significantly more capable than our own. It's not as if we're particularly good at predicting ourselves, either, or far "simpler" systems like Langton's Ant. But we are extremely good at pretending we can "predict" outcomes after we've done the same thing over and over again, which of course has nothing at all to do with real prediction.
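Langton's Ant makes the point concrete: the rule set is two lines long, yet its long-term behavior (the famous "highway" pattern that emerges after roughly 10,000 steps) was discovered by running it, not by deriving it from the rules. A minimal sketch, with a sparse dictionary standing in for an infinite grid:

```python
def langtons_ant(steps):
    """Run Langton's Ant for `steps` moves; return (grid, final position).

    Rules: on a white square, turn right, paint it black, step forward;
    on a black square, turn left, paint it white, step forward.
    """
    grid = {}            # sparse grid: (x, y) -> True if black (absent = white)
    x, y = 0, 0
    dx, dy = 0, -1       # start facing "up"
    for _ in range(steps):
        if grid.get((x, y), False):
            dx, dy = dy, -dx         # black square: turn left, flip to white
            grid[(x, y)] = False
        else:
            dx, dy = -dy, dx         # white square: turn right, flip to black
            grid[(x, y)] = True
        x, y = x + dx, y + dy
    return grid, (x, y)
```

Despite the system being fully deterministic, no one has found a way to characterize the ant's trajectory short of simulating it step by step, which is exactly the gap between determinism and predictability the argument above turns on.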
99.999999% of humanity has no clue about the severe limits of determinism in complex, unconstrained systems, i.e. the real world. At most a few will cry out "but science!" without realizing that most hard science, as applied outside laboratory environments, rests on generalized empiricism rather than on some imagined form of hypothetical deterministic super-accounting (no such thing exists).
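Those limits show up even in a one-line system. The logistic map x → r·x·(1−x) at r = 4 is completely deterministic, yet two trajectories starting a mere 10⁻¹⁰ apart become entirely uncorrelated within a few dozen iterations, so "deterministic" buys you almost nothing in practice. A small illustrative sketch (the function name and starting values are my own choices):

```python
def logistic_orbit(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x) for n steps from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits differing by one part in ten billion at the start:
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
# Early on they agree closely; by the end of 50 steps they have diverged
# to macroscopically different values.
```

The same mechanism, sensitive dependence on initial conditions, is why real-world forecasting relies on repeated empirical observation rather than on the "super-accounting" fantasy.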
Anyway, back to the "we can't" answer: it is not an "acceptable" answer. In particular, it is completely unacceptable to anyone with an interest in hard AI, be it academic, financial, megalomaniacal, tangential, or anything else. Thus: denial.
As for "weak AI," the same might apply, but because (as with humans) any upsets would rely on unintended emergent properties and/or chaotic or orderly confluences, the argument becomes far harder to make and is easily obfuscated and obscured. That makes it a superb tool for derailing any and all discussion of hard AI, so that one can even pretend one isn't in denial.
In addition to the above, those involved seem fixated on viewing their work as hyper-deterministic "machines" rather than beings. To me that sounds like an incredibly efficient one-step recipe for disaster, both for humans and for the AI.