Stephen Hawking on the dangers of advanced AI
Noted and well-respected theoretical physicist Stephen Hawking discusses the potential of advanced artificial intelligence in a recent article published in The Independent. He frames the discussion in terms of "incalculable benefits and risks." Although the original article is fairly superficial, it raises a good point for discussion: how can we learn to understand and prepare for the implications of this technology today? And who are the thought leaders asking (and answering) the right questions about this powerful science?
I guess I would say we need to consider what purposes intelligences serve, whether natural or artificial, because, presumably, intelligence will evolve to support those purposes. And there isn't enough conversation in society, at any level, about the reasons for our moral (or purposeful) choices. Thus, I suppose I could be almost as afraid of the very rich and powerful making decisions that adversely affect my personhood as I am of any future artificial intelligence. Maybe this could change if we could demonstrate how choices for shared good outperform choices for personal good? Maybe an AI superior to our own natural intelligence could help us discover this?