Comment 1D0 Re: internet law?


Stephen Hawking on the dangers of advanced AI


internet law? (Score: 2, Interesting)

by on 2014-05-05 04:31 (#1CB)

Isn't there a Somebody-or-other's Law that says that all eminent physicists will eventually embarrass themselves and everyone else by making silly pronouncements about things wildly outside their area of expertise? Sadly, Hawking seems to have reached that point in his career.

Re: internet law? (Score: 4, Insightful)

by on 2014-05-05 12:31 (#1CR)

I've heard this meme a lot and, for better or worse, I've often liked it. There is something refreshing about viewing the genius in our society as normal in other respects.

On the other hand, it takes amazing skill, luck, or fortune to become someone whose voice has a platform in society, so it's hard to judge someone for using that platform once they've reached this status. I figure we are all pretty opinionated about lots of topics in our own circles of influence?

Re: internet law? (Score: 4, Interesting)

by on 2014-05-05 14:44 (#1CY)

Oh, you're probably right--I make pronouncements about all kinds of things all the time, and if I had the kind of platform Hawking does I'm sure my dumber statements would be blown up to cringe-inducing proportions. OTOH, a lot of people will give such statements by Hawking, or any eminent scientist, far more credit than they deserve, and that's a problem. And it does seem that physicists are particularly prone to this kind of thing, although scientists in other fields certainly aren't immune.

Re: internet law? (Score: 3, Interesting)

by on 2014-05-05 16:40 (#1D0)

You have touched on two of my pet peeves as well: (1) when the messenger is more important than the message, and (2) when arrogance breeds the offering of opinions without appropriate pause for reflection. However, these are such eminently human traits that they seem hard to criticize in an absolute sense. My latest approach to these things is just to ignore the bias implicit in listening to others based on perceived eminence and concentrate on their content or message. I suppose in the present context that means trying to say something sensible about the future of artificial intelligence or our fear of losing control over the AI we create... mmm...

I guess I would say we need to consider what purposes intelligences, whether natural or artificial, serve because -- presumably -- intelligence will evolve to support these purposes. And there isn't enough conversation in society at all levels about the reasons for our moral (or purposeful) choices. Thus, I suppose I could be almost as afraid of the very rich and powerful making decisions which adversely affect my personhood as I am of any future artificial intelligence. Maybe this could change if we could demonstrate how choices for shared good outperform choices for personal good? Maybe an AI superior to our own natural intelligence could help us discover this?


Time Reason Points Voter
2014-05-05 16:49 Interesting +1
2014-05-05 21:43 Interesting +1
