The existential threat from AI – and from humans misusing it | Letters
Roger Haines writes that there is no evidence of AI sentience, while Prof Paul Huxley recalls Asimov's laws of robotics, and Phyl Hyde says fears of AI are overblown
Regarding Jonathan Freedland's article about AI (The future of AI is chilling – humans have to act together to overcome this threat to civilisation, 26 May), isn't worrying about whether an AI is "sentient" rather like worrying whether a prosthetic limb is "alive"? There isn't even any evidence that "sentience" is a thing. More likely, like life, it is a bunch of distinct capabilities interacting, and "AI" (ie disembodied artificial intellect) is unlikely to reproduce more than a couple of those capabilities.
That's because it is an attempt to reproduce the function of just a small part of the human brain: more particularly, of the evolutionarily new part. Our motivation to pursue self-interest comes from a billion years of evolution of the old brain, which AI is not based upon. The real threat is from humans misusing AI for their own ends, and from the fact that the mechanisms we have evolved to recognise other creatures with minds like ours are (as Freedland highlighted) too easily fooled by superficial evidence.
Roger Haines
London