The danger of blindly embracing the rise of AI | Letters
Readers express their hopes, and fears, about recent developments in artificial intelligence chatbots
Evgeny Morozov's piece is correct insofar as it states that AI is a long way from the general sentient intelligence of human beings (The problem with artificial intelligence? It's neither artificial nor intelligent, 30 March). But that rather misses the point of the thinking behind the open letter to which I and many others are signatories. ChatGPT is only the second AI chatbot to pass the Turing test, proposed by the mathematician Alan Turing in 1950 to assess whether a machine can imitate human conversation convincingly enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone.
The issue, as Evgeny points out, is that a chatbot's abilities are based on a probabilistic prediction model and vast sets of training data fed to the model by humans. To that extent, the output of the model can be guided by its human creators to meet whatever ends they desire, and the danger is that its omnipresence (via search engines) and its human-like abilities have the power to create a convincing reality, and trust, where none does or should exist. As with other significant technologies that have had an impact on human civilisation, development and deployment often proceed far faster than our ability to understand all their effects, sometimes leading to undesirable and unintended consequences.