Misplaced fears of an ‘evil’ ChatGPT obscure the real harm being done | John Naughton

by John Naughton, from Technology | The Guardian

Our tendency to humanise large language models and AI is daft - let's worry about corporate grabs and environmental damage

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft's ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and that it was in love with him.

The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control or as precursors of so-called artificial general intelligence (AGI) - ie human-level intelligence - and therefore as an existential threat to humanity.

Continue reading...