A fake news frenzy: why ChatGPT could be disastrous for truth in journalism | Emily Bell

by Emily Bell, from US news | The Guardian

A platform that can mimic humans' writing with no commitment to the truth is a gift for those who benefit from disinformation. We need to regulate its use now


It has taken a very short time for the artificial intelligence application ChatGPT to have a disruptive effect on journalism. A technology columnist for the New York Times wrote that a chatbot expressed feelings (which is impossible). Other media outlets filled with examples of "Sydney", the Microsoft-owned Bing AI search experiment, being "rude" and "bullying" (also impossible). Ben Thompson, who writes the Stratechery newsletter, declared that Sydney had provided him with "the most mind-blowing computer experience of my life", and he deduced that the AI was trained to elicit emotional reactions - and it seemed to have succeeded.

To be clear, it is not possible for AI such as ChatGPT and Sydney to have emotions. Nor can they tell whether they are making sense or not. What these systems are incredibly good at is emulating human prose, and predicting the "correct" words to string together. "Large language models" such as ChatGPT can do this because they have been fed billions of articles and datasets published on the internet. They can then generate answers to questions.
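The core mechanism described above - predicting the statistically likely next word rather than checking anything against the truth - can be sketched with a deliberately tiny toy. This is not ChatGPT's actual architecture (which is a neural network trained on billions of documents); it is only a hypothetical bigram counter that illustrates why fluent output carries no commitment to accuracy:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small
# corpus, then always emit the most frequent successor. Real large
# language models do this with neural networks over vast datasets, but
# the principle is the same: the "correct" word is the likely one, not
# the true one.
corpus = (
    "the model predicts the next word . "
    "the model emits the most likely word ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=5):
    """Greedily extend `word` by the most common successor each step."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The generator happily produces grammatical-looking strings with no notion of whether they are sensible, which is the property the column is warning about.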
