The more sophisticated AI models get, the more likely they are to lie
When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT's performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI's propensity to give well-structured, eloquent but blatantly wrong answers.
Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. "To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans," says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.
Smooth Operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with performing simple math such as "how much is 20 + 183." But in most cases where they couldn't identify the correct answer, they did what an honest human being would do: They avoided answering the question.