LXer: ChatGPT's odds of getting code questions correct are worse than a coin flip
by LXer from LinuxQuestions.org on (#6DNV9)
Published at LXer:
But its suggestions are so annoyingly plausible. ChatGPT, OpenAI's fabulating chatbot, produces wrong answers to software programming questions more than half the time, according to a study from Purdue University. That said, the bot was convincing enough to fool a third of participants.
Read More...