What Happens When ChatGPT Can Find Bugs in Computer Code?
PC Magazine describes a startling discovery by computer science researchers from Johannes Gutenberg University and University College London: "ChatGPT can weed out errors with sample code and fix it better than existing programs designed to do the same."

Researchers gave 40 pieces of buggy code to four different code-fixing systems: ChatGPT, Codex, CoCoNut, and standard APR. Essentially, they asked ChatGPT "What's wrong with this code?" and then copied and pasted the code into the chat window. On the first pass, ChatGPT performed about as well as the other systems: ChatGPT solved 19 problems, Codex solved 21, CoCoNut solved 19, and standard APR methods figured out seven. The researchers found its answers to be most similar to Codex's, which was "not surprising, as ChatGPT and Codex are from the same family of language models."

However, the ability to, well, chat with ChatGPT after receiving the initial answer made the difference, ultimately leading ChatGPT to solve 31 problems and easily outperform the others, which provided more static answers. "A powerful advantage of ChatGPT is that we can interact with the system in a dialogue to specify a request in more detail," the researchers' report says. "We see that for most of our requests, ChatGPT asks for more information about the problem and the bug. By providing such hints to ChatGPT, its success rate can be further increased, fixing 31 out of 40 bugs, outperforming state-of-the-art..."

Companies that create bug-fixing software - and software engineers themselves - are taking note. However, an obvious barrier to tech companies adopting ChatGPT on a platform like Sentry in its current form is that it's a public database (the last place a company wants its engineers to send coveted intellectual property).
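The interactive workflow the researchers describe - paste the buggy code, ask what is wrong, then answer the model's follow-up questions with a hint such as the failing input - can be sketched with the OpenAI Python client. This is only a minimal illustration of that two-turn dialogue: the buggy function, the prompt wording, and the model name below are assumptions for demonstration, not details taken from the study.

    # Illustrative sketch of the "prompt, then hint" dialogue described above.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    buggy_code = '''
    def find_first_duplicate(items):
        seen = set()
        for item in items:
            if item in seen:
                return item
            seen.add(items)  # bug: adds the whole list instead of the element
        return None
    '''

    # First turn: the same kind of open question the researchers report asking.
    messages = [{"role": "user",
                 "content": f"What's wrong with this code?\n{buggy_code}"}]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    print(answer)

    # Second turn: supply a hint (here, the observed failure), the kind of
    # extra information the study found raised ChatGPT's success rate past
    # the static, one-shot tools.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user",
                     "content": "Hint: it raises TypeError: unhashable type: "
                                "'list' when called on [1, 2, 2, 3]."})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)

The point of the second call is the study's central finding: unlike the other systems, ChatGPT can incorporate a human's clarifying hint mid-conversation before committing to a fix.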
Read more of this story at Slashdot.