AI Coding Competition Pits GPT-4 Against Bard, GitHub Copilot, Bing, and Claude+
HackerNoon tested five AI bots on coding problems from Leetcode.com - GPT-4, GitHub Copilot, Bard, Bing, and Claude+. There's some interesting commentary on the strengths and weaknesses of each one -- and, of course, the code they ultimately output. The final results?

[GPT-4's submission] passes all tests. It beat 47% of submissions on runtime and 8% on memory. GPT-4 is highly versatile in generating code for various programming languages and applications. Among the caveats: it takes much longer to get a response, and API usage is a lot more expensive, so costs could ramp up quickly. Overall, it got the answer right and passed the test.

[Bing's submission] passed all the tests. It beat 47% of submissions on runtime and 37% on memory. This code looks a lot simpler than what GPT-4 generated. It beat GPT-4 on memory, and it used less code! Bing seems to have the most efficient code so far; however, it gave a very short explanation of how it solved the problem. Nonetheless, it's the best so far.

But both Bard and Claude+ failed the submission test (badly), while GitHub Copilot "passes all the tests. It scored better than 30% of submissions on runtime and 37% on memory."