Round 2: We test the new Gemini-powered Bard against ChatGPT
(credit: Aurich Lawson)
Back in April, we ran a series of useful and/or somewhat goofy prompts through Google's (then-new) PaLM-powered Bard chatbot and OpenAI's (slightly older) ChatGPT-4 to see which AI chatbot reigned supreme. At the time, we gave the edge to ChatGPT on five of seven trials, while noting that "it's still early days in the generative AI business."
Now, the AI days are a bit less early, and this week's launch of a new version of Bard powered by Google's new Gemini language model seemed like a good excuse to revisit that chatbot battle with the same set of carefully designed prompts. That's especially true since Google's promotional materials emphasize that Gemini Ultra beats GPT-4 in "30 of the 32 widely used academic benchmarks" (though the more limited "Gemini Pro" model currently powering Bard fares significantly worse in those not-completely-foolproof benchmark tests).
This time around, we decided to compare the new Gemini-powered Bard to both ChatGPT-3.5 (for an apples-to-apples comparison of both companies' current "free" AI assistant products) and ChatGPT-4 Turbo (for a look at OpenAI's current "top of the line" waitlisted paid subscription product). Google's top-level "Gemini Ultra" model won't be publicly available until next year. We also looked at the April results generated by the pre-Gemini Bard model to gauge how much progress Google's efforts have made in recent months.