Early Impressions of Google's Gemini Aren't Great
Google this week took the wraps off Gemini, its new flagship generative AI model meant to power a range of products and services, including Bard. Google has touted Gemini's superior architecture and capabilities, claiming that the model meets or exceeds the performance of other leading gen AI models like OpenAI's GPT-4. But the anecdotal evidence suggests otherwise. TechCrunch: The model fails to get basic facts right, like the 2023 Oscar winners: Gemini Pro incorrectly claims that Brendan Gleeson won Best Actor last year, not Brendan Fraser, the actual winner. I tried asking the model the same question and, bizarrely, it gave a different wrong answer. "Navalny," not "All the Beauty and the Bloodshed," won Best Documentary Feature last year; "All Quiet on the Western Front" won Best International Film; "Women Talking" won Best Adapted Screenplay; and "Pinocchio" won Best Animated Feature Film. That's a lot of mistakes. Translation doesn't appear to be Gemini Pro's strong suit, either. What about summarizing news? Surely Gemini Pro, with Google Search and Google News at its disposal, can give a recap of something topical? Not necessarily. It seems Gemini Pro is loath to comment on potentially controversial news topics, instead telling users to... Google it themselves.