Google upstages itself with Gemini 1.5 AI launch, one week after Ultra 1.0
The Gemini 1.5 logo, released by Google. (credit: Google)
One week after its last major AI announcement, Google appears to have upstaged itself. Last Thursday, Google launched Gemini Ultra 1.0, which supposedly represented the best AI language model Google could muster, available as part of the renamed "Gemini" AI assistant (formerly Bard). Today, Google announced Gemini Pro 1.5, which it says "achieves comparable quality to 1.0 Ultra, while using less compute."
Congratulations, Google, you've done it. You've undercut your own premier AI product. While Ultra 1.0 is possibly still better than Pro 1.5 (what even are we saying here), Ultra was presented as the key selling point of the "Gemini Advanced" tier of Google's Google One subscription service. And now it's looking a lot less advanced than it did seven days ago. All this is on top of the confusing name-shuffling Google has been doing recently. (Just to be clear, although it's not really clarifying at all: the free version of Bard/Gemini currently uses the Pro 1.0 model. Got it?)
Google claims that Gemini 1.5 represents a new generation of LLMs that "delivers a breakthrough in long-context understanding," and that it can process up to 1 million tokens, "achieving the longest context window of any large-scale foundation model yet." Tokens are fragments of words. The first part of the claim about "understanding" is contentious and subjective, but the second part is probably correct. OpenAI's GPT-4 Turbo can reportedly handle 128,000 tokens in some circumstances, and 1 million is quite a bit more, roughly 700,000 words. A larger context window allows for processing longer documents and holding longer conversations. (The Gemini 1.0 model family handles a maximum of 32,000 tokens.)
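For a rough sense of what those numbers mean, here is a minimal sketch of counting tokens with OpenAI's open source tiktoken tokenizer. This is purely illustrative: Gemini uses its own tokenizer, so Google's exact counts will differ, but the word-to-token relationship is similar across models.

import tiktoken

# OpenAI's "cl100k_base" encoding, used here only as an example tokenizer.
enc = tiktoken.get_encoding("cl100k_base")

text = "Google announced Gemini Pro 1.5 today."
tokens = enc.encode(text)

print(len(text.split()))   # word count: 6
print(len(tokens))         # token count: typically a few more, since tokens are word fragments
print(enc.decode(tokens))  # decoding the token IDs reproduces the original string

# Rough rule of thumb for English text: one token is about 0.7 words,
# so a 1,000,000-token context window corresponds to roughly 700,000 words.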