OpenAI Announces Its Latest Model GPT-4 Turbo, Key Features Unveiled
OpenAI introduced its latest AI model, GPT-4 Turbo, on Monday, along with a slew of new features for ChatGPT and other AI tools and models. GPT-4 Turbo is the largest update to OpenAI's large language model (LLM) GPT since the company rolled out ChatGPT last year. The announcements were made at DevDay, the AI giant's first-ever developer conference, held in San Francisco.
An update to the existing GPT-4 model, GPT-4 Turbo is significantly more powerful and comes with several improvements. OpenAI also unveiled "GPT-4 Turbo with vision," a special GPT-4 Turbo variant that can process images.
5 Key GPT-4 Turbo Features Revealed
The upgraded GPT model features several improvements, including access to a larger and more recent training dataset. The large language model was originally trained on an older dataset, which limited ChatGPT's knowledge to September 2021.
However, GPT-4 Turbo is based on more up-to-date training data covering information up to April 2023. With its new knowledge cutoff date, the large language model can now answer prompts in more current contexts.
With the launch of GPT-4 Turbo, users can feed the AI model much longer input prompts. "GPT-4 Turbo supports up to 128,000 tokens of context," said OpenAI CEO Sam Altman. Under the new pricing, developers pay $0.01 per 1,000 prompt tokens and $0.03 per 1,000 completion tokens.
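To see what those rates mean in practice, the cost of a single API call can be sketched as below. The rates match the announcement; the token counts are hypothetical examples.

```python
# Rough per-request cost estimate at GPT-4 Turbo's announced API rates.
# Rates are from the DevDay announcement; token counts are made-up examples.
PROMPT_RATE_PER_1K = 0.01      # USD per 1,000 prompt (input) tokens
COMPLETION_RATE_PER_1K = 0.03  # USD per 1,000 completion (output) tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost for one API call."""
    return (prompt_tokens / 1000) * PROMPT_RATE_PER_1K + \
           (completion_tokens / 1000) * COMPLETION_RATE_PER_1K

# A maximal 128,000-token prompt with a 1,000-token answer:
print(round(estimate_cost(128_000, 1_000), 2))  # 1.31
```

At these rates, even a request that fills the entire 128K context window costs well under two dollars, which is the point of Altman's cheaper-pricing pitch.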
For perspective, Altman compared the new limit to roughly the number of words in 300 book pages.
For comparison, GPT-4 has a limit of only 8,000 tokens, which is equivalent to 24 book pages. This makes GPT-4 Turbo far more capable than the previous versions at tasks like analyzing and summarizing extensive documents.
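Altman's back-of-the-envelope conversion can be reproduced with two common heuristics, sketched below. The words-per-token and words-per-page figures are assumptions, not numbers OpenAI published, so the result for the 8K window differs slightly from the article's 24-page figure.

```python
# Reproduce the "tokens to book pages" estimate with rough heuristics.
# Assumptions (not OpenAI's exact numbers):
#   ~0.75 English words per token, ~320 words per printed book page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 320

def tokens_to_pages(tokens: int) -> int:
    """Estimate how many book pages a given token budget covers."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_pages(128_000))  # 300 pages, matching Altman's figure
print(tokens_to_pages(8_000))    # ~19 pages with these assumptions
```

The exact page count depends on the assumed page density; a shorter page of around 250 words yields the 24-page figure quoted for GPT-4's 8K window.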
The current pricing model for using OpenAI's API can be quite expensive for developers. This is set to change with the launch of GPT-4 Turbo, which Altman claimed will make it cheaper to provide information and receive answers.
GPT-4 Turbo is also better at following instructions than its predecessors, according to a blog post on OpenAI's website. The new model is a better listener and much more capable of following instructions carefully, the blog stated. Developers who use ChatGPT to help write code will likely find this improvement particularly helpful.
As of now, ChatGPT Plus users must pick from a dropdown menu of AI tools for the chatbot, depending on the application.
Acknowledging that the model picker can be extremely annoying, Altman announced that GPT-4 Turbo will make the dropdown menu a thing of the past.
The updated AI model will be able to automatically pick the right tools for the job, such as using DALL-E by default when asked to generate an image.
GPT-4 Turbo With Vision to Support Image Inputs
Announced as coming soon, GPT-4 Turbo with vision is capable of receiving image prompts. This special GPT-4 Turbo variant allows users to upload images directly via the chat box. The AI model can then execute prompts such as generating captions or describing the contents of an image. Both the regular GPT-4 Turbo and the GPT-4 Turbo with vision models will support text-to-speech capabilities with multiple preset voices.
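For developers calling the model over the API rather than the chat box, an image prompt pairs text with an image reference inside a single message. The sketch below only builds the request body; the model name and URL are illustrative placeholders, and the content-parts shape follows what OpenAI documented for vision-capable chat requests.

```python
import json

# Sketch of a chat request mixing text and an image for a vision-capable
# model. Model name and image URL are placeholders for illustration only;
# nothing is sent over the network here.
request_body = {
    "model": "gpt-4-vision-preview",  # placeholder vision model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Write a caption for this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 300,
}

print(json.dumps(request_body, indent=2))
```

The same `content` list could carry several image parts at once, which is how prompts like "describe the differences between these photos" would be expressed.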
The post OpenAI Announces Its Latest Model GPT-4 Turbo, Key Features Unveiled appeared first on The Tech Report.