OpenAI unveils GPT-4 AI model that understands text and images
Miscellaneous / April 03, 2023
The model does not just recognize objects in images: it can also understand the actions and events they depict.
OpenAI, the company behind the ChatGPT chatbot, has introduced a powerful new artificial intelligence model, GPT-4. Unlike its predecessor, GPT-3.5, it can understand not only text but also images.
The AI can explain how to use what it sees in a photo; as examples, the developers showed images of balloons and a boxing glove.
An even clearer example is a photo of an open refrigerator: GPT-4 can describe its contents and suggest recipes using the available products. The developers noted that this feature could be useful for visually impaired people.
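For readers curious what such a request could look like in code, here is a minimal sketch. Note that this is an illustration only: image input was not broadly available through the public API when GPT-4 launched, and the model name, message format, and image URL below follow the interface OpenAI later exposed for vision-capable models, so treat them as assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask a vision-capable GPT-4 variant to suggest recipes from a fridge photo.
# "gpt-4-vision-preview" and the image URL are illustrative placeholders.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook from what is in this fridge?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```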
OpenAI calls GPT-4 its most advanced and creative model yet. It can still analyze texts and structure information according to user requests, but it does so more "meaningfully", and it now achieves higher scores on a range of professional and academic tests.
In casual conversation, the difference between GPT-3.5 and GPT-4 can be subtle, but it becomes apparent once a task is sufficiently complex: GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5, the developers added.
OpenAI also confirmed that Microsoft's recently unveiled Bing chatbot runs on GPT-4. The model is also already used in products from Stripe, Duolingo, Morgan Stanley, and Khan Academy; the latter uses it to build a kind of automated tutor.
GPT-4 is already available to OpenAI users with a ChatGPT Plus subscription. Other users can join a waitlist for API access.
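Once API access is granted, a text-only request to GPT-4 looks roughly like the sketch below. It uses the current openai Python package and assumes an API key is set in the OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A plain text request to GPT-4, the same kind of exchange ChatGPT Plus
# subscribers have through the chat interface.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what is new in GPT-4 in two sentences."},
    ],
)
print(response.choices[0].message.content)
```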