Affect Media - marketing and advertising news.

OpenAI deploys the new GPT-4o AI model, and it's stunning!

In a stunning live demonstration, OpenAI unveiled GPT-4o, and its new capabilities are impressive to say the least.

OpenAI launched its latest AI model, GPT-4o (with a small "o" for "omni"), at its Spring Update event, and its new capabilities are impressive.

GPT-4o, OpenAI's conversational agent, is capable of "reasoning about audio, vision and text in real time." The updated model "is much faster" and improves "text, vision and audio capabilities," said OpenAI CTO Mira Murati.

GPT-4o: a new multimodal model

Sam Altman, CEO of OpenAI, says the model is "natively multimodal," meaning it can accept and generate combinations of voice, text, and images. While users can already use the model's text and web-search functions, voice and video features will be introduced over the coming weeks, initially on ChatGPT Plus and ChatGPT Team accounts.

To access GPT-4o, all you need is an OpenAI account. Once logged in, the top drop-down menu displays a list of available models, including GPT-4o, GPT-4, and GPT-3.5. If GPT-4o doesn't appear on your screen, you don't yet have access to it. Although the new model is free for all users, paying customers benefit from a higher message limit than free users.


Enhanced response and analysis capabilities

Among the new features presented, the new version of Voice Mode attracted particular attention. Why? Because it reproduces human conversation remarkably well. Whereas the old Voice Mode could respond to only one prompt at a time and worked solely from what it could hear, the new version can act as a voice assistant, responding in real time while observing its surroundings.

In the live demonstration, ChatGPT showed the extent of its new capabilities: emotion reading, empathy, translation, equation solving... The new GPT-4o delivered substantially better responses than its predecessor. A notable improvement is in solving mathematical equations, where answers are now presented in a clearer, step-by-step format rather than all at once. Other notable improvements include the model's ability to search the web for the latest information, providing citations for each web search result. Users can also upload files for analysis and ask the model questions about their content.

Last but not least, another announced change: access to the models. Following the Spring Update, Sam Altman acknowledged in a blog post that the company's initial vision had changed. OpenAI has long been criticized for not opening up its advanced AI models; they will now be made available to developers via paid APIs.
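For developers, that access looks like this: a minimal sketch of calling GPT-4o through OpenAI's official Python SDK, assuming the `openai` package (version 1.x) is installed and an `OPENAI_API_KEY` environment variable holds your paid API key. The prompt text here is just an illustration.

```python
import os

def build_request(prompt: str) -> dict:
    # A chat-completions payload: the model name plus a list of messages.
    # "gpt-4o" is the identifier OpenAI exposes for the new model.
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Summarize today's marketing news in one sentence.")

# Only send the request if an API key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Billing is per token, so the higher message limits mentioned above apply to the ChatGPT interface, not to API usage, which is metered separately.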
