GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction: it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially strong at vision and audio understanding compared to existing models.

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with average latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4). To achieve this, Voice Mode runs a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information: it can’t directly observe tone, multiple speakers, or background noise, and it can’t output laughter or singing, or otherwise express emotion.
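For illustration only, here is a minimal sketch of that kind of three-model pipeline. It is not a description of how ChatGPT’s Voice Mode is actually implemented; it assumes the OpenAI Python SDK (v1.x), and the model names, voice, and file paths are placeholders chosen for the example.

```python
# Minimal sketch of a speech -> text -> speech pipeline, assuming the OpenAI
# Python SDK v1.x and an OPENAI_API_KEY in the environment. Model names,
# voice, and file paths are illustrative; this is not ChatGPT's internal
# Voice Mode implementation.
from openai import OpenAI

client = OpenAI()

# 1. A speech-to-text model transcribes the user's audio into plain text.
with open("user_question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. The text model answers. By this point tone, multiple speakers, and
#    background noise have already been discarded -- the LLM only sees text.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# 3. A text-to-speech model reads the text reply back out as audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply_text,
)
speech.write_to_file("reply.mp3")
```

The information loss described above sits at step 2: whatever nuance the audio carried, the text model only ever receives the transcript.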

GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We’ll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

      • Dog@lemmy.world · 7 months ago (edited)

        Who says I want to do better?

        Edit: who says I want it at all?

    • palordrolap@kbin.social · 7 months ago

      It’s not yawn, but not because it’s great. It’s because it’ll be around for just long enough that it will create reliance on it, ruin many things, and then those people who have become reliant will find themselves in the position of having to unruin the many ruined things without the crutch to help them.

      Or maybe I’m being the next iteration of the schoolteacher or parent who said that you won’t have a calculator in your pocket all the time.

      But then, a calculator doesn’t need a terabyte of RAM, and we’re a ways off from that being consumer-affordable yet. If past consumer RAM size trends are anything (and the only thing) to go by, a portable LLM would be a 2040s or 2050s expectation (rough numbers sketched below).

      Assuming that you’d be allowed to have the terabyte of data for nothing, anyway. Exorbitant subscription models are likely to be the norm by then.
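      As a rough check on that extrapolation: assuming roughly 16 GB in a typical consumer device in 2024 and capacity doubling every three to four years (both figures are assumptions for illustration, not sourced data), the arithmetic lands in the same window.

      ```python
      # Back-of-envelope check of the "terabyte of RAM in your pocket" timeline.
      # Assumptions (illustrative): ~16 GB in a typical consumer device in 2024,
      # capacity doubling roughly every 3-4 years.
      import math

      current_gb = 16
      target_gb = 1024  # 1 TB
      doublings = math.ceil(math.log2(target_gb / current_gb))  # 6 doublings

      for years_per_doubling in (3, 4):
          year = 2024 + doublings * years_per_doubling
          print(f"{doublings} doublings at {years_per_doubling} years each -> ~{year}")
      # Prints ~2042 and ~2048, i.e. squarely in a "2040s or 2050s" window.
      ```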