Probably a dumb question but:

With the upcoming GTA 6 where people speculate it might only run at 30FPS, I was wondering why there isn’t some setting on current gen consoles to turn on motion smoothing.

For example, my 10-year-old TV has a motion smoothing setting that works perfectly fine, even though it probably has less processing power than someone’s toaster.

It seems like this is already being integrated in some cases on NVIDIA and AMD cards, with features like DLSS Frame Generation and Fluid Motion Frames, but those only work with a limited set of games.

But I wonder why this can’t be something globally integrated into modern tech, so that we don’t have to play anything under 60FPS anymore in 2025. I honestly couldn’t play something at 30FPS since it’s so straining and hard to see things properly.

  • jarfil@beehaw.org · 4 days ago

    Motion smoothing means that instead of showing:

    • Frame 1
    • 33ms rendering
    • Frame 2

    …you would get:

    • Frame 1
    • 33ms rendering
    • #ms interpolating Frames 1 and 2
    • Interpolated Frame 1.5
    • 16ms wait
    • Frame 2

    It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive though, it just increases latency while adding imprecise partial frames.

    It will never turn 30fps into true 60fps like:

    • Frame 1
    • 16ms rendering
    • Frame 2
    • 16ms rendering
    • Frame 3
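
    For anyone curious, here’s a rough toy model in Python of the timing difference the lists above describe. It’s my own back-of-the-envelope sketch with the idealised 33ms/16ms numbers (no real driver works exactly like this), just to show that the interpolated stream hits a 60Hz cadence while every real frame still arrives a full 33ms interval later than it would at native 30fps:

```python
# Toy timeline (idealised numbers, not any real implementation):
# when does each image actually reach the screen?
FRAME_30 = 1000 / 30     # ~33.3 ms to render one real frame
FRAME_60 = 1000 / 60     # ~16.7 ms between displayed frames at 60 Hz

def native(fps, n):
    """Frame i is shown as soon as its rendering finishes."""
    step = 1000 / fps
    return [(f"frame {i}", (i + 1) * step) for i in range(n)]

def interpolated_30_to_60(n):
    """Real frame i can only be shown once frame i+1 exists to interpolate
    with, so every real frame is held back by one full 33 ms interval."""
    events = []
    for i in range(1, n):
        t_ready = (i + 1) * FRAME_30             # real frame i finishes rendering
        events.append((f"frame {i-1}",     t_ready))
        events.append((f"frame {i-1}|{i}", t_ready + FRAME_60))
    return events

for name, timeline in [("native 30fps", native(30, 4)),
                       ("interpolated 30->60", interpolated_30_to_60(4)),
                       ("native 60fps", native(60, 7))]:
    print(name)
    for label, t in timeline:
        print(f"  {t:6.1f} ms  {label}")
```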
    • Boomkop3@reddthat.com · 2 days ago

      It’s worse

      • render frame 1 - 33ms
      • render frame 2 - 33ms
      • interpolate frame 1|2
      • show frame 1
      • start rendering frame 3…
      • wait 16ms
      • show frame 1|2
      • wait 16ms
      • show frame 2
      • interpolate frame 2|3
      • start working on frame 4…
      • wait 16ms
      • show frame 2|3
      • wait 16ms
      • show frame 3 -> this is a whole 33ms late!

      And that’s ignoring the extra processing time of the interpolation and the asynchronous workload. It’s so slow that if you wiggle your joystick 15 times per second, the image on the screen will be moving in the opposite direction.
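
      To sanity-check that joystick claim, here’s a tiny toy model (mine, not anything from a real engine): it samples a 15Hz left-right wiggle and compares the stick’s current direction with what the screen shows after ~33ms of extra delay. That delay is half a wiggle cycle, so the directions come out opposite:

```python
import math

# Toy model (mine): a joystick wiggled left/right 15 times per second,
# viewed through ~33 ms of extra display delay.
WIGGLE_HZ = 15
DELAY_MS = 33.3          # roughly the extra latency from the pipeline above

def stick(t_ms):
    # stick position as a 15 Hz sine wave
    return math.sin(2 * math.pi * WIGGLE_HZ * t_ms / 1000)

for t in range(0, 100, 8):
    d_now = stick(t + 1) - stick(t)                          # where the stick is heading
    d_seen = stick(t - DELAY_MS + 1) - stick(t - DELAY_MS)   # where the on-screen image is heading
    print(f"t={t:3d}ms  stick moving {'right' if d_now > 0 else 'left '}"
          f"  screen moving {'right' if d_seen > 0 else 'left '}")
```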

      • jarfil@beehaw.org · 2 days ago

        Hm… good point… but… let’s see, assuming full parallel processing:

        • […]
        • Frame -2 ready
        • Frame -1 ready
          • Show frame -2
          • Start interpolating -2|-1 (should take less than 16ms)
          • Start rendering Frame 0 (will take 33ms)
          • User input 0 (will be received in 20ms if wired)
        • Wait 16ms
          • Frame -2|-1 ready
        • Show Frame -2|-1
        • Wait 4ms
          • Process User input 0 (max 12ms to get into next frame)
          • User input 1 (will be received in 20ms if wired)
        • Wait 12ms
        • Frame 0 ready
          • Show Frame -1
          • Start interpolating -1|0 (should take less than 16ms)
          • Start rendering Frame 1 {includes User input 0} (will take 33ms)
        • Wait 8ms
          • Process User input 1 (…won’t make it into a frame before User input 2 is received)
          • User input 2 (will be received in 20ms if wired)
        • Wait 8ms
          • Frame -1|0 ready
        • Show Frame -1|0
        • Wait 12ms
          • Process User Input 1+2 (…will it take less than 4ms?)
        • Wait 4ms
        • Frame 1 ready {includes user input 0}
          • Show Frame 0
          • Start interpolating 0|1 (should take less than 16ms)
          • Start rendering Frame 2 {includes user input 1+2… maybe} (will take 33ms)
        • Wait 16ms
          • Frame 0|1 ready {includes partial user input 0}
        • Show Frame 0|1 {includes partial user input 0}
        • Wait 16ms
        • Frame 2 ready {…hopefully includes user input 1+2}
          • Show Frame 1 {includes user input 0}
        • […]

        So…

        • From user input to partial display: 66ms
        • From user input to full display: 83ms
        • Some user inputs will be bundled up
        • Some user inputs will take some extra 33ms to get displayed

        Effectively, an input-to-display latency equivalent to somewhere between a blurry 15fps (the 66ms case) and an abysmal 8.6fps (the 83ms case plus the extra 33ms).

        Could be interesting to run a simulation and see how many user inputs get bundled or “lost”, and what the maximum latency would be.

        Still, at a fixed 30fps, the latency would be:

        • 20ms best case
        • 53ms worst case (missed frame)
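
        As a starting point for that simulation, here’s a very rough Python sketch under my own simplifying assumptions (back-to-back 33.3ms renders, a render picks up every input that arrived before it started, and a real frame is only shown once the next one has finished so the interpolated frame can sit in between). It’s not a model of any real engine, just a way to count bundled inputs and latencies:

```python
# Rough simulation under my own assumptions (not any real engine):
# - real frames render back-to-back, 33.3 ms each
# - a render picks up every input that arrived before it started
# - with interpolation, real frame n is shown only once frame n+1 is done,
#   so the interpolated n|n+1 frame can be slotted in between
FRAME = 1000 / 30
INPUT_PERIOD = 8                                  # one input event every 8 ms

inputs = [i * INPUT_PERIOD for i in range(40)]    # input timestamps in ms
frames = {}                                       # frame index -> inputs it contains
latencies = []

for t in inputs:
    # index of the first render that starts at or after this input
    n = int(t // FRAME) + (1 if t % FRAME else 0) + 1
    frames.setdefault(n, []).append(t)
    shown_at = (n + 1) * FRAME                    # held back one interval for interpolation
    latencies.append(shown_at - t)

bundled = sum(1 for batch in frames.values() if len(batch) > 1)
print(f"{len(inputs)} inputs, {bundled} frames ended up bundling several of them")
print(f"input-to-display latency: min {min(latencies):.1f} ms, "
      f"max {max(latencies):.1f} ms, avg {sum(latencies)/len(latencies):.1f} ms")
```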
        • Boomkop3@reddthat.com · 2 days ago

          You’ve just invented time travel.

          The basic flow is
          [user input -> render 33ms -> frame available]
          It is impossible to have a latency lower than this; a newer frame simply does not exist yet.

          But with interpolation you also need consistent time between frames. You can’t just present a new frame and the interpolated frame instantly after each other. First you present the interpolated frame, then you wait half a frame time and present the new frame it was interpolated toward.

          So your minimum possible latency is 1.5 frames, or 33+17 ≈ 50ms (which is horrible)

          One thing I wonder tho… could you use the motion vectors from the game engine that are available before a frame even exists?
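
          Just to put numbers on that 1.5-frame floor (my own framing, same idealised numbers as above):

```python
# The 1.5-frame latency floor at a few base framerates:
# render one full frame, then hold it for half a display interval.
for base_fps in (24, 30, 40, 60):
    frame_ms = 1000 / base_fps
    print(f"{base_fps:>2} fps base -> interpolation latency floor ~{1.5 * frame_ms:.0f} ms")
```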

          • jarfil@beehaw.org · 2 days ago

            You’ve just invented time travel.

            Oops, you’re right. Got carried away 😅

            could you use the motion vectors from the game engine that are available before a frame even exists?

            Hm… you mean like what video compression algorithms do? I don’t know of any game doing that, but it could be interesting to explore.
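
            For what it’s worth, here’s a toy sketch of what extrapolating from engine motion vectors could look like. All names and the nearest-pixel warp are my own simplifications, not how any shipping frame-generation tech actually does it:

```python
import numpy as np

def extrapolate(frame, motion, dt=0.5):
    """Predict a frame `dt` frames ahead by pulling each output pixel from
    where the per-pixel motion vector says it came from (nearest-pixel warp).
    frame: (H, W) array, motion: (H, W, 2) pixels-per-frame (dy, dx)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - motion[..., 0] * dt), 0, h - 1).astype(int)
    src_x = np.clip(np.round(xs - motion[..., 1] * dt), 0, w - 1).astype(int)
    return frame[src_y, src_x]

# tiny demo: a bright square moving 4 px to the right per frame
frame = np.zeros((8, 8))
frame[2:4, 1:3] = 1.0
motion = np.zeros((8, 8, 2))
motion[..., 1] = 4.0                        # +4 px in x every frame, everywhere
print(extrapolate(frame, motion, dt=0.5))   # square shows up ~2 px further right
```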