Probably a dumb question but:
With the upcoming GTA 6, which people speculate might only run at 30FPS, I was wondering why there isn’t some setting on current-gen consoles to turn on motion smoothing.
For example, my 10-year-old TV has a motion smoothing setting that works perfectly fine, even though it probably has less processing power than someone’s toaster.
It seems like this is already being integrated in some cases, such as DLSS Frame Generation on NVIDIA cards and Fluid Motion Frames on AMD, but only for a limited set of games.
But I wonder: why can’t this be globally integrated into modern tech, so that we don’t have to play anything under 60FPS anymore in 2025? I honestly couldn’t play something at 30FPS, since it’s so straining and hard to see things properly.
Motion smoothing means that instead of showing each real frame twice on a 60 Hz screen (A, A, B, B, C, C, …),
you would get the real frame followed by an interpolated guess at the in-between frame (A, (A+B)/2, B, (B+C)/2, C, …).
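In pseudo-Python, just to make the two sequences concrete (my own toy illustration, frames named A–D, nothing taken from any real TV or GPU):

```python
# A 30fps source shown on a 60 Hz display, with and without motion smoothing.
source = ["A", "B", "C", "D"]                    # real frames, one every ~33 ms

without = [f for f in source for _ in range(2)]  # each real frame just held twice
# -> ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D']

smoothed = []
for a, b in zip(source, source[1:]):
    smoothed += [a, f"({a}+{b})/2"]              # real frame, then a blended guess
# -> ['A', '(A+B)/2', 'B', '(B+C)/2', 'C', '(C+D)/2']

print(without)
print(smoothed)
```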
It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive though, it just increases latency while adding imprecise partial frames.
It will never turn 30fps into true 60fps, where every frame is freshly rendered from the latest input.
It’s worse: half of the frames are interpolated guesses, and even the real frames reach the screen later than they would without smoothing.
And that’s while ignoring the extra processing time of the interpolation and the asynchronous workload. It’s so slow that if you wiggle your joystick 15 times per second, the image on the screen will be moving in the opposite direction (at 15 Hz, half a wiggle period is about 33 ms, so a delay of that size puts the on-screen motion roughly in antiphase with your hand).
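Quick sanity check of the joystick example (taking the delay as a single 30fps frame, ~33 ms, purely for illustration; the real delay with interpolation is larger):

```python
import math

f = 15.0        # joystick wiggles per second
delay = 1 / 30  # assume ~33 ms of display delay for illustration

# At 15 Hz, half a wiggle period is ~33 ms, so a 33 ms delay flips the phase:
for t in [0.00, 0.02, 0.04, 0.06]:
    hand   = math.sin(2 * math.pi * f * t)            # where your hand is
    screen = math.sin(2 * math.pi * f * (t - delay))  # what the screen shows
    print(f"t={t*1000:4.0f} ms  hand={hand:+.2f}  screen={screen:+.2f}")
# screen ≈ -hand at every instant, i.e. moving in the opposite direction
```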
Hm… good point… but… let’s see, assuming full parallel processing: the input-to-photon delay works out to somewhere between roughly 67 ms (about two frame times) in the best case and roughly 116 ms (about 3.5 frame times) in the worst case.
So…
Effectively, an input-to-render equivalent of between a blurry 15fps, and an abysmal 8.6fps.
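Working backwards from those two numbers (my arithmetic, assuming a 33.3 ms frame time at 30fps):

```python
frame = 1000 / 30              # ≈ 33.3 ms per real frame at 30fps

best  = 2.0 * frame            # ≈ 66.7 ms  input-to-photon, best case
worst = 3.5 * frame            # ≈ 116.7 ms input-to-photon, worst case

print(f"{1000 / best:.1f} fps-equivalent")   # ≈ 15.0
print(f"{1000 / worst:.1f} fps-equivalent")  # ≈ 8.6
```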
Could be interesting to run a simulation and see how many user inputs get bundled or “lost”, and what the maximum latency would be.
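Something like this rough sketch, maybe (all numbers are my assumptions: 125 Hz controller polling, a fixed 30fps render, each render sampling only the newest input available at its start, and the finished frame reaching the screen 1.5 frame times later because of the interpolation presentation delay worked out further down):

```python
FRAME = 1000 / 30     # ms per rendered frame at 30fps
POLL  = 1000 / 125    # ms between controller samples (assumed 125 Hz)
DELAY = 1.5 * FRAME   # render + waiting behind the interpolated frame

N_FRAMES = 600
inputs = [i * POLL for i in range(int(N_FRAMES * FRAME / POLL))]

latencies = []
consumed = 0
last = -1.0                                   # newest input actually used so far
for n in range(N_FRAMES):
    start = n * FRAME                         # this render samples input at its start
    pending = [x for x in inputs if last < x <= start]
    if pending:
        last = pending[-1]                    # only the newest sample matters
        consumed += 1
        latencies.append(start + DELAY - last)

superseded = sum(1 for x in inputs if x <= (N_FRAMES - 1) * FRAME) - consumed
print(f"inputs bundled/'lost': {superseded} of {len(inputs)}")
print(f"max latency: {max(latencies):.1f} ms, "
      f"avg: {sum(latencies)/len(latencies):.1f} ms")
```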
Still, at a fixed 30fps, the latency would come out at less than one frame time.
You’ve just invented time travel.
The basic flow is:
[user input -> render 33ms -> frame available]
It is impossible to have a latency lower than this; a newer frame simply does not exist yet.
But with interpolation you also need consistent time between frames. You can’t just present the new frame and the interpolated frame instantly after each other. First you present the interpolated frame, then you wait half a frame and present the new frame it was interpolated to.
So your minimum possible latency is 1.5 frames, or 33 + 17 ≈ 50ms (which is horrible)
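Spelling that out as a timeline (1/30 s ≈ 33.3 ms to render a frame, 1/60 s ≈ 16.7 ms per display refresh, and assuming the interpolation itself is free):

```python
FRAME = 1000 / 30          # ≈ 33.3 ms, time to render the new frame
HALF  = 1000 / 60          # ≈ 16.7 ms, one 60 Hz display refresh

t_input       = 0.0
t_new_frame   = t_input + FRAME         # new frame B finishes rendering
t_show_interp = t_new_frame             # interpolated A/B frame goes up first
t_show_new    = t_show_interp + HALF    # B itself, half a display frame later

print(f"{t_show_new - t_input:.0f} ms") # ≈ 50 ms = 1.5 frame times
```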
One thing I wonder tho… could you use the motion vectors from the game engine that are available before a frame even exists?
Oops, you’re right. Got carried away 😅
Hm… you mean like what video compression algorithms do? I don’t know of any game doing that, but it could be interesting to explore.
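A very rough sketch of what that might look like (pure numpy, my own toy version, not how DLSS Frame Generation or any actual engine does it): take the last real frame and push each pixel along its engine-supplied motion vector to guess where it will be half a frame from now, instead of waiting for the next real frame to blend with.

```python
import numpy as np

def extrapolate_half_frame(frame, motion):
    # frame: (H, W, 3) colors; motion: (H, W, 2) per-pixel motion in pixels/frame
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # predicted position of each pixel half a frame into the future
    ty = np.clip(np.round(ys + 0.5 * motion[..., 1]).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + 0.5 * motion[..., 0]).astype(int), 0, w - 1)
    out = np.zeros_like(frame)
    out[ty, tx] = frame[ys, xs]   # naive forward warp: leaves holes, ignores occlusion
    return out
```

The catch is in the last line: the guess has holes wherever something moves and uncovers the background, which is part of why the shipping implementations interpolate between two real frames instead of just extrapolating from one.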