Why pixel counting never went away.
For this piece, though, it was actually a supporter question that prompted the article.
We often refer to that latter tech as TAAU - a combination of upscaling and temporal anti-aliasing.
And the more pixels you upscale, the smaller the return.
Another great example is Infinity Ward's upscaler in Call of Duty.
And yet, a couple of years on, Digital Foundry is still pixel-counting the old-fashioned way.
Games with poor anti-aliasing - MSAA or even FXAA - were relatively easy to count.
TAAU makes things a lot more difficult because those tell-tale edges become harder and harder to find.
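The old-fashioned counting method itself is simple arithmetic: find an aliased, near-vertical edge, note where its stair-steps fall, and the average run length between steps gives you the upscale factor. A minimal sketch of that calculation, with a hypothetical helper name and made-up numbers for illustration:

```python
# Hypothetical sketch of manual pixel counting: given the output-space row
# indices at which an aliased, near-vertical edge steps one pixel sideways,
# the average run length between steps approximates the vertical scale factor.

def estimate_base_height(step_rows: list[int], output_height: int) -> int:
    """Estimate the internal render height from stair-steps on an aliased edge."""
    # Distance between consecutive steps = output pixels per internal pixel.
    runs = [b - a for a, b in zip(step_rows, step_rows[1:])]
    scale = sum(runs) / len(runs)
    return round(output_height / scale)

# Example: a 45-degree edge in a 2160p output that steps every 3 rows
# implies a 3x vertical upscale, i.e. a 720p internal resolution.
steps = list(range(0, 30, 3))
print(estimate_base_height(steps, 2160))  # 720
```

In practice, of course, the hard part is finding a clean edge to measure in the first place, which is exactly what TAAU takes away.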
AMD’s FSR 2 complicated matters still further.
At this point, we’ve reached the conclusion that there’s no programmatic way to express image quality.
The picture can look poor and the base resolution helps explain why.
Is there any route forward in eliminating pixel counts for good?
We were recently pointed towards a machine learning model used by Netflix and others to judge video encoding quality.
And at first, results looked interesting.
But the more content we tested, the less convinced we were by the results.
It turns out that using a video-based model just wasn’t a good fit.
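Models of that kind are full-reference metrics: they score a processed frame against a pristine original. The simplest stand-in for that idea (not the Netflix model itself, just an illustration of the full-reference principle) is per-frame PSNR between an upscaled image and a native-resolution reference:

```python
import math

def psnr(reference: list[list[float]], test: list[list[float]],
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two greyscale frames.

    Higher is closer to the reference; identical frames score infinity.
    """
    n = 0
    se = 0.0
    for ref_row, test_row in zip(reference, test):
        for r, t in zip(ref_row, test_row):
            se += (r - t) ** 2
            n += 1
    mse = se / n
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

# Toy 2x2 frames: a "native" reference vs a slightly-off "upscaled" result.
native = [[100.0, 120.0], [130.0, 140.0]]
upscaled = [[102.0, 118.0], [129.0, 143.0]]
print(round(psnr(native, upscaled), 1))  # 41.6
```

Pixel-difference metrics like this are exactly where such models fall short for upscaling analysis: a temporally unstable image can score well frame by frame while looking poor in motion.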
It’s likely that thousands of comparisons like this were done and the model took shape from there.
So, perhaps something similar could be done by comparing the various upscaling techniques available today?
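If thousands of A/B verdicts between upscalers were collected, one simple way to turn them into a ranking is a win-rate score per technique. A minimal sketch under that assumption (the votes below are invented for illustration):

```python
from collections import defaultdict

def win_rates(judgements: list[tuple[str, str]]) -> dict[str, float]:
    """Score techniques from pairwise preferences; each tuple is (winner, loser)."""
    wins: dict[str, int] = defaultdict(int)
    games: dict[str, int] = defaultdict(int)
    for winner, loser in judgements:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    # Fraction of comparisons each technique won.
    return {name: wins[name] / games[name] for name in games}

# Hypothetical viewer verdicts between three upscaling techniques.
votes = [("DLSS", "FSR 2"), ("DLSS", "TAAU"),
         ("FSR 2", "TAAU"), ("TAAU", "FSR 2")]
print(win_rates(votes))
```

A production system would want something more principled (a Bradley-Terry-style model, for instance), but the shape of the data gathering is the same: many human comparisons, aggregated into scores.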
It's all about subjective analysis, better informed by figuring out the base resolution the old-fashioned way.