



ATI's filtering tricks

November 21, 2003 / by aths / page 2 of 3 / translated by 3DCenter Translation Team


   Base filter simplifications

A trilinear filter is a linear interpolation between two bilinear samples, requiring a weight between 0 and 1. ATI allocate five bits for this weight, which matches Direct3D's reference rasterizer (although higher precision is allowed by Direct3D and in fact desirable). In OpenGL, SGI - who spearheaded the inception of this API - use eight bits. Nvidia's GeForce range follows that standard as well and implements the 8-bit linear interpolation weight in both OpenGL and Direct3D.

These three additional bits result in an eightfold increase in resolution. Do we "need" that? In our opinion, at least six bits of "LOD fraction" are desirable to minimize banding artifacts. Five bits are okay for most cases, while four bits are definitely too few. Eight bits may be slightly overkill, but then there's no disadvantage to a precise texture filter. In any case, textbook quality is eight bits: this guarantees zero banding and is also SGI's recommendation for OpenGL.
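For illustration, here is a minimal sketch of the effect in C - our own code, which assumes simple truncation of the weight (how the chips actually round is not documented):

#include <stdio.h>
#include <math.h>

/* Trilinear filtering: a linear interpolation between two bilinear samples.
   The interpolation weight ("LOD fraction") is quantized to 'bits' bits,
   as an n-bit hardware weight would be. Truncation is an assumption made
   for illustration only. */
static float trilinear(float sample_near, float sample_far, float lod_fraction, int bits)
{
    float steps = (float)(1 << bits);
    float w = floorf(lod_fraction * steps) / steps;   /* only 2^bits blend ratios exist */
    return sample_near * (1.0f - w) + sample_far * w;
}

int main(void)
{
    /* Sweep the LOD fraction between two arbitrary bilinear results.
       With 5 bits there are 32 possible blend ratios, with 8 bits 256 -
       eight times finer, hence the smaller risk of visible banding. */
    for (float f = 0.0f; f < 1.0f; f += 0.1f)
        printf("fraction %.2f   5 bit %.4f   8 bit %.4f\n",
               f, trilinear(0.2f, 0.8f, f, 5), trilinear(0.2f, 0.8f, f, 8));
    return 0;
}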

"5 bit LOD fraction" issues are hard to demonstrate using screen shots, significantly harder than pointing out the issues with Nvidia's "brilinear" filtering. We still can't help wondering why it would be necessary to make savings in this area, while at the same time there's pixel shading hardware operating on 96 bits (4x 24 bits floating point), and 6x sparse, gamma corrected multisampling anti-aliasing. ATI apparently went for maximum savings in texture filtering logic. Even bilinear filtering fell victim to these "optimizations" (MouseOver effect using Javascript, alternative: click opens both screenshots):


[Screenshot comparison: block artifacts in bilinear filtering]

We're not quite sure what causes these block artifacts. But we do know how a bilinear filter should look, and that the competition does offer textbook quality - as would be expected from any current graphics card. Incidentally, this "optimization" has been in place since the R200 at the latest.

Creating the bilinear sample requires knowledge of the exact sampling position of the filter kernel. Similar to the borderline acceptable LOD fraction precision, ATI appear to have made savings in this area as well. The fraction bits of the sample coordinates are used to calculate a weighting matrix for the source texels. Perhaps this calculation was subject to complexity reductions, for example lookup tables that skip some of the arithmetic.
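As a rough sketch of what that calculation does (our own illustration in plain C, not ATI's or Nvidia's actual datapath), the fractional part of the sampling position determines the 2x2 weighting matrix like this:

#include <stdio.h>

/* Sketch: derive the 2x2 bilinear weighting matrix from the fractional part
   (fu, fv) of the sampling position within a texel. Purely illustrative -
   real hardware works with fixed-point fractions of limited width. */
static void bilinear_weights(float fu, float fv, float w[2][2])
{
    w[0][0] = (1.0f - fu) * (1.0f - fv);   /* upper-left texel  */
    w[0][1] =         fu  * (1.0f - fv);   /* upper-right texel */
    w[1][0] = (1.0f - fu) *         fv;    /* lower-left texel  */
    w[1][1] =         fu  *         fv;    /* lower-right texel */
}

int main(void)
{
    float w[2][2];
    bilinear_weights(0.25f, 0.5f, w);      /* sample sits 1/4 across, 1/2 down */
    printf("%.4f %.4f\n%.4f %.4f\n", w[0][0], w[0][1], w[1][0], w[1][1]);
    return 0;                              /* the four weights always sum to 1 */
}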

These artifacts don't appear with tiny textures; we can only speculate about the reasons at this time. We don't know where and how savings were made, only that they have been made. In short: current Radeon based cards can't deliver textbook quality bilinear filtering under certain circumstances. GeForce based cards don't have these issues.

This also applies to the quantization of the weight matrix itself. While GeForce chips implement eight bits, ATI chips have to make do with six. More than eight bits wouldn't make much sense because the textures' and the framebuffer's color channels are only eight bits wide. However, with even a single bit less, ie seven bits, the full color range of the framebuffer can no longer be reached. There would still be more than 2^7 = 128 gradients in total, but that doesn't help along pixel lines that sit exactly between two texels. To wrap it up: less than eight bits of weight resolution leads to block artifacts under heavy zoom, as the screenshot comparison below shows:


[Screenshot comparison: block artifacts from reduced weight precision under heavy magnification]

This, too, is very hard to prove with "realistic" (ie in-game) screenshots. As long as higher-resolution mip map levels are available, the bilinear filter will at most magnify a mip level to double size, which obscures these artifacts. To reveal them, a high-contrast, high-frequency texture must be heavily magnified.
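To put a number on the quantization argument, here is another small sketch of our own (assuming straightforward round-to-nearest weights; the chips' real rounding behaviour is unknown to us) that counts how many distinct shades survive when the boundary between a black and a white texel is magnified across 256 pixels:

#include <stdio.h>
#include <math.h>

/* Blend two 8-bit texel values with a weight quantized to 'bits' bits.
   Round-to-nearest is an assumption; the chips' actual rounding is unknown. */
static int blend(int a, int b, float w, int bits)
{
    float steps = (float)(1 << bits);
    float wq = floorf(w * steps + 0.5f) / steps;        /* quantized weight */
    return (int)(a * (1.0f - wq) + b * wq + 0.5f);
}

int main(void)
{
    /* Magnify the boundary between a black (0) and a white (255) texel
       across 256 screen pixels and count the distinct shades that appear.
       Fewer weight bits mean fewer shades - visible as blocks under zoom. */
    for (int bits = 5; bits <= 8; bits++) {
        int seen[256] = { 0 }, distinct = 0;
        for (int x = 0; x < 256; x++) {
            int v = blend(0, 255, x / 255.0f, bits);
            if (!seen[v]) { seen[v] = 1; distinct++; }
        }
        printf("%d bit weights: %3d distinct shades\n", bits, distinct);
    }
    return 0;
}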

So these issues are hard to spot - doesn't that make it alright? We don't think it's okay to go below textbook quality on bilinear filters. They are the foundation for, eg, the function lookups used in pixel shading. The better the bilinear filter, the better the end result - pretty obvious. Bilinear is the simplest of all texture filters; we'd expect it to be implemented without compromise.

Unfortunately, ATI set their priorities a bit differently, as demonstrated by the mip map LOD calculation in the screenshot comparison below:


[Screenshot comparison: mip map LOD calculation, GeForce vs. Radeon]

The GeForce card exhibits imperfections the size of a quad in this example (a quad is a 2x2 pixel block, and LOD calculations are performed per quad, not per pixel). The Radeon card, on the other hand, produces a chaotic pattern with wildly varying LOD. Apparently the LOD calculation was implemented with as few transistors as possible, sacrificing accuracy. GeForce cards aren't outright perfect either; we can produce situations where they, too, show "dithering" patterns. Still, the Radeon cards start doing it much earlier.
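For reference, a textbook-style per-quad LOD calculation looks roughly like this (our own sketch following the usual OpenGL-style approximation, not the simplified circuits in either chip):

#include <stdio.h>
#include <math.h>

/* Textbook-style mip LOD for one 2x2 pixel quad: take the texel-space
   derivatives across the quad and use the larger footprint as the scale
   factor. */
static float quad_lod(float u[2][2], float v[2][2], float tex_w, float tex_h)
{
    /* Derivatives in texels, estimated once per quad from its four pixels. */
    float dudx = (u[0][1] - u[0][0]) * tex_w;
    float dvdx = (v[0][1] - v[0][0]) * tex_h;
    float dudy = (u[1][0] - u[0][0]) * tex_w;
    float dvdy = (v[1][0] - v[0][0]) * tex_h;

    float rho_x = sqrtf(dudx * dudx + dvdx * dvdx);    /* footprint along x */
    float rho_y = sqrtf(dudy * dudy + dvdy * dvdy);    /* footprint along y */
    float rho   = rho_x > rho_y ? rho_x : rho_y;

    return log2f(rho > 1.0f ? rho : 1.0f);             /* clamp to base level */
}

int main(void)
{
    /* A quad whose texture coordinates step by 1/64 per pixel on a
       256x256 texture: 4 texels per pixel, so the expected LOD is 2. */
    float u[2][2] = { { 0.0f, 0.015625f }, { 0.0f,      0.015625f } };
    float v[2][2] = { { 0.0f, 0.0f      }, { 0.015625f, 0.015625f } };
    printf("LOD = %.2f\n", quad_lod(u, v, 256.0f, 256.0f));
    return 0;
}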





