Will "brilinear" filtering persist?

October 26, 2003 / by aths / page 1 of 2 / translated by 3DCenter Translation Team


   Introduction

By now, the term "brilinear" has become established for Nvidia's pseudo-trilinear filtering, which is shifted towards bilinear. Although the term is semantically nonsense, we will use it in this column.

For a while now there has been an alarming development: newer cards and/or newer drivers lower the image quality without being asked to do so. With current drivers, a GeForce4 Ti 4200 can produce better texture quality (via anisotropic filtering) than a GeForceFX 5900 Ultra. On the other hand, the following is also true: the default 4x FSAA quality of the GeForce256 is better than that of the GeForce3 (and also than that of the GeForceFX 5900 Ultra), because here 4x ordered-grid supersampling stands against 4x ordered-grid multisampling. Yet nobody calls that "a step back".

Before we rush into comparisons, a clean definition of the bilinear and trilinear filter is in order.

The same texture can be supplied in different resolutions, each scaled down by a factor of 2 per axis. These "sub-textures" are called MIP maps. Example: a 512x512 base texture has pre-computed MIP maps at 256x256, 128x128, 64x64, 32x32 etc. Since the amount of data decreases with each new MIP map (you could also say "MIP texture"), the additional memory consumption in the graphics card's RAM is tolerable: about a third of the memory used by the base texture.

An uncompressed 32-bit texture at 512x512 nevertheless needs 1 MB of RAM; all of its MIP maps together need an additional 0.33 MB. Textures larger than 512x512 are encountered quite rarely, since higher texture resolutions are only useful at very high screen resolutions. And with that we are already in the middle of the topic: if a far-off object is textured, it consists of fewer pixels, so a smaller texture resolution is selected automatically (dynamically determined by the 3D hardware). GeForce cards recalculate the MIP level for each 2x2 pixel block within each triangle, which is accurate enough.
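The arithmetic can be checked with a small sketch (assuming, as in the example above, 4 bytes per texel and square power-of-two MIP levels):

```python
# Memory for an uncompressed 32-bit texture and its full MIP chain.
# Illustrative only: 4 bytes per texel, square power-of-two levels.
def mip_chain_bytes(size, bytes_per_texel=4):
    base = size * size * bytes_per_texel
    mips = 0
    level = size // 2
    while level >= 1:                       # 256, 128, ..., 1 for size=512
        mips += level * level * bytes_per_texel
        level //= 2
    return base, mips

base, mips = mip_chain_bytes(512)
# base is exactly 1 MiB; all MIP maps together come to just under 1/3 of it
```

Each MIP level has a quarter of the texels of the previous one, so the chain sums to roughly base/3, which matches the 0.33 MB figure.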

Bilinear filtering means mixing 4 texels from the "most suitable" MIP map into one color value, in such a way that each of the four values gets a suitable "weight", which is described here in detail. During texturing, when the texture is mapped onto the polygon, no texture value may be "forgotten" in the long run; the size of the MIP texture is selected accordingly.
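As a rough illustration, the weighting can be sketched like this (a minimal grayscale model with edge clamping, not hardware code):

```python
def bilinear_sample(tex, u, v):
    """Sample a texture (2D list of floats, illustrative grayscale)
    with bilinear filtering; u, v are texel-space coordinates."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0              # fractional position between texels
    x1 = min(x0 + 1, w - 1)              # clamp at the texture edge
    y1 = min(y0 + 1, h - 1)
    # Each of the four texels is weighted by its closeness to (u, v)
    return (tex[y0][x0] * (1 - fx) * (1 - fy) +
            tex[y0][x1] * fx       * (1 - fy) +
            tex[y1][x0] * (1 - fx) * fy +
            tex[y1][x1] * fx       * fy)
```

Sampling exactly on a texel returns that texel; halfway between two texels it returns their average, and so on for all positions in between.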

If the next larger MIP texture were applied, texture shimmering (so-called aliasing) would appear, since the texture would already be sampled too coarsely with four samples per color value. Here the first disadvantage of filtering from only one MIP map shows itself: the "Level of Detail" has to be chosen so that aliasing is avoided in any case, because (of course) no one would accept texture shimmering. Across the polygon, the pixels accordingly get more or less texture detail.

If you then resort to the next smaller MIP map, "MIP banding" occurs: an artifact that results from textures of high and low resolution adjoining in the picture. In motion, these bands move in front of you like a bow wave. The LOD formula altogether provides the highest degree of sharpness that is evenly achievable with a bilinear filter. That also means the first MIP band appears relatively far "in front" in the picture, so these artifacts are accordingly annoying.
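The idea behind such a LOD choice can be sketched as follows (a simplified model: we assume the MIP level follows the base-2 logarithm of the texel-to-pixel ratio, rounded to the nearest level for bilinear filtering; real hardware uses a more involved per-pixel formula):

```python
import math

def mip_level(texels_per_pixel, num_levels):
    """Pick the bilinear MIP level: log2 of the texel-to-pixel
    ratio, clamped to the available levels. Illustrative only."""
    lod = max(0.0, math.log2(texels_per_pixel))
    return min(int(round(lod)), num_levels - 1)
```

At one texel per pixel the base texture (level 0) is used; a distant surface covering four texels per pixel axis would jump to level 2, and so on, each jump producing one of the visible MIP bands.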

For trilinear filtering, too, there are precise definitions. Microsoft specifies for Direct3D: "The texture filter to use between mipmap levels is trilinear mipmap interpolation. The rasterizer linearly interpolates pixel color, using the texels of the two nearest mipmap textures."

The OpenGL definition even gives a mathematical formula, but we will be content with a paraphrase: the trilinear filter "crossfades" the two most suitable MIP maps evenly and gently. Thus every pixel gets the same sharpness (exception: textures viewed at an oblique angle, for which anisotropic filtering is necessary; that, however, is not our topic today). The formula for trilinear filtering is of course chosen so that the textures are as sharp as possible while texture shimmering just does not appear.
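The paraphrase can also be written as code (illustrative; `sample_level` stands in for a bilinear lookup in a given MIP level):

```python
import math

def trilinear(sample_level, lod):
    """Trilinear filtering: blend the bilinear samples of the two
    adjacent MIP levels by the fractional part of the LOD."""
    l0 = math.floor(lod)
    frac = lod - l0                     # how far we are towards the smaller MIP
    c0 = sample_level(l0)               # bilinear sample, larger MIP
    c1 = sample_level(l0 + 1)           # bilinear sample, next smaller MIP
    return c0 * (1 - frac) + c1 * frac  # linear crossfade between the levels
```

Because `frac` varies continuously with distance, the detail level changes continuously too, which is exactly why the MIP bands disappear.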

The advantage of trilinear filtering is not only the consistent level of texture detail. (The overall sharpness is not higher, by the way, "only" better distributed.) The great advantage, immediately evident in motion, is that the MIP banding disappears: with a smooth "crossfade" between the MIP textures there are no more sudden changes in the detail level. Thus the bow waves are practically eliminated.

Let’s have a look at the "brilinear" filter now.


   "Brilinear" - A wise performance boost?

With trilinear filtering, instead of a MIP band artifact, nearly the whole area is covered by a so-called "tri-band": the zone in which two MIP levels are actually used. There is only a very small band in which a single MIP texture alone determines the color of the pixel.

"Brilinear" filtering likewise interpolates textures to suppress MIP banding. The tri-band is significantly reduced though. Within broad ranges solely bilinear filtering is used. Please keep in mind that "brilinear" is just an artificial word without any official use. "Pseudo-trilinear" sounds more technical in the first instance but it describes the filter just as insufficient. The effect shall be illustrated with images by imitating colored MIP maps with a paint application.


Bilinear filtering causes hard bands where different detail resolutions adjoin.


Trilinear filtering always blends smoothly between two different detail resolutions.


This "range of blending" ("tri-band") is obviously smaller during "brilinear" filtering – broad segments are excessively filtered bilinear.

Rolling back some time: when the GeForce256 was released, the GeForce3 with its resource-friendly multisampling anti-aliasing was already in development. Later, when the GeForce3 was available, the GeForceFX was already in development. The special filter logic needed for "brilinear" filtering had to be implemented in hardware, i.e. brilinear filtering is not a stopgap but a planned feature. Before evaluating this filter and its future, we should take a look at another feature.

Let us focus our attention on anti-aliasing. Supersampling was exchanged for multisampling. With the same subpixel mask, supersampling provides better quality, because the texture is sampled again for each subpixel (and the pixel shader is run again, respectively). Multisampling calculates just one color value per triangle per pixel, which applies to all subpixels covered by the triangle in that pixel. Consequently there is no improvement of texture quality, as was possible with supersampling. In addition, multisampling does not eliminate alpha-test artifacts, while supersampling does. You could sneeringly claim that multisampling hardware was developed to "cheat" its way to better anti-aliasing performance.

Nevertheless it is worth a closer look: the GeForce3 did not achieve pure performance either, since it still drops about 40% with 2x anti-aliasing. When that card was released it was disproportionately fast compared to the available CPUs, which is why this weakness was hardly noticed. Supersampling causes a drop of 50%; why sacrifice quality for such a small performance boost?

The great achievement of the GeForce3 shows in the more efficient sample grid of its 2x anti-aliasing: besides the small performance gain, an obvious quality improvement was achieved. That multisampling does not increase texture quality can be disregarded, because the GeForce3 offers up to 8° anisotropy when filtering textures (GeForce256 and GeForce2 only 2°).

There remains the advantage of supersampling with alpha textures. Well, alpha testing is a "bad method" anyway, and it also cooperates badly with anisotropic filtering. As anisotropic filtering spreads in the future, the extinction of alpha testing is foreseeable. In addition, the geometric power of modern graphics cards keeps growing, i.e. complex structures will be modeled rather than realized with alpha textures. Then multisampling will act absolutely efficiently and everybody will be satisfied.

Let us draw a conclusion: the replacement of supersampling by multisampling roughly meant accepting disadvantages in quality. Using other quality features, which act independently of anti-aliasing, provides better quality with less raw hardware power. What is required is hardware support for these features, which takes up a relevant amount of chip area. Put boldly: "a clever mind saves power".

How does that work with "brilinear" filtering? Modern TMUs (the units that handle texture sampling) supply only one bilinear sample per clock. To realize trilinear filtering, either two TMUs or two clocks are required; consequently, trilinear filtering theoretically halves the fill rate. "Brilinear" filtering works bilinearly over broad ranges, so seemingly a lot of work can be saved. In reality, however, it looks very different.
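A simple model shows why the savings look attractive on paper (illustrative arithmetic under the one-bilinear-sample-per-clock assumption above, not a benchmark):

```python
def effective_fillrate(bilinear_rate, trilinear_fraction):
    """Estimated fill rate when only `trilinear_fraction` of the
    pixels need a second MIP sample, i.e. a second clock.
    Illustrative model, not a measurement."""
    cost_per_pixel = 1.0 + trilinear_fraction   # extra clock for blended pixels
    return bilinear_rate / cost_per_pixel
```

At a trilinear fraction of 1.0 the fill rate is halved, exactly the theoretical trilinear penalty; shrinking the tri-band pushes the fraction down and the effective fill rate back up.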





