
Inside nVidia NV40

April 14, 2004 / by aths / page 4 of 6

   64 bit framebuffer - ready or not?

You may remember the ancient times when 16 bit framebuffers were replaced by 32 bit framebuffers. Many quite heated discussions ensued about how much better 32 bit really was. Of course, there is no simple answer here. Old games optimized for 16 bit (such as Thief 2) look only slightly better with 32 bit rendering. Also, the whole situation was somewhat API-dependent: Unreal looked great with 16 bit using Glide, but quite shabby with the Direct3D API. Anyhow, most of today's games really depend on having a 32 bit framebuffer.

In fact, we are comparing 16 bit color to 24 bit color: a 16 bit framebuffer means RGBA 5-6-5-0, while 32 bit means RGBA 8-8-8-8. The extra 8 bit serve as "destination alpha" for some (seldom-used) special effects.
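To make the comparison concrete, here is a small sketch of our own (not vendor code) contrasting the two framebuffer layouts: 16 bit RGB 5-6-5 with no alpha versus 32 bit RGBA 8-8-8-8 with its 8 bit destination alpha.

```python
def pack_rgb565(r, g, b):
    """Pack an 8-bit-per-channel color into a 16 bit 5-6-5 word (alpha is dropped)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def pack_rgba8888(r, g, b, a):
    """Pack the same color into a 32 bit word, keeping the 8 bit destination alpha."""
    return (r << 24) | (g << 16) | (b << 8) | a

# Distinct colors each format can represent (alpha excluded for 32 bit):
colors_16bit = 2**5 * 2**6 * 2**5    # 65,536
colors_32bit = 2**8 * 2**8 * 2**8    # 16,777,216

print(hex(pack_rgb565(255, 128, 0)))      # orange, with the low bits truncated
print(hex(pack_rgba8888(255, 128, 0, 255)))
```

The truncation in `pack_rgb565` (dropping 3 or 2 low bits per channel) is exactly where the visible banding of 16 bit rendering comes from.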

Now, GeForce 6800 Ultra introduces an optional 64 bit floating point framebuffer, that is, RGBA FP16 FP16 FP16 FP16. We already covered its higher range compared to FX8. While nVidia calls it "high dynamic range" (HDR), it is in fact medium dynamic range (MDR): HDR needs at least 32 bit for each single value, NV40 has 16 bit. Anyway, FP16 is a great leap forward. The main advantage: no longer do we have to render to an FP texture for MDR rendering. Just activate the FP16 framebuffer and you'll get an acceptable MDR render target. NV40 also supports all the usual alpha blending operations with its 64 bit framebuffer.

Of course, conventional 32 and even 16 bit framebuffers are still supported. There is not much extra logic needed for the fixed-point blending operations: while FP16 uses a mantissa of 10 bit, a slightly modified FP16 logic can easily perform FX8 calculations. (Because there is an implicit bit in any common FP format, FP16 actually delivers 10 + 1 = 11 bit of precision.)
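The precision claim can be checked by hand. The sketch below (our own illustration) assembles an IEEE-754-style half float from its bit fields and shows that the step between adjacent FP16 values in the interval [1, 2) is 2^-10, which is the 11 bit precision mentioned above.

```python
def fp16_from_fields(sign, exponent, mantissa):
    """Decode an FP16 value from its bit fields (normal numbers only, bias 15).

    The stored 10 bit mantissa gains an implicit leading 1, hence 11 bit precision.
    """
    return (-1.0) ** sign * (1.0 + mantissa / 1024.0) * 2.0 ** (exponent - 15)

# Smallest representable step just above 1.0:
one = fp16_from_fields(0, 15, 0)        # exponent 15 - bias 15 = 2^0
next_up = fp16_from_fields(0, 15, 1)    # mantissa incremented by one ULP
step = next_up - one

print(step)    # 0.0009765625 == 2**-10
```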

But there are some drawbacks: with the 64 bit framebuffer, no multisampling antialiasing is supported. This is due to limited bandwidth. FP16 rendering into the 64 bit framebuffer alone (without antialiasing) should be slow enough already, but we haven't seen any such demos yet, so perhaps nVidia will surprise us.

   A look at the improved RAMDAC

The RAMDAC creates the signal for the monitor, using the framebuffer as its reference. A recent article details the story of the RAMDAC (at this time, in German only). As with previous NV boards, the RAMDAC employs a 10 bit gamma ramp. (This prevents color banding when you have 8 bit for every channel.) Since NV40 allows FP16 for each color channel in the framebuffer, at least 11 bit in the gamma ramp would have been nice. We'd even go as far as to wish for a 12 bit linear gamma ramp. (Windows supports up to 16 bit, by the way.)
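Why a gamma ramp wider than the framebuffer's channels prevents banding can be sketched in a few lines. This is our own illustration, not driver code: mapping 8 bit input through a gamma curve into a 10 bit lookup table keeps every distinct input distinct, whereas an 8-bit-wide table would collapse neighboring values in the flat part of the curve.

```python
def gamma_ramp(in_bits=8, out_bits=10, gamma=2.2):
    """Build a gamma lookup table: in_bits-wide index -> out_bits-wide output."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round((i / in_max) ** (1.0 / gamma) * out_max) for i in range(in_max + 1)]

ramp = gamma_ramp()
# With 2 extra output bits, no two 8 bit inputs land on the same output value:
print(len(set(ramp)) == len(ramp))    # True
```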

The new RAMDAC is now able to apply tone mapping. This is a technique that modifies color values to fit HDR (or MDR) image content into the displayable 0-to-1 range. Tone mapping via the RAMDAC saves both fillrate and bandwidth. We don't know yet how flexible this new RAMDAC mapping really is. There are a couple of nice effects playing with the exposure (or the aperture), but in the end we have to wait and see whether NV40's RAMDAC really can offload the pixelshader; otherwise the shaders have to do the tone mapping calculation themselves.
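To illustrate the kind of mapping involved, here is a simple exposure operator, one common tone mapping curve. The article gives no details on which curve NV40's RAMDAC actually implements, so treat this purely as a sketch of the principle: arbitrarily large FP color values are compressed into the displayable 0-to-1 range.

```python
import math

def tonemap_exposure(value, exposure=1.0):
    """Compress a non-negative FP color value into [0, 1) with an exposure curve."""
    return 1.0 - math.exp(-value * exposure)

# MDR/HDR inputs well above 1.0 still land inside the displayable range:
for v in (0.0, 0.5, 1.0, 4.0, 16.0):
    print(f"{v:5.1f} -> {tonemap_exposure(v):.3f}")
```

Changing the `exposure` parameter per frame is what produces the aperture-style effects mentioned above.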

   "Adaptive" trilinear

There is not much to talk about: You can turn it off. We are absolutely satisfied with it. If you want to have full trilinear quality, turn the optimization off. If you are less sensitive to "brilinear" and rather prefer more performance, turn it on. You have the choice - that is the way it's meant to be configured.

But sadly, the isotropic texture filter quality was lowered. ("Isotropic filter" means texturing without anisotropic improvements.)

Isotropic filtering as it should be.

GeForce 6800 Ultra : Lower isotropic quality at angles of about 45°.

We added some black lines to the images to show the segments that each isotropic 360° MIP pattern consists of.

   Poor anisotropic quality

2x AF looks similar to the isotropic pattern, but is shifted by one MIP-map level. (So apart from the lower quality at 45°, 2x AF delivers the full anisotropic degree.) But with 4x AF and higher, only 90° and 45° angles receive the full AF degree; 22.5° and 67.5° angles get 2x AF only. This pattern looks similar to R300's AF, but it's not exactly the same.
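The angle dependence described above can be approximated in a few lines. This is our own rough model, not the actual hardware algorithm, and the 10° tolerance around the preferred angles is purely our assumption for illustration:

```python
def af_degree(angle_deg, requested=4):
    """Approximate the AF degree applied at a given surface angle (our model).

    At 2x AF the full degree applies everywhere; at 4x and higher, only angles
    near multiples of 45 degrees (90, 45, ...) get the requested degree, while
    angles near 22.5 and 67.5 degrees fall back to 2x.
    """
    if requested <= 2:
        return requested
    # Distance (in degrees) to the nearest multiple of 45:
    dist = min(angle_deg % 45, 45 - (angle_deg % 45))
    # The 10-degree tolerance is an assumed value, not a measured one:
    return requested if dist < 10 else 2

for a in (0, 22.5, 45, 67.5, 90):
    print(a, af_degree(a, requested=4))
```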

To be frank, we were totally shocked when we experienced the new anisotropic filter quality first-hand. While it is still slightly better than what current Radeons offer, compared to previous GeForce chips the 6800 Ultra delivers poor anisotropic filtering.

AF quality was always a big point of criticism against ATI's parts. We are simply appalled that nVidia now sacrifices texture quality for some performance. Such "optimizations" should always be optional. (In fact, such ATI-style angle dependencies result in lower overall quality for a given fillrate.) We don't see the point in using such "optimizations" for high-end cards like the GeForce 6800 Ultra.

Benchmarks with AF enabled should not be compared to those of previous GeForces. This would be as pointless as comparing AF benchmarks between Radeon and previous GeForce-based chips.

To enthusiasts looking for the best texture quality available, GeForce used to be the first choice. These times are over.

