Interview with nVidia's Luciano Alibrandi
May 9, 2004 / by aths / Page 1 of 1
We got the opportunity to interview Luciano Alibrandi regarding the NV40. Because Mr. Alibrandi is quite busy, we kept our interview short.
3DCenter: Hello Mr. Alibrandi, welcome to 3DCenter.org. Please introduce yourself and tell us about your job at nVidia.
Luciano Alibrandi: My name is Luciano Alibrandi and I am the European Product PR Manager at NVIDIA. I have been working at NVIDIA for 3 years now. Prior to NVIDIA, I worked at 3DFX and STB Systems in various roles, mostly related to press support.
3DCenter: As you are in PR, we'd like to ask a question about the GeForce FX 5200. Many of our readers consider the FX 5200 a "DirectX8 board" because its execution speed for DirectX9 shaders is rather slow. Many games with DX9 shader support automatically treat the FX 5200 as DirectX8-level hardware. Can you tell us the reasoning behind nVidia providing the full CineFX feature set even on entry-level products with very poor overall 3D performance?
Luciano Alibrandi: The GeForce FX product line is full featured DX9 – top to bottom. In the cases where a DX9 feature can be emulated with multiple passes in DX8, it is possible that our lower end products may actually perform better in this mode. However, for those effects that can only be done in DX9, even the least expensive GeForce FX can support them.
3DCenter: With the new GeForce 6800, nVidia does not only reduce clock speeds compared to the Ultra version, but also the number of pipelines. In the end, is the 6800 non-Ultra the same chip just with four pipelines deactivated but the full six vertex engines of the Ultra version, or is the difference between Ultra and non-Ultra larger?
Luciano Alibrandi: In the $499 segment we have the GeForce 6800 Ultra. It is a new architecture, has support for next-gen features (SM 3.0, 64-bit texture filtering/blending), and has overclocking headroom. Because of this, our recommendation for someone willing to spend $499 is to go with the 6800 Ultra, as it's a better long-term investment and the board can be overclocked.
The second board is the GeForce 6800 GT. This board offers 16 full-speed pipes, 256 MB of GDDR3, and all the same next-gen features as the GeForce 6800 Ultra, and, as was proven by the overclocking headroom of the Ultra, should have room for overclockers. It has all the features at a great price: 16 pipes, support for SM 3.0 and 64-bit texture filtering/blending, a single-slot thermal solution, and a single power dongle.
Then we have the GeForce 6800 (standard), which has exactly the same feature set as the above, but has 12 pipes and 128 MB of DDR1.
3DCenter: Are board manufacturers (AIBs) free to ship 6800 non-Ultra boards with 256 MB RAM? Can a 6800 non-Ultra operate with GDDR3, or is it limited to DDR1 on the chip side?
Luciano Alibrandi: GeForce 6800 standard boards will be 128MB. The Ultra boards will be 256MB.
3DCenter: The pattern of the anisotropic filtering of the GeForce 6800 range is similar to that of another well-known company. Formerly, nVidia offered much better quality (with a more severe performance hit, of course). Is this new pattern fixed in hardware, or can we expect an option for full, "old-school" anisotropic filtering in future driver releases?
Luciano Alibrandi: The GeForce 6800 hardware has a simplified hardware LOD calculation for anisotropic filtering, compared to the GeForce FX. This calculation is still fully compliant with the Microsoft DirectX specification and WHQL requirements, as well as matching the requirements for OpenGL conformance. This optimization saves both cost and performance, and makes the chip run faster for anisotropic filtering as well as be less expensive for consumers. Consequently, it may not be possible to re-enable the GeForce FX calculation in this hardware generation. We'll look into it.
I'd like to point out, though, that when our anisotropic quality was FAR better than the competitive product, no one in the website review community praised us for our quality of filtering – they only complained about the performance. In the future, we will plan to make a variety of choices available from maximum (as nearly perfect as we can make) quality to the most optimized performance. It's also interesting to note that although you can run tests that show the angular dependence of LOD for filtering, it is extremely difficult to find a case in a real game where the difference is visible. I believe that this is a good optimization, and benefits consumers.
3DCenter: What percentage of the NV40's transistors are logic gates, and how many are used for cache and/or registers?
Luciano Alibrandi: 85% of the transistors are logic; the rest is RAM for caches, registers, FIFOs, etc.
The most detailed and most important answer concerns anisotropic filtering (AF). Obviously, we cannot expect classic GeForce AF from the GeForce 6800 generation. We believe the NV40 loses important potential advantages over its competitors with that step. If the performance is approximately on par, many of our readers would choose the card with better texture quality. The NV40 still has an advantage here, but is far from the GeForce FX.
Apparently, the option for quasi-perfection will not be lost forever. The hint regarding the size of the chip is quite interesting. While ATI has traditionally "economized" on texture filtering, nVidia also seems to have been forced to accept a trade-off and was not able to offer both the "optimized" and the traditional version. Incidentally, the GeForce4 Ti also has an issue regarding AF: while full quality was kept, the performance is quite low. We evidently underestimated the complexity of calculating AF texture samples.
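To make the angle dependence concrete, here is a toy sketch. The function names and the particular simplification are our own assumptions for illustration, not NV40's actual hardware formula; it merely shows how a cheaper per-pixel LOD calculation can diverge from the exact one on rotated surfaces:

```python
import math

def lod_exact(dudx, dvdx, dudy, dvdy):
    # Reference mip LOD: log2 of the longer screen-space texture gradient.
    return math.log2(max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy)))

def lod_simplified(dudx, dvdx, dudy, dvdy):
    # Hypothetical cheaper shortcut: take the largest single component.
    # It matches the exact value on axis-aligned surfaces but underestimates
    # the gradient length (and thus picks a sharper mip) on diagonal ones.
    return math.log2(max(abs(dudx), abs(dvdx), abs(dudy), abs(dvdy)))

# Axis-aligned case: both formulas agree.
print(lod_exact(2.0, 0.0, 0.0, 2.0))       # 1.0
print(lod_simplified(2.0, 0.0, 0.0, 2.0))  # 1.0

# Surface rotated 45 degrees on screen: the shortcut returns a lower LOD.
print(lod_exact(1.0, 1.0, -1.0, 1.0))       # ≈ 0.5
print(lod_simplified(1.0, 1.0, -1.0, 1.0))  # 0.0
```

As Mr. Alibrandi noted, such a discrepancy is hard to spot in real games, but synthetic LOD testers make the angular pattern clearly visible.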
However, we are looking forward to the next generation and hope for the full quality we already saw in 2001 with the GeForce3.