Technology Stocks : 3DFX


To: Ben Wu who wrote (10669) | 2/13/1999 4:24:00 PM
From: Chip Anderson
 
OK, I'm becoming convinced that the V3 3500 will hit the "sweet spot" of the hard core gaming community. That "sweet spot" is >40fps @ 1600x1200. The first retail card that delivers that performance level reliably will spark a wave of upgrades from the hard core gaming community.
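As a rough sanity check on that ">40fps @ 1600x1200" target, the raw fill rate it implies is easy to estimate. This is a back-of-the-envelope sketch (not from the post); the overdraw factor is an assumption, since real scenes redraw each pixel more than once per frame:

```python
# Hypothetical estimate: fill rate needed to sustain a target frame
# rate at a given resolution. The overdraw factor is an assumption.

def required_fill_rate(width, height, fps, overdraw=1.0):
    """Return required fill rate in megapixels/second.

    overdraw > 1.0 accounts for pixels drawn more than once per frame.
    """
    return width * height * fps * overdraw / 1e6

# 1600x1200 @ 40 fps with no overdraw:
print(required_fill_rate(1600, 1200, 40))        # 76.8 MP/s
# With an assumed 3x overdraw, closer to a real game scene:
print(required_fill_rate(1600, 1200, 40, 3.0))   # 230.4 MP/s
```

With any realistic overdraw, the target lands in the same ballpark as the fill-rate numbers debated later in the thread.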

32-bit color is still a wild-card, but Half-Life (which is displacing Q2 as the favorite game among the hard core) doesn't show dramatic improvements from that feature. It looks like the card will arrive next month - before the next wave of new software, so performance on the current crop of games will be key.

Gamers who have already upgraded to a TNT card will wait for the TNT2, but if its 1600x1200 frame rate isn't above 40fps, they will switch back too, IMHO.

Next month should be fun!
Chip
coolhistory.com



To: Ben Wu who wrote (10669) | 2/15/1999 4:33:00 PM
From: timbur
 
Re: MTexels vs MPixels - one interpretation

OK, I was pretty sure "GrandMaster B" probably explained this at one point, but I couldn't find a definition of it on voodooextreme's "Ask GrandMaster B" search engine, probably because it doesn't go back past 10/1/98. Look at the bottom of this message for the cut/paste I did.

A MegaTexel (MT) refers to the number of dual-textured pixels produced.
A MegaPixel (MP) refers to the number of pixels produced, usually single-textured.

Texel is a term trademarked by 3dfx.

For single-textured games, the equivalent MP and MT numbers should yield equivalent performance.
For multi- (i.e., dual-) textured games, the MP rating would need to be twice the MT rating to achieve equivalent performance. See the explanation below.

Conclusion: All else being equal, the V3's 366MT/s will kick the TNT2's 250MP/s butt, especially in multi-textured games (read: Quake II-based). Are we sure these numbers are correct???
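The comparison above can be made explicit. This sketch models the post's stated interpretation only (the poster calls it "one interpretation" and questions the numbers, so treat these as the post's model, not verified hardware specs):

```python
# Model of the post's interpretation (not verified hardware specs):
#   an MT rating counts dual-textured pixels/second directly,
#   an MP rating counts single-textured pixels/second, and dual
#   texturing halves it (hence "MP would need to be twice the MT").

def dual_textured_throughput(rating_m, unit):
    """Effective dual-textured pixels/second (millions) under this model."""
    if unit == "MT":
        return rating_m          # per the post: MT already counts dual-textured pixels
    if unit == "MP":
        return rating_m / 2      # per the post: dual texturing halves an MP rating
    raise ValueError(unit)

# The post's comparison: V3 at 366 MT/s vs TNT2 at 250 MP/s
print(dual_textured_throughput(366, "MT"))   # 366
print(dual_textured_throughput(250, "MP"))   # 125.0
```

Under this model the gap in dual-textured games is nearly 3x, which is exactly why the poster asks whether the ratings are really comparable.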

From "Ask Grandmaster B" column in VoodooExtreme:
(http://www.voodooextreme.com/ask/askmenu.html)
(I reformatted this for easier reading. In the original post, the answer and question were not separated.)

I went straight to the horse's mouth on this one. This is from David Kirk (architect of the Riva TNT).

GMB

Here's the question

GMB,

To my understanding, the TNT is capable of rendering 2 pixels per clock even when multi-texturing is not being used. I have read this in various PRs from Nvidia and other OEMs. This would mean that the single-texture fill rate would be close to the multitexturing fill rate (180Mtps). But as we can see in 3DMark99 and other benchmarks, the fill rate is just below the theoretical fill rate of one texture unit (90Mtps), which would suggest that the TNT is only using one of its texture units at a time. Yet that was one of the differences Nvidia had claimed: while the Voodoo2 could only use both texture units in multitexture situations, the TNT had a "smart" pipeline that would allow both texture units to work simultaneously on separate pixels in single-texture environments. Why is this not evident in benchmarks? Does the programmer of the engine need to optimize for this, or is it a D3D/OGL issue?

Here is the answer

TNT can render 2 pixels per clock in single texture mode, if trilinear filtering is not requested. So, in this mode, the fill rate would be approximately 180M PIXELs/second (not TEXELs). If multitexturing or trilinear filtering is requested, then both pipelines cooperate to produce a single pixel, so the effective draw rate is approximately 90M dual-textured or trilinearly filtered pixels/second.

There are benchmarks that show this, but in many benchmarks you don't get a pure measure of a single operation; you get a mix. 3DMark, 3DWinBench, and other benchmarks often mix a bunch of operations together, and then report them as a single thing. As an example, in last year's Final Reality benchmark, the "data transfer" benchmark was done using BLTs and creating and destroying DirectDraw surfaces as well as copying data, and it turned out that for most graphics cards, you were measuring the driver's speed at creating and destroying DirectDraw surfaces in D3D, not data transfer. In both 3DMark and 3DWinBench99, there is texture modification and texture downloading mixed in with the rendering; these activities are slower than drawing and require synchronization between the CPU and graphics.

So, often the benchmarks aren't actually measuring pure triangle rate or pure fill rate. If there were a benchmark that JUST drew single-textured, bilinear LARGE triangles, you would see about 180M pixels/second, but how useful is that? What really matters is throughput for real games, so overall the benchmarks are pretty good, if they do the mix of operations that game developers do. The faster peak fill rate contributes to the overall performance.

David Kirk
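Kirk's description of the TNT pipeline reduces to a simple rule, sketched below. This is an illustrative model of what he says (the 90 MHz core clock is inferred from his 90M/180M figures, not stated outright):

```python
# Simplified model of the TNT behavior Kirk describes (a sketch, not
# hardware documentation):
#   single texture + bilinear -> both pipelines draw separate pixels (2 px/clock)
#   multitexture OR trilinear -> both pipelines cooperate on one pixel (1 px/clock)

def tnt_pixels_per_clock(textures, trilinear):
    if textures == 1 and not trilinear:
        return 2
    return 1

def fill_rate_mpixels(clock_mhz, textures, trilinear):
    """Peak fill rate in megapixels/second under this model."""
    return clock_mhz * tnt_pixels_per_clock(textures, trilinear)

# Assuming the ~90 MHz clock implied by Kirk's numbers:
print(fill_rate_mpixels(90, textures=1, trilinear=False))  # 180 (single-texture bilinear)
print(fill_rate_mpixels(90, textures=2, trilinear=False))  # 90 (dual-textured pixels/s)
print(fill_rate_mpixels(90, textures=1, trilinear=True))   # 90 (trilinear costs both units)
```

This matches his point: benchmarks that mix trilinear filtering or multitexturing into the workload will report numbers near the 90M figure, not the 180M peak.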