Technology Stocks : Silicon Graphics, Inc. (SGI)


To: Woody_Nickels who wrote (6124)5/27/1999 7:13:00 AM
From: Alexis Cousein
 
Anti-aliasing is a method by which a pixel's colour is determined by sampling more than one subpixel (at fractional coordinates within the pixel) and averaging the results. It eliminates "jaggies", and also the crawling of edges and textures when objects move, without *having* to go to absurd resolutions (as an example, a TV has only roughly 500 lines of video, but no aliasing artefacts -- and you'll note that you see neither jaggies nor creepies when watching a movie ;) ).
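To make the basic idea concrete, here's a toy Python sketch (not from the original post -- the `scene` function and everything in it is made up for illustration): a pixel that an edge passes through gets the average of several jittered subpixel samples, so it ends up a fractional grey instead of snapping to black or white.

```python
import random

def scene(x, y):
    # Hypothetical hard-edged scene: white above the diagonal y = x,
    # black below it. A single centre sample can only return 0 or 1.
    return 1.0 if y > x else 0.0

def pixel_color(px, py, samples=16, rng=random.Random(0)):
    # Average several subpixel samples at fractional coordinates
    # inside the pixel, instead of taking one sample at the centre.
    total = 0.0
    for _ in range(samples):
        total += scene(px + rng.random(), py + rng.random())
    return total / samples

aa = pixel_color(3, 3)     # edge crosses this pixel: fractional grey
aliased = scene(3.5, 3.5)  # single centre sample: hard 0.0 or 1.0
```

The anti-aliased pixel lands somewhere between 0 and 1, which is exactly what smooths the staircase out.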

I'm not talking about anti-aliasing lines (every OpenGL card does this in hardware nowadays) or even polygons, but full scenes -- and there is a difference. If you have hardware that only anti-aliases the edges of polygons, you can't accurately blend in an adjacent polygon (especially if you use the Z buffer to remember what's "in front" -- a Z buffer doesn't keep enough info to tell you that a pixel is "in front in the upper left corner, but not drawn elsewhere"), and if you texture, you want anti-aliasing everywhere.

There are several methods for full-scene anti-aliasing, each with its own drawbacks. N (the number of samples per pixel) is usually taken as 3 or 4 for low-end things, and 8 or 16 (out of 64 or more positions -- see below) for good quality.

1) Accumulation buffer.

You render the scene several times (each pass jittered by a subpixel offset), and average the current image with a buffer where you keep the running average of all the "previous" images you rendered.

Drawbacks:
-A fast accumulation buffer needs extra hardware (i.e. if you pull things into main memory to accumulate, you either need a UMA architecture with a fast copier, or you're *even* slower).
-You process the geometry information as well as the pixels N times for N samples.

Max. Performance:

Always more than N times slower for N samples.

Advantages:
-Very flexible -- you can even use this to simulate other things like motion blur and depth-of-field, and you can do as many samples as you want.
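For what it's worth, the accumulation loop can be sketched in a few lines of toy Python (my own illustration, not anything SGI ships; `render` stands in for one full rendering pass):

```python
import random

W, H = 4, 4
SAMPLES = 8

def render(jx, jy):
    # Stand-in renderer: one full-resolution pass over a toy
    # diagonal-edge scene, with the camera jittered by (jx, jy).
    # Note the whole scene -- geometry and all -- runs every pass.
    return [[1.0 if (y + jy) > (x + jx) else 0.0
             for x in range(W)] for y in range(H)]

rng = random.Random(1)
accum = [[0.0] * W for _ in range(H)]

# Render the scene N times with subpixel jitter, and keep the
# running average of all passes in the accumulation buffer.
for i in range(SAMPLES):
    frame = render(rng.random(), rng.random())
    for y in range(H):
        for x in range(W):
            accum[y][x] = (accum[y][x] * i + frame[y][x]) / (i + 1)
```

Pixels the edge crosses end up with fractional coverage, while pixels well inside the shape stay at full intensity -- but you paid for N complete renders to get there.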

2) Grid supersampling.

You just render your "normal" scene, but at double the resolution, into a separate buffer. Then you average each 2x2 region into one pixel. This is called 4-out-of-4 sampling (you use every point in the subgrid you define).

Drawbacks:
-Usually only used for 2x2 supersampling, and the fact that the subpixels are aligned on a regular grid can introduce artefacts, so it's a poor man's anti-aliasing (but better than nothing -- usually good enough for games and e.g. for getting rid of the most blatant artefacts when you're doing 3D gfx at video resolution). Certainly not good enough for vis-sim.
-You're still processing N times as many pixels (but at least the geometry is handled once only, unlike with an accumulation buffer).
-Larger framebuffer requirements (Hint #1: what NT machine lets you configure any part of main memory for the gfx?)
-Not as flexible as an accumulation buffer -- e.g. you can't do things like motion or focus blur.

Advantages:
-Rather cheap -- all you have to do is render into an off-screen buffer, then use that as a texture source to draw a screen-filling square. (Hint #2: what company produces machines with off-screen rendering, high-quality and fast texturing engines, and OpenGL extensions that let you source textures from an off-screen buffer without having to do any copying? ;) )
-Geometry is handled only once.

Max. Performance:
Slower by somewhat more than 1x for geometry-bound things (but of course, your fill requirements usually make the app fill-limited pretty soon for growing N).

Slower by more than N times if your application was already fill-limited without anti-aliasing.
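Here's the downsampling step of 2x2 grid supersampling as a toy Python sketch (my illustration; `render_hi` is a stand-in for rendering the normal scene at double resolution):

```python
W, H = 4, 4  # final image size

def render_hi(w, h):
    # Stand-in for rendering the scene once, at double resolution:
    # the same toy diagonal edge, one sample per high-res pixel centre.
    return [[1.0 if (y + 0.5) > (x + 0.5) else 0.0
             for x in range(w)] for y in range(h)]

hi = render_hi(2 * W, 2 * H)

# Box-filter: average each 2x2 block of the high-res buffer into
# one final pixel. The geometry was only processed once, in render_hi,
# but four times as many pixels were filled.
lo = [[(hi[2*y][2*x] + hi[2*y][2*x+1] +
        hi[2*y+1][2*x] + hi[2*y+1][2*x+1]) / 4.0
       for x in range(W)] for y in range(H)]
```

Pixels straddling the edge come out as intermediate greys; pixels entirely on one side stay at 0 or 1.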

3) Multisample anti-aliasing.

Here, you have a raster engine that can fill a framebuffer with more than one sample per pixel, and display hardware that can interpret that data correctly.

Drawbacks:
-Expensive. You need a large framebuffer, and lots of specialized hardware.
-A lot of this is patented (no prizes for guessing by whom).
-Not as flexible as an accumulation buffer, unless you make the hardware flexible as well (but flexible and fast hardware is even more expensive). With plain "anti-aliasing" hardware, you can't do e.g. motion or focus blur.

Advantages:
-High quality. You can sample e.g. 8 points out of the 64 positions on an 8-by-8 subpixel grid, chosen differently for each pixel, eliminating grid artefacts.
-High performance -- as long as you make the pipes from the rasterizer to the framebuffer wide enough, and have enough engines, you lose only a constant factor (e.g. 1.5) for a large number of samples.

In short: accumulation buffers are used for non-real-time things in software; 2x2 supersampling is beginning to become a feature of mid-range 3D cards (high-end NT, mid-range Unix) that a small set of applications can use to improve visual quality somewhat; and most professional apps still need multisample anti-aliasing, because all the other methods are too slow for most real-world real-time gfx apps and/or of lower quality -- which is why SGI still dominates high-end graphics set-ups nowadays.