
11-22-08 - Rasterization

I just found this pretty sweet introduction article on barycentric rasterization from 2004. It's not super advanced, but it starts at the beginning and works through things and is very readable. There are some dumb things in the block checking, so if you care go to the last page and see the posts by "rarefluid".

BTW the "edge equations" are really 2d plane equations (edge cross products). Checking just the edge planes against 8x8 blocks is only a rough quick reject: you can have blocks right outside of one vertex at an acute corner, and those blocks are "inside" all three planes but outside of the triangle. The code he posted also checks against the bounding box of the whole triangle, which largely fixes this case; at most he will consider one extra 8x8 block that doesn't actually contain any pixels of the triangle.
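
If you've never written one, the conservative block test is just: evaluate each edge plane at the block corner that's most inside for that edge. Something like this (a sketch with my own names, not his code):

    // Edge from (ax,ay) to (bx,by) ; E(x,y) = A*x + B*y + C is >= 0 on the
    // interior side for one winding order (flip signs for the other).
    struct Edge { int A, B, C; };

    Edge MakeEdge(int ax, int ay, int bx, int by)
    {
        Edge e;
        e.A = ay - by;
        e.B = bx - ax;
        e.C = -(e.A * ax + e.B * ay);
        return e;
    }

    // Conservative 8x8 block reject : evaluate E at the block corner that is
    // most positive for this edge ; if even that corner is negative, the
    // whole block is strictly outside this edge plane.
    bool BlockOutsideEdge(const Edge & e, int bx, int by) // bx,by = block top-left
    {
        int cx = (e.A > 0) ? bx + 7 : bx;
        int cy = (e.B > 0) ? by + 7 : by;
        return (e.A * cx + e.B * cy + e.C) < 0;
    }

Blocks just off an acute vertex still pass all three of these tests, which is exactly why the bounding-box clamp matters.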

(It's also not really a full barycentric rasterizer yet; he's just doing the edge tests that way. From his other posts I figure he's doing interpolation the normal homogeneous way, but if you're doing the edge tests like that then you should just go ahead and do your interpolation barycentric too.)
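
For what it's worth, once you have the three edge values at a pixel, barycentric interpolation is nearly free. A sketch (my names, not his code):

    // The edge value opposite each vertex is proportional to that vertex's
    // barycentric weight, and the three values sum to 2*area of the triangle
    // (assumes a non-degenerate triangle). For perspective-correct results,
    // interpolate attrib/w and 1/w this way, then divide.
    float InterpBarycentric(int e0, int e1, int e2,       // edge values at the pixel
                            float a0, float a1, float a2) // vertex attributes
    {
        float invArea2 = 1.0f / (float)(e0 + e1 + e2);
        return (e0 * a0 + e1 * a1 + e2 * a2) * invArea2;
    }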

This kind of block-based barycentric rasterizer is very similar to what hardware does. One of the nice things about it is that the blocks can easily be dispatched to microthreads to rasterize in parallel, and the blocks are natural quanta to check against a coarse Z buffer.
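
The coarse Z check per block can be as trivial as one compare; a hypothetical sketch (assumes coarseZ holds the farthest (max) depth per 8x8 block, and smaller Z = nearer):

    // triNearZ = conservative nearest Z of the triangle (e.g. min of vertex Z's).
    // If even that is farther than the farthest depth already in the block,
    // no pixel in the block can pass the depth test, so skip the whole block.
    bool BlockOccluded(float triNearZ, const float * coarseZ,
                       int blockX, int blockY, int blocksPerRow)
    {
        return triNearZ >= coarseZ[blockY * blocksPerRow + blockX];
    }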

The old Olano-Greer paper about homogeneous coordinate rasterization is now online in HTML. Some other random stuff I found that I think is just junk: Reducing divisions for polygon mappers & Triangle Setup.

This blog about Software occlusion culling is literally a blast from the past. As I've often suggested, if you care about that stuff, the SurRender Umbra technical manual is still the godly bible on all things occlusion (or you can read my ancient article on it). But I also still think that object-based occlusion like that is just a bad idea.

Coarse Z that's updated by the rasterizer is, however, a good thing. Doing your own on top of what the card does is pretty lame, though. This is yet another awesome win from Larrabee: if we/they do a coarse Z buffer, it can get used by the CPUs to do whole-object rejections, or whole-triangle rejections, or macroblock rejections.
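
Whole-object rejection against such a coarse Z could look something like this (a hypothetical sketch, same assumptions as the block check above: one max-depth entry per 8x8 block, smaller Z = nearer):

    // x0..x1, y0..y1 = the object's screen-space bounding rect (in pixels),
    // objNearZ = a conservative nearest Z for the whole object.
    bool ObjectOccluded(int x0, int y0, int x1, int y1, float objNearZ,
                        const float * coarseZ, int blocksPerRow)
    {
        for (int by = y0 >> 3; by <= (y1 >> 3); by++)
        {
            for (int bx = x0 >> 3; bx <= (x1 >> 3); bx++)
            {
                // the object might be visible in this block
                if (objNearZ < coarseZ[by * blocksPerRow + bx])
                    return false;
            }
        }
        return true; // occluded everywhere it touches the screen
    }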

Apparently the guy who wrote that top article is Nicolas Capens; he wrote "swShader", an open-source DX9 software rasterizer, which got taken down and is now a commercial product (a silly thing to do; of course any customer would rather buy Pixomatic!!). I learned this from a random flame war he got in. Don't you love the internet?

2 comments:

  1. "sw" used to stand for "SoftWire" which is was what he named his C++ code generator (also taken down). I used to think it was pretty cool since it did stuff using C++ metaprogramming. It was kind of like libSh (libsh.org) except instead of GPU shaders it did x86 assembly.

    libSh was kind of neat. It started with Mike McCool at Waterloo, and it spun off into a company which is now called RapidMind (the old name, Serious Hack, didn't last).

    I like McCool's treatment of the Olano-style rasterization better than the original:

    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.5738

  2. Barycentric rast is clever, and Nick got most of the way, but you need to grind on it a bit more and then you get to (a) a hierarchy and (b) you realise that everything is just a bunch of integer adds (sketched below), and you only need <32 bits of precision at each level. The last point is somewhat tricky, but very cool once you figure it out.

    And that's how you get to Larrabee's rasterisation: http://s08.idav.ucdavis.edu/

    The hierarchy is somewhat similar in spirit to Ned Greene's paper "Hierarchical Polygon Tiling with Coverage Masks". We did actually look at homogeneous-space rast, and it's very clever, but the precision constraints get very scary very fast. I actually ran into Marc Olano at Siggraph08, which was pretty cool.
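
    A minimal sketch of that integer-add stepping (my code, nothing like Larrabee's actual implementation): once E(x,y) = A*x + B*y + C is set up per edge, stepping one pixel right is E += A and one pixel down is E += B, so descending each level of the hierarchy just means adding different precomputed constants:

        // e0,e1,e2 = edge values at the block's top-left pixel ;
        // A*,B* = the per-edge x and y steps.
        void Rasterize8x8(int e0, int e1, int e2,
                          int A0, int B0, int A1, int B1, int A2, int B2)
        {
            for (int y = 0; y < 8; y++)
            {
                int r0 = e0, r1 = e1, r2 = e2;
                for (int x = 0; x < 8; x++)
                {
                    // all three non-negative <=> OR of values has sign bit clear
                    if ((r0 | r1 | r2) >= 0)
                    {
                        // pixel (x,y) is inside ; shade it here
                    }
                    r0 += A0; r1 += A1; r2 += A2; // step right : one add per edge
                }
                e0 += B0; e1 += B1; e2 += B2; // step down
            }
        }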
