10/28/2008

10-28-08 - 2

I think the realtime raytracing guys are rather too optimistic. The thing is, first-hit raytracing against solid objects is just not interesting. It gives you absolutely zero win over rasterization, and rasterization is way, way faster. They brag about 100M rays a second, but rasterization is doing many billions of pixels per second now.

I should back up and say they are doing very fast raytracing for coherent rays, using packets that they run through SIMD, and they can do things like break the screen into tiles for good parallelism on multi-core or multi-proc systems. Those kinds of things are the exact same optimizations we do in rasterization, and they only work for plain old rendering of a big image from the camera. Also, the raytracing stuff is only really fast on static scenes, which is a huge problem; we need to be moving towards everything-is-dynamic worlds.

In fact, if you like, you can think of rasterization as a very clever optimization for first-hit raytracing. They share the exact same strengths and weaknesses - they both only work well when you're shooting a ton of "rays" from the same spot with lots of coherence. They both exploit parallelism by grouping pixels into quads and running them down a SIMD path together, and they both fall apart if you do lots of different objects, or rays in random directions, etc.
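As a toy illustration of that equivalence (my own example, not from the post): for one screen-space triangle, per-pixel first-hit "raytracing" and rasterization run the exact same coverage test and produce the same pixels; the rasterizer is just clever about which pixels it bothers to test.

```python
# Toy sketch: first-hit raytracing vs. rasterization of one triangle.
# Both boil down to the same point-in-triangle test; the rasterizer only
# visits pixels inside the triangle's bounding box.

W = H = 32
TRI = [(4.0, 5.0), (28.0, 9.0), (12.0, 26.0)]  # screen-space triangle

def edge(a, b, p):
    # Signed area of (a, b, p): which side of edge a->b the point p is on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def hit(p):
    # Point-in-triangle: the core of a ray/triangle hit for a pixel-aligned
    # camera "ray", and also a rasterizer's coverage test.
    w0 = edge(TRI[0], TRI[1], p)
    w1 = edge(TRI[1], TRI[2], p)
    w2 = edge(TRI[2], TRI[0], p)
    return min(w0, w1, w2) >= 0 or max(w0, w1, w2) <= 0

def raytrace():
    # First-hit raytracing: shoot one coherent ray per screen pixel.
    return {(x, y) for y in range(H) for x in range(W)
            if hit((x + 0.5, y + 0.5))}

def rasterize():
    # Rasterization: only test pixels in the triangle's bounding box.
    xs = [int(v[0]) for v in TRI]
    ys = [int(v[1]) for v in TRI]
    return {(x, y) for y in range(min(ys), max(ys) + 1)
                   for x in range(min(xs), max(xs) + 1)
                   if hit((x + 0.5, y + 0.5))}
```

Both produce identical coverage; the bounding-box loop is just one of the "clever optimizations" layered on top of the same per-"ray" test.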

Raytracing is of course very interesting, but the raytracing that you really *want* to do is also exactly the thing that's very slow. You want N-bounce raytracing, and for it to really be a win over current rasterization techniques you need to be able to do good sampling in the really hard edge-on cases, where your rays diverge badly and you need to shoot lots of rays and do Monte Carlo integration or something, e.g. to get better Fresnel reflections on the edge-on surfaces of objects.

It also totally tilts me the way raytracing guys grossly overstate the importance of refraction and caustics. Improving the realism of our wine glass rendering is about 1000th on the list of things we need to make games look better.

Things that would actually help :

1. Better basic lighting & shadowing. This is something hybrid raytracing could help with in the near term. I mean, eventually we'd like full realtime GI, solving the rendering equation every frame, but in the near term even just better shadow maps that don't have any sampling problems would be awesome, and that's something hybrid raytracing could deliver. If you could render your scene with rasterization and also cast 100M rays per frame for shadows, you could probably get some semi-soft shadows from area light sources, which would be cool.

2. Volumetric effects, and light & shadow through objects. Dust clouds with lighting attenuation and scattering. Light through cloth and skin and leaves.
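The hybrid idea in point 1 can be sketched roughly like this (scene and names are mine, purely illustrative): pretend rasterization has already produced a world-space position per pixel, then cast one shadow ray per pixel against an occluder.

```python
import math

def shadow_ray_blocked(p, light, center, radius):
    # Ray from surface point p toward the light; blocked if it hits the
    # sphere (center, radius) at some t in (0, distance_to_light).
    ox, oy, oz = p
    dx, dy, dz = light[0] - ox, light[1] - oy, light[2] - oz
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / dist, dy / dist, dz / dist
    # Quadratic for |p + t*d - center|^2 = radius^2 (unit d, so a = 1).
    fx, fy, fz = ox - center[0], oy - center[1], oz - center[2]
    b = fx * dx + fy * dy + fz * dz
    c = fx * fx + fy * fy + fz * fz - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False          # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearest intersection distance
    return 1e-4 < t < dist    # epsilon avoids self-shadowing at t ~ 0

LIGHT = (0.0, 10.0, 0.0)
SPHERE = ((0.0, 2.0, 0.0), 1.0)  # occluder floating above the ground

# Stand-in "G-buffer": ground-plane positions for an 8x8 patch of pixels,
# one shadow ray each.
shadow_mask = [[shadow_ray_blocked((x - 3.5, 0.0, z - 3.5), LIGHT, *SPHERE)
                for x in range(8)] for z in range(8)]
```

Jitter the light position per ray and average a few samples per pixel, and the same loop gives the semi-soft area-light shadows mentioned above.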

You don't really get the cool benefits of raytracing until you can do something like 1000 non-coherent rays per pixel (to cast around and get lighting and reflections and scattering and so on). At 1000x1000 at 100 fps, that's 100 billion rays per second (and non-coherent ones at that). We're very far away from that. Even on something like Larrabee 3 you'll be better off rasterizing and maybe just using rays for shadows.
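The back-of-the-envelope arithmetic, spelled out:

```python
# Ray budget for the "actually interesting" raytracing described above.
rays_per_pixel = 1000        # non-coherent secondary rays
pixels = 1000 * 1000         # 1000x1000 screen
fps = 100

rays_per_second = rays_per_pixel * pixels * fps  # 100 billion rays/sec

# Versus the ~100M coherent rays/sec being bragged about:
shortfall = rays_per_second // (100 * 10**6)     # a factor of 1000 short
```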

3 comments:

castano said...

This is basically what the NVIDIA GPU raytracer does. Traditional rasterization is used for primary rays, and raytracing is used for shadows and reflections.

There's some info about the raytracer in the following presentation:

http://developer.nvidia.com/object/nvision08-IRT.html

Accurate soft shadows, indirect illumination, and volumetric effects are the kind of things that raytracing is useful for.

The interesting problem is how to extract coherency from non-coherent raytracing.

cbloom said...

Yeah, good presentation.

Though "accurate soft shadows" requires tons of rays.

The hard thing about getting more coherence from random rays is that tracing a single ray is so cheap that you can't afford to do a lot of work to create your ray packets. And if that work uses shared storage or causes thread syncs, you almost certainly lose.

There was a paper about doing raytrace parallelism a different way - rather than sending ray packets down a kd-tree one node at a time, they send a single ray down both sides of each kd node, so you wind up marching the same ray down 4 or 8 paths of the kd-tree at the same time.
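That approach amounts to a different choice of what fills the SIMD lanes. A hypothetical toy sketch (names are mine, not from the paper), using the kd-tree plane-crossing distance t = (split - origin) / direction as the per-lane op:

```python
# Two ways to fill 4 "SIMD lanes" with the kd-tree split-plane test
# t = (split - origin) / direction. Plain lists stand in for SIMD vectors.

def plane_ts(origins, dirs, splits):
    # One simulated 4-wide SIMD op: 4 plane-crossing distances at once.
    return [(s - o) / d for o, d, s in zip(origins, dirs, splits)]

# Packet style: 4 coherent rays vs. 1 kd node (same split in every lane).
# Only pays off if you can find 4 rays that want to visit the same node.
packet = plane_ts(origins=[0.0, 1.0, 2.0, 3.0],
                  dirs=[1.0, 1.0, 1.0, 1.0],
                  splits=[8.0] * 4)

# Single-ray style: 1 ray vs. 4 kd nodes (one split per lane) -- works even
# for incoherent rays, since each ray fills its own lanes by itself.
single = plane_ts(origins=[0.0] * 4,
                  dirs=[1.0] * 4,
                  splits=[8.0, 4.0, 12.0, 2.0])
```

The arithmetic per lane is identical; only the scheduling differs, which is why no cross-ray coherence is needed in the second style.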

castano said...

Yes, "soft shadows", "depth of field" and other similar effects require a ton of rays, but they are easy to express in raytracing and give accurate results.

In the short term I expect people will keep using current image-based methods, though.

Achieving execution and data coherence in raytracing is an interesting research area. In my opinion, hardware needs to be designed to schedule coherent tasks together at very low cost.
