
03-31-09 - GDC Middleware

There's a game engine for every day of the week now. Personally I don't think big game engines are the way to go. They put you as a developer too much at the mercy of the engine developer, and if the engine has major flaws you don't like, you're screwed. I'd much rather license lots of little pieces I can tack together as I see fit.

SpeedTree's tools get better and better, but their runtime is still not awesome. That seems to be a pretty common thread. It's kind of weird, because for me the tool is the hard part that I never want to do. Hell, I could write the runtime for *all* these middlewares if they would write the tools and do the sales and support. Some other similar ones :

Fork Particle's middleware has a pretty nice tool. It's funny, I was talking to Sean at the show about what other middlewares RAD might do someday, and I mentioned particles. We figured you'd want a nice tool for artists where they could compose arbitrary particle systems; the ideal thing would then be to output a sort of "particle HLSL" for each system, which you could compile to CPU code or GPU code or SPU code. The idea there is that all the toggles, hierarchy, and such that the artists set up don't wind up in the runtime code. That is, you want to avoid winding up with code like :


// per-particle runtime branching on every optional effect :
if ( m_particleSystemDef->m_doSpin )
{
    SpinParticle();
}
// ... more ifs for each possible effect ...

Instead you just compile down optimized code for each system type. You'd also want to automatically provide particle LOD and system scalability so the same content can target different machine capabilities.
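
To make that concrete, here's a rough sketch of what the generated code for one system might look like. Everything here is hypothetical (the structs, the system name, which effects are on); it's just to show the shape of the output :


struct Vec3 { float x, y, z; };

struct Particle
{
    Vec3  pos;
    Vec3  vel;
    float spin;
    float spinRate;
};

// hypothetical codegen output for a system where the artist enabled
// gravity and spin; the per-effect toggles were resolved at compile
// time, so the inner loop is straight-line code with no branches :
void UpdateParticles_Sparks(Particle * particles, int count, float dt)
{
    for (int i = 0; i < count; i++)
    {
        Particle & p = particles[i];

        p.vel.y -= 9.8f * dt;        // gravity : on for this system
        p.spin  += p.spinRate * dt;  // spin : on for this system

        p.pos.x += p.vel.x * dt;
        p.pos.y += p.vel.y * dt;
        p.pos.z += p.vel.z * dt;

        // effects the artist turned off simply aren't emitted at all
    }
}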

Anyway, I doubt it's a good RAD product because it's too tool- and artist-heavy; it's not a big enough piece of pure technology. The Fork demos look pretty super-duper asstastic, like shockingly bad, but I don't know if that's just because they have bad artists setting them up or because their tech actually sucks. I do think there's a lot of value in making an impressive, good-looking demo - it proves that it's at least possible to do so with your system.

Allegorithmic Substance was at the Intel booth showing their procedural texture middleware. This is another one I've had the idea of maybe doing some day, so I wanted to see it in action. Procedural texturing is obviously compelling in the coming era of 8+ CPU cores and SVT (sparse virtual texturing) infinite textures and so on. And you clearly need a nice artist-driven tool to define the textures.

Allegorithmic has a really super nice, polished tool for writing shaders. I have no idea how fast their runtime is; I tried to ask some questions about how they run the shaders, and the guy demoing didn't seem to be an actual programmer who knew WTF was going on. Again, just like particles, you would want to be running code generation so that you don't have like 100 per-texel branches for all the options. In fact, this is really just like rasterization in a software rasterizer like Pixo : the procedural texture code is just like a pixel shader, and you want to output some kind of HLSL and have it compiled to a CPU/GPU/SPU shader.
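
As a sketch of what I mean (this is all made up, not Allegorithmic's actual code) : the compiled output for one texture is just a branchless per-texel function, and the runtime loops it over the output the same way a software rasterizer runs a pixel shader :


#include <math.h>

struct Color { float r, g, b, a; };

// hypothetical compiled output for one procedural texture; all the
// option branches were resolved at codegen time, so this is just a
// straight-line "pixel shader" evaluated per texel :
static Color EvalTexel_RustStreaks(float u, float v)
{
    float streaks = 0.5f + 0.5f * sinf(v * 40.0f);  // stand-in for real noise
    Color c = { 0.45f * streaks, 0.30f * streaks, 0.20f * streaks, 1.0f };
    return c;
}

void BakeTexture(Color * out, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            out[y * w + x] = EvalTexel_RustStreaks((x + 0.5f) / w, (y + 0.5f) / h);
}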

I also don't think their way of defining shaders is right. You want something that is intuitive to artists and easy for them to tweak visually. You don't want to have to hire someone who is a procedural texture specialist. The Allegorithmic shaders are just like Renderman shaders (or, as Sean pointed out, like the stuff demo coders do for 64k demos). You get a bunch of functions that you can compose and chain together, and you tweak them to make it look like something. For example you can take Perlin noise and apply curves and powers to it and threshold it and all that kind of stuff. Or you take a "mother wavelet" shape kernel and tile it, randomly rotate and offset it, etc.
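
For example, that kind of shader boils down to something like this little sketch, where PerlinNoise2D is an assumed helper returning values in [0,1] :


#include <math.h>

// assumed helper, e.g. classic Perlin noise remapped to [0,1]
extern float PerlinNoise2D(float x, float y);

// the kind of function-composition shader I mean : noise piped through
// a power curve and a threshold to get a crack/dirt mask
float DirtMask(float u, float v)
{
    float n = PerlinNoise2D(u * 8.0f, v * 8.0f);  // base noise octave
    n = powf(n, 2.2f);                            // shaping curve
    n = (n > 0.4f) ? 1.0f : 0.0f;                 // threshold
    return n;
}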

That stuff is powerful and all, but it's just not intuitive, it's hard to tweak, and it's not good for artists. If I was doing procedural texturing it would be example-driven, with little bits of functional stuff. There's a lot you could do : all the shape-by-example synthesis stuff. You can do "detail textures" the correct way, by using different tiling textures at different frequencies for the different wavelet levels of the output. You can use multiple tiling source textures and randomly compose them using Perlin noise or an artist-supplied blend texture. You can do things like the old "splatting" technique from Surreal, with blend-alpha channels in tiling textures. You could even do things like Penrose tiles, where you have the artists paint the tiles and then you create infinite non-repeating tilings. Or you can use sample textures to create tiles automatically, like the Hoppe lapped texturing stuff, etc.
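
Just to show how small some of these are, here's a sketch of the noise-composed tiling idea; SampleTiled and PerlinNoise2D are assumed helpers :


extern float PerlinNoise2D(float x, float y);

struct Color3 { float r, g, b; };

// assumed helper : bilinear sample with wrapping uvs
extern Color3 SampleTiled(const Color3 * tex, int w, int h, float u, float v);

// blend two tiling source textures with a low-frequency noise mask, so
// the visible variation doesn't line up with either tile's repeat :
Color3 BlendTilesByNoise(const Color3 * texA, const Color3 * texB,
                         int w, int h, float u, float v)
{
    float t = PerlinNoise2D(u * 2.0f, v * 2.0f);               // low frequency mask
    Color3 a = SampleTiled(texA, w, h, u * 16.0f, v * 16.0f);  // high tiling rate
    Color3 b = SampleTiled(texB, w, h, u * 16.0f, v * 16.0f);
    Color3 o = { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    return o;
}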

It just seems like the Allegorithmic tool is aimed at an audience that doesn't exist. It's too technical for real game artists to use. But if you're going to have a technical artist / programmer write the procedural textures, they would rather just have a text HLSL-type language like Renderman. And having text shaders like that is much better because it's easier to share them on the web. In fact, the only way a tool like this could ever really take off is if a community develops around it and people post and share shaders on the web, because writing all the shaders yourself is too much work. (In raytracing there are tons of prewritten shaders that are free or sold in commercial procedural texture kits.)

Actually what you want from the tool is just a general attribute plug editor. You want to write text shaders like Renderman but have them take various parameters as scalars, colors, or images. Then you want to expose those to the artists with nice GUI tools and show them how they affect the shader with realtime preview. Something like :


input parameter brick_color : type color;
input parameter mortar_color : type color;

color diffuse_shader(vec2 uv, vec3 worldpos)
{
    ... do shader maths using uv, brick_color, mortar_color ...
}

In fact, a generic parameter editor like this is a handy thing that every studio should have and everyone reinvents, but again it's too small a piece to sell.
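
A minimal sketch of the idea (all the names here are made up) : the tool parses the declared parameters out of the shader text into a generic description, and the GUI enumerates and edits them without knowing anything about the particular shader :


#include <string>
#include <vector>

enum ParamType { kParamScalar, kParamColor, kParamImage };

struct ParamDesc
{
    std::string name;        // e.g. "brick_color", parsed from the shader text
    ParamType   type;
    float       value[4];    // scalar in [0]; color in [0..3]
    int         imageHandle; // for kParamImage parameters
};

struct ShaderDef
{
    std::string            source;  // the text shader itself
    std::vector<ParamDesc> params;  // what the generic GUI exposes to artists
};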

5 comments:

Thatcher Ulrich said...

Re procedural texturing -- Carmack has some rant about how it's better to compress big arbitrary source data than try to do stuff procedurally, then the artists can use whatever tools they want, etc.

But, it also seems to me that artist time is limited, so maybe there is room for a compressor-expander kind of tool, that takes example art from the artist, analyzes it, and uses it as a seed to create more detail and extend beyond the edges of the example. Kind of like your image doubler.

castano said...

I think allegorithmic's tool does some sort of texture splatting. You basically procedurally control the location and shape of particles that are composed on the texture. You can then select any of the splats and tweak it, so you have very explicit control as well. In any case, that doesn't change the fact that the whole process is not very artist friendly anyway.

I agree that example based texture generation would be more interesting. You could provide brushes to change environment factors or automatically assign them based on the geometry.

That would be a cool middleware, and hey, you could use it as an excuse to work on parameterization algorithms too.

cbloom said...

" Re procedural texturing -- Carmack has some rant about how it's better to compress big arbitrary source data than try to do stuff procedurally, then the artists can use whatever tools they want, etc."

Yeah, I basically totally disagree with Carmack's approach and I think that you are saying the same thing :

basically the way that artists would make huge unique texturing for big worlds is by doing a semi-procedural thing from examples, blending them together and whatnot.

I mean, if Ryan got on it, the first thing he would do is write MEL scripts to paint the textures automatically from height and angle and surface type and whatnot.

So if your artists are doing procedural generation anyway, then why not just do that instead of loading pre-baked data!

Maybe for 2-core CPUs it's better to load, but for the 8-thread+ CPUs of today it's better to run code.

Also Id keeps claiming that the source data for their procedural textures is bigger than the output, which is just a canard.

Also one supposed advantage of pre-baking is that you can bake in the lighting, but again I think that's spurious because you really want more than just a diffuse color in your lightmaps, and also you want different sampling.

cbloom said...

"I think allegorithmic's tool does some sort of texture splatting. You basically procedurally control the location and shape of particles that are composed on the texture."

Yeah I was trying to describe that as the "mother wavelet" rotated and translated thingy.

ryg said...

You don't even need to get fancy to get good results. There are lots of cool example-driven texture synthesis algorithms out there, but even the most basic things help a lot. One very cheap thing you can do is Hoppe-style "lapped textures", which boils down to irregularly shaped splats in texture space once you have a parametrization. It's about as easy as it gets, the splat boundaries are (interestingly) not really visible in the final image, and even with just one normal-sized source image you get decent results. Because you're piecing everything together from one image, you still have a tiling-like effect in the sense that you see the same details everywhere, but there's no regular grid pattern, which really is a huge improvement visually.

I've been toying around with this a bit and it's really quite a joy to use - easy to explain to artists, very few knobs to tweak to get good results, and you still have full local control where you need it (just splat something else on top).
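
The core loop really is tiny. A rough sketch, with the placement and compositing helpers as stand-ins for the real parametrization/blending machinery :


struct Splat
{
    float centerU, centerV;  // placement in the output parametrization
    float rotation, scale;
};

// assumed helpers standing in for the real machinery :
extern Splat PickSplatPlacement(int seed);    // random position/rotation
extern void  CompositeSplat(const Splat & s); // paste one irregular patch
extern float TotalCoverage();                 // fraction of output covered so far

// keep pasting irregularly shaped patches of the source image until the
// whole output is covered; the irregular boundaries hide the seams
void LapTexture()
{
    int seed = 0;
    while (TotalCoverage() < 1.0f)
    {
        Splat s = PickSplatPlacement(seed++);
        CompositeSplat(s);
    }
}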

"Also one supposed advantage of pre-baking is that you can bake in the lighting, but again I think that's spurious because you really want more than just a diffuse color in your lightmaps, and also you want different sampling."
Well, if you're pre-baking everything and you have lightmaps, it just makes little sense not to bake them in (the diffuse part, anyway). Storing them separately would be more data, not less, and DXT(texture*lightmap) results in significantly better quality than DXT(texture)*DXT(lightmap). Also, it allows you to have a lightmap with the same resolution as your diffuse albedo (nice in theory but impractical in terms of rendering times), or alternatively use better-than-bilinear sampling to apply a lower-resolution lightmap (bilateral upsampling or any other edge-aware filter could really help with the blocky shadow edges you get in lightmaps, for example).

I still prefer the artist-directed procedural synthesis approach though. More elegant and it should scale better in the long run.
