1/08/2009

01-07-09 - Direct X WTF

The DX9 / DX10 issue is a pain in the butt. I guess it's kind of nice that they wiped the slate clean, but it means that you have to support both for the foreseeable future, and they're very different. For a sweet little while now you've been able to get away with just supporting DX8 and then just supporting DX9, which sure is nice for a small indie game developer.

Now every app has to try both and support both and fuck all.

I *really* can't believe they changed the half pixel convention. I know a lot of people think it was "wrong" and now it's "right", but IMO there is no right or wrong, it's just a fucking convention (it was a small pain that GL and D3D did it differently). The only thing that matters in a convention is that you pick one way to do it, document it clearly, and then *keep it the same*. WTF hyachacha.

Rasterization rules DX9

Rasterization rules DX10
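For concreteness, here's a plain-C sketch (no actual D3D calls; function names are made up) of the two conventions - where each puts the center of pixel i, and what that does to a naive full-screen texture fetch:

```c
#include <assert.h>
#include <math.h>

/* D3D9 convention: pixel centers sit at integer coordinates (0, 1, 2, ...).
   D3D10 / OpenGL convention: pixel centers sit at half-integers (0.5, 1.5, ...). */
static double pixel_center_dx9(int i)  { return (double)i; }
static double pixel_center_dx10(int i) { return (double)i + 0.5; }

/* Map a screen-space coordinate to [0,1] uv space across w pixels.
   Under the DX10/GL convention this lands exactly on texel centers;
   under DX9 it is off by half a texel, hence the famous -0.5 fixup. */
static double screen_to_uv(double x, int w) { return x / (double)w; }
```

With a 4-pixel-wide screen and a 4-texel texture, the DX10 rule puts pixel 0's fetch at uv 0.125 (the center of texel 0), while the DX9 rule puts it at uv 0.0 (a texel *edge*) - the two conventions disagree by exactly half a pixel everywhere.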

Anyway, I guess it's about time I look into the improved DX10 resource management capabilities. In particular, the ability to do resource updates on a thread, so that hopefully all that horrible driver resource swizzling will be done off the main thread, and secondly better support for subresource updates (which theoretically lets you page mips of textures and such).

Mmm update : apparently the good multithreaded resource management isn't until DX11 :(

And I'm a little unclear whether DX10 for XP even really works? The real features seem to require Vista (?). Mmm.. seems like it's not worth the trouble, between XP and not having hardware that will run DX10 at home, I give up.

I also had the joy of discovering that to even compile stuff without the new Platform SDK you have to do this :


// CB : VS2003 patches :
// (stub out the SAL annotations used by the newer DirectX headers,
// which the VS2003-era compiler and headers don't define)
#if 1
#define WINAPI_INLINE   WINAPI
#define __in
#define __out
#define __inout
#define __out_bcount_opt(val)
#define __in_bcount_opt(val)
#define __in_opt
#define __out_opt
#define __inout_opt
#define __in_ecount(val)
#define __in_ecount_opt(val)
#define __out_ecount(val)
#define __out_ecount_opt(val)
#endif

Yay. Well, that was a waste of time. It sucks cuz texture updates take for freaking ever in Dx9 and lower.

Maybe we can just ignore Vista and DX10 and just wait for Windows 7 and DX11. At that point to just build a "hello world" app you'll have to install the "Managed Hello World Extensions" which is a 2 GB package.


I've had a very disappointing night of not finding disappeared video games.

First off I found that Paul Steed's masterwork Mojo Master was "discontinued" by Wild Tangent. Waah.

Now I just learned that Shizmoo and Kung Fu Chess are GONE !? WTF !? I was thinking about dusting off my KFC skills, but I guess I won't.

I had an idea for "Stratego Chess" - meh it's pretty self explanatory.


"DownLoadThemAll" has the option to set the file's modtime to the server's time for the file. This option is ON by default. That's so fucking retarded. I download papers, and then go to the directory and sort by "last modified" and expect it to be at the bottom ... NO! It's got a date from fucking 2002 or something. Modtime should always be the last modtime *on my machine*.

One thing that I've always been unsure about is whether "copy" should copy the modtime or whether it should set the dest's modtime to now - by default it preserves the modtime, which is another reason why just using modtime is not strong enough for Oodle - when somebody copies new data on top of an existing file, the modtime can actually go *backward*. Oodle with file hashes now sees that the modtime has changed and updates the file hash.
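The modtime-plus-hash idea can be sketched like this (plain C, hypothetical names, not Oodle's actual code): compare modtimes with != rather than >, so a backward-moving modtime still triggers the hash check, and let the hash make the final call:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: detect real content changes even when a copy
   moves the modtime *backward*. Any modtime difference (not just a
   newer one) triggers a rehash; the hash decides if data changed. */

typedef struct {
    int64_t  mtime;   /* last seen modtime */
    uint64_t hash;    /* hash of last seen contents */
} FileStamp;

/* FNV-1a over a byte buffer */
static uint64_t hash_bytes(const void *data, size_t len) {
    const unsigned char *p = (const unsigned char *)data;
    uint64_t h = 14695981039346656037ull;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ull;
    }
    return h;
}

/* Returns 1 if the file's contents actually changed. Note the mtime
   comparison is !=, not >, so a backward-moving modtime still forces
   the (expensive) hash check. */
static int file_changed(FileStamp *s, int64_t mtime, const void *data, size_t len) {
    if (mtime == s->mtime)
        return 0;                 /* trust an unchanged modtime, skip hashing */
    uint64_t h = hash_bytes(data, len);
    s->mtime = mtime;
    if (h == s->hash)
        return 0;                 /* modtime moved but the bytes are identical */
    s->hash = h;
    return 1;
}
```

The point of the sketch is just the `!=`: an "is it newer?" comparison would silently miss the copied-file-with-older-modtime case.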

17 comments:

castano said...

I think the reason why MS changed the conventions in DX is because they wanted to implement OpenGL on top of it, and the only possible way of doing it right was changing the rasterization rules so that they matched.

Tom Forsyth said...

What's wrong with just doing DX9 for a while? It still works just fine on Vista. In fact the desktop uses DX9, so it's probably still the most robust path!

NeARAZ said...

DX10 works on Vista and later. So yeah, the number of people who can use it is pretty small, especially if you're a small indie developer who does not target hardcore gamers (we have some stats for that). DX11 will also be Vista and later, but at that point Vista will probably be much more widespread.

Just supporting DX9 is still quite a viable path, unless you want a marketable "DX10 !!!1" bullet point in your feature list.

Tom Forsyth said...

"whether DX10 for XP even really works?" Er... there's no doubt about it - it doesn't. Most people are indeed skipping from DX9 to DX11. Note that DX11 will work on Vista, and a lot of DX11 will work on DX10 hardware, so it's not unreasonable to do so.

I've had the rasterisation rules argument with The Abrash many times, and he wins - the DX9 rules were just flat wrong. I'm glad they're dead. Ignacio's theory would be great, except we still have to support DX9's rules, so it's not exactly simplified the driver - if anything it's made it even more complex.

cbloom said...

"What's wrong with just doing DX9 for a while?"

nothing, the problem is if you actually want to make a game for DX10 it's a huge pain to make a fallback mode for older systems - basically you have to write a whole separate render pipe.

With DX9 you can just have different shaders for cards that support the various VS/PS capabilities. (yes that's a pain too, but at least the basic structure is the same and it's mainly data toggles)

cbloom said...

There is some stuff around the net about running DX10 on XP. Apparently MS sent out a Beta SDK at some point with a DX10 install for XP and some hacker kiddies grabbed the DLLs and somebody made a setup for it. Anyway, not something I want to try.

castano said...

DX11 not only works on Vista, it also has profiles with DX9, DX10, DX10.1 and DX11 feature levels. So DX11 not only runs on DX10 hardware, but on some DX9 hardware as well.

castano said...

My point is that one of the reasons why MS changed the rasterization rules was to enable them to implement OpenGL on top of D3D. Remember that at some point they were planning to ship an OpenGL 1.5 driver with Vista so that IHVs didn't have to implement OpenGL drivers? That plan obviously failed.

nothings said...

Isn't the difference in the rasterization rules just a matter of adding +0.5 to the screen coordinates (or subtracting, depending on which way you're going)?

That doesn't seem very hard to integrate into a game... it's just a small change to your projection matrix... and it doesn't seem like it would be THAT hard to do to an opengl wrapper that compensates (although maybe a tad subtler there since it'd be visible to the vertex shader, so you'd have to get heroic there and actually do it at the end of the vertex shader).
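A sketch of the fixup being described (plain C, made-up function name; assumes a row-major float[16] with the translation terms in m[12]/m[13], i.e. the orthographic / 2D-quad case - a perspective projection would need the offset folded into the w-scaled row instead): half a pixel in screen space is 1/w (or 1/h) in NDC, since NDC spans 2 units across w pixels.

```c
#include <assert.h>
#include <math.h>

/* Fold the D3D9 half-pixel offset into a projection matrix.
   One pixel is 2/w NDC units, so half a pixel is 1/w.
   Row-major float[16], translation in m[12] (x) and m[13] (y);
   orthographic / screen-space-quad case only. */
static void apply_dx9_half_pixel_offset(float m[16], int w, int h) {
    m[12] -= 1.0f / (float)w;   /* shift left half a pixel */
    m[13] += 1.0f / (float)h;   /* shift up half a pixel (NDC y points up) */
}
```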

castano said...

There are also differences in the texture addressing. You need to add 0.5 texels when accessing each mipmap. The only way of accomplishing that is doing the filtering yourself in the shader.

nothings said...

I could have sworn the rasterization locations were off by half a pixel, but the texture filtering rules aligned the same as OpenGL... I remember I spent a bunch of time staring at this when I worked on Braid (which is the only time I've ever done D3D programming)... maybe I'm misremembering, though, but since one of the things I had to do was draw a post-processing effect and make sure it exactly aligned with the original, I'd have thought I'd gotten that right.

castano said...

Sean, OpenGL uses a convention in which (0,0) is the texel center, whereas Direct3D<10 used a convention in which the texel center is at (0.5, 0.5). See this paper by Sim Dietrich:

http://developer.nvidia.com/object/Texture_Addressing_paper.html

If you were using nearest filtering, maybe you were lucky and hit the right texel when sampling at the boundary. However, not all hardware handles that case the same way. See this page for an example of what could happen:

http://msdn.microsoft.com/en-us/library/bb147229(VS.85).aspx

cbloom said...

99% of games handle mips and texture filtering and addressing wrong, and people rarely notice it.

For example, for textures that are supposed to tile, you should run your mip filter around the edges, not clamp at the edge. People don't, and the result is that in the distance you can see slight discontinuities appear where the textures tile.

The other big thing is when laying two textures side by side that are supposed to be seamless, like one texture has [123] and the neighbor has [456] and you want them to be [123][456] - what you should do is set up the uv's so that they lie on the pixel centers at a *low* mip - not at the top res. Some games actually do the right thing and align them at texel centers in the high mip ; eg. they make [1234] and [3456] and then line them up at the 3.4 - but that creates a discontinuity when you mip down, what you actually should do is copy in 8 or 16 pixels at the edge and line up the uv to back that off.
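A minimal sketch of the wrap-vs-clamp point (plain C, 1D, a [1 2 1]/4 filter; made-up names, not from any real mip library): the only difference between the two is how neighbor indices are computed at the edge, and it only shows up in the edge texels.

```c
#include <assert.h>

/* Edge-handling policies for neighbor lookups during mip filtering. */
static int wrap_idx (int i, int n) { return ((i % n) + n) % n; }
static int clamp_idx(int i, int n) { return i < 0 ? 0 : (i >= n ? n - 1 : i); }

/* Downsample n texels to n/2 with a [1 2 1]/4 tent, so the edge texel
   actually reaches across the boundary. edge_wraps selects wrap vs clamp. */
static void mip_halve(const float *src, float *dst, int n, int edge_wraps) {
    for (int o = 0; o < n / 2; o++) {
        int c = 2 * o;
        int l = edge_wraps ? wrap_idx (c - 1, n) : clamp_idx(c - 1, n);
        int r = edge_wraps ? wrap_idx (c + 1, n) : clamp_idx(c + 1, n);
        dst[o] = 0.25f * src[l] + 0.5f * src[c] + 0.25f * src[r];
    }
}
```

Run it on {1,0,0,0}: interior texels come out identical either way, but the edge texel is 0.5 with wrapping versus 0.75 with clamping - exactly the kind of tiling-seam difference that shows up in distant mips.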

cbloom said...

doh 3.4 should be 3.5 stupid noneditable comments

castano said...

> not clamp at the edge

That's only a problem if you do something more complex than a regular "single phase" box filter.

BTW, NVTT handles that correctly and lets the user indicate the texture wrapping mode they want to use.

nothings said...

> Sean, OpenGL uses a convention in which (0,0) is the texel center, whereas Direct3D<10 used a convention in which the texel center is at (0.5, 0.5).

That's wrong re: OpenGL, or at least it was in 1.x. I've actually read the specifications for both; I don't know why Sim Dietrich's paper claims otherwise, unless we're using the terminology differently.

From the 1.3 spec:

"Let u(x,y) = 2^n * s(x,y) [...] If TEXTURE_MIN_FILTER is NEAREST [...] this means the texel at location (i,j,k) becomes the texture value, with i given by i = floor(u)..." So texel 0 lies in [0..1).

"If TEXTURE_MIN_FILTER is LINEAR [...] i0 = floor(u-1/2) ... i1 = (i0+1) mod N, alpha = frac(u-1/2)..." The maximum weight is assigned if u=1/2, thus the center is at 1/2.

CLAMP_TO_EDGE is defined as: "Texture coordinates are clamped to the range [min,max]. The minimum value is defined as min = 1/(2N) where N is the size of the texture image in the direction of clamping". This means the center of the edge pixel falls at 0.5*1/N.

As I understand it, OpenGL's texture convention and rasterization convention are the same (centers at 1/2), whereas D3D used to use an inconsistent definition (texture centers at 1/2, rasterization centers at 0) but it sounds like they've switched to be consistent with each other (or with OpenGL).

castano said...

Yeah, what I said doesn't make any sense. I just checked some apps, and it appears that you are right. So, I stand corrected.
