
09-20-08 - 2

Macros are really the fucking devil. It's unfortunate that they're still faster than inline functions. Maybe on MS compilers with LTCG the inline function is pretty close, but most compilers just don't inline like they should. Macros are hard to use : you have no argument info, when you pass in the wrong thing the error messages are impossible, they aren't type safe (which can make them fail very mysteriously), and you can't trace into them with the debugger. I'm finding myself wasting a lot of time debugging stupid things because I'm using the macros wrong. If I never use another macro again in my life it will be a happy life indeed. The ideal would be to just write inline functions and have the compiler treat them like macros.

I really fucking hate unsigned types. I've got unsigned types all over for no good reason at the moment, and they keep giving me bugs when I try to count backwards or something, like :


unsigned int pos = count-1;	// counting down from some count
for( ; pos>=0 ; pos-- )		// pos is unsigned : this test is always true !
{
}

Hey, I just wrote an infinite loop because freaking pos is unsigned and I wasn't thinking about that, and I don't want to have to worry about that all the time. IMO it's best to just use signed types unless you specifically need to allow two billion - and as Won points out, if you actually are trying to handle two billion in an unsigned int you have to be super careful (eg. never add two numbers that can cause a carry, etc.).

I really really like parameter checking, but people mostly do it wrong and don't get the point. Checking pointers for NULL, for example, is pretty worthless (unless it's a release-mode check that throws; I'll assume your parameter checking is just an assert). When someone passes in a NULL they will get a very obvious crash, the system provides a nice message for that already, and you can trivially see what caused the problem by going "hey that pointer is NULL". The good parameter checking is for conditions that will cause non-obvious failures that may be hard to debug and may not cause a crash at all, just incorrect operation. For example, things like :


U32 ShiftRight(U32 value, U32 shift)
{
	ASSERT( shift < 32 ); // !! 32 is not valid ! (shift is unsigned, so checking >= 0 is vacuous)
	return ( value >> shift );
}

or

void memcpy256(void * dest,void * src,int count)
{
	// pointers must be >= 32 bytes (256 bits) apart so the copies don't overlap
	ASSERT( abs( (int)( (char *)dest - (char *)src ) ) >= 32 );

	// ... do 256 bit memcpy
}

The other thing people often do wrong in parameter checking is go the opposite way and check too strictly on conditions that are just how they think things should be, but aren't actually necessary for the function to get right. Stuff like :


void Matrix::Transpose()
{
	ASSERT( IsOrthonormal() );

	// ...
}

where some jackass put too strict an assert because he was thinking you wanted the inverse of the matrix, but that's not the only reason to use transpose. Asserts should check for cases that will cause failure, not for being inside the expected bounds of operation. Another common kind of bogus parameter check I see is stuff like :

float lerp(float from,float to,float t)
{
	ASSERT( t >= 0 && t <= 1 );

	return from + (to - from) * t;
}

Umm, no, this parameter check is all wrong. It doesn't prevent any bugs; extrapolating is perfectly valid and will return a reasonable and correct answer.

I also think that result checking is perhaps more important than parameter checking. Especially in cases where a function does something really complex that you're not sure you got right, but is easy to check. For example :


uint ilog2ceilasm(uint val)
{
	__asm
	{
		FILD val	// load the int as a float
		FSTP val	// store it back as a single precision float ; bits now in val
		mov eax,val
		add eax,0x7FFFFF	// (1<<23) - 1 : bumps the exponent up unless the mantissa was zero
		shr eax,23	// extract the biased exponent
		sub eax,127	// remove the bias
	}
}

uint ilog2ceil(uint val)
{
	ASSERT( val > 0 );

	uint ret = ilog2ceilasm(val);

	ASSERT( ((uint)1 << ret) >= val && ( ret == 0 || val > ((uint)1 << (ret-1)) ) );

	return ret;
}

The result check does more than just make sure I wrote my code right. It also implicitly checks the parameters because invalid parameters will cause the result verification to fail. It also provides a constraint for modifications. It makes it clear what contract this function has made with its clients, so that if you try to optimize in the future you know exactly what you must satisfy, and the assert tells you if you screw up. (obviously we'd love compile time automatic dynamic analysis instead of relying on the actual bad case to run and trigger the assert)

Of course asserts and parameter checking and such should never be used at all on stuff that comes from users or artists. That stuff should be manually checked with real code, and they should be informed about bounds violations etc. We had a pretty good rule at Oddworld, I think : any condition violation that can only be caused by coder error is an assert, and you can halt the game if necessary; any condition violation that can be caused by bad data or artist error should be a log/throw/warning, and you try to keep running despite the error.
