1. Obviously you all know the best practice of using your own data types (S32 or whatever) and making macros for any kind of common operation that the standard doesn't handle well (use a SEL macro instead of ?: , make a macro for ROT, etc.). Never use bit-fields; make your own macros for manipulating bits within words. You also have to make your own whole macro meta-language for things not quite in the language, like data alignment, restrict/alias, etc. etc. (god damn C standard people, spend some time on the actual problems that real coders face every day. Thanks mkay). That's background, and it's the way to go.
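For concreteness, here is a minimal sketch of the kind of operation macros meant here. The names (SEL, ROTL32, BIT_*) are illustrative, not from any particular codebase; the point is that the call sites stay fixed while the expansions can be retargeted per platform:

```c
#include <stdint.h>

/* branchless-able select : SEL(cond,a,b) = cond ? a : b ;
   a platform with slow branches can swap in a masked version
   without touching any call sites */
#define SEL(cond, a, b)   ( (cond) ? (a) : (b) )

/* rotate left by r (0 < r < 32); most compilers recognize this
   idiom and emit a single rotate instruction */
#define ROTL32(x, r)      ( ((x) << (r)) | ((x) >> (32 - (r))) )

/* bit manipulation within words, instead of bit-fields */
#define BIT_SET(w, i)     ( (w) |  (1u << (i)) )
#define BIT_CLEAR(w, i)   ( (w) & ~(1u << (i)) )
#define BIT_TEST(w, i)    ( ((w) >> (i)) & 1u )
```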
Make your own define for SIZEOF_POINTER, since stupid C doesn't give you any way to use sizeof() in a preprocessor #if. You probably also want SIZEOF_REGISTER. You need your own equivalents of ptrdiff_t and intptr_t. Best practice is to use pointer-sized ints for all indexing of arrays and buffer sizes.
(one annoying complication is that there are platforms with 64 bit pointers on which 64-bit int math is very slow; for example they might not have a 64-bit multiply at all and have to emulate it. In that case you will want to use 32-bit ints for array access when possible; bleh)
Avoid using "wchar_t" because it is not always the same size. Try to explicitly use UTF-16 or UTF-32 in your code. You could make your own SIZEOF_WCHAR and select one or the other on the appropriate platform. (really, try to avoid using wchar_t at all; just use U16 or U32 and do your own UTF encoding).
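A sketch of what these type and size defines might look like. The platform checks shown only cover a few common compilers, and the pointer-sized int names (SINTa/UINTa) are illustrative:

```c
#include <stdint.h>

typedef int16_t  S16;  typedef uint16_t U16;
typedef int32_t  S32;  typedef uint32_t U32;
typedef int64_t  S64;  typedef uint64_t U64;

/* sizeof() can't be used in a #if, so classify per platform;
   unknown platforms are a compile error, not a fallback */
#if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
  #define SIZEOF_POINTER 8
#elif defined(_WIN32) || defined(__i386__) || defined(__arm__)
  #define SIZEOF_POINTER 4
#else
  #error classify me
#endif

/* pointer-sized ints, our equivalent of intptr_t/uintptr_t,
   used for all array indexing and buffer sizes */
#if SIZEOF_POINTER == 8
  typedef S64 SINTa;  typedef U64 UINTa;
#else
  typedef S32 SINTa;  typedef U32 UINTa;
#endif
```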
One thing I would add to the macro meta-language next time is to wrap every single function (and class) in my code. That is, instead of :
int myfunc( int args );
do
FUNC1 int FUNC2 myfunc( int args );
or even better :
FUNC( int , myfunc , (int args) );
This gives you lots of power to add attributes and other munging as may be needed later on some platforms.
If I was doing this again I would use the last style, and I would have two of them, FUNC_PUBLIC and FUNC_PRIVATE, to control linkage. Probably should have separate wrapper macros for the proto and the body.
While you're at it you may as well have a preamble in every func too :
FUNC_PUBLIC_BODY( int , myfunc , (int args) )
{
FUNC_PUBLIC_PRE
...
}
which lets you add automatic func tracing, profiling, logging, and so on.
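A minimal sketch of what those wrappers might expand to. The expansions here are trivial placeholders (a real version might add dllexport, visibility, calling convention, and a real tracing system); the point is that call sites don't change when the expansions stop being trivial:

```c
#include <stdio.h>

/* separate wrapper macros for proto and body, as suggested above */
#define FUNC_PUBLIC_PROTO(ret, name, args)  extern ret name args
#define FUNC_PUBLIC_BODY(ret, name, args)   ret name args

/* preamble hook : here just a trace print, but it could become
   profiling, logging, call-stack registration, etc. */
#define FUNC_PUBLIC_PRE  printf("enter: %s\n", __func__);

FUNC_PUBLIC_PROTO(int, myfunc, (int x));

FUNC_PUBLIC_BODY(int, myfunc, (int x))
{
    FUNC_PUBLIC_PRE
    return x + 1;
}
```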
I wish I had made several different layers of platform Id #defines. The first one you want is the lowest level, which explicitly Ids the current platform. These should be exclusive (no overlaps), something like OODLE_PLATFORM_X86X64_WIN32 or OODLE_PLATFORM_PS3_PPU.
Then I'd like another layer that's platform *groups*. For me the groups would probably be OODLE_PLATFORM_GROUP_PC, GROUP_CONSOLE, and GROUP_EMBEDDED. Those let you make gross characterizations, like on GROUP_PC you use more memory and have more debug systems and such. With these mutually exclusive platform checks, you should never use an #else. That is, don't do :
#if OODLE_PLATFORM_X86X64_WIN32
.. some code ..
#else
.. fallback ..
#endif
it's much better to explicitly enumerate which platforms you want to go to which code block, and then have an
#else
#error new platform
#endif
at the end of every check. That way when you try building on new platforms that you haven't thought carefully about yet, you get
nice compiler notification about all the places where you need to think "should it use this code path or should I write a new one".
Fallbacks are evil! I hate fallbacks, give me errors.
Aside from the explicit platforms and groups I would have platform flags or caps which are non-mutually exclusive. Things like PLATFORM_FLAG_STDIN_CONSOLE.
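Putting the three layers together, a sketch of the classification header might look like this. The compiler-detection conditions shown only cover a few common targets, and the exact macro names are illustrative; note every chain ends in #error rather than a fallback:

```c
/* layer 1 : exclusive platform Ids -- exactly one is defined */
#if defined(_WIN32) && (defined(_M_X64) || defined(_M_IX86))
  #define OODLE_PLATFORM_X86X64_WIN32  1
#elif defined(__linux__) && (defined(__x86_64__) || defined(__i386__))
  #define OODLE_PLATFORM_X86X64_LINUX  1
#elif defined(__linux__) && defined(__aarch64__)
  #define OODLE_PLATFORM_ARM64_LINUX   1
#elif defined(__APPLE__)
  #define OODLE_PLATFORM_MAC           1
#else
  #error new platform
#endif

/* layer 2 : platform groups, for gross characterizations */
#if OODLE_PLATFORM_X86X64_WIN32 || OODLE_PLATFORM_X86X64_LINUX || \
    OODLE_PLATFORM_ARM64_LINUX  || OODLE_PLATFORM_MAC
  #define OODLE_PLATFORM_GROUP_PC 1
#else
  #error classify me
#endif

/* layer 3 : non-exclusive flags/caps */
#if OODLE_PLATFORM_GROUP_PC
  #define OODLE_PLATFORM_FLAG_STDIN_CONSOLE 1
#endif
```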
While you want the raw platform checks, in end code I wish I had avoided using them explicitly, and instead
converted them into logical queries about the platform. What I mean is, when you just have an "#if some platform"
in the code, it doesn't make it clear why you care that's the platform, and it doesn't make it reusable.
For example I have things like :
#if PLATFORM_X86X64
// .. do string matching by U64 and xor/cntlz
#else
// unaligned U64 read may be slow
// do string match byte by byte
#endif
what I should have done is to introduce an abstraction layer in the #if that makes it clear what I am checking
for, like :
#if PLATFORM_X86X64
#define PLATFORM_SWITCH_DO_STRING_MATCH_BIGWORDS 1
#elif PLATFORM_PS3
#define PLATFORM_SWITCH_DO_STRING_MATCH_BIGWORDS 0
#else
#error classify me
#endif
#if PLATFORM_SWITCH_DO_STRING_MATCH_BIGWORDS
// .. do string matching by U64 and xor/cntlz
#else
// unaligned U64 read may be slow
// do string match byte by byte
#endif
then it's really clear what you want to know and how to classify new platforms. It also lets you reuse
that toggle in lots of places without code duping the fiddly bit, which is the platform classification.
Note that when doing this, it's best to make high-level usage-specific switches. You might be tempted to use platform attributes there; like instead of "PLATFORM_SWITCH_DO_STRING_MATCH_BIGWORDS" you might want to use "PLATFORM_SWITCH_UNALIGNED_READ_PENALTY". But that's not actually what you want to know. You want to know whether, in your particular application (LZ string match), it's better to use big words or not, and that might not match the low-level attribute of the CPU.
It's really tempting to skip all this and abuse the switches you can see (lord knows I do it); I see (and write) lots of code that does evil things like using "#ifdef _MSC_VER" to mean something totally different like "is this x86 or x64" ? Of course that screws you when you move to another x86 platform and you aren't detecting it correctly (or when you use MSVC to make PPC or ARM compiles).
Okay, that's all pretty standard, now for the new bit :
2. I would opaque out the system APIs in two levels. I haven't actually ever done this, so grains of salt, but I'm pretty convinced it's the right way to go after working with a more standard system.
(for the record : the standard way is to make a set of wrappers that tries to behave the same on all systems, eg. that tries to hide what system you are on as much as possible. Then if you need to do platform-specific stuff you would just include the platform system headers and talk to them directly. That's what I'm saying is not good.)
In the proposed alternative, the first level would just be a wrapper on the system APIs with minimal or no behavior change. That is, it's just passing them through and standardizing naming and behavior.
At this level you are doing a few things :
2.A. Hiding the system includes from the rest of your app. System includes are often in different places, and often turn on compiler flags in nasty ways. You want to remove that variation from the rest of your code so that your main codebase only sees your own wrapper header.
2.B. Standardizing naming. For example the MSVC POSIX funcs are all named wrong (open is _open, and so on); at this level you can patch that all up.
2.C. Fixing things that are slightly different or don't work on various platforms where they really should be the same. For example things like pthreads are not actually all the same on all the pthreads platforms, and that can catch you out in nasty ways. (eg. things like sem_init always failing on Mac).
Note this is *not* trying to make non-POSIX platforms look like POSIX. It's not hiding the system you're on, just wrapping it in a standard way.
2.D. I would also go ahead and add my own asserts for args and returns in this layer, because I hate functions that just return error codes when there's a catastrophic failure like a null arg or an EHEAPCORRUPT or whatever.
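As a sketch, a level-1 wrapper over POSIX open() might look like the following. The name platform_open is hypothetical; the point is pass-through semantics (2.C), the system header hidden behind your own header (2.A), and an assert on catastrophic misuse rather than a silent error code (2.D):

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>

/* level 1 : thin pass-through wrapper on the system call.
   Same semantics as open(); the rest of the codebase includes
   our header, never <fcntl.h> directly. */
static int platform_open(const char *name, int flags)
{
    assert(name != NULL);        /* catastrophic misuse : stop here */
    int fd = open(name, flags);  /* behavior unchanged */
    return fd;                   /* -1 + errno on ordinary failure */
}
```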
So once you have this wrapper you no longer call any system funcs directly from your main codebase, but you still would
be doing things like :
#if PLATFORM_WIN32
HANDLE h = platform_CreateFile( ... );
#elif PLATFORM_POSIX
int fd = platform_open( name, flags );
#else
#error unknown platform
#endif
that is, you're not hiding what platform you're on, you're still letting the larger codebase get to the low level calls,
it's just the mess of how fucked they are that's hidden a bit.
3. You then have a second level of wrapping which tries to make same-action interfaces that don't require you to know what platform you're on. The second level is written on the first level.
The second level wrappers should be as high level as necessary to opaque out the operation. For example rather than having "make temp file name" and "open file" you might have "open file with temp name", because on some platforms that can be more efficient when you know it is a high-level combined op. You don't just have "GetTime" you have "GetTimeMonotonic" , because on some platforms they have an efficient monotonic clock for you, and on other platforms/hardwares you may have to do a lot of complicated work to ensure a reliable clock (that you don't want to do in the low level timer).
When a platform can't provide a high-level function efficiently, rather than emulate it in a complex way I'd rather just not have it - not a stub that fails, but no definition at all. That way I get a compile error and in those spots I can do something different, using the level 1 APIs.
The first level wrappers are very independent of the large code base's usage, but the second level wrappers are very much specifically designed for their usage.
To be clear about the problem of making platform-hiding second layer wrappers, consider something like OpenFile(). What are the args to that? What can it do? It's hopeless to make something that works on all platforms without greatly reducing the capabilities of some platforms. And the meaning of various options (like async, temporary, buffered, etc.) all changes with platform.
If you wanted to really make a general purpose multi-platform OpenFile you would have to use some kind of "caps" query system, where you first do something like OpenFile_QueryCaps( OF_DOES_UNBUFFERED_MEAN_ALIGNMENT_IS_REQUIRED ) and it would be an ugly disaster. (and it's obviously wrong on the face of it, because really what you're doing there is saying "is this win32" ?). The alternative to the crazy caps system is to just make the high level wrappers very limited and specific to your usage. So you could make a platform-agnostic wrapper like OpenFile_ForReadShared_StandardFlagsAndPermissions(). Then the platforms can all do slightly different things and satisfy the high level goal of the imperative in the best way for that platform.
A good second level has as few functions as possible, and they are as high level as possible. Making them very high level allows you to do different compound ops on the platform in a way that's hidden from the larger codebase.
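A sketch of what one such usage-specific second-level function might look like, built on a level-1 wrapper. This is a POSIX-only illustration with hypothetical names (and a shortened version of the OpenFile_ForReadShared_StandardFlagsAndPermissions name from above); note the deliberate absence of any fallback definition:

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>

/* level-1 pass-through (stubbed here so the sketch is self-contained) */
static int platform_open(const char *name, int flags)
{
    assert(name != NULL);
    return open(name, flags);
}

/* level 2 : one narrow, high-level op per function; the function
   name *is* the full specification, there is no generic flags arg */
#if defined(__unix__) || defined(__APPLE__)
static int OpenFile_ForReadShared(const char *name)
{
    /* on POSIX, reads are shared by default, so this is trivial;
       other platforms might need share-mode flags here */
    return platform_open(name, O_RDONLY);
}
#else
/* deliberately no definition : unclassified platforms must not get
   a silent fallback; call sites fail to compile instead */
#endif
```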
5 comments:
Unfortunately I have a good amount of experience in porting code. I like many of the things you say in #1, such as the platforms, platform groups, and not ever allowing fallback cases.
I think something like
FUNC_PUBLIC_BODY( int , myfunc , (int args) )
{
FUNC_PUBLIC_PRE
is overkill. You can go this route of trying to turn everything into a macro language, and it will make your code ugly and unwieldy. (Any time someone starts to rely really heavily on the C preprocessor instead of the language, I suspect something is wrong.)
For 2 and 3, you may have a different perspective than I do, because I am more of an application programmer and I believe you are writing libraries, but I think that is too many levels. I would tend to forget about 2 and just use something that is more or less 3. I think with 2 and 3 you will end up with convoluted code that does unnecessary work on some or all platforms. If you support the operations in 3, and are free to code longer sequences of platform-specific operations in a platform-idiomatic way, the code will be clearer and more efficient.
In reality making a set of hard and fast rules about multiplatform programming may be futile. There's no substitute for good taste and professional judgement about each situation.
Well at a minimum you need private/public and the two-piece func markups (for visibility, dlls, linkage, calling convention, etc). So the alternative is something like :
FUNC_PRIVATE1 int FUNC_PRIVATE2 myfunc( int args )
{
}
that's what I'm using right now, and it's ugly enough that I figure going to
FUNC_PRIVATE( int, myfunc, (int args) )
is not really worse and is even more flexible.
This isn't really a porting issue, but a related thing that I might consider doing at the same time if I was to redo everything is to have macros for all the function args in the preamble, like :
{
ARGCHECK( arg1, 0, 10 , return false );
}
which takes the valid range and what to do if the arg is out of range. That macro might not necessarily compile into an arg check; sometimes it might just be an assert on the arg range; in some modes it might log the args so you have full call tracing with arg values.
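A sketch of one possible expansion of such an ARGCHECK macro; in other build modes it might expand to a plain assert, a logging call with full arg values, or nothing at all:

```c
#include <stdbool.h>
#include <stdio.h>

/* check that arg is in [lo,hi]; on failure, report it and run the
   caller-supplied recovery action (e.g. "return false").
   do/while(0) makes the trailing semicolon at call sites legal. */
#define ARGCHECK(arg, lo, hi, action) \
    do { \
        if ((arg) < (lo) || (arg) > (hi)) { \
            fprintf(stderr, "bad arg %s=%d in %s\n", \
                    #arg, (int)(arg), __func__); \
            action; \
        } \
    } while (0)

static bool myfunc(int arg1)
{
    ARGCHECK(arg1, 0, 10, return false);
    return true;
}
```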
You said:
"I would also go ahead and add my own asserts for args and returns in this layer, because I"
The anticipation is killing me. Because you ???
Ha, totally lost train of thought there.
Funny timing too because I just wrote a post about those kind of wrappers and stopping un-continuable error returns.