06-14-12 - API Design with Auto-Complete

I'm working on tweaking the Oodle API and a few things strike me.

One is that my personal preference for API's has been heavily influenced by my reliance on Browse Info tips and auto complete. Some examples :

1. I like really long function names. I want the function name to tell me as much as possible about the function. If the function has some weird side effect that is not right there in the name, I consider that a failure of the API. I like function names like "OodleLZ_OpenFileReadHeaderDecompressAsync" ; it tells me exactly what it does. I don't care how long it is because after I type a few characters I control-space and the rest comes out.


 Package_WriteFile(); // writes the package file 

that's okay.

 Package_WriteFile(); // also deletes source 

WTF, a crazy unexpected side effect that's not in the name. Not okay.

Basically any time I have to go to the header or the docs to figure out what a function does, I consider the API a failure. (of course the biggest failure is when I'm using my own code and can't figure out exactly what a function does without going and reading the implementation)

2. Related to that, I like file names and function names to be alphabetically front-loaded. That is, I want the first few characters to distinguish the function as much as possible, because I really want to do a few characters and then be able to ctrl-space it. I hate the fact that I have to start all my function names with Oodle_ because it makes for a lot of typing before I can ctrl-space it. In my ideal world, the first few letters are a summary of the action; like in the previous example I'd rather see "LZDecIO_OpenFileReadHeaderDecompressAsync".

3. For function arguments, I of course rely on the browse info to tell me what the args are. Because I can see the types of the args and their names, I want the types and names to be super descriptive. A well designed function does not need any docs at all, the names of the args and the name of the function tell you everything about it.

3.A. It's very easy to distinguish "in" and "out" args by use of const, when the auto-complete tells you the type of the args. Furthermore in C++ I would always make "in" args be references and "out" args be pointers. Basically anything you can put in the types of the variables is documentation that is visible to me right there when I use the function.

3.B. Because of this I sort of prefer a mess of bool args to enums (*) or flags. (* = an enum just to give the bool names is nice). If I start typing a function and the browse info pops up and tells me the args are

 EnumDir(const char * dirPath,bool recurseDirs,bool caseSensitive,bool filesOnlyNoDirs) 
then I can see that as I type and I know exactly what it does. But if I start typing and the browse info pops up
 EnumDir(const char * dirPath,U32 flags) 
that sucks because now I have to go to the header and try to figure out what the flags are. Basically any API where you have to go read the docs or scan the header, I don't like. Unfortunately the bool way results in really ugly code after you've written it :
 EnumDir(dirPath,true,false,true); 
you have no idea WTF that does, so that sucks. There is a special type of enum which I believe gives the best of both worlds. First of all, the bad type of enum is one that you can't tell what values it has unless you go look at the docs or the header, so like, this is bad :
 EnumDir(const char * dirPath,enum EnumDirOptions options) 
that sucks; but if you just use the enums as a way of naming bools, like :
 EnumDir(const char * dirPath,enum RecurseDirs recurseDirs,enum CaseSensitive caseSensitive,enum FilesOnly filesOnlyNoDirs);

then I think you have the best of both worlds, in the sense that reading the function after it's written is totally clear, and you can write the function (with browse info) without looking at docs or headers. This only works if all your enums are reliably simple _Yes/_No enums; if you try to be fancier and make the names like "RecurseDirs_Recursive" or whatever custom non-standard names, it makes it unpredictable.

Okay, now we'll get into some issues that aren't so browse-info related.

4. I've always believed that API's should be obvious about correct or incorrect usage. They should be "wysiwyg" in the sense of, if you read the code written to the API, it should do what intuitive English reading suggests it does. In particular, if it looks right, it should *be* right. There are some ways you can fuck this up in API design :

4.A. Near-synonyms that aren't obvious. Somebody writes code like :

  Widget * w = Widget_Create();
  ...
  Widget_Delete(w);
it's fine, right? Nope. They should have used Widget_Destroy(w) there; Widget_Delete is for something else. That's a very bad API : you have near-synonyms that seem to be interchangeable but aren't, and it leads to code that reads like it should be fine but isn't.

4.B. One of the most insidious forms of this is API's that will still work if you use them wrong, but fall into super-low-performance mode. Of course most of us are familiar with the terrible 3d graphics API's where just setting some seemingly innocuous flag suddenly makes all your speed go away (D3D has perhaps gotten a bit better about this in recent years), but it's fucking ridiculous, I shouldn't have to profile my code every time I change a flag on CreateVertexBuffer or spend years learning about the hacks. The fast paths and slow paths should be clearly separated and if I do something slow when using the fast functions, it should be a failure. It's much worse to give it a slow fallback than to just make it fail.

A similar case is when API's have a lot of unclear dependencies to get on the fast path. Trying to think of an example, one that comes to mind is stuff like the Win32 "overlapped" async file stuff. ReadFile can be async or not async depending on whether you pass an overlapped structure (bad - I hate API's that massively change their action in an unclear way), if it is async then the requirements on the buffer and size and such are totally different (must be sector aligned), and both the sync and async act on the exact same object (HANDLE from CreateFile) so that when you are given a file handle you have no idea whether it is a valid async io handle or not. Similarly when you CreateFile it's totally not obvious how to make sure you are making one that's valid for async io.

Any time you find yourself writing docs like "if you call this function and then this one, and if you pass just the right flags and only use them in this specific way, then the entire behavior of the thing changes and does this other thing", stop. WTF? No. That's bad. Make that functionality a separate set of functions.

(Aside whiney rant : sometimes I feel like I'm the only person in the world who has this philosophy : When somebody uses my API wrong, I consider that not their failing, but my failing. What did I do wrong in the API that made them think that was the correct usage? How could I add more compile-time clarity to make incorrect usage be a compile failure? I consider it the API's responsibility to make it as easy as possible to use correctly, and as hard as possible to use incorrectly. Whenever I use someone's API wrong and I ask "why is this not doing what I expect?", I generally get the response of "duh you're using it wrong". When someone hands you a knife with the blade pointed towards you, it's kind of their fault when you cut your hand.)

((though I guess that's just a sub-point of an even bigger whiney rant, which is that I often feel like I'm the only person in the world who actually cares about their own code or takes pride in it. If somebody finds a bug in my code, or points out a flaw, or even tells me that something could be done better, then I want to know, I want to fix it. I believe my code is great and if there is any reason to suspect it is not, I want to remedy it. When I point out bugs or bad algorithms in others' code the response I usually get is "meh". And on a broader level, when people ask about difficult algorithmic questions I'll usually point them at some reference pages or some academic papers, and I sort of suspect that nobody that I've referred things to has actually ever read them; I guess as any professor knows, you have to be satisfied with something like a 1% hit rate. When a smart kid comes to you after class and says "I'd like to learn more about interpretations of quantum mechanics" and you are excited that a kid is actually interested in learning, and you say "sure, I'd love to talk to you about that, here's a good introductory paper to get you started, then come and see me at office hours", chances are you'll never see that kid again)).

When someone uses my API badly it reflects badly on my code. A lot of API's have the very bad design property that using them wrong is much easier than using them right. If people use your API and never check error codes, maybe it's your fault because you made it too hard to check error codes. Correct usage should be as automatic as possible.

Designing API's for yourself, or even your own company, is very different than tossing API's out into the void. When it's for a small group, a certain amount of quirkiness is okay, because you learn the quirks and then they aren't a problem. In fact, as many game companies sadly know, it's much more valuable to leave the quirky code alone, because it's better to have something familiar than to "fix it" and make something that is cleaner but unfamiliar.

A certain amount of fragility is also okay in internal API's, because if you use it wrong then you hit an assert or a crash, and you just fix your usage. In a public API that's not so okay because the crash is down in the library and it's not their code.

The biggest difference is that internal API's can force a use pattern. You can make an API that assumes a certain model for things like memory allocation and lifetime management and such; you can say "use it this way and if you try to use it other ways it won't work". With a public API you can't do that, it has to work decently any way it's used, and different people may have very different ideas.

At RAD an issue I've never really dealt with before is that I have to be careful to design the API so it's good for *me* as well as good for the customers. Normally I would just try to make the API as easy and good to use as possible. In particular, something that I often do in my own code like cblib is to wrap up lots of helper functions that do all the common stuff for you. Personally I really hate "minimal" API's where you have to write the same bunch of code over and over to use them. If I have to use it in a certain way, then just fucking wrap that up for me and put that in a helper.

But there are disadvantages to that for the API maintainer. For one thing, the bigger the API is, the more work it is : you have to document every entry point, you have to stress test every entry point, and once it's added you can't remove it without causing chaos.

The other related issue is API's that are dangerous or hard to use. There are some things where bugs in usage are so likely that it's not worth exposing them.

The most obvious example for me in Oodle is all the low level lock free stuff. I personally think there's immense value in that stuff, but it's highly unlikely that we make sales based on it, and it's very hard to use and very easy to get weird crashes with incorrect use. Because of that, it won't be exposed (in the first release anyway), and we're trying to make the visibility to it only through safe API's.

Another example is exposing access to internal structures. It's slightly more efficient to be able to access the Oodle system structs directly, to read or change values, but that's also much more risky. No operating system ever allows that, for example. The safer way is to provide accessors that change the values safely. For example on Win32, any time you get an OS struct, you actually have a copy of the OS structure, not a direct pointer; so first you have to copy the struct out to your own, fiddle with it, then copy it back. This is basically the way I'm trying to go with Oodle in the first rev.


Anonymous said...

Yeah, I agree with many of these ideas. (You need to include the convenience wrappers in your APIs, though, even though it sucks.)

A few years ago I learned to put the units into function argument names to maximize no-header-lookup, e.g. Sleep(int delayMillis) rather than Sleep(int delay).
syntaxerror said...

This might be a little off topic, but I think it still adds to what you are already saying. I think a lot of APIs get the wrong idea about what it means to be cross-platform. Just wrapping up functions so that they have the same name isn't good enough; they also need to have the same behavior, or as close as possible, on each platform. Java does this everywhere in their JDK. If I have to special-case my code per OS, on top of an API that is already supposed to be cross-platform, it almost defeats the point of using that API.

(this is Rhett btw)

Per Vognsen said...

A bit of an aside: For Emacs I've long used auto-complete modes that understand separators. For the mini-buffer and other Lisp-based modes that would be dashes, as in kill-buffer. That means I can just type kb and it knows I mean kill-buffer (or another command with those initials, but it sorts by popularity). I wish auto-complete modes in IDEs would understand natural identifier separators in a similar way. Natural ones for C would be underscores (as in kill_buffer or Kill_Buffer) and case boundaries (as in killBuffer or KillBuffer). One subtle benefit of this approach is that most word initials are consonants, so even a two-letter sequence (never mind longer sequences) like 'kb' is unlikely to clash with a prefix of a genuine identifier, which means you can have this initials-based auto-complete mode peacefully coexist with more traditional prefix-based auto-completion, which in turn can peacefully coexist with the usual type-whatever-you-want input mode.

cbloom said...

@ Rhett - the real nono IMO is making API's with the same name that don't have the same behavior. If you have to make the behavior different, then I'd rather see the function just not exist, and another one with a different name exist in its place.

Like say you have an IO_Cancel() and some platform doesn't support cancelling, then just make that proto disappear so that it's a compile error.

The other option that I think is sort of okay is to document the fallback behavior in the name, like :

 IO_CancelOrBlock() 

to indicate it will cancel if possible or fall back to blocking.

In general I think almost everyone gets cross-platform wrong. There should be a core set of functions and data types which are *exactly* the same across platforms, and then fancier stuff which is platform-specific, just make it not exist on the platforms that can't do it.

cbloom said...

@ Per - back when I did GIS software we used "Soundex Codes" for name matching. Soundex is basically only consonants, with double consonants removed (and also grouping of consonants that sound the same). It provides a very reliable match and is somewhat immune to common spelling mistakes.

So for example "klbf" would match "KillBuffer" reliably.

syntaxerror said...

Yeah, I definitely agree. I should have split internal from external behavior. External behavior should be, like you said, exactly the same across platforms, but it isn't always possible to have the same performance/memory/whatever characteristics. Sometimes the difference is enough to require an algorithm change internally, but the user of the API shouldn't care.

Miroslav said...

I wish more tools would copy the Visual Assist idea that the user types not the first symbols of a completion but unique ones. I never got to the point where VA would work for me, so I miss that feature terribly.
I also wish someone would start scratching at the threading-API dead end we are in now. Sure, it's not profitable enough for, say, your company, but any progress would be great. There are lots of APIs and they are impossible to change and impose vendor lock-in on you.
