# cbloom rants

## 1/22/2010

### 01-22-10 - Exponential

One of my all-time pet peeves is people who say things are "increased exponentially". The worst use of all is when people use it to mean just "a lot", i.e. they're not even talking about a trend or a response curve at all, e.g. "the 911 Turbo provides an exponential increase in power over the base spec". This abortion of usage is becoming more and more common. But even scientists and mathematicians will use "exponential" to describe any trend that's faster than linear (it's quite common in NYT math/economics/science articles, for example).

Today I was reading this blog on software development organizations and my hackles immediately went up when I read:

"The real cost of complexity increases exponentially."

I started to write a snarky post about how he almost certainly meant "geometrically". But then I started thinking about it a bit more. (* correction: by "geometrically" I mean "polynomially".)

Maybe software development time actually is exponential in number of features?

If you're trying to write N features and they are all completely independent, then time is O(N), i.e. linear.

If each feature can be used with only one other feature, and that interaction is completely custom but independent, then time is O(N^2), i.e. geometric.
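As a toy sanity check (not from the post; the work-unit model is made up), the gap between those two regimes is just a matter of counting: one unit of work per feature versus one more per custom pair.

```python
from math import comb

def work_units(n_features):
    # one unit of work per feature, plus one per unordered pair of
    # features that needs a custom interaction written (and tested)
    return n_features + comb(n_features, 2)

for n in (4, 8, 16, 32):
    print(n, work_units(n))
# doubling the feature count roughly quadruples the pair term
```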

Already I think there's a bit of a myth we can tackle. A lot of optimistic software devs think that they can get under even this O(N^2) complexity by making the features interact through some generic well-defined pathways. E.g. rather than specifically coding how feature (A) and feature (B) and feature (A+B) work, they try to just write (A) and (B) but make them both aware of their circumstances and handle various cases so that (A+B) works right. The problem is - you didn't really avoid the O(N^2). At the very least, you still had to test (A+B) to make sure it worked, which meant trying all N^2 combinations, so your time is still O(N^2). The code might look like it's O(N) because there's only a function for each feature, but within each function is implicit O(N^2) logic !!

What I mean by implicit logic is the blank lines which testing reveals you don't have to write! E.g.:

```
void Feature_A()
{
    DoStuff();

    if ( SelectionMode() )
    {
        // ... custom stuff here to make (A+C) and (A+D) work
    }

    // !!! blank lines here !
    // this is implicit knowledge that (A+B) and (A+E) don't need custom code
}
```

You might argue that this is slightly less than quadratic complexity for the developer, and tester time is cheaper, so we don't care if that's quadratic. But in any case it's geometric and above linear.

But is it actually exponential? What if all the features could be enabled or disabled, so the state of your code is a binary string indicating what features are on or off; e.g. 1100 = A on, B on, C off, D off. Then there are 2^N feature states, which genuinely is exponential.
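A quick way to convince yourself of the 2^N count (feature names are just placeholders): enumerate every on/off assignment.

```python
from itertools import product

features = ["A", "B", "C", "D"]

# every on/off assignment of the features is a distinct configuration
# that could in principle need testing
states = list(product([0, 1], repeat=len(features)))

print(len(states))             # 2^4 = 16
print((1, 1, 0, 0) in states)  # the "1100" state: A on, B on, C off, D off
```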

Another possibility is if the features can only be enabled one by one, but they have lingering effects. You have some shared state, like a data file you're processing, and you can do A then C then B then E on it. In that case the number of sequences is something like N!, which is exponential for large N (actually super-exponential).
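The ordering count is easy to check with itertools (a toy sketch; the operation names are hypothetical):

```python
from itertools import permutations
from math import factorial

ops = ["A", "B", "C", "E"]

# each distinct order of applying the operations is a distinct test case
orderings = list(permutations(ops))
print(len(orderings))  # 4! = 24

# N! overtakes 2^N quickly, hence the "super-exponential" remark above
print(factorial(20) > 2 ** 20)  # True
```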

Let's concretely consider a real video game coding scenario.

You're trying to write N features. You are "sophisticated", so rather than writing N^2 hard-coded interactions, you make each feature interact with a shared world state via C "channels". (In the old Looking Glass speak, these C channels might be a set of standard "properties" on objects and ways to interact with those channels; in the old Oddworld Munch codebase there were C "component" types that could be on objects, such as "SoundTrigger", "Pressable", etc.) So your initial code-writing time is something like O(N*C).

But for the N features to really be meaningful, C ~= N (roughly proportional), or at least C ~= log(N); certainly C is not constant as N increases - as you add features you always find you need more channels for them to communicate with each other. So code-writing time alone is somewhere between O(NlogN) and O(N^2).
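Plugging some numbers into the O(N*C) estimate shows how much the C ~= log(N) vs. C ~= N assumption matters (a rough sketch; the cost model and constants are made up):

```python
from math import log2

def writing_cost(n_features, n_channels):
    # initial code-writing time ~ features x channels, per the estimate above
    return n_features * n_channels

for n in (16, 256, 4096):
    # optimistic C ~ log N vs. pessimistic C ~ N
    print(n, writing_cost(n, log2(n)), writing_cost(n, n))
```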

But your features also affect shared state - e.g. the "world" that the game takes place in, be that physical state, or state variables that can be flipped on other objects. If you have N objects each with K internal states, this creates K^N world states that have to be tested. Even with very small world state, if the features are order dependent, you're back to N! test cases.
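The K^N count is worth seeing with small numbers, because it blows up even for tiny K (a toy illustration, not from the post):

```python
def world_states(n_objects, k_states):
    # each object is independently in one of K internal states,
    # so the combined world has K^N distinct states
    return k_states ** n_objects

print(world_states(10, 2))  # 1024 states from just 10 binary-state objects
print(world_states(20, 3))  # already in the billions
```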

If the bug rate were a constant percentage of test cases (e.g. 0.1% of test cases produce a bug), then you'd be back to an exponential number of bugs = exponential coder time. But I'm not sure that model of bugs is correct. If the bug rate were a constant percentage of lines of code, then the bug count would only be geometric.
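The two bug models diverge fast; here's a sketch of both, assuming a hypothetical 0.1% rate and 2^N feature-state test cases vs. code that grows with the number of feature pairs (all constants invented for illustration):

```python
from math import comb

RATE = 0.001  # hypothetical: 0.1% of test cases (or lines) produce a bug

def bugs_if_per_testcase(n_features):
    # bugs proportional to the 2^N feature-state test cases: exponential
    return RATE * 2 ** n_features

def bugs_if_per_loc(n_features, loc_per_pair=10):
    # bugs proportional to lines of code, which here grows with the
    # number of feature pairs: only polynomial
    return RATE * loc_per_pair * comb(n_features, 2)

for n in (10, 20, 30):
    print(n, bugs_if_per_testcase(n), bugs_if_per_loc(n))
```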

Autodidactic Asphyxiation said...

One semi-related way to think about exponential complexity in programs is test coverage. Getting 100% test coverage is pretty clearly O(size of program), but what does that buy you? It doesn't cover each scenario for which your code is used.

Something truly exhaustive would be path-complete testing. The number of paths in your program is big-O exponential, but that isn't a very tight bound. In reality, not all of your paths are independent. Perhaps one path excludes another, or 2 paths commute because they affect independent state.

I think the essence of good software design is making your different paths as orthogonal (there's another word that can become a pet peeve...) as possible, to keep the non-linear term as small as possible. This, of course, matches my intuition, or maybe is just a roundabout way of justifying it.

ryg said...

Nitpick first: Geometric growth is exponential, not quadratic (The definition of a geometric progression is that x_(n+1) / x_n = const for all n). I don't think there's a commonly used synonym for quadratic growth.

On the actual subject: Games have the big advantage that most of their essential (behavior-determining) interactions take place between components that are part of the codebase. The same isn't true for most web apps, for example, and I suspect that goes for most business apps too, though I have no experience on the matter. Web apps are completely dependent on several black boxes (web server, DB engine, network and IO subsystems of the kernel, ...), usually with several caching layers inbetween that accumulate hidden state all over the place (some of them completely out of your control, like in the browser), and all of these pieces interact behind your back in nonobvious ways.

This kind of construction is most definitely a bug magnet, and basically everyone has given up on fixing all but the most obvious bugs. Basically all big websites I know will occasionally barf, displaying an error message in the vein of "so this didn't work, but just hit Refresh to make me try the same thing again and I guess everything will be okay". If this kind of shit would happen in one of my "real" programs, I'd be terrified.

(PS: Blogger just told me that "Your request could not be processed. Please try again." while trying to post...)

nothings said...

Yeah, I agree with ryg, there's an unfortunate redundancy between 'geometric' and 'exponential'. To refine his comment, I'm not sure how "geometric growth" is officially defined, but there is absolutely a thing in math called "a geometric sequence", the elements of which can be defined with an exponential function.

But none of this changes the fact that almost nothing is exponential growth. (Or, thus, even geometric growth.)

Anyway, there is absolutely no way I have tested anything resembling N^2 features of my current project. The size of my tests is probably closer to O(N).

I'd argue that the use of abstractions and black boxes is designed to bring complexity to something like O(N log N); i.e. you're making a hierarchical tree of abstractions.

You get some cross terms, the tree isn't binary and may cross levels, but I think that's true to a first approximation.

Whether that means you really need O(N log N) tests or O(N^2) or O(2^N) tests I dunno.

Another way to do this is to look at the math. If you can have M bits of input, then you have at most 2^M possible test cases. To be 100% sure of tests, you have to test all 2^M test cases, and this is true regardless of how big N is. How far short of that are you willing to stop?

Anyway, my guess is that software complexity for N features is something like O(N log N), and # of bugs is something like O(LOC * something-that-grows-slower-than-log-N), but the problem is how much testing you need to find those bugs.

cbloom said...

Yeah I should definitely be saying "polynomial" instead of "geometric".

I think if you could measure your dev complexity vs. your feature count that would be a good measure of how good you are as a coder. That is, if your abstractions are perfect, you can do N features in NlogN dev cost, if your abstractions suck it takes N^2 dev cost.

Also, Sean, I don't think your data point is valid yet. Pretty much all decent devs can initially write N features in NlogN time, but most people underestimate the long term cost of features in bugs and support over the life of a product. Does that become N^2 over the long run? Hard to say. (for example, how does the # of stupid questions from clients scale? is that N^2 ?)

Thatcher Ulrich said...

Besides lines of code to implement N features I think there is another superlinear cost -- designing a reasonable UI. That can be pretty vicious. Dunno if it's actually exponential.

Sean said...

> Pretty much all decent devs can initially write N features in NlogN time

To clarify, I absolutely didn't mean NlogN time, just NlogN in size (or some complexity metric). And I agree that maintenance and support time should be measured when considering time (or equivalently cost).

Also, O(NlogN) size isn't from just the current project, that's my intuition drawn from a lot of projects. I only brought up my current project to point out that I doubt anybody does O(N^2) testing.

(stupid openID server be borked)