In the most extreme case, a sparse context might have only 1 previous occurrence. Let's take the binary case for clarity, so we have {n0=1,n1=0} or {n0=0,n1=1} in a single context.
There are two issues with a sparse context:
1. What should our actual estimate for P0 (the probability of a zero) be? There's a lot of theory about ideal estimators, and you can go read papers on the unbiased estimator or the KT estimator or whatever, which will give you formulas like

P0 = (n0 + k) / (n0 + n1 + 2k)

with k = 0 or 1/2 or whatever depending on the flavor of the month (there's a small sketch of these right after this list). But this stuff is all useless in the sparse case. At {n0=1,n1=0} the actual probability of a 0 might well be 99% (or it could be 10%), and these estimators won't help you with that.
2. Making an estimate of the confidence you have in the sparse context. That is, deciding whether to code from it at all, or how to weight it vs other contexts. Again, the important thing is that the statistics in the context itself do not really help you solve this problem.
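To make the formulas in point 1 concrete, here's a tiny sketch of the add-k estimators (k = 1 is the Laplace estimator, k = 1/2 is KT); the function name is just for illustration:

// Sketch of the add-k estimators mentioned in point 1.
// k = 1 gives the Laplace estimator, k = 0.5 gives KT.
#include <cstdio>

static double estimate_p0(int n0, int n1, double k)
{
    // P0 = (n0 + k) / (n0 + n1 + 2k)
    return (n0 + k) / (n0 + n1 + 2.0 * k);
}

int main()
{
    // the sparse case {n0=1,n1=0}
    printf("Laplace : %.3f\n", estimate_p0(1, 0, 1.0)); // 0.667
    printf("KT      : %.3f\n", estimate_p0(1, 0, 0.5)); // 0.750
    return 0;
}

Both numbers are perfectly reasonable priors, but they come entirely from the formula, not from anything you actually know about this context, which is exactly the complaint above.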
Now let me take a second to convince you how important this problem is.
In classical literature this issue is dismissed because it is only a matter of "initialization" and the early phase; over infinite time all your contexts become rich, so asymptotically it doesn't affect compression ratio at all. Not only is that dismissive of practical finite-size data, but it's actually wrong even for very large data. In fact sparse contexts are not just an issue for the early "learning" phase of a compressor.
The reason is that a good compressor should be in the "learning" phase *always*. Basically, to be optimal you want to be as close to the edge of sparse contexts as possible without stepping over the cliff into your statistics becoming unreliable. There are two reasons why.
1. Model growth and the edges of the model. The larger your context, or the more context information you can use, the better your prediction will be. As you get more data, the small contexts might become non-sparse, but that just means you should be pushing out into longer contexts. You can think of the contexts as tree structured (even if they actually aren't): around the root you will have dense information; the further out you go toward the leaves, the sparser they get.
For example, PPM* taught us that using the longest context possible is helpful. Quite often this context has only occurred once. As you get further into the file, your longest possible context simply gets longer, it doesn't get less sparse.
2. Locality and transients. Real files are not stationary and it behooves you to kill old information. Furthermore, even in dense contexts there is a subset of the most recent information which is crucial and sparse.
For example, in some context you've seen 50 A's and 20 B's. Now you see two C's. Suddenly you are in a sparse situation again: you have an insufficient sample. Should I still use those 50 A's I saw, or am I now 99% likely to see more C's? I don't have enough statistics in this context to tell, so I'm effectively sparse again (see the sketch below).
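Here's the sketch promised above. One common hack (not the only one, and not what any particular compressor necessarily does) is to age the old counts when a novel symbol shows up, so recent evidence isn't drowned out; the halving rule and the add-one smoothing here are made up for illustration:

#include <map>

// Toy symbol-counting context. When a never-before-seen symbol arrives,
// halve all the old counts so that recent (possibly transient) behavior
// can take over quickly. The halving policy is illustrative, not canonical.
struct Context
{
    std::map<char, int> counts;

    void update(char sym)
    {
        int & c = counts[sym];
        if (c == 0)
        {
            // novel symbol : decay everything we thought we knew
            for (auto & kv : counts)
                kv.second >>= 1;
        }
        c++;
    }

    double prob(char sym) const
    {
        int total = 0;
        for (const auto & kv : counts)
            total += kv.second;
        auto it = counts.find(sym);
        int n = (it == counts.end()) ? 0 : it->second;
        // simple add-one smoothing over the alphabet seen so far
        return (n + 1.0) / (total + (double)counts.size() + 1.0);
    }
};

In the example above this leaves the context at roughly {A:12, B:10, C:2} after the two C's (the first B and the first C each trigger a halving), so the C's already carry real weight. Whether halving (versus some gentler decay) is right depends on how nonstationary your data really is; the point is just that the context's own counts can't make that decision for you.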
There are various attempts at addressing this, but like all the problems in this series, there's been no definitive attack on it.
2 comments:
http://encode.ru/threads/1128-Probability-estimation-for-near-empty-contexts
Indirect context models like in PAQ or ZPAQ handle this by looking at past instances of {n0=1, n1=0} in other contexts and counting how many times a 0 or 1 occurred next.
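A tiny sketch of that idea, with an adaptive probability per history state standing in for the next-bit counts the comment describes; the state packing and the adaptation shift are arbitrary choices, not PAQ's or ZPAQ's actual ones:

#include <cstdint>
#include <vector>

// Indirect model sketch : instead of turning {n0,n1} into a probability
// with a formula, look up what actually happened the last times any
// context was in this exact bit-history state.
struct IndirectModel
{
    std::vector<uint16_t> p; // P(bit==1) in [0,65535], one entry per state

    explicit IndirectModel(size_t num_states) : p(num_states, 32768) {}

    int predict(uint32_t state) const { return p[state]; }

    void update(uint32_t state, int bit)
    {
        // nudge the shared prediction for this state toward the observed bit
        int target = bit ? 65535 : 0;
        p[state] += (target - (int)p[state]) >> 5;
    }
};

The state here is whatever you pack the bit history into, e.g. a {n0, n1, most recent bit} byte as described below; all the contexts that have only ever seen a single 0 share one entry, so the model learns empirically how trustworthy {n0=1, n1=0} actually is for this kind of data.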
To handle nonstationary sources, the counts are bounded so that a bit history can be represented by 1 byte. You bound n0*n1 < ~30 and represent only the most recent bit. If you go outside this bound then scale n0 and n1 proportionally. This tends to get rid of old statistics while keeping enough to make good predictions.
Keeping only the most recent bit in your history means that the sequences 00111 and 11001 would make the same prediction but that 11100 would make a different prediction, even though you still have {n0=2, n1=3}.
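A rough sketch of that bounded counter, with the fields kept separate instead of packed into one byte; the simultaneous halving is one simple way to scale proportionally and isn't claimed to match ZPAQ's actual state table:

// Sketch of the bounded bit history described above. Real implementations
// pack this into a single byte via a precomputed state table; the fields
// are kept separate here for clarity.
struct BitHistory
{
    int n0 = 0, n1 = 0; // bounded counts of 0s and 1s seen
    int last = 0;       // most recent bit

    void update(int bit)
    {
        if (bit) n1++; else n0++;
        last = bit;

        // keep n0*n1 under ~30 ; when it overflows, scale both counts
        // down together, which discards old statistics in favor of new
        while (n0 * n1 > 30)
        {
            n0 = (n0 + 1) / 2;
            n1 = (n1 + 1) / 2;
        }
    }
};

For the short sequences in the example the bound never triggers: 00111 and 11001 both end in the state {n0=2, n1=3, last=1}, while 11100 ends in {n0=2, n1=3, last=0}, which is exactly why the first two predict alike and the third doesn't.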