I read about a new way that is somewhat interesting : Probabilistic Sliding Windows. I saw this in Ryabko / Fionov where they call it "Imaginary Sliding Windows", but I like Probabilistic better.
This is for the case where you want to have a large window but don't want to store the whole thing. Recall a normal sliding window mean update is :
U8 window[WINDOW_SIZE];
int window_i = 0;
int accum = 0;

// add in head and remove tail :
accum += x_t - window[window_i];
window[window_i] = x_t;
window_i = (window_i + 1) & (WINDOW_SIZE-1);
mean = (accum/WINDOW_SIZE); // WINDOW_SIZE is power of 2 for speed of course

That's all well and good except when you want WINDOW_SIZE to be 16k or something. In that case you can use a histogram-based probabilistic sliding window. You keep a histogram and an accumulator :
int count[256] = { 0 };
int accum = 0;

At each step you add the new symbol and randomly remove something that you have :
// randomly remove something :
r = random_from_histogram( count );
count[r] --;
accum -= r;

// add new :
count[x_t] ++;
accum += x_t;
It's very simple and obvious - instead of knowing which symbol leaves the sliding window at the tail, we generate one randomly from the histogram that we have of symbols in the window.
Now, random_from_histogram is a bit of a problem. If you just do a flat random draw :
// randomly remove something :
for(;;)
{
    int r = randmod(256);
    if ( count[r] > 0 )
    {
        count[r] --;
        accum -= r;
        break;
    }
}

then you will screw up the statistics in a funny way; you will remove unlikely symbols too often, so the window will skew towards more likely symbols. Maybe you could compute exactly what that skewing is and compensate for it somehow. To do an unbiased draw you basically have to do an arithmetic decode. You generate a random in [0,WINDOW_SIZE) (the total count in the histogram) and then look it up in the cumulative counts of the histogram.
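Something like this would do the unbiased draw (a minimal sketch of my own, not code from the paper; it reuses randmod from above and assumes the window is already full, so the counts sum to WINDOW_SIZE) :

int random_from_histogram( const int count[256] )
{
    // pick r with probability count[r] / WINDOW_SIZE ;
    // this is just an arithmetic decode against the cumulative counts
    int target = randmod( WINDOW_SIZE ); // assumes counts sum to WINDOW_SIZE
    int r = 0;
    for(;;)
    {
        target -= count[r];
        if ( target < 0 ) return r;
        r++;
    }
}

The linear walk over the 256 bins is the slow part; keeping the counts in a Fenwick tree would make the cumulative lookup logarithmic, which is the same structure an arithmetic coder would use.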
Obviously this method is not fast and is probably useless for compression, but it is an interesting idea. More generally, probabilistic approximate statistics updates are an interesting area. In fact we do this already quite a lot in all modern compressors (for example there's the issue of hash table collisions). I know some of the PAQs also do probabilistic state machine updates for frequency counts.
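To make that concrete, here is the simplest possible version of a probabilistic frequency update (my illustration of the general idea, not what any PAQ actually does) : instead of adding 1 to a counter per symbol, you add 1 with some probability, so a small counter can stand in for a much larger true count.

// keep an approximate frequency in one byte : increment with
// probability 1/16, so the byte tracks roughly (true count / 16)
void prob_count_update( U8 * count )
{
    if ( randmod(16) == 0 && *count < 255 )
        *count += 1;
}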
There's also a whole field of research on this topic that I'd never seen before. You can find it by searching for "probabilistic quantiles" or "approximate quantiles". See for example : "Approximate Counts and Quantiles over Sliding Windows" or "On Probabilistic Properties of Conditional Medians and Quantiles" (PDF). This stuff could be interesting for compression, because things like tracking the median of a sliding window or the N most common symbols in a window are pretty useful for compressors.
For example, what I would really like is a fast probabilistic way of keeping the top 8 most common symbols in the last 16k, and semi-accurately counting the frequency of each and the count of "not top 8".
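For what it's worth, one possible direction (my sketch, not something from the post or the papers above) is a Space-Saving style frequent-items table of 8 entries. As written it counts over the whole stream; to approximate "the last 16k" the counts would also need to be aged somehow, e.g. with the probabilistic removal idea above.

#define TOP_K 8

typedef struct { int sym; int count; } TopEntry;

void top_update( TopEntry top[TOP_K], int x_t )
{
    // if x_t is already tracked, bump it ; otherwise evict the entry
    // with the smallest count and take over its count (that inherited
    // count is the error bound on the new entry)
    int min_i = 0;
    for ( int i = 0; i < TOP_K; i++ )
    {
        if ( top[i].sym == x_t ) { top[i].count++; return; }
        if ( top[i].count < top[min_i].count ) min_i = i;
    }
    top[min_i].sym = x_t;
    top[min_i].count++;
}

This gives approximate counts for the frequent symbols; getting the "not top 8" mass semi-accurately would take a bit more bookkeeping, e.g. recording the inherited count of each entry as an error term.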
That's bloody marvellous! A whole new area of algorithms to look into. Thank you.