Comments on cbloom rants: 02-25-14 - ANS Applications

Comment by deftware (2015-04-04):

My background is in game development, where Huffman is widely accepted as the gold standard for compressing game information before letting it loose across the network/internet connection to the other machines participating in a game. Personally, I came up with my own Huffman/range-coder hybrid that encodes/decodes like Huffman but generates bitcodes using a binary search on the space of symbol probabilities. Maybe there's something to that worth looking into? Just me fumbling around with little math background.

I've been studying more about range/arithmetic coders in an attempt to get closer to entropy with my little bastardized hybrid, when I discovered ANS/FSE and the comments about it being a viable alternative, with better compression to boot.

I've been struggling to grasp the core concept of ANS/FSE, and after staying up into the wee morning hours reading Jarek's paper and Yann's blog, and the exchange of ideas on the linked forums, I finally had a visualization that made sense, thanks to your blog!

My application benefits most from speed and compression (figures), and expenditure of memory is virtually a non-issue. I will be performing some further fumbling in an attempt to get my own FSE up and running.

The typical approach with networked multiplayer games is to utilize a static Huffman encoding, using a fixed, pre-determined/measured table of byte-value probabilities.
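As a point of reference for the static-table approach described above, here is a minimal sketch (in Python, with illustrative names not taken from the comment) of canonical Huffman code construction from a fixed frequency table. Because the table is pre-agreed, both endpoints derive identical codes and nothing about the tree needs to be transmitted.

```python
import heapq

def huffman_code_lengths(freqs):
    """Compute Huffman code lengths from a fixed symbol->frequency table."""
    # Heap entries: (weight, tiebreak, {symbol: depth_so_far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def canonical_codes(lengths):
    """Assign canonical bitstrings: symbols sorted by (length, symbol)
    receive consecutive codes, so the table alone reproduces the codes."""
    codes, code, prev_len = {}, 0, 0
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)
        codes[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return codes
```

With a fixed table such as `{'a': 5, 'b': 2, 'c': 1, 'd': 1}` this yields the prefix-free set `a=0, b=10, c=110, d=111`; the longer, rarer symbols get the longer codes.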
My approach with the binary-search-generated bitcodes operates the same way, but with a different means of extracting bitcodes from symbol probabilities. My hope (because a lack of math background leaves you with only blind hope) was to get the best of both worlds, Huffman and range coding. Instead I seem to have gotten the worst, and then some.

If I can get Huffman speeds with range-coding compression, that is a definite win. From what I've been able to glean, the table-based tANS is what I should be aiming for if speed is a top priority, because pixels need cycles too!

Any input for a layman is appreciated! I have to thank you for taking the time to operate your blog; it is an invaluable resource. Thanks!

Comment by Fabian 'ryg' Giesen (2014-02-25):

Might be worthwhile to figure out if there's a nice way to get universal ANS codes with low redundancy.

E.g. uniform is straightforward from the construction, but already quite useful. An efficient closed-form solution for geometric with parametric p (for some range) would be really handy; you'd need an escape once your ranges get too small, but that's not too horrible.

A lot of the binary models in e.g. H.264 just end up being used for mostly geometrically distributed run lengths.
A good parametric "fractional Golomb" coder could replace that usage (while still adaptively estimating the expected run length) without necessarily giving up anything.

Comment by Anonymous (2014-02-25):

Fast arithmetic "range" coders (where you only periodically update your symbol counts) can all be ANS now.
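To make the last point concrete, here is a minimal rANS sketch in Python (names are illustrative, not from the thread). The state is kept as a Python big integer, so the renormalization and stream output a production coder needs are omitted; `freq`/`cum` play the role of the periodically updated symbol counts, and `total` is their sum (a power of two in practice, so the modulo becomes a mask).

```python
def rans_encode(symbols, freq, cum, total):
    """Encode with rANS. Encoding runs over the message in reverse so
    that decoding pops symbols back out in forward order."""
    x = 1
    for s in reversed(symbols):
        # Classic rANS step: C(x, s) = floor(x/f_s)*M + (x mod f_s) + cum_s
        x = (x // freq[s]) * total + (x % freq[s]) + cum[s]
    return x

def rans_decode(x, n, freq, cum, total):
    """Recover n symbols from the final state by inverting each step."""
    by_cum = sorted(cum, key=cum.get)  # symbols in cumulative order
    out = []
    for _ in range(n):
        slot = x % total
        # The decoded symbol is the one whose [cum, cum+freq) range holds slot.
        s = next(t for t in reversed(by_cum) if cum[t] <= slot)
        out.append(s)
        x = freq[s] * (x // total) + slot - cum[s]
    return out
```

Each encode step is a bijection on the state, so decoding inverts it exactly; swapping in fresh `freq`/`cum` tables between blocks is what the "periodically update your symbol counts" scheme amounts to.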