Comments on cbloom rants: 05-15-09 - Trellis Quantization

cbloom (2010-04-25):

See also:

http://cbloomrants.blogspot.com/2010/01/01-14-10-small-note-on-trellis.html

Roger (2010-04-24):

Thank you for the great explanation!

ryg (2009-05-19):

Okay, sorry, should've checked that. The part about all the other algorithms being basically the same still stands, though.

Anonymous (2009-05-18):

<I>sure sounds like someone rediscovered A* there, which they could've had 40 years earlier.</I>

Maybe not quite as dumb as you think; from the intro to 'A Fast Maximum-Likelihood Decoder for Convolutional Codes':

<I>The A* decoder combines the reliability and performance of the Viterbi algorithm while running as efficiently as a sequential decoder when the SNR is high. However, previous descriptions of A* decoders do not apply to continuous streams of data, and they do not address certain implementation problems that are critical to the practicality of the algorithm. Specifically, under high noise conditions the implementations detailed in the literature lead to a running time asymptotically even worse than Viterbi's.

In this paper, we extend the A* approach to apply to continuous streams of data, and we solve the implementation problems. Specifically, we introduce the lazy Viterbi decoder...</I>

ryg (2009-05-18):

Oh, and about the Viterbi algorithm: I had the exact same problem when I was reading up on this some time ago. I don't get why there have to be <A HREF="http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" REL="nofollow">twenty</A> <A HREF="http://en.wikipedia.org/wiki/Dynamic_time_warping" REL="nofollow">different</A> <A HREF="http://en.wikipedia.org/wiki/Longest_common_subsequence_problem" REL="nofollow">names</A> <A HREF="http://en.wikipedia.org/wiki/Viterbi_algorithm" REL="nofollow">for</A> <A HREF="http://en.wikipedia.org/wiki/Levenshtein_distance" REL="nofollow">the</A> <A HREF="http://en.wikipedia.org/wiki/Chain_matrix_multiplication" REL="nofollow">same</A> <A HREF="http://en.wikipedia.org/wiki/Knapsack_problem" REL="nofollow">algorithm</A> (yeah, exaggerating a bit here). But seriously, all applications of DP for combinatorial optimization look exactly the same: just say what the subproblems are and how to combine them (and, if it's a fancy application, how to enumerate them), but don't introduce tons of terminology that only makes things harder to understand without adding anything substantial. This is particularly obvious with that Viterbi mess. On the Wikipedia page, they mention that "An alternate algorithm, the Lazy Viterbi algorithm, has been proposed recently. This works by not expanding any nodes until it really needs to, and usually manages to get away with doing a lot less work (in software) than the ordinary Viterbi algorithm for the same result" - sure sounds like someone rediscovered A* there, which they could've had <B>40 years</B> earlier if they hadn't insisted on their weird terminology.

I wonder how the Trellis Quant<->Viterbi association came up in the first place; my best guess is that the first papers to use this approach were written by EE guys, who were probably more familiar with the Viterbi algorithm (which is used for decoding in all kinds of comm systems) than with Dijkstra (or A*).

ryg (2009-05-15):

There's some work on removing at least a few of the heuristics from the encoding process: <A HREF="http://ita.ucsd.edu/workshop/06/papers/185.pdf" REL="nofollow">this paper</A>, for example. There's very little in the way of original ideas there: they first formulate the encoding process as a minimization problem in the obvious fashion, then jump directly to a simple iterative alternating minimization strategy that's definitely not fit to provide bounds on "the best rate distortion performance that an H.264 encoder can possibly achieve", as they claim in the abstract. Oh, and they use "Soft Decision Quantization" to mean Trellis Quantization. The good thing about confusing terminologies is that there are so many to choose from!

The main reason the paper is interesting is that it gives some actual numbers for how much gain can be expected from considering the coupling between motion vectors and quantized coefficients during RD optimization - namely, up to 25% rate reduction at the same PSNR compared to the original H.264 reference encoder (when using CAVLC, i.e. without the arithmetic coder). And that's <B>still</B> using iterative optimization that's bound to get stuck in local minima, and still without the arithmetic coder, so there's no adaptation to the source statistics.
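[Editor's note] ryg's "it's all the same DP" point is easy to make concrete: Viterbi is just a shortest-path DP over a trellis - state the subproblem ("cheapest path ending in state s at step t") and how to combine ("minimize over predecessors"). A minimal Python sketch; the function name and the toy costs are made up for illustration, not taken from any of the linked pages or the paper:

```python
INF = float("inf")

def viterbi_shortest_path(n_states, n_steps, node_cost, edge_cost):
    """Cheapest path through a trellis.

    Subproblem: cost[s] = best cost of any path ending in state s at
    the current step; combine by minimizing over predecessor states.
    node_cost(t, s) and edge_cost(t, prev, s) are arbitrary callables
    (return INF for edges that don't exist).
    """
    cost = [node_cost(0, s) for s in range(n_states)]
    back = []  # back[t-1][s] = best predecessor of state s at step t
    for t in range(1, n_steps):
        new_cost = [INF] * n_states
        pred = [0] * n_states
        for s in range(n_states):
            for p in range(n_states):
                c = cost[p] + edge_cost(t, p, s) + node_cost(t, s)
                if c < new_cost[s]:
                    new_cost[s], pred[s] = c, p
        cost = new_cost
        back.append(pred)
    # Backtrack from the cheapest final state.
    s = min(range(n_states), key=lambda q: cost[q])
    best, path = cost[s], [s]
    for pred in reversed(back):
        s = pred[s]
        path.append(s)
    path.reverse()
    return best, path

# Toy trellis: 2 states, 3 steps; staying in the same state is free,
# switching states costs 2.
per_step_cost = [[1, 4], [4, 1], [1, 4]]
best, path = viterbi_shortest_path(
    2, 3,
    node_cost=lambda t, s: per_step_cost[t][s],
    edge_cost=lambda t, p, s: 0 if p == s else 2)
print(best, path)  # -> 6 [0, 0, 0]
```

Trellis quantization fits the same skeleton: the states are the candidate quantizer decisions per coefficient, node_cost plays the role of distortion, and edge_cost the context-dependent rate - which is the thread's whole point about the terminology being interchangeable.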