Comments on cbloom rants: 12-17-14 - PVQ Vector Distribution Note

Comment by Anonymous (2014-12-19):

<i>"For correlated values you should have more codebook vectors with neighboring values similar. e.g. more entries around {0, 2, 2, 0} and fewer around {2, 0, 0, 2}. The PVQ codebook assumes those are equally likely."</i>

You could apply the same argument to scalar quantization. You make up for it in the probability modeling in both cases (though what we're currently doing there could certainly be improved).

One of the things that takes a bit to wrap your head around is that we're not using VQ here for any of the traditional reasons people normally use VQ (the "memory advantage", etc.). We want to do gain-shape quantization entirely for perceptual reasons, and we use VQ for the shape almost entirely to avoid adding an extra (redundant) degree of freedom.
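To make the gain-shape idea concrete, here is a minimal Python/NumPy sketch of the scheme the comment describes: the gain is the vector's L2 norm, and the shape (direction) is coded with a PVQ codebook, i.e. integer vectors y with sum(|y_i|) == K. The greedy one-pulse-at-a-time search below is one standard way to do the codebook projection; the function names (`pvq_search`, `gain_shape_quantize`) and the choice of K are illustrative, not from the post.

```python
import numpy as np

def pvq_search(x, K):
    """Greedy PVQ shape search: build an integer vector y with
    sum(|y|) == K that maximizes the correlation <|x|, y> / ||y||_2,
    placing one pulse at a time."""
    ax = np.abs(x)
    y = np.zeros(len(x), dtype=np.int64)
    C, E = 0.0, 0.0  # running correlation <ax, y> and energy ||y||^2
    for _ in range(K):
        # Score each position as if one more pulse were added there:
        # new correlation (C + ax[i]) over new norm sqrt(E + 2*y[i] + 1).
        scores = (C + ax) / np.sqrt(E + 2.0 * y + 1.0)
        i = int(np.argmax(scores))
        y[i] += 1
        C += ax[i]
        E += 2.0 * (y[i] - 1) + 1.0  # (v+1)^2 - v^2 = 2v + 1
    return np.sign(x).astype(np.int64) * y

def gain_shape_quantize(x, K):
    """Split x into a scalar gain and a PVQ-coded shape."""
    gain = float(np.linalg.norm(x))
    if gain == 0.0:
        return 0.0, np.zeros(len(x), dtype=np.int64)
    return gain, pvq_search(x / gain, K)

def gain_shape_dequantize(gain, y):
    """Reconstruct: scale the unit-normalized codebook vector by the gain."""
    n = np.linalg.norm(y)
    return gain * y / n if n > 0 else np.zeros(len(y))

# A correlated input: the pulses land on the neighboring large entries.
x = np.array([0.1, 2.0, 1.9, 0.2])
gain, y = gain_shape_quantize(x, K=8)
xh = gain_shape_dequantize(gain, y)
```

Note how this sketch also illustrates the commenter's point: the search spends its K pulses wherever the shape has energy, with no preference between {0, 2, 2, 0}-like and {2, 0, 0, 2}-like patterns; any bias toward correlated patterns has to come from the entropy/probability model that codes y, not from the codebook itself.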