Comments on cbloom rants: 12-15-08 - Denoising

Ethatron (2008-12-21):

Found some nice paper on the topic:

http://www.busim.ee.boun.edu.tr/~mihcak/publications/spl99.pdf

cbloom (2008-12-16):

"Noise is an artifact of inadequate super-sampling; the best denoiser is always to downscale the image by some big factor X."

No, not at all, not by my definition. That is certainly an okay way to reduce noise, but it also throws away tons of good information. I want algorithms that preserve as much of the original information as possible. (In fact, it is not even the best way to remove noise even if you are willing to give up lots of information.)

Down-sampling with per-pixel weights, where each weight is proportional to the confidence in that pixel's correctness, is a pretty good denoiser if you are willing to give up some spatial resolution (e.g. taking a noisy 15 MP camera picture down to 10 MP).

"Lossy image compressors and denoising algorithms are just two sides of the same medal."

Yes, they are very similar. In general, all of these things are just trying to create an algorithmic model for the theoretical "image source".
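The confidence-weighted down-sampling idea above can be sketched in a few lines. This is a minimal illustration, not cbloom's actual algorithm: it assumes a per-pixel confidence map in [0, 1] is already available (how you estimate confidence is the hard part), and it uses simple block averaging for the downscale.

```python
import numpy as np

def weighted_downsample(img, conf, factor):
    """Downscale a 2D image by an integer factor, weighting each
    source pixel by a confidence value in [0, 1].

    Trusted pixels contribute more to each output pixel; a plain
    box filter is the special case of conf == 1 everywhere.
    """
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    bi = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    bc = conf[:h, :w].reshape(h // factor, factor, w // factor, factor)
    num = (bi * bc).sum(axis=(1, 3))
    den = bc.sum(axis=(1, 3))
    # Fall back to an unweighted average where a block's confidence is all zero.
    plain = bi.mean(axis=(1, 3))
    return np.where(den > 0, num / np.maximum(den, 1e-12), plain)
```

With uniform confidence this reduces to ordinary box-filter downscaling; giving a suspect pixel confidence 0 simply excludes it from its block's average.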
However, there are some major differences, as I tried to write about in my post and have also talked about in the past regarding super-resolution.

Part of the key difference comes from how the quality of the result is measured.

If you use normal image-compression techniques for super-resolution or denoising, you wind up making images that look very smoothed out. That is because the compressor wants to pick the lowest-entropy prediction, which lies in the middle of the probability bump. That is not the same goal as a denoiser or super-resolution algorithm, which wants to pick the single maximum-likelihood value.

This is a lot like the difference between median and average.

Ethatron (2008-12-16):

Noise is an artifact of inadequate super-sampling; the best denoiser is always to downscale the image by some big factor X.

Fighting noise is fighting too few camera shots, too few stochastic render samples, too little information.

When NASA made the Ultra Deep Field, they actually sampled each pixel 170 times, not spatially in that case (because that would not have solved the problem) but through repetition and averaging.

Indeed, that would be one easy approach to avoiding noise in the first place in digital cameras, if they could just take multiple shots fast enough.

On the topic of denoising after the damage has been done: I think lossy image compressors and denoising algorithms are just two sides of the same medal (or model :). Obviously any good denoising algorithm is a good predictor for lossy image compression (e.g. neural networks), and vice versa. Both are, and must be, imperfect.
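The repetition-and-averaging point can be checked numerically: averaging N independent exposures of the same scene shrinks the noise standard deviation by a factor of sqrt(N). A small simulation with synthetic pixel data (not Hubble's actual pipeline; the signal and noise levels here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0   # "true" brightness of each pixel (arbitrary units)
sigma = 10.0          # per-shot noise standard deviation
n_shots = 170         # the Ultra Deep Field figure quoted above

# Simulate many pixels, each observed n_shots times with additive noise.
shots = true_signal + rng.normal(0.0, sigma, size=(n_shots, 20_000))
averaged = shots.mean(axis=0)

# Residual noise after averaging should be about sigma / sqrt(n_shots).
print(sigma / np.sqrt(n_shots))  # predicted residual noise, ~0.77
print(averaged.std())            # measured residual noise, close to prediction
```

With 170 shots the per-pixel noise drops by a factor of about 13, which is why repetition works so well when you can afford it.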