Review : if you want to approximate f(t) by Sum_i P_i * synthesis(t-i) , you can find the P's by : P_i = Convolve{ f(t) , analysis(t-i) }
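A tiny concrete sketch of that formula (mine, not from the post), using the simplest matched pair: for box synthesis the analysis filter is also a box (as noted below), so P_i is just the average of f over the unit cell around i. The test signal and the use of numpy are my choices for illustration:

import numpy as np

# approximate f(t) by Sum_i P_i * box(t - i), where box is the unit-width box.
# For box synthesis the matched analysis filter is also a box, so
# P_i = Convolve{ f(t) , analysis(t-i) } is just the mean of f over cell i.
dt = 0.001
t = np.arange(0.5, 7.5, dt)
f = np.sin(t) + 0.3 * t                      # arbitrary test signal

centers = range(1, 8)
P = [f[(t >= i - 0.5) & (t < i + 0.5)].mean() for i in centers]

# reconstruct: the piecewise-constant approximation Sum_i P_i * box(t - i)
approx = np.zeros_like(f)
for i, p in zip(centers, P):
    approx[(t >= i - 0.5) & (t < i + 0.5)] = p

print(np.sqrt(np.mean((approx - f) ** 2)))   # rms error of the approximation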
A note on the method :

The H overlap matrix was computed on a 9x9 domain because my matrix inverse is ungodly slow. For sanity checking I compared to 11x11 a few times and found the difference to be small, for example :

linear filter invH 9x9 : 0.0038,-0.0189,0.0717,-0.2679,1.0000,-0.2679,0.0717,-0.0189,0.0038
linear filter invH 11x11 : -0.0010,0.0051,-0.0192,0.0718,-0.2679,1.0000,-0.2679,0.0718,-0.0192,0.0051,-0.0010

(Ideally I would use a very large matrix and then look at the middle row, because that is where the boundary has the least effect.) For real use in a high precision environment you would have to take the domain boundary more seriously.

Also, I did something stupid and printed out the invH rows with the maximum value scaled to 1.0 ; the unscaled values for linear are :

-0.0018,0.0088,-0.0333,0.1243,-0.4641,1.7320,-0.4641,0.1243,-0.0333,0.0088,-0.0018

but I'm not gonna redo the output to fix that, so the numbers below have 1.0 in the middle.
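To make the note concrete, here's a minimal numpy sketch of the construction as I read it: sample the synthesis filter densely, build the 9x9 overlap matrix H, invert it, and take the middle row. The hat/linear filter, the dense-sampling integration step, and the scaling are my choices; this is not the post's actual code:

import numpy as np

def hat(x):
    # the "linear" synthesis filter: unit triangle / hat function
    return np.maximum(0.0, 1.0 - np.abs(x))

dt = 0.001
t = np.arange(-8.0, 8.0, dt)          # dense axis for numerical integration
N = 9                                 # 9x9 domain, as in the note above
shifts = np.arange(N) - N // 2        # integer shifts -4 .. 4

# overlap matrix : H[i,j] = Integral{ synthesis(t-i) * synthesis(t-j) dt }
B = np.array([hat(t - s) for s in shifts])
H = (B @ B.T) * dt

invH = np.linalg.inv(H)
mid = invH[N // 2]

print(mid)                            # unscaled: middle value ~1.732 for linear
print(mid / np.abs(mid).max())        # scaled so the max is 1.0, like the rows below
# should come out close to 0.0038,-0.0189,0.0717,-0.2679,1.0000,-0.2679,...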
For box synthesis, analysis is box.
linear : invH middle row : = 0.0038,-0.0189,0.0717,-0.2679,1.0000,-0.2679,0.0717,-0.0189,0.0038,
(note: we've seen this linear analysis filter before, when we talked about how to find the optimum image such that when it's bilinear interpolated you match some original as well as possible)
quadratic invH middle row : = 0.0213,-0.0800,0.1989,-0.4640,1.0000,-0.4640,0.1989,-0.0800,0.0213,
gauss-unity : invH middle row : = 0.0134,-0.0545,0.1547,-0.4162,1.0000,-0.4162,0.1547,-0.0545,0.0134,
note : "unity" means no window, but actually it's a rectangular window with width 5 ; the gaussian has sdev of 0.5
sinc half-width of 20 :
sinc-unity : invH middle row : = 0.0129,-0.0123,0.0118,-0.0115,1.0000,-0.0115,0.0118,-0.0123,0.0129,
note : obviously sinc is its own analysis ; however, this falls apart very quickly when you window the sinc at all, or even just cut it off when the values get tiny :
sinc half-width of 8 :
sinc-unity : invH middle row : = 0.0935,-0.0467,0.0380,-0.0354,1.0000,-0.0354,0.0380,-0.0467,0.0935,
lanczos6 : invH middle row : = 0.0122,-0.0481,0.1016,-0.1408,1.0000,-0.1408,0.1016,-0.0481,0.0122,
lanczos4 : invH middle row : = 0.0050,-0.0215,0.0738,-0.1735,1.0000,-0.1735,0.0738,-0.0215,0.0050,
Oh, also, note to self :
If you print URLs to the VC debug window they are clickable with ctrl-shift, and it actually uses a nice simple internal web viewer; it doesn't launch IE or any such shite. Nice way to view my charts during testing.
ADDENDUM : Deja vu. Rather than doing big matrix inversions, you can get these same results using Fourier transforms and the convolution theorem.
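If I'm reading the addendum right: away from the boundary H is Toeplitz, with entries given by the autocorrelation of the synthesis filter at integer lags, so you can approximate it as circulant and invert it with a pointwise divide in the frequency domain. A rough sketch of that under the same hat-filter assumption (the period of 64 is an arbitrary choice of mine):

import numpy as np

# autocorrelation of the linear/hat filter at integer lags:
# Integral{ hat(t) * hat(t-k) dt } = 2/3 at k=0, 1/6 at k=+-1, 0 otherwise.
P = 64                                   # period of the circulant approximation
h = np.zeros(P)
h[0], h[1], h[-1] = 2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0

# convolution theorem: inverting the circulant overlap operator is a
# pointwise reciprocal in the frequency domain
inv_row = np.real(np.fft.ifft(1.0 / np.fft.fft(h)))

row = np.fft.fftshift(inv_row)           # put lag 0 in the middle
mid = P // 2
print(row[mid - 4 : mid + 5] / np.abs(row).max())
# should line up with the matrix-inverse rows above
# (the small 9x9 domain differs slightly because of its boundary)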
cbloom rants 06-16-09 - Inverse Box Sampling
cbloom rants 06-17-09 - Inverse Box Sampling - Part 1.5
cbloom rants 06-17-09 - Inverse Box Sampling - Part 2
In the previous post you show how to construct an optimal grid of discrete samples for a source signal when those samples will be bilinear filtered back into a continuous signal:
(signal) <analysis filter> (optimal samples) <matching synthesis filter> (good approximation of the signal)
What if this analysis was applied to a 3D rendering pipeline, represented as a series of filters?
(source texture signal) <prefilter> (discrete mip chain) <GPU bilinear/trilinear/anisotropic filter> (screen pixels) <screen synthesis filter> (light)
1. Is this at all sensible? I have no background in signal processing.
2. Given a synthesis filter for the screen, does the ideal prefilter exist and have any interesting properties?
3. Could a shader running a customized filter kernel do a better job matching the light output to a source signal? Based on your posts, I would think the mapping from texture samples to screen pixels could be made optimal by reconstructing with the prefilter's synthesis function and then convolving with the screen's analysis function. I doubt this will be equivalent to any GPU filtering mode.
The short answer is yes, sort of.
If you knew the entire pipeline from your pixel to the output, you could figure out the ideal reconstruction filter, and what reconstruction was actually being done, and then solve for how to compensate for that.
In practice there are a lot of problems with doing that at the texture level. The output filter depends on where/how the texture is used, so it can't be done statically; you would have to tell the texture fetch shader how and where the pixel was to be used. It also couldn't compensate for anything that happens across triangle edges.
The other option is to run a full screen post-process, and that's sort of what all the MLAA filter type of stuff is doing - compensating for bad filtering in the rendering pipeline to try to make the output more like what it should be.
For "isotropic" (i.e. no nonuniform scales or perspective transforms) 2D-only you're restricted enough to do some general prefiltering with predictable results. Not very helpful for general 3D rendering, but useful for font rendering and things like that.
ReplyDelete"The other option is to run a full screen post-process, and that's sort of what all the MLAA filter type of stuff is doing - compensating for bad filtering in the rendering pipeline to try to make the output more like what it should be."
Hmm not really. Multisampling and signal processing based approaches are kind of orthogonal to MLAA and other post-filtering approaches, because really they have very different goals.
The sampling view is "physical" in the sense that there's an objective ground truth and we're trying to reproduce it as closely as possible.
Postfilters don't care about that at all; their POV is that some bad sampling approximations create objectionable visual artifacts, and they remove those artifacts. They don't have any underlying model of a reality they're trying to map to the closest representable approximation, but they do have an underlying appearance model that tells them what looks bad and how to fix it up.
That may make the image closer to what it should be, or move it further away - the post-filters don't care. They're not trying to move the image closer to the ground truth, they just have some "reality-independent" cost function for images that they're trying to minimize.
To give an example, say you're looking through a mosquito net. If you do this IRL, you see a clear moire effect. A perfect "physical" renderer would reproduce this exactly. A post-filter would probably eliminate the effect, mistaking it for a sampling artifact, and opt to just uniformly darken the whole area.
"Multisampling and signal processing based approaches are kind of orthogonal to MLAA and other post-filtering approaches, because really they have very different goals."
I don't agree with that / that's not true the way I'm using those words.
I see what you're saying and I agree, but I think the distinction between "signal processing" and more hacky techniques is artificial.
The whole point of signal processing for me is to make something that looks good. I don't constrain myself to only linear filters that are interpolating or whatever.
So when I say "do some filtering on the final frame" I am including techniques such as MLAA. I also include things like bilateral filters, "augural zooming", and other types of adaptive filters.
"I see what you're saying and I agree, but I think the distinction between 'signal processing' and more hacky techniques is artificial."
Poor choice of wording on my part; I certainly don't mean to suggest that linear filters are the only way to go.
The distinction I mean is between what is called "unbiased" and "biased" methods in physically-based rendering. Unbiased renderers can give you incrementally better error bounds (with high probability) the more computing resources (time, memory) you're willing to throw at the problem - e.g. all the various Path Tracing variants. Biased methods *can* give you better results for more work, but they don't come with any convergence guarantee - the example here would be Photon Mapping and friends.
The same distinction can be made for various filtering/postprocessing operations, and that's not just a distinction between linear and non-linear. Most linear approaches happen to be unbiased, but there's also unbiased nonlinear/adaptive filters (e.g. most anisotropic filtering variants).
Biased/unbiased is not a "value judgment" - they both have their advantages and disadvantages; that's precisely why I think the distinction is useful - it's not very enlightening to have two classes of solutions when one of the two is obviously superior in every important way.