Well, I finally got through it. Their paper is pretty ugly. Let me briefly explain their method :
They, like my old OBB stuff, take heavy advantage of the fast rotating-calipers method to find the optimal rectangle of a convex hull in 2d (it's O(n)). Also, finding the convex hull is O(nlogn). What that means is, given one axis of an OBB, you can find the optimal other two axes in O(nlogn) : project the points onto the plane perpendicular to that axis, take the 2d convex hull of the projection, and caliper-fit the best rectangle. So the problem just comes down to finding one of the optimal axes.
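To make that concrete, here's a little sketch of the 2d rectangle fit. The names (V2, Rect2, MinAreaRect) are just made up for illustration, and the 2d hull of the projected points is assumed to be computed already (e.g. by monotone chain). I use the simple quadratic edge scan instead of true rotating calipers - the min-area rectangle always has one side flush with a hull edge, so scanning the edges works, and the hull is small anyway :

    // Sketch of the 2d part of the "one axis fixed" fit : given the 2d convex
    // hull of the points projected onto the plane perpendicular to the fixed
    // axis, find the minimum-area enclosing rectangle. The min rectangle has
    // one side flush with a hull edge, so we just scan the edges (O(h^2) here;
    // true rotating calipers does the same in O(h)).
    #include <vector>
    #include <cmath>
    #include <algorithm>

    struct V2 { double x, y; };
    struct Rect2 { V2 u, v; double area; };   // u,v = rectangle axes in the plane

    // hull : 2d convex hull vertices in order, already computed
    Rect2 MinAreaRect(const std::vector<V2> & hull)
    {
        Rect2 best = { {1,0}, {0,1}, 1e300 };
        const int n = (int)hull.size();
        for (int i = 0; i < n; i++)
        {
            V2 e = { hull[(i+1)%n].x - hull[i].x, hull[(i+1)%n].y - hull[i].y };
            double len = std::sqrt(e.x*e.x + e.y*e.y);
            if ( len == 0.0 ) continue;
            V2 u = { e.x/len, e.y/len };   // candidate axis, flush with this hull edge
            V2 v = { -u.y, u.x };          // perpendicular axis
            double lo0 = 1e300, hi0 = -1e300, lo1 = 1e300, hi1 = -1e300;
            for (const V2 & p : hull)
            {
                double a = p.x*u.x + p.y*u.y;
                double b = p.x*v.x + p.y*v.y;
                lo0 = std::min(lo0,a); hi0 = std::max(hi0,a);
                lo1 = std::min(lo1,b); hi1 = std::max(hi1,b);
            }
            double area = (hi0-lo0)*(hi1-lo1);
            if ( area < best.area ) { best = { u, v, area }; }
        }
        return best;
    }

The two rectangle axes, lifted back into 3d, plus the fixed axis give you the full OBB; true calipers gets the same answer in O(h) by advancing the support points around the hull together instead of rescanning per edge.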
Now, as I mentioned before, the number of actual axes you must consider to be truly optimal is O(n^2) , making your total run time O(n^3). These axes are the face normals of the convex hull, plus axes where the OBB is supported by two edges and a vert (I don't know an easy way to even enumerate these).
It's now pretty well known around the industry that you can get a very good OBB by just trying a bunch of scattered initial axes instead of doing O(n^2). If you try some fixed number of axes, like say 256, it doesn't count against your big-O at all, so your whole OBB fit is still O(nlogn).
Well this is exactly what the Barequet & Har-Peled method is. They try a fixed number of directions for the seed axis, then do the rectangle fit for the other two axes. The main contribution of the paper is the proof that if you try enough fixed directions, you can get the error of the bbox within whatever tolerance you want. That's sort of intuitively obvious - if you try more and more fixed directions you must get closer and closer to the optimal box. Their construction also provides a specific method for enumerating enough directions.
Their enumeration goes like this :
Start with some seed box S. They use the box that is made from taking one axis to be the "diameter" of the point set (the vector between the two most separated points). Using that box is important to their proof, but I don't think which seed box you use is actually terribly important in practice.
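If you want the same seed, the diameter direction is trivial to get. A brute-force sketch (DiameterAxis is a made-up name; this is O(n^2), so in practice you'd run it on the hull verts or use an approximate farthest pair) :

    // Sketch : seed axis = direction between the two most separated points.
    // Brute force O(n^2) for clarity.
    #include <vector>
    #include <cmath>

    struct V3 { double x, y, z; };

    V3 DiameterAxis(const std::vector<V3> & pts)
    {
        double bestD2 = -1.0;
        V3 axis = {1,0,0};
        for (size_t i = 0; i < pts.size(); i++)
        {
            for (size_t j = i+1; j < pts.size(); j++)
            {
                V3 d = { pts[j].x - pts[i].x, pts[j].y - pts[i].y, pts[j].z - pts[i].z };
                double d2 = d.x*d.x + d.y*d.y + d.z*d.z;
                if ( d2 > bestD2 ) { bestD2 = d2; axis = d; }
            }
        }
        double len = std::sqrt(bestD2);
        if ( len > 0.0 ) { axis.x /= len; axis.y /= len; axis.z /= len; }
        return axis;
    }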
The seed box S has normalized edge vectors S.x , S.y, S.z (the three axes that define the box).
Enumerate all sets of 3 (non-negative) integers whose sum is <= K , that is {i,j,k} such that (i+j+k) <= K
Construct the normal N = i * S.x + j * S.y + k * S.z ; normalize it, and use this as the direction to fit a new OBB. (note that a lot of triples {i,j,k} generate the same normal - any triple that is an integer multiple of another; those can be skipped).
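In code the whole enumeration is tiny. A sketch (EnumerateDirections is a made-up name; std::gcd needs C++17; and note that exactly how many directions you wind up building for a given K depends on how you filter and dedup) :

    // Sketch : enumerate the candidate directions from the seed box axes Sx,Sy,Sz.
    // Triples {i,j,k} that are integer multiples of a smaller triple (gcd > 1)
    // give the same direction and are skipped.
    #include <vector>
    #include <cmath>
    #include <numeric>   // std::gcd (C++17)

    struct V3 { double x, y, z; };

    std::vector<V3> EnumerateDirections(const V3 & Sx, const V3 & Sy, const V3 & Sz, int K)
    {
        std::vector<V3> dirs;
        for (int i = 0; i <= K; i++)
        for (int j = 0; j <= K - i; j++)
        for (int k = 0; k <= K - i - j; k++)
        {
            if ( i == 0 && j == 0 && k == 0 ) continue;
            if ( std::gcd(std::gcd(i,j),k) > 1 ) continue;   // duplicate direction
            V3 n = { i*Sx.x + j*Sy.x + k*Sz.x,
                     i*Sx.y + j*Sy.y + k*Sz.y,
                     i*Sx.z + j*Sy.z + k*Sz.z };
            double len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
            n.x /= len; n.y /= len; n.z /= len;
            dirs.push_back(n);   // each of these seeds a rectangle fit for the other two axes
        }
        return dirs;
    }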
Barequet & Har-Peled prove that the OBB made this way is within (1/K) to some power or other of optimal, so as you increase K you get ever closer.
Now, this is almost identical to my old "OptimalOBBFixedDirections" which tried various static directions and then optimized the box from there. My OptimalOBBFixedDirections always tests 42 directions in the {+,+,+} octant which I made by subdividing an octahedron. I have found that the Barequet & Har-Peled method does in fact find better boxes with fewer tests, but the difference is very very small (thousandths of a percent). I'll show numbers in a second.
First I want to mention two other things.
1. There's a "common wisdom" around the net that while it is bad to make an OBB from the covariance matrix of the *points* , it is good to make an OBB from the covariance matrix of the *volume*. That is, they claim if you have a closed mesh, you can use the Mirtich (or Eberly or Blow/Binstock) method to compute the covariance matrix of the solid body, and use that for your OBB axes.
I have seen no evidence that this is true. Yes, the covariance matrix of the points is highly dependent on the tessellation, while the covariance matrix of the solid is more a property of the actual shape of the object, so that is intuitively pleasing. In practice it appears to be completely random which one is actually better. And you just shouldn't use the covariance matrix method anyway.
2. Barequet & Har-Peled mention the iterative refinement of OBB's using the caliper-fit. This is something I've known a while, but I've never seen it published before; I think it's one of those gems of wisdom that lots of people know but don't consider worth a paper. They mention it almost in passing, but it's actually perhaps the most valuable thing in their whole paper.
Recall if you have one axis of the OBB fixed, you can easily find the optimal directions of the other two axes using rotating calipers to fit a rectangle. The thing is, once you do that, you can then hold one of those new axes fixed, and fit the other two. So like, fix X, then caliper to get YZ, then fix Y, and caliper to get XZ. Each step of the iteration either improves your OBB or does nothing. That means you descend to a local minimum in a finite number of steps. (in practice I find you usually get there in only 3 steps, in fact that might be provable (?)).
Assuming your original seed box is pretty close to optimal, this iteration is kind of like taking your OBB and trying to spin it along one axis and pinch it to see if you get a tighter fit; it's sort of like wiggling your key as you put it into a lock. If your seed OBB is close to being right, but isn't supported by one of the necessary support conditions, this will wiggle it tighter until it is supported by a face or an edge pair.
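The loop is about as simple as it sounds. A sketch, where OBB and SurfaceArea are stand-in types/names and FitOBBGivenAxis is an assumed helper standing for the "one axis fixed" fit described above (project, 2d hull, rectangle) :

    // Sketch of the caliper-fit iteration. FitOBBGivenAxis() is assumed : it
    // holds the given axis fixed and refits the other two axes with the 2d
    // rectangle fit. Every pass either shrinks the box or leaves it alone, so
    // this terminates at a local minimum (usually within ~3 passes).
    #include <vector>

    struct V3 { double x, y, z; };

    struct OBB
    {
        V3 center;
        V3 axis[3];      // orthonormal
        V3 halfExtent;   // three scalars, stored as a V3 here
        double SurfaceArea() const
        {
            double a = 2.0*halfExtent.x, b = 2.0*halfExtent.y, c = 2.0*halfExtent.z;
            return 2.0*(a*b + b*c + c*a);
        }
    };

    // assumed helper, not shown : the "one axis fixed" fit
    OBB FitOBBGivenAxis(const std::vector<V3> & pts, const V3 & axis);

    OBB IterateOBB(const std::vector<V3> & pts, OBB box)
    {
        for (;;)
        {
            bool improved = false;
            for (int a = 0; a < 3; a++)   // fix X, caliper YZ ; fix Y, caliper XZ ; etc.
            {
                OBB trial = FitOBBGivenAxis(pts, box.axis[a]);
                if ( trial.SurfaceArea() < box.SurfaceArea() - 1e-9 )
                {
                    box = trial;
                    improved = true;
                }
            }
            if ( ! improved )
                return box;
        }
    }

The small epsilon in the comparison is just there so it doesn't cycle on ties.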
The methods shown in the test below are :
True convex hull (within epsilon) :
    note that area optimization is usually what you want,
    but volume shows a bigger difference between methods

Hull simplification :
    I simplify the hull by just doing PM on the triangles
    Then convert the triangles to planes
    Push the planes out until all points are behind them
    Then clip the planes against each other to generate new faces
    This is a simpler hull that strictly contains the original hull

k-dop :
    Fits 258 planes in fixed directions on the sphere
    Just pushes each plane to the edge of the point set
    Clips them all against each other
    strictly this is O(n), but in practice it's slower than the true convex hull
    and much worse quality; seems pointless to me

The rating we show on all the OBB's is surface area

AxialOBB : axis-aligned box
OBBByCovariance : vertex covariance matrix sets axes
OBBByCovariance+it : OBBByCovariance followed by iterative greedy optimization
OBBByCovarianceOptimized : like OBBByCovariance+it, but tries all 3 initial fixed axes
OptimalOBBFixedDirections : tries 42 fixed directions
OptimalOBBFixedDirections+it : tries 42 fixed directions, takes the best, then optimizes
OptimalOBBFixedDirections opt : tries 42 fixed directions, optimizes each one, then takes the best
OBBGoodHeuristic : takes the best of OBBByCovarianceOptimized and "OptimalOBBFixedDirections opt"
OBBGivenCOV : OBBByCovarianceOptimized, but using the solid body covariance instead of the points
OptimalOBB : tries all face normals of the convex hull (slow)
    I'd like to also try all the edge-support directions here, but haven't figured that out
OptimalOBBBarequetHarPeled 5 :
    kLimit : 5 , numBuilds : 19
    BarequetHarPeled method with (i+j+k) <= 5, which makes it try 19 boxes
    optimizes each one, then picks the best
    very similar to "OptimalOBBFixedDirections opt"
And the results :
-----------------------------------------
dolphin.x :

Made Hull with 206 faces
hull1 volume : 1488557 , area : 95330
Making hull from k-dop planes...
Made Hull with 142 faces
hull2 volume : 2081732 , area : 104951
Making OBB...
AxialOBB                      : 193363.109
OBBByCovariance               : 190429.594
OBBByCovariance+it            : 179504.734
OBBByCovarianceOptimized      : 179504.719
OptimalOBBFixedDirections     : 181693.297
OptimalOBBFixedDirections+it  : 181693.297
OptimalOBBFixedDirections opt : 176911.750
OBBGoodHeuristic              : 179504.719
OBBGivenCOV                   : 178061.406
OptimalOBB                    : 176253.359
kLimit : 3 , numBuilds : 3
OptimalOBBBarequetHarPeled 3  : 179504.703
kLimit : 5 , numBuilds : 19
OptimalOBBBarequetHarPeled 5  : 178266.047
kLimit : 10 , numBuilds : 160
OptimalOBBBarequetHarPeled 10 : 176508.109
kLimit : 20 , numBuilds : 1222
OptimalOBBBarequetHarPeled 20 : 176218.344
kLimit : 50 , numBuilds : 18037
OptimalOBBBarequetHarPeled 50 : 176116.156
-----------------------------------------
teapot.x :

hull1 faces : 612
hull1 volume : 3284935 , area : 117470
simplified hull2 faces : 366
hull2 volume : 3384222 , area : 120357
Making hull from k-dop planes...
Made Hull with 234 faces
hull2 volume : 3761104 , area : 129271
Making OBB...
AxialOBB                      : 253079.797
OBBByCovariance               : 264091.344
OBBByCovariance+it            : 222514.219
OBBByCovarianceOptimized      : 220723.844
OptimalOBBFixedDirections     : 219071.703
OptimalOBBFixedDirections+it  : 218968.844
OBBGoodHeuristic              : 218968.844
OptimalOBB                    : 218968.844
OBBGivenCOV                   : 220762.766
kLimit : 3 , numBuilds : 3
OptimalOBBBarequetHarPeled 3  : 220464.766
kLimit : 5 , numBuilds : 19
OptimalOBBBarequetHarPeled 5  : 219540.203
kLimit : 10 , numBuilds : 160
OptimalOBBBarequetHarPeled 10 : 218968.000
kLimit : 20 , numBuilds : 1222
OptimalOBBBarequetHarPeled 20 : 218965.406
kLimit : 50 , numBuilds : 18037
OptimalOBBBarequetHarPeled 50 : 218963.109
-----------------------------------------
Some highlights :
OBBByCovariance is quite bad. OptimalOBBFixedDirections is the only other one that doesn't do the iterative optimization, and it can be bad too, though not nearly as bad.
Any of the methods that do the iterative optimization is perfectly fine. The differences are very small.
"OptimalOBBBarequetHarPeled 7" does about the same number of tests as "OptimalOBBFixedDirections opt" , and it's very microscopically better because of the way the directions are distributed.
OBBGivenCOV (the solid mass covariance) is worse than OBBByCovarianceOptimized (point covariance) on teapot.
Also - the Convex Hull simplification thing I did was just pulled out of my ass. I did a quick Google to see if I could find any reference, and I couldn't find any. I'm surprised that's not a solved problem, it seems like something right up the Geometers' alley.
Problem : Find the convex bounding volume made of N faces (or N planes) that strictly encloses the original mesh, and has minimum surface area (or volume).
In general, the optimal N-hull cannot be reached by greedy simplification from the full-detail convex hull. In practice I found my hacky PM solution to work fine for moderate simplification levels. To make it more correct, the "push out to enclose" step should be done in each PM collapse to keep the hull valid as you go (instead of at the end). Also the PM collapse metric should be the metric you are trying to optimize - surface area or volume (I just used my old geometric error collapser).
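For reference, the "push out to enclose" step itself is trivial. A sketch (Plane and PushPlanesOut are made-up names; planes are stored as a unit normal n and offset d, with inside meaning dot(n,p) <= d) :

    // Sketch : push each plane out until every point is on or behind it,
    // so the simplified hull strictly contains the original point set.
    #include <vector>
    #include <algorithm>

    struct V3 { double x, y, z; };
    struct Plane { V3 n; double d; };   // unit normal n, offset d ; inside is dot(n,p) <= d

    void PushPlanesOut(std::vector<Plane> & planes, const std::vector<V3> & pts)
    {
        for (Plane & pl : planes)
        {
            double maxDot = -1e300;
            for (const V3 & p : pts)
                maxDot = std::max(maxDot, pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z);
            pl.d = std::max(pl.d, maxDot);   // now all points satisfy dot(n,p) <= d
        }
    }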
The main thing I was interested in with convex hull simplification was eating away highly tessellated bits. The mesh I mainly tested on was "fandisk" because it's got these big flat surfaces, and then some rounded bits. If you imagine a mesh like a big cube Minkowski summed with a small sphere, you get a cube with rounded edges and corners. If the sphere is highly tessellated, you can get a hull with tons and tons of faces, but they are very unimportant faces. You want to sort of polygonate those corners, replacing the rounded sphere with a less tessellated one that's pushed out.
The hull simplification reminded me of the progressive hulls in Hoppe's silhouette clipping paper.
I know it's not exactly the same, but they reference a related construction for the case of convex sets:
DOBKIN, D. P., AND KIRKPATRICK, D. Determining the separation of preprocessed polyhedra – a unified approach. ICALP-90, LNCS 443 (1990), 400–413
Oh yeah, I knew I'd seen that before but couldn't find it.
Can you plot the octant of axes tested vs. obb area? I'm curious how smooth the plot is; maybe it would be possible to do gradient descent or something like that, instead of brute force.