5-23-04

I'd like to set up a real-time mocap system that can drive my game characters directly, captured in real time. I'd like to do it with a small SDK and off-the-shelf hardware. It seems pretty easy: you have the performer wear black or white, then you put colored markers on them; you watch them in real time with 4-6 video cameras doing real-time capture using video-capture cards; you track the markers and solve for the 3D animation of all the joints. You need some tool where the markers are assigned to the pre-made character skeletons (or the tool could automatically find a pretty good fit). You could run at 15 fps and still get pretty great quality.

The video has to be decently high-res to capture; 640x480 is probably the minimum, so you can almost get away with just higher-end webcams. You have to do a calibration at startup to figure out the positions of the cameras. You can do this just by having some stock marked object, like a colored cube or tetrahedron that you've measured exactly and entered into the system; the system sees that object and calibrates to figure out the 3D positions of all the cameras.

To track the markers you have to scan 640*480*15*6 pixels per second; that's about 28 million. This is probably doable on a 3GHz P4, but worst case you should be able to do it with a dual-CPU box. You do need all the cameras to be in sync for timing, which I would imagine should just happen automatically if all the hardware is identical and of decent quality.

The full mocap setup with computer should cost about $3000 in hardware. We could sell it for $5000; compare that to $20k or more for current mocap systems.
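The per-frame marker scan above can be sketched in a few lines: threshold each frame for a marker's color and take the centroid of the matching pixels. This is only a minimal sketch; the frame format (rows of RGB tuples), the tolerance value, and the `find_marker` name are my illustrative assumptions, and a real implementation would run on raw capture buffers rather than Python lists to hit the pixel budget.

```python
# Pixel budget from the text: 6 cameras at 640x480, 15 fps.
W, H, FPS, CAMS = 640, 480, 15, 6
pixels_per_second = W * H * FPS * CAMS   # 27,648,000 pixels/s to scan

def find_marker(frame, target, tol=60):
    # frame: H rows of W (r, g, b) tuples. Returns the centroid (x, y)
    # of all pixels within 'tol' of the target color on every channel,
    # or None if no pixel matches.
    xs = ys = n = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and
                abs(g - target[1]) <= tol and
                abs(b - target[2]) <= tol):
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return (xs / n, ys / n)
```

With one centroid per marker per camera, the 3D joint positions would then come from triangulating each marker across the calibrated camera pair(s) that see it.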
Another fun project would be a full-object scanner using off-the-shelf parts. To do this you set up a rig that you place your objects in. The rig is a frame with something like 10 high-resolution digital cameras (or you could use just one and have motors rotate the rig around, but it's probably cheaper just to have more cameras), plus a bunch of computer-controlled colored lights all around the rig. You take lots of pictures of the object under different lighting conditions, with different colors and different angles. From this you can deduce the albedo texture on the object and the BRDF, as well as the (visible) 3D geometry of the object! I say "visible" because this only really works on mostly-convex objects; you can't get into nooks and crannies on the hidden insides of the object.
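The albedo-from-varied-lighting idea can be illustrated with the simplest version of the math, classic Lambertian photometric stereo: with the same camera view and three known light directions, each pixel's intensities give a 3x3 linear system whose solution is the albedo-scaled surface normal. This is a hedged sketch under a pure-diffuse assumption (the full BRDF recovery the text mentions needs many more lighting samples); the function names are mine.

```python
def solve3(A, b):
    # Solve a 3x3 linear system by Gauss-Jordan elimination with
    # partial pivoting. A: list of 3 rows, b: list of 3 values.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, 4):
                    M[r][c] -= f * M[col][c]
    return [M[i][3] / M[i][i] for i in range(3)]

def photometric_stereo(lights, intensities):
    # lights: three known light-direction vectors (rows of a 3x3 matrix);
    # intensities: the pixel's measured brightness under each light.
    # Lambertian model: I_k = albedo * dot(L_k, n), so solving
    # L g = I gives g = albedo * n.
    g = solve3(lights, intensities)
    albedo = sum(x * x for x in g) ** 0.5
    normal = [x / albedo for x in g]
    return albedo, normal
```

Run per pixel over the whole image, this yields an albedo map plus a normal map, which can be integrated into height to recover the visible geometry.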