If you just want to make a scan and don't care how it works, skip to Step 3! These first two steps are just some discussion of the technique.

Triangulation from Inherent Features
Most 3D scanning is based on triangulation (the exception being time-of-flight systems like Microsoft's "Natal"). Triangulation works on the basic trigonometric principle that if you know three measurements of a triangle (such as two angles and a side), you can recover the remaining measurements.
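As a quick sketch of that principle (the function and numbers here are mine, just for illustration): given the baseline between two viewpoints and the angle each one makes toward a target, the law of sines recovers the target's distance.

```python
import math

def triangulate(baseline, angle_left, angle_right):
    """Perpendicular distance to a point seen from two viewpoints
    separated by `baseline`, given the angle each viewpoint's ray
    makes with the baseline (radians). The third angle follows
    because a triangle's angles sum to pi; the law of sines then
    gives the length of one ray, and its component perpendicular
    to the baseline is the depth."""
    third = math.pi - angle_left - angle_right
    ray_left = baseline * math.sin(angle_right) / math.sin(third)
    return ray_left * math.sin(angle_left)

# A point 1 m out, centered between viewpoints 0.5 m apart:
# each ray makes an angle of atan(1.0 / 0.25) with the baseline.
print(round(triangulate(0.5, math.atan(4.0), math.atan(4.0)), 3))  # → 1.0
```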
If we take a picture of a small white ball from two perspectives, we will get two angle measurements (based on the position of the ball in each camera's image). If we also know the distance between the two cameras, we have two angles and a side. This allows us to calculate the distance to the white ball. This is how motion capture works (lots of reflective balls, lots of cameras). It is related to how humans see depth, and is used in disparity-based 3D scanning (for example, Point Grey's Bumblebee).

Triangulation from Projected Features
Instead of using multiple image sensors, we can replace one with a laser pointer. If we know the angle of the laser pointer, that's one angle. The other comes from the camera again, except we're looking for a laser dot instead of a white ball. The distance between the laser and camera gives us the side, and from this we can calculate the distance to the laser dot.
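In code, this is the same triangle math as before, except one angle now comes from converting the laser dot's pixel position into a ray angle. This sketch assumes a simple pinhole camera whose optical axis is perpendicular to the baseline; the function name and example numbers are mine:

```python
import math

def laser_depth(baseline, laser_angle, pixel_x, focal_px):
    """Depth of a laser dot from one camera plus one laser pointer.
    `laser_angle` is the known angle of the beam from the baseline;
    the camera's angle is recovered from the dot's pixel offset via
    the pinhole model (ray angle from optical axis = atan(px / f))."""
    camera_angle = math.pi / 2 - math.atan(pixel_x / focal_px)
    # Third angle of the camera-laser-dot triangle.
    third = math.pi - laser_angle - camera_angle
    # Law of sines: camera-to-dot ray length, then its component
    # perpendicular to the baseline is the depth.
    ray = baseline * math.sin(laser_angle) / math.sin(third)
    return ray * math.sin(camera_angle)

# Laser 0.5 m beside the camera, aimed at atan(4) from the baseline;
# the dot lands 200 px off-center with a focal length of 800 px.
print(round(laser_depth(0.5, math.atan(4.0), 200, 800), 3))  # → 1.0
```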
But cameras aren't limited to one point at a time; we could scan a whole line. This is the foundation of systems like the DAVID 3D Scanner, which sweep a line laser across the scene.
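Finding the line in each camera frame can be as simple as picking the brightest pixel per row. This is only a toy version of the idea (real scanners do background subtraction and subpixel peak fitting); it assumes NumPy and a red line laser:

```python
import numpy as np

def laser_line(red_channel):
    """Given the red channel of a frame (H x W array), return for
    each row the column of the brightest pixel -- a crude estimate
    of where the line laser falls in that row."""
    return np.argmax(red_channel, axis=1)

# Synthetic 4x6 frame with a bright "laser" at one column per row.
frame = np.zeros((4, 6))
for row, col in enumerate([1, 2, 2, 3]):
    frame[row, col] = 255.0
print(laser_line(frame))  # → [1 2 2 3]
```

Sweeping the laser while repeating this per frame yields one profile of the scene per position of the line.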
Or, better yet, we could project a bunch of lines and track them all simultaneously. This is called structured light.