
Structured Light 3D Scanning

This is the same technique used for Thom's face in the Radiohead "House of Cards" video. I'll walk you through setting up your projector and camera, and capturing images that can be decoded into a 3D point cloud using a Processing application.

Point Clouds with Depth of Field from Kyle McDonald on Vimeo.



House of Cards Google Code project

I've included a lot of discussion about how to make this technique work, and the general theory behind it. The basic idea is:

1. Download ThreePhase
2. Rotate a projector clockwise to "portrait"
3. Project the included images in /patterns, taking a picture of each (a sketch of how patterns like these can be generated follows just after this list)
4. Resize the photos and replace the example images in /img with them
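If you're curious what the projected patterns look like, or want to regenerate them at a different resolution, a short Processing sketch along these lines will produce a comparable set. It assumes the images in /patterns are three sinusoidal fringe patterns shifted by one third of a period relative to one another; the resolution (1024x768) and stripe period (64 pixels) below are placeholder values, so match them to your projector rather than treating this as a drop-in replacement for the bundled files.

// Illustrative pattern generator: writes three phase-shifted fringe images to the sketch folder.
int period = 64;  // assumed stripe period in pixels

void setup() {
  size(1024, 768);  // assumed projector resolution
  for (int i = 0; i < 3; i++) {
    float phaseOffset = TWO_PI * i / 3.0;  // shifts of 0, 1/3 and 2/3 of a period
    loadPixels();
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        float level = 127.5 + 127.5 * sin(TWO_PI * x / period + phaseOffset);
        pixels[y * width + x] = color(level);  // grayscale sine stripe
      }
    }
    updatePixels();
    save("pattern-" + i + ".png");
  }
  exit();
}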
 

Step 1: Theory: Triangulation

If you just want to make a scan and don't care how it works, skip to Step 3! These first two steps are just some discussion of the technique.

Triangulation from Inherent Features
Most 3D scanning is based on triangulation (the exception being time-of-flight systems like Microsoft's "Natal"). Triangulation works on the basic trigonometric principle of taking three measurements of a triangle and using those to recover the remaining measurements.

If we take a picture of a small white ball from two perspectives, we will get two angle measurements (based on the position of the ball in the camera's images). If we also know the distance between the two cameras, we have two angles and a side. This allows us to calculate the distance to the white ball. This is how motion capture works (lots of reflective balls, lots of cameras). It is related to how humans see depth, and is used in disparity-based 3D scanning (for example, Point Grey's Bumblebee).
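To make that concrete, here is a minimal sketch of the calculation as a Processing function (an illustration of the geometry only, not code from any of the projects mentioned here). The two angles are measured from the baseline at each camera, and the baseline is the distance between the cameras.

// Illustrative only: recover a point from two cameras a known distance apart.
// thetaLeft, thetaRight: angles (radians) from the baseline to the target,
// measured at the left and right camera; baseline: camera separation.
PVector triangulate(float thetaLeft, float thetaRight, float baseline) {
  float thetaTarget = PI - thetaLeft - thetaRight;                      // third angle of the triangle
  float rangeFromLeft = baseline * sin(thetaRight) / sin(thetaTarget);  // law of sines
  float x = rangeFromLeft * cos(thetaLeft);                             // position along the baseline
  float z = rangeFromLeft * sin(thetaLeft);                             // depth away from the baseline
  return new PVector(x, z);
}

Two known angles plus the side between them pin down the point; everything that follows is just a different way of measuring those angles.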

Triangulation from Projected Features
Instead of using multiple image sensors, we can replace one with a laser pointer. If we know the angle of the laser pointer, that's one angle. The other comes from the camera again, except we're looking for a laser dot instead of a white ball. The distance between the laser and camera gives us the side, and from this we can calculate the distance to the laser dot.
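The corresponding sketch for the laser case, assuming a pinhole camera with its optical axis perpendicular to the laser-camera baseline and a focal length expressed in pixels (again, illustrative only):

// alpha: laser angle from the baseline (radians), b: laser-camera distance,
// xPixel: dot position relative to the image center (positive toward the laser),
// f: focal length in pixels. Returns depth measured perpendicular to the baseline.
float laserDepth(float alpha, float xPixel, float f, float b) {
  float beta = HALF_PI - atan(xPixel / f);                       // camera ray angle from the baseline
  return b * tan(alpha) * tan(beta) / (tan(alpha) + tan(beta));  // intersect the two rays
}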

But cameras aren't limited to one point at a time; we could scan a whole line. This is the foundation of systems like the DAVID 3D Scanner, which sweep a line laser across the scene.

Or, better yet, we could project a bunch of lines and track them all simultaneously. This is called structured light scanning.
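The decoding in ThreePhase builds on the same idea: each camera pixel's brightness across the three photos tells you where within a stripe that pixel falls. A simplified sketch of the standard three-step "wrapped phase" computation (not the project's exact code):

// i1, i2, i3: brightness of the same pixel in the three captured photos (0..255).
float wrappedPhase(float i1, float i2, float i3) {
  return atan2(sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3);  // result in (-PI, PI]
}

The wrapped phase repeats once per stripe, so the decoder also has to unwrap it into a continuous phase across the image before converting it to depth.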
Comments (1-40 of 207 shown)

Can someone explain to me the point of using 5v and a regulator to get to 3.3v when there is a 3.3v supply from the pi. I am fairly new to this stuff and am just trying to learn. Thanks.

motherprune 1 month ago

That's cold

Nicely done... Amazing instructable

mousepaper 1 month ago

Tremendous...!!

awesome


Terrific...!!

amazedgreen 1 month ago

It's wonderful

fastbobble 1 month ago

That's astounding

gorgeddamp 1 month ago

Astonishing

Pustekuchen 1 month ago

Hey Guys,

If I have just a single-dot laser, I can determine X, Y and Z with the help of triangulation.

So Z = (f*b*tan(alpha)) / (f + x*tan(alpha))

where

f - focal length

b - laser-camera distance

alpha - laser angle

So I have a problem there: I calculated x because I need it in mm, so I did the following:

x = (x_pixel/pixel_width_image - 0.5) * SensorSizeX

But my measurements are wrong. I am sure the formula for Z is right, but I am not sure about x. Is there someone who can help me?
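As a general note on that conversion (a sketch under assumed conventions, not a diagnosis of this particular setup): if x is converted to millimeters on the sensor this way, then the f in the Z formula must be in millimeters as well, and x has to be signed relative to the image center, positive toward the laser side. In Processing-style code:

// All lengths in mm, alpha in radians; x measured from the image center, positive toward the laser.
float xOnSensorMm(float xPixel, float imageWidthPx, float sensorWidthMm) {
  return (xPixel / imageWidthPx - 0.5) * sensorWidthMm;
}

float depthZ(float xMm, float fMm, float bMm, float alpha) {
  return (fMm * bMm * tan(alpha)) / (fMm + xMm * tan(alpha));
}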

It's remarkable

illrings 2 months ago

It's nice

airbugger 2 months ago

It's extraordinary


It's exceptional :)

headlymph 2 months ago

nice

tealrink 2 months ago

TOO GOOD

grousebandit 2 months ago

Awe-inspiring

harechubby 3 months ago

great

jiffymanager 3 months ago

great

jiffymanager 3 months ago

super

calmlunch 3 months ago

good

VitaminX 3 months ago

Nicely done... Amazing instructable

clapfilk 3 months ago

super

good

nitoloz 3 months ago

Hi Kyle, that's really very interesting, but I have faced the same problem as ErLud before. The problem is with PriorityQueue:
PriorityQueue toProcess; Processing cannot find a class or type named "PriorityQueue".

Can anyone help me?
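A common cause of this particular error (a guess, since the full setup isn't shown) is that newer Processing releases stopped importing java.util automatically, so sketches written for older versions need the import added at the top of the main .pde file:

import java.util.PriorityQueue;  // newer Processing versions no longer import java.util by default

If the import is already there, the problem lies elsewhere.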

spongeraffle 3 months ago

That's interesting

workexaminer 4 months ago

Beneficial

I saw this first on Open Processing! I am glad I found how to do it, this is amazing! Thanks

clickyummy 4 months ago

good

bearblue 5 months ago

good


varun2021 5 months ago

Great post, what was the depth resolution you could get? Can this method be used to reproduce fine details?

gazumpglue 5 months ago

Thank you for the post Kyle, awesome work.

lozza1950 7 months ago
Very interesting and informative. I have a CNC router; if the camera and projector were mounted on the Y axis, could a series of images be taken along the length of a wide, relatively flat carving, and could the series of images be integrated so the original could be reproduced? Thanks, Laurie
ErLud 7 months ago
Hi Kyle
My name is Erich. I tried to run the program in Processing but I receive an error on "PriorityQueue toProcess;": Processing cannot find a class or type named "PriorityQueue".
What am I doing wrong?
Mariska Botha 10 months ago
Very nice Kyle
BunnyRoger 11 months ago
Quite cool.
Amanda Culbert 11 months ago
Wow, super cool Kyle!!
MAApleton 11 months ago
Thank you for the post Kyle, awesome work.