Instructables
Structured Light 3D Scanning
The same technique used for Thom's face in the Radiohead "House of Cards" video. I'll walk you through setting up your projector and camera, and capturing images that can be decoded into a 3D point cloud using a Processing application.

Point Clouds with Depth of Field from Kyle McDonald on Vimeo.



House of Cards Google Code project

I've included a lot of discussion about how to make this technique work, and the general theory behind it. The basic idea is:

1. Download ThreePhase.
2. Rotate a projector clockwise to "portrait".
3. Project the included images in /patterns, taking a picture of each.
4. Resize the photos and replace the example images in /img with them.
 

Step 1: Theory: Triangulation

If you just want to make a scan and don't care how it works, skip to Step 3! These first two steps are just some discussion of the technique.

Triangulation from Inherent Features
Most 3D scanning is based on triangulation (the exception being time-of-flight systems like Microsoft's "Natal"). Triangulation works on the basic trigonometric principle of taking three measurements of a triangle and using those to recover the remaining measurements.

If we take a picture of a small white ball from two perspectives, we will get two angle measurements (based on the position of the ball in the camera's images). If we also know the distance between the two cameras, we have two angles and a side. This allows us to calculate the distance to the white ball. This is how motion capture works (lots of reflective balls, lots of cameras). It is related to how humans see depth, and is used in disparity-based 3D scanning (for example, Point Grey's Bumblebee).
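To make the trigonometry concrete, here is a minimal sketch (the function name and numbers are mine, not from the original): with a known baseline and the angle each camera measures between the baseline and the ball, the law of sines gives the perpendicular depth.

```python
import math

def stereo_depth(baseline, angle_left, angle_right):
    """Perpendicular depth of a point seen by two cameras.

    angle_left and angle_right are the angles (radians) between the
    baseline and the ray from each camera to the point.
    """
    # The triangle's angles sum to pi, so the angle at the point is
    # pi - angle_left - angle_right; the law of sines then gives the
    # perpendicular distance from the baseline to the point.
    return (baseline * math.sin(angle_left) * math.sin(angle_right)
            / math.sin(angle_left + angle_right))

# Two cameras 0.5 m apart, each seeing the ball 60 degrees off the baseline:
z = stereo_depth(0.5, math.radians(60), math.radians(60))  # about 0.43 m
```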

Triangulation from Projected Features
Instead of using multiple image sensors, we can replace one of the cameras with a laser pointer. If we know the angle of the laser pointer, that's one angle. The other comes from the camera again, except we're looking for a laser dot instead of a white ball. The distance between the laser and camera gives us the side, and from this we can calculate the distance to the laser dot.
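Under one common geometry (camera pinhole at the origin, laser offset by the baseline along the sensor's x-axis; all names here are illustrative, not from the original), the intersection has a closed form:

```python
import math

def laser_dot_depth(f, b, alpha, x):
    """Depth of a laser dot by triangulation.

    f     -- camera focal length (same units as x, e.g. mm)
    b     -- baseline: distance between laser and camera
    alpha -- angle of the laser beam relative to the baseline (radians)
    x     -- horizontal position of the dot on the sensor, measured
             from the optical center, positive toward the laser
    """
    t = math.tan(alpha)
    # Intersect the laser ray X = b - Z/t with the camera ray X = x*Z/f:
    return f * b * t / (f + x * t)

# 8 mm lens, laser 100 mm to the side aimed at 45 degrees: a dot imaged
# at the optical center (x = 0) lies 100 mm away.
z = laser_dot_depth(8.0, 100.0, math.radians(45), 0.0)
```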

But cameras aren't limited to tracking one point at a time; we could scan a whole line. This is the foundation of systems like the DAVID 3D Scanner, which sweep a line laser across the scene.

Or, better yet, we could project a bunch of lines and track them all simultaneously. This is called structured light scanning.
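For one popular pattern family, three sinusoids shifted by 120 degrees (the kind a three-phase scanner projects), the decoding step reduces to an arctangent per pixel. This is the textbook formula, not necessarily line-for-line what the ThreePhase app does:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three images of a sinusoidal
    pattern shifted by -120, 0, and +120 degrees (float arrays)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

The recovered phase is still wrapped (it repeats every stripe), so it has to be unwrapped before it can be turned into depth; that unwrapping is where the queue-based machinery in ThreePhase comes in.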


OlegKitFace1 month ago

Updated for Processing 2.X:

https://github.com/kippkitts/3PhaseScanning/tree/master/ThreePhase

ChrisH13 months ago

I am also getting the error "Cannot find class or type named PriorityQueue"

Any ideas?

JKPina ChrisH12 months ago

Same issue (also with LinkedList in ThreePhase-1). I added:

import java.util.*;

to every source file, but a new error appeared: java.lang.NullPointerException :/


Pustekuchen5 months ago

Hey Guys,

if I have just a "single-dot laser" I can determine X, Y and Z with the help of triangulation.

So Z is Z = (f*b*tan(alpha))/(f+x*tan(alpha))

where

f - focal length

b - laser-camera distance

alpha - laser angle

So I have a problem there. I calculated x because I need it in mm. So I did the following:

x = (x_pixel/pixel_width_image - 0.5)*SensorSizeX

But my measurements are wrong. I am sure the formula for Z is right, but I am not sure about x. Is there someone who can help me?
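The conversion in this comment corresponds to the usual center-origin pixel-to-sensor mapping, as long as x keeps the same sign convention as the laser offset. A sketch with made-up sensor numbers:

```python
def pixel_to_mm(x_pixel, image_width_px, sensor_width_mm):
    # Offset from the image center, scaled from pixels to the
    # physical width of the sensor.
    return (x_pixel / image_width_px - 0.5) * sensor_width_mm

# A 640-pixel-wide image on a 4.8 mm wide sensor:
center = pixel_to_mm(320, 640, 4.8)  # 0.0 mm, the optical center
edge = pixel_to_mm(640, 640, 4.8)    # 2.4 mm, half the sensor width
```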


nitoloz7 months ago

Hi Kyle, that's really very interesting, but I have faced the same problem as Erlud before.
The problem is with PriorityQueue:
PriorityQueue toProcess; Processing cannot find a class or type named "PriorityQueue".

Can anyone help me?


I saw this first on Open Processing! I am glad I found how to do it, this is amazing! Thanks


varun20219 months ago

Great post. What was the depth resolution you could get? Can this method be used to reproduce fine details?

gazumpglue9 months ago

Thank you for the post Kyle, awesome work.
