# Structured Light 3D Scanning

This is the same technique used to capture Thom Yorke's face in Radiohead's "House of Cards" video. I'll walk you through setting up your projector and camera, and capturing images that can be decoded into a 3D point cloud using a Processing application.

House of Cards Google Code project

I've included a lot of discussion about how to make this technique work, and the general theory behind it. The basic idea is:

2. Rotate a projector clockwise to "portrait"
3. Project the included images in /patterns, taking a picture of each
4. Resize the photos and replace the example images in /img with them

## Step 1: Theory: Triangulation

If you just want to make a scan and don't care how it works, skip to Step 3! These first two steps are just some discussion of the technique.

### Triangulation from Inherent Features
Most 3D scanning is based on triangulation (the exception being time-of-flight systems like Microsoft's "Natal"). Triangulation works on the basic trigonometric principle of taking three measurements of a triangle and using those to recover the remaining measurements.

If we take a picture of a small white ball from two perspectives, we will get two angle measurements (based on the position of the ball in the camera's images). If we also know the distance between the two cameras, we have two angles and a side. This allows us to calculate the distance to the white ball. This is how motion capture works (lots of reflective balls, lots of cameras). It is related to how humans see depth, and is used in disparity-based 3D scanning (for example, Point Grey's Bumblebee).
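To make the trigonometry concrete, here is a minimal sketch (with made-up numbers; none of this is from the scanner code) of recovering the distance to a point from two angles and a known baseline:

```cpp
#include <cmath>
#include <cstdio>

// Perpendicular distance to a point seen from both ends of a known
// baseline. alpha and beta are the angles (radians) between the
// baseline and the ray toward the point, one measured at each camera.
double depthFromTwoAngles(double baseline, double alpha, double beta) {
    // From z/x1 = tan(alpha), z/x2 = tan(beta), x1 + x2 = baseline:
    return baseline * std::sin(alpha) * std::sin(beta)
                    / std::sin(alpha + beta);
}

int main() {
    const double DEG = 3.14159265358979 / 180;
    // hypothetical setup: cameras 0.5 m apart, ball seen at 80 and 75 deg
    double z = depthFromTwoAngles(0.5, 80 * DEG, 75 * DEG);
    std::printf("distance to ball: %.3f m\n", z); // about 1.13 m
}
```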

### Triangulation from Projected Features
Instead of using multiple image sensors, we can replace one with a laser pointer. If we know the angle of the laser pointer, that's one angle. The other comes from the camera again, except we're looking for a laser dot instead of a white ball. The distance between the laser and camera gives us the side, and from this we can calculate the distance to the laser dot.

But cameras aren't limited to one point at a time; we could scan a whole line. This is the foundation of systems like the DAVID 3D Scanner, which sweep a line laser across the scene.

Or, better yet, we could project a bunch of lines and track them all simultaneously. This is called structured light scanning.
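To give a sense of what the projected patterns look like, here is a minimal sketch that generates three phase-shifted sinusoidal stripe images as PGM files. The resolution, stripe period, and filenames are illustrative assumptions, not the exact values used by the files in /patterns:

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Write an 8-bit grayscale PGM of vertical sinusoidal stripes with the
// given period (pixels) and phase offset (radians).
void writePattern(const char* filename, int w, int h,
                  double period, double offset) {
    std::FILE* f = std::fopen(filename, "wb");
    if (!f) return;
    std::fprintf(f, "P5\n%d %d\n255\n", w, h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            double v = 0.5 + 0.5 * std::cos(2 * PI * x / period + offset);
            std::fputc((int)(v * 255 + 0.5), f);
        }
    std::fclose(f);
}

int main() {
    // three patterns, each shifted by a third of a period
    writePattern("phase1.pgm", 1024, 768, 64, -2 * PI / 3);
    writePattern("phase2.pgm", 1024, 768, 64, 0);
    writePattern("phase3.pgm", 1024, 768, 64, +2 * PI / 3);
}
```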
## Comments
hilukasz says: 10 months ago
How do you feel about using Processing vs C++ (oF)? I feel like Processing is REALLY laggy when dealing with things like this. Maybe I am missing something?
Dalia Nabil says: 1 year ago
hi kyle
I recently tried this code; it's amazing how you get a very good scan with just 3 pictures.
I still didn't get a good result for my scan, and I was wondering if you can help me with it,
but I don't know how to paste a picture on Instructables. Can anyone help?

procam says: 1 year ago
Hi kyle,
I am working on an academic project that includes the development of a 360-degree 3D phase-shift scanner and an error analysis of it.

I have a doubt related to optical triangulation:

I have to compute the coordinates of the center of projection of the camera sensor, which requires knowing the sensor size, since camera calibration gives me the distance between the sensor and the camera lens only in terms of pixels per unit length along the X and Y directions.

But in general, such detailed information may not be available for a camera (e.g. the Logitech QuickCam Sphere AF), so is there any other approach by which I can do optical triangulation without the camera's physical parameters?
kylemcdonald (author) in reply to procam 1 year ago
hi -- in general i find that the absolute units (like millimeters) do not come from the sensor size. i use information about the calibration pattern to determine absolute units. this same kind of information can be used to find the optical axis of the camera.

for a more advanced explanation, check out the byo3d notes http://mesh.brown.edu/byo3d/
sunlight305 says: 1 year ago
Hi kylemcdonald:
I have a question: is it necessary to calibrate the camera and projector? Did you calibrate your camera and projector, and if so, which tools did you use?
Can you tell me your email? I wish to communicate with you in the future.
Thank you. Merry Christmas to you!
kylemcdonald (author) in reply to sunlight305 1 year ago
to get metric data (in millimeter values, or some other real-world values) you need to calibrate. if you just want something approximately correct, you don't need to calibrate. i don't have any code that handles calibration here, but i will be releasing some other code soon that does.
sunlight305 says: 1 year ago
Hi kylemcdonald:
I have some questions about camera and projector calibration. I used a MATLAB toolbox (toolbox_calib) to calibrate the camera and projector, but there were errors: "Warning: View #1 ill-conditioned. This image is now set inactive.
Error: There is no active image. Run Add/Suppress images to add images".
Do you know the reason? Thanks.
kylemcdonald (author) in reply to sunlight305 1 year ago
i haven't used the matlab toolbox before so i don't know why it says that, sorry!
sunlight305 in reply to kylemcdonald 1 year ago
thank you all the same.
sunlight305 says: 2 years ago
Hi kylemcdonald:
I am working on this with yangjun1222, and I have tried what you said. It is a little better now, but still not good enough. These are the pictures:
kylemcdonald (author) in reply to sunlight305 2 years ago
that looks awesome! here's what i get when i load them into ThreePhase.

i don't know if i used the same file ordering as you, so you might need to flip the values from negative to positive.

if you're still worried about the 'waviness' in the scan, then you need to spend more time on gamma calibration. a linear gradient from the projector needs to appear linear to the camera. if you change the projector's mode to "cinema" (instead of "dynamic" or "graphics") this generally makes the gamma more linear.
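As a rough illustration of that compensation step (a sketch, not part of the scanner code; 2.2 is only a common default, and the right exponent has to be measured for your projector):

```cpp
#include <cmath>

// Pre-distort an intended linear intensity (0..1) so that a projector
// applying the given gamma curve ends up displaying it linearly.
double gammaCompensate(double linear, double gamma = 2.2) {
    return std::pow(linear, 1.0 / gamma);
}
```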
yangjun1222 says: 2 years ago
Dear kylemcdonald:
I have tried your code and done everything according to your suggestions, but the result is not good enough (compared to the sample picture of the little boy). I used your horizontal pattern.
Can you tell me what camera and projector you used? Is it a DLP or a common projector? Must the camera be a digital single-lens reflex? And what about the positions of the camera, projector, and person?
I sit about 1 meter away from the projector, my face is in the middle of the projected image, and my camera is very near the projector (30 cm higher).
In the sample there are about 17 stripes on the boy's face, so I sit very near the projector to get as many stripes as possible.

In my result there is waviness. (In your sample there is waviness on the boy's face too, but its peaks are very low.)
Thank you very much!

kylemcdonald (author) in reply to yangjun1222 2 years ago
hi -- it would be easiest for me to help if you can post the images you're getting.

a DLP is ok. you just need to have a shutter speed longer than 1/60 second.

non-DSLR camera is ok. you just need to be able to use a "manual" mode.
yangjun1222 in reply to kylemcdonald 2 years ago
Here are the images I took: 3 pictures (of a student).
kylemcdonald (author) in reply to yangjun1222 2 years ago
it looks like there is a lot of movement between the images. you need to make sure the scene is still, and the camera and projector are completely still. use a tripod if possible. it also looks like the contrast on the projector needs to be turned down and the exposure on the camera needs to be shortened so it is not overexposed. if you fix these things you should have a decent scan. you could also try raising the camera some more -- right now it's slightly higher than the projector, but if you raise it you might get better depth resolution.
yangjun1222 in reply to kylemcdonald 2 years ago
Thank you very much. I will use a white gypsum (plaster) doll, so there is no movement. (I used a tripod.) My camera is not good, so I am trying to borrow one.
I think gray code should make it easy to get good results, because it is "digital".
artoo3D says: 2 years ago
Hi, I have another question about the method.
Doesn't the focal length of the camera have to be taken into account?
I have the impression (this also seems logical in my mind) that the resulting 3D objects are perspectively warped (I don't know if that is the correct expression in English).
Or is there a way, if I know the focal length, to "rewarp" the 3D data with a script (either in Processing or in a 3D application, in my case Blender)?

Regards,

artoo3d

kylemcdonald (author) in reply to artoo3D 2 years ago
focal length definitely needs to be taken into account. it's not described in the instructable because i didn't understand how to take it into account when i wrote this.

to have completely accurate data you need to calibrate for the positions, orientation, field of view/focal length, and distortion of both the camera and projector. the byo3d link at the end of the instructable takes all these things into account.
J3nc3k says: 3 years ago
Hi, I'm quite new to 3D reconstruction and I'm trying to use the unwrapping method you've introduced here, but I'm probably still missing something, even though the code is the same (just rewritten in C by me). My results: the wrapped phase (the first image) looks quite good, I think. But the unwrapped one doesn't look like the unwrapped phases I've seen so far. I don't know what's wrong, because as I said, I used your code as a template. Could you give me some advice?

One theory I have is that maybe it's just because I'm displaying the image with a different API, so the phase values are rendered as brighter spots than in your images (and because of that I can only see white). In fact, phase unwrapping should remove the 2-pi discontinuities from the image, and maybe that is done. So what would you recommend? Should I try to reconstruct the scene using the data I have right now and see what happens, or are these results bad and should I fix them first? I have already viewed some results in 3D, the point cloud, but it doesn't look like it should. Thanks for any replies.
kylemcdonald (author) in reply to J3nc3k 3 years ago
Hi J3nc3k, your unwrapped phase looks very incorrect, but your wrapped phase also looks incorrect to me. You want it to look exactly like this image. Parts of it look correct, but other parts look incorrect. Make sure you're loading the images in the same order, and using the same atan2 function. Otherwise the results will be wrong.
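For reference, a three-step decoder conventionally computes the per-pixel wrapped phase like this (a generic sketch, not necessarily Kyle's exact code; it assumes the three images correspond to phase shifts of -120, 0, and +120 degrees, so a different image order changes the result):

```cpp
#include <cmath>

// Wrapped phase from three intensities captured under sinusoidal
// patterns shifted by -120, 0, and +120 degrees. Returns a value in
// (-pi, pi]; unwrapping must still remove the 2*pi jumps afterwards.
double wrappedPhase(double i1, double i2, double i3) {
    return std::atan2(std::sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3);
}
```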
J3nc3k in reply to kylemcdonald 3 years ago
Ok, thanks :) I'll try to fix it and then I'll post here my results.
J3nc3k in reply to J3nc3k 3 years ago
So I've played with it a little bit and I've found out a few things. First, the wrapped phase image: now it's OK. The error was caused by a bad data type conversion (precision loss when doing some computations with uchar and float). The new wrapped phase image is attached below (the left one).

As for the phase unwrapping: the algorithm was OK. I didn't want to believe it wasn't, because I had checked the flood fill algorithm principle in general and had also read some articles about phase unwrapping, and I seemed to be lost, without a clue. But now I understand it all. I was half right about why the unwrapped picture looks like that, why the unwrapped part is all white. It was because in the API I'm using (OpenCV), you should convert the image data into an integral array and scale it by a proper value (to get values between 0 and 255) for it to be shown correctly (though that's not the only way to display it correctly). So all the problems were caused by data type and conversion issues. Thanks a lot for guiding me, Kyle :) The right image is the current unwrapped phase image.
JChol in reply to J3nc3k 2 years ago
I'm quite new to this topic and I'd like to ask you one question. I've rewritten the code in C++ and I've got a problem: after the wrapping stage my "phase" image looks just like yours, but when it comes to unwrapping it doesn't look as it should. Here is my question: what is the range between the minimum and maximum value in the "phase" array before the unwrapping stage? I tried to normalize the pixel values to keep them within 0-to-1 boundaries, and I've noticed that it influences the final result, but it still does not satisfy me. Now I don't know where the problem is: whether it is caused by bad pixel values or by a mistake in my code.
kylemcdonald (author) in reply to JChol 2 years ago
in general, atan2 returns phase values between -pi and pi.

if you want, you can normalize it to 0 to 1 after the atan, this can speed up the unwrapping.

here is my c++ implementation of a flood fill decoder for reference, maybe this will help:
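A minimal sketch of such a flood-fill unwrapper (not the original listing; it assumes the wrapped phase has already been normalized to units of one period, i.e. 0 to 1, as suggested above):

```cpp
#include <cmath>
#include <queue>
#include <vector>

// Flood-fill phase unwrapping sketch: starting from a seed pixel, visit
// neighbors breadth-first and shift each neighbor's phase by the whole
// number of periods that brings it closest to the pixel it was reached
// from. Assumes phase is stored in units of one period (0..1) and that
// true differences between neighbors are under half a period.
void unwrapPhase(std::vector<float>& phase, int w, int h,
                 int seedX, int seedY) {
    std::vector<bool> done(w * h, false);
    std::queue<int> todo;
    todo.push(seedY * w + seedX);
    done[seedY * w + seedX] = true;
    const int dx[] = {1, -1, 0, 0};
    const int dy[] = {0, 0, 1, -1};
    while (!todo.empty()) {
        int i = todo.front(); todo.pop();
        int x = i % w, y = i / w;
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            int j = ny * w + nx;
            if (done[j]) continue;
            // shift the neighbor by whole periods toward the current pixel
            phase[j] += std::round(phase[i] - phase[j]);
            done[j] = true;
            todo.push(j);
        }
    }
}
```

A real decoder would also skip low-quality pixels (where the stripe modulation is weak) so that noise in dark regions doesn't propagate through the fill.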