Structured Light 3D Scanning
8 Steps
This is the same technique used for Thom Yorke's face in Radiohead's "House of Cards" video. I'll walk you through setting up your projector and camera, and capturing images that can be decoded into a 3D point cloud using a Processing application.

House of Cards Google Code project

I've included a lot of discussion about how to make this technique work, and the general theory behind it. The basic idea is:

2. Rotate a projector clockwise to "portrait".
3. Project the included images in /patterns, taking a picture of each.
4. Resize the photos and replace the example images in /img with them.
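For reference, the images in /patterns are sinusoidal stripe patterns shifted by 120° from each other. If you ever need to regenerate them at a different size, a sketch like this would do it; the WIDTH and PERIOD values here are illustrative assumptions, not necessarily what the included files use:

```python
import math

WIDTH = 1024   # projector width in pixels (assumed)
PERIOD = 64    # stripe period in pixels (assumed)

def make_pattern_row(shift_deg):
    """One row of a vertical-stripe pattern; a full image is this row
    repeated once per scanline."""
    shift = math.radians(shift_deg)
    return [int(round(127.5 + 127.5 * math.cos(2 * math.pi * x / PERIOD + shift)))
            for x in range(WIDTH)]

# three-phase scanning projects the same sinusoid at -120, 0, and +120 degrees
patterns = [make_pattern_row(s) for s in (-120, 0, 120)]
```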

## Step 1: Theory: Triangulation

If you just want to make a scan and don't care how it works, skip to Step 3! These first two steps are just some discussion of the technique.

Triangulation from Inherent Features
Most 3D scanning is based on triangulation (the exception being time-of-flight systems like Microsoft's "Natal"). Triangulation works on the basic trigonometric principle of taking three measurements of a triangle and using those to recover the remaining measurements.

If we take a picture of a small white ball from two perspectives, we will get two angle measurements (based on the position of the ball in the camera's images). If we also know the distance between the two cameras, we have two angles and a side. This allows us to calculate the distance to the white ball. This is how motion capture works (lots of reflective balls, lots of cameras). It is related to how humans see depth, and is used in disparity-based 3D scanning (for example, Point Grey's Bumblebee).
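To make the two-angles-and-a-side idea concrete, here's a tiny sketch (the numbers are made up for illustration):

```python
import math

def distance_to_target(baseline, left_deg, right_deg):
    """Angle-side-angle triangulation: two cameras a known baseline apart
    each measure the angle from the baseline to the same target. The law
    of sines then gives the distance from the left camera to the target."""
    a, b = math.radians(left_deg), math.radians(right_deg)
    target_angle = math.pi - a - b   # the triangle's angles sum to pi
    return baseline * math.sin(b) / math.sin(target_angle)

# cameras 100 mm apart, ball seen at 60 degrees from each: an equilateral
# triangle, so the ball is 100 mm from each camera
print(distance_to_target(100, 60, 60))  # 100.0
```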

Triangulation from Projected Features
Instead of using multiple image sensors, we can replace one with a laser pointer. If we know the angle of the laser pointer, that's one angle. The other comes from the camera again, except we're looking for a laser dot instead of a white ball. The distance between the laser and camera gives us the side, and from this we can calculate the distance to the laser dot.

But cameras aren't limited to one point at a time; we could scan a whole line. This is the foundation of systems like the DAVID 3D Scanner, which sweep a line laser across the scene.

Or, better yet, we could project a bunch of lines and track them all simultaneously. This is called structured light scanning.
hilukasz says: Feb 4, 2013. 10:07 AM
how do you feel about using Processing vs C++ (oF)? I feel like Processing is REALLY laggy when dealing with things like this. Maybe I am missing something?
Dalia Nabil says: Nov 11, 2012. 1:11 PM
hi kyle,
I recently tried this code; it's amazing how you get a very good scan with just 3 pictures.
I still didn't get a good result for my scan, and I was wondering if you can help me with it,
but I don't know how to paste a picture on Instructables. Can anyone help?

luckyyyyyy says: Sep 29, 2012. 11:15 AM
Real-time Structured Light Virtual 3D Scanner scans virtual 3D objects in the virtual environment. http://3dracs.real3d.pk
procam says: Feb 14, 2012. 1:25 AM
Hi kyle,
I am working on an academic project which includes developing a 360-degree 3D phase-shift scanner and doing its error analysis.

I have a doubt related to optical triangulation:

I have to compute the coordinates of the center of projection of the camera sensor, which requires me to know its sensor size, since camera calibration gives me the distance between sensor and lens only in terms of the number of pixels per unit length along the X and Y directions.

But in general, such detailed information about cameras may not be available (e.g. the Logitech QuickCam Sphere AF), so is there any other approach by which I can do optical triangulation without needing the camera's physical parameters?
kylemcdonald (author) says: Feb 15, 2012. 12:36 AM
hi -- in general i find that the absolute units (like millimeters) do not come from the sensor size. i use information about the calibration pattern to determine absolute units. this same kind of information can be used to find the optical axis of the camera.

for a more advanced explanation, check out the byo3d notes http://mesh.brown.edu/byo3d/
sunlight305 says: Dec 15, 2011. 12:16 AM
Hi Kylemcdonald:
I have a question: is it necessary to calibrate the camera and projector? Did you calibrate your camera and projector, and if so, which tools did you use?
Can you tell me your email? I wish to communicate with you in the future.
Thank you. Merry Christmas to you!
kylemcdonald (author) says: Dec 15, 2011. 7:24 AM
to get metric data (in millimeter values, or some other real-world values) you need to calibrate. if you just want something approximately correct, you don't need to calibrate. i don't have any code that handles calibration here, but i will be releasing some other code soon that does.
sunlight305 says: Dec 13, 2011. 12:50 AM
Hi Kylemcdonald:
I have some questions about camera and projector calibration. I used the MATLAB toolbox (toolbox_calib) to calibrate the camera and projector, but there were errors: "Warning: View #1 ill-conditioned. This image is now set inactive.
Error: There is no active image. Run Add/Suppress images to add images".
Do you know the reason? Thanks.
kylemcdonald (author) says: Dec 13, 2011. 6:18 AM
i haven't used the matlab toolbox before so i don't know why it says that, sorry!
sunlight305 says: Dec 13, 2011. 5:50 PM
thank you all the same.
sunlight305 says: Dec 2, 2011. 10:34 PM
Hi Kylemcdonald:
I am working on this with yangjun1222, and I have tried what you said. It is now a little better, but it is still not good enough. These are the pictures:
kylemcdonald (author) says: Dec 3, 2011. 8:29 AM
that looks awesome! here's what i get when i load them into ThreePhase.

i don't know if i used the same file ordering as you, so you might need to flip the values from negative to positive.

if you're still worried about the 'waviness' in the scan, then you need to spend more time on gamma calibration. a linear gradient from the projector needs to appear linear to the camera. if you change the projector's mode to "cinema" (instead of "dynamic" or "graphics") this generally makes the gamma more linear.
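One way to check that linearity: photograph a projected linear ramp and fit a single gamma exponent to the measured brightness. A rough sketch (the sample values below are simulated, not from a real camera):

```python
import math

def estimate_gamma(projected, measured):
    """Fit measured = projected ** gamma in log space (least squares
    through the origin), using values strictly between 0 and 1."""
    num = den = 0.0
    for p, m in zip(projected, measured):
        if 0 < p < 1 and 0 < m < 1:
            num += math.log(p) * math.log(m)
            den += math.log(p) ** 2
    return num / den

# simulate a projector/camera chain with gamma 2.2 viewing a linear ramp;
# a result near 1.0 would mean the chain is already linear
ramp = [i / 10 for i in range(1, 10)]
photo = [v ** 2.2 for v in ramp]
print(round(estimate_gamma(ramp, photo), 2))  # 2.2
```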
yangjun1222 says: Nov 21, 2011. 5:40 PM
Dear Kylemcdonald:
I have tried your code and did everything according to your suggestions, but the result is not good enough (compared with the sample picture of the little boy). I used your horizontal pattern.
Can you tell me what camera and projector you used? Is it a DLP or a common projector? Must the camera be a digital single-lens reflex? And what about the positions
of the camera, projector, and subject?
I sit about 1 meter away from the projector, my face is in the middle of the projected image, and my camera is very near the projector (30 cm higher).
In the sample, there are about 17 stripes on the boy's face, so I sit very near the projector to get as many stripes as possible.

In my result, there is waviness. (In your sample, there is waviness on the boy's face too, but the peaks are very low.)
Thank you very much!

kylemcdonald (author) says: Nov 22, 2011. 8:05 AM
hi -- it would be easiest for me to help if you can post the images you're getting.

a DLP is ok. you just need to have a shutter speed longer than 1/60 second.

a non-DSLR camera is ok. you just need to be able to use a "manual" mode.
yangjun1222 says: Nov 23, 2011. 6:04 PM
Here are the images I took, about 3 pictures. (a student)
kylemcdonald (author) says: Nov 23, 2011. 6:14 PM
it looks like there is a lot of movement between the images. you need to make sure the scene is still, and the camera and projector are completely still. use a tripod if possible. it also looks like the contrast on the projector needs to be turned down and the exposure on the camera needs to be shortened so it is not overexposed. if you fix these things you should have a decent scan. you could also try raising the camera some more -- right now it's slightly higher than the projector, but if you raise it you might get better depth resolution.
yangjun1222 says: Nov 23, 2011. 8:20 PM
Thank you very much. I will use a white gypsum (plaster) doll, so there is no movement. (I used a tripod.) My camera is not good, so I tried to borrow one.
I think Gray code should make it easy to get good results, because it is "digital".
artoo3D says: Sep 11, 2011. 10:42 AM
Hi, I have another question about the method.
Doesn't the focal length of the camera have to be taken into account?
I have the impression - this also seems logical to me - that the resulting 3D objects are perspectively warped (I don't know if this is the correct expression in English).
Or is there a way - if I know the focal length - to "rewarp" the 3D data with a script (either in Processing or in a 3D application, in my case Blender)?

Regards,

artoo3d

kylemcdonald (author) says: Sep 11, 2011. 7:14 PM
focal length definitely needs to be taken into account. it's not described in the instructable because i didn't understand how to take it into account when i wrote this.

to have completely accurate data you need to calibrate for the positions, orientation, field of view/focal length, and distortion of both the camera and projector. the byo3d link at the end of the instructable takes all these things into account.
J3nc3k says: Jul 20, 2010. 3:18 AM
Hi, I'm quite new to 3D reconstruction and I'm trying to use the unwrapping method you've introduced here, but I'm probably still missing something, even though the code is the same (just rewritten in C by me). My results: the wrapped phase (the first image) looks quite good, I think, but the unwrapped one doesn't look like the unwrapped phases I've seen so far. I don't know what's wrong, because as I said I used your code as a template. Could you give me some advice? One theory of mine is that maybe it's just because I'm displaying the image with a different API, so the phase values are rendered as brighter spots than in your images (and because of that I can see only white). In fact, phase unwrapping should remove 2pi discontinuities from the image, and maybe that's done. So what would you recommend: try to reconstruct the scene using the data I have right now and see what happens, or are these results bad and should I fix them first? I have already viewed some results in 3D, the point cloud, but it doesn't look like it should. Thanks for any replies.
kylemcdonald (author) says: Jul 20, 2010. 6:43 AM
Hi J3nc3k, your unwrapped phase looks very incorrect, but your wrapped phase also looks incorrect to me. You want it to look exactly like this image. Parts of it look correct, but other parts look incorrect. Make sure you're loading the images in the same order, and using the same atan2 function. Otherwise the results will be wrong.
J3nc3k says: Jul 20, 2010. 7:20 AM
Ok, thanks :) I'll try to fix it and then I'll post here my results.
J3nc3k says: Jul 20, 2010. 11:20 AM
So I've played with it a little bit and found out a few things. First, the wrapped phase image: now it's OK. The error was caused by a bad data type conversion (precision loss when doing some computations with uchar and float). The new wrapped phase image is attached below (the left one). And about phase unwrapping: the algorithm was OK. I didn't want to believe that at first, even after checking the flood fill algorithm in general and reading some articles about phase unwrapping; I seemed to be lost, without a clue. But now I understand it all. I was half right about why the unwrapped picture looks like that, why the unwrapped part is all white: in the API I'm using (OpenCV), you should convert the image data into an integral array and scale it by a proper value (to get values between 0 and 255), and then it will be shown correctly (though this is not the only way to display it correctly). So all the problems were caused by data type and conversion issues. Thanks a lot for guiding me, Kyle :) The right image is the current unwrapped phase image.
JChol says: Jul 1, 2011. 5:50 AM
I'm quite new to this topic and I'd like to ask you one question. I've rewritten the code in C++ and I've got a problem: after the wrapping stage my "phase" image looks just like yours, but when it comes to unwrapping it doesn't look as it should. Here comes my question: what is the range between the minimum and maximum values in the "phase" array before the unwrapping stage? I tried to normalize the pixel values to make them stay within 0-to-1 boundaries, and I've noticed that it influences the final result, but it still does not satisfy me... Now I don't know whether the problem is caused by bad pixel values or by a mistake in my code.
kylemcdonald (author) says: Jul 1, 2011. 7:44 AM
in general, atan2 returns phase values between -pi and pi.

if you want, you can normalize it to 0 to 1 after the atan2; this can speed up the unwrapping.
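The wrap Kyle describes is one atan2 per pixel; a minimal sketch for a single pixel, assuming the three images are shifted by 120 degrees (an illustration, not the ThreePhase code itself):

```python
import math

def wrap_phase(i1, i2, i3):
    """Wrapped phase for one pixel from three intensities captured under
    sinusoids shifted by -120, 0, and +120 degrees. atan2 returns a value
    in (-pi, pi]; rescale it into 0..1 periods."""
    phase = math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)
    return (phase + math.pi) / (2 * math.pi)

# simulate one pixel lit at a known phase theta, then recover it
theta = 1.0
i1, i2, i3 = (0.5 + 0.5 * math.cos(theta + s)
              for s in (-2 * math.pi / 3, 0, 2 * math.pi / 3))
recovered = wrap_phase(i1, i2, i3) * 2 * math.pi - math.pi
print(round(recovered, 6))  # 1.0
```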

here is my c++ implementation of a flood fill decoder for reference, maybe this will help:

JChol says: Jul 1, 2011. 1:52 PM
Dear Kyle

I'd like to ask you one more question. Does it make any difference whether I pop elements of the toProcess vector from the beginning or from the end? When I started to pop the elements from the beginning of the vector, the final image started to look quite reasonable. Shouldn't it look like the one I've attached below?
kylemcdonald (author) says: Jul 1, 2011. 2:15 PM
this image looks perfect.

for flood fill, i don't think it matters whether you pop from the front or back. if one gives you better results, go for it!
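For a consistent wrapped field this is easy to check: popping from the front (BFS) or the back (DFS) assigns every pixel the same whole-period offset. A minimal flood-fill unwrapping sketch (an illustration of the idea, not the ThreePhase decoder):

```python
from collections import deque

def unwrap(wrapped, use_fifo=True):
    """Flood-fill phase unwrapping on a 2-D grid of wrapped values
    measured in periods (0..1). Starting from the center, each newly
    reached pixel is shifted by a whole number of periods so it lies
    within half a period of the neighbor it was reached from."""
    h, w = len(wrapped), len(wrapped[0])
    out = [[None] * w for _ in range(h)]
    sy, sx = h // 2, w // 2
    out[sy][sx] = wrapped[sy][sx]
    frontier = deque([(sy, sx)])
    while frontier:
        y, x = frontier.popleft() if use_fifo else frontier.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and out[ny][nx] is None:
                jump = round(wrapped[ny][nx] - out[y][x])  # whole periods
                out[ny][nx] = wrapped[ny][nx] - jump
                frontier.append((ny, nx))
    return out

# a smooth ramp that wraps twice across 8 columns: both pop orders
# recover the same continuous surface
wrapped = [[(x * 0.3) % 1.0 for x in range(8)] for _ in range(3)]
fifo = unwrap(wrapped, use_fifo=True)
lifo = unwrap(wrapped, use_fifo=False)
```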
JChol says: Jul 4, 2011. 10:48 AM
Thank you for your help, but isn't it strange that the colors change gradually from black to white while going from the bottom to the top of the image? Shouldn't the upper pixels (the ones around the top of the head) be a little bit darker?
JChol says: Jul 29, 2011. 12:56 AM
I almost forgot to thank you for your help. At last I managed to get the desired effect. Sadly I haven't generated my own pictures yet, which is why I decided to use your images, which I had found. I'm presenting my output point cloud below. If anyone is interested, I'm using the CImg library.
J3nc3k says: Jul 1, 2011. 7:09 AM
Hi JChol,

I hope you don't mind me answering your question on the 6th or 7th of July; I'm really sorry, but I'm leaving for a holiday in a few hours, so I have some stuff to prepare and pack.
When I return, I'll check my application and tell you my output values immediately. I haven't used this code for a while, so I'm not sure without running it again, but I think you shouldn't normalize the values used for the computations; use normalization only when showing the results, without changing the original data that you want to use further. In many cases, when your output image does not look like someone else's, it may just be a matter of a bad range.
Your resulting values could be in one range, but when showing results represented by double values, you need to map all the values into the range 0.0 to 1.0 (using a proper normalization process), because every value less than 0.0 is black and every value greater than 1.0 is white.

If this does not help and my help is still useful on the 6th or 7th of July, I'll check my code and tell you more.

Or you can also check the results by compiling and running my code posted online.

Sincerely,
J3nc3k
JChol says: Jul 1, 2011. 9:14 AM
Of course I don't mind. I'm using CImg in my implementation, and it is, let's say, tolerant of values less than 0.0 and bigger than 1.0, because when I display the image before and after normalization nothing changes at all. As I've written before, it matters later when it comes to the unwrapping phase. To show you how it influences the final result I've attached images below. The first one is after wrapping, the second is the unwrapped one without any modifications, and the last is after normalization before the unwrap phase.
luckyyyyyy says: Apr 21, 2011. 10:35 PM
Hi.. I read your nice comments about wrapping and unwrapping. I am also using OpenCV, and I think I have the same result as you had in your previous post. I am using the same atan2 function as described in the code, but got a slightly different result than you have. How did you make it correct? Which parameters are you talking about modifying? Where did you have the conversion problem? Kindly guide me; it would be great if you could just post some code relevant to the conversion.
Thanks.
J3nc3k says: Apr 22, 2011. 11:30 PM
Hi luckyyyyyy,

the conversion problems weren't at the wrapping/unwrapping stage.
When using OpenCV, if you do not get an expected result, I'd recommend checking the values in the image array you're trying to visualize. That's where my conversion problems showed up.
For example, when you're trying to visualize an image represented by double values, it should only have values between 0.0 and 1.0; all other values will not be correct for visualization.
Another thing (though I'm not sure if this is your case, because I've tried multiple implementations of the method we're talking about here) is that you have to check against 0 values before every division; sometimes I forgot to do this, and it caused the image to look bad because there were infinity values at some pixels.
Another thing that could cause the problem: you could have correct values, but for some reason they could all be close to 0 or close to 1. This could make the image look almost black or almost white, but when you scale the values into a better range it could look fine.
As you can see, all the problems I'm talking about are only visualization issues; they have nothing to do with the correctness of the method's implementation. So if you're sure that you're showing images with correct values, take another look at the implementation of the method, and if you're sure the implementation is all right, take another look at the visualization part.
And about the code: I'll discuss it with Kyle, ask him if there's a place where it could be published, and then I'll tell you.

Sincerely,
J3nc3k
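The display-range fix described above is just a min-max rescale applied to a copy of the data; in outline (the function name here is illustrative):

```python
def to_display(values, lo=0.0, hi=1.0):
    """Min-max rescale a copy of the data into [lo, hi] for display only;
    keep the original values for any further decoding."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo] * len(values)   # flat input: avoid dividing by zero
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

# raw atan2 output in (-pi, pi] mapped into 0..1 for viewing
shown = to_display([-3.1, -1.0, 0.0, 2.5, 3.1])
```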
luckyyyyyy says: Apr 28, 2011. 12:03 AM
Hello J3nc3K....

i tried a lot but still couldn't get the final result you got. Phase wrapping is OK, I think; I just need to optimize the intensity. But the phase unwrapping is really bad... I don't know what to do. Kindly correct my OpenCV code if I am wrong.

The code is really short; it will not take too much time... :) Kindly review it, and correct me especially on the unwrapping.

The code can be found at the link below :)

http://cid-cfc279f571a92006.office.live.com/view.aspx/.Public/code.docx

Could you also tell me how to optimize the intensity in phase wrapping?

really thanks.. :)

J3nc3k says: Apr 28, 2011. 12:36 AM
Hi luckyyyyyy,
about the images: the unwrapped phase map looks correct; it's just displayed a little bit darker. I guess this is not a problem with the phase unwrapping; just try to brighten the image before displaying it, but change the values only for display. I'm sorry, but I do not have time to look at your code right now. I'll look at it in a week or two, if you do not need it ASAP and haven't solved it by then.

That's all I can do for you right now. Unfortunately, I'm in a rush these days.

Sincerely,
Svoboda Jan
luckyyyyyy says: Apr 30, 2011. 12:46 PM
hi Svoboda..

Thanks.
lucky
luckyyyyyy says: Apr 28, 2011. 1:22 AM
Hello Svoboda..

It's really helpful for me.. thanks a lot. Yes, I think so... I need to make it a little bit brighter just for the display.
It would be great if you uploaded your OpenCV code; then I will model my code on it. I will be waiting for your code. Thanks. :)
J3nc3k says: May 2, 2011. 12:31 PM
Hi luckyyyyyy,

here's the code.
You'll probably be most interested in the ReconstUtil.cpp file.
www.stud.fit.vutbr.cz/~xsvobo80/ReconstApp.zip
I cannot guarantee that you'll be able to compile it without any errors; it's a NetBeans project built on Ubuntu 10.04.

PS: I have to apologize to you. I'm really sorry for this big delay, but I haven't stopped since Thursday last week. Last weekend wasn't a weekend for me at all.

Sincerely,
Svoboda Jan

luckyyyyyy says: May 10, 2011. 4:17 AM
Thanks a lot, Jan... You helped me a lot; I can hardly believe I achieved what I wanted. Your code really helped me so much. Actually, we needed to submit a progress report to the project supervisors, so I am really sorry for the delay in my reply. I also wanted to study your code. So now I have successfully implemented what I wanted to do. I'm really thankful to you. :)
And I think your field and mine are the same, so why don't we share email addresses? Mine is f_u_7@yahoo.com. :) We can exchange more ideas in the future.
Tell me, have you tried to implement this with unknown phase-shift fringes/grids? And what did you do for real-time capturing and generation of the 3D point cloud? If you did, please share your ideas, or guide me... :)

Thanks alot.

Regards:
Lucky
luckyyyyyy says: Apr 23, 2011. 9:44 AM