Introduction: 3D Environment Laser Scanner From Scratch
Hello and welcome to my first instructable! Here we will go together through the affordable journey of how to make your very own 3D Laser Scanner.
If you like this project, then I'd be super grateful for your vote in the Full Spectrum Laser contest!!
There are a few (very good) 3D scanner tutorials on here already, but this one is a bit different for two reasons:
- No pre-made 3D scanning software was used. Everything here is done with Arduino and Processing, both free, open-source microcontroller/graphics programming environments.
- Whilst most other scanners use a turntable to scan small objects, this one turns the camera and laser, making it able to scan around itself through 360 degrees.
This means that it can be used for things such as scanning a room or outdoor environment for level design in a game. It can also be used for scanning people's faces and bodies (3D selfies).
In addition to that, the fact that it's all made from scratch makes it very customizable. So for example you can change the physical size of the rig to scan a larger area. You can also play around with the scan resolution and speed.
If you don't fancy reading, then you can watch the movie version I prepared! I recommend turning on the auto generated subtitles if you're after a laugh.
If you are a keen maker, then you probably already have most of the
materials needed and this should cost around £20-£25 for the less common parts (stepper motor/driver, webcam and line laser). It doesn't have to be a fancy webcam; standard definition is actually better as it uses less processing power. An HD webcam is completely unnecessary, but if you have one already then use it!
I haven't put in prices of individual parts because as you all know it can vary hugely depending on how local you want to buy it. If you are prepared to wait over a month for all the parts, you could probably get everything for about £20 or £30.
And another thing, you don't actually need a stepper motor. You can use any motor, but it MUST have some kind of position feedback, like a servo or a DC motor with a rotary encoder. I eventually plan on using a geared DC motor with an encoder, as this would give smoother movement than the stepper and more torque from the gearing.
And another other thing, you also don't need the LCD screen or the potentiometers. They are just for making the controls a bit easier.
- External webcam
- Line laser module
- NEMA 16 Stepper motor and pulley
- Timing belt (~10cm)
- L-shaped bracket
- Terry tool clip (9mm)
- 3 x AA battery pack
- Some strips of wood
- Nuts and Bolts
- Zip ties
- Arduino Uno
- 16x2 LCD screen (optional)
- 2 x Potentiometers (optional)
- Stepper motor Driver (A4988)
- 12v Power supply
- 100uF Capacitor
- Jumper wires
- 4 x Screw terminal block connector
- Wood saw
- Screw driver
- Wire clippers
- Processing 3.05b
This older version of Processing needs to be used until the 3D renderer is fixed for the new versions.
Step 1: Mechanical Build
The mechanical side of things is pretty straight forward and open to adaptation. Keep in mind that the exact dimensions at this stage don't really matter. However! Exact measurements of whatever you have done ARE important for later on. I'll put the dimensions I used in brackets, but you don't have to use the exact same ones. If you don't fancy reading this, the movie version is out, it's in PART 1 of the video series.
- Get a piece of wooden baton about 40cm in length (L40cm W3.5cm H1cm). This is the arm.
- Drill two 3mm holes 4cm from one end of the arm.
- Place the webcam on the end of the arm and use the two holes to secure it down using zip ties. Angle the camera towards the inside of the arm by about 30 or 40 degrees.
- Drill a 3mm hole 1cm from the opposite end and use an M3 bolt to fasten down the Terry tool clip.
- Snap the line laser into the tool clip and align it to face forward as straight as possible.
- Drill two 5mm holes 9cm and 17cm from the laser end of the arm.
- Get a small chunk of wood that's at least 1.5cm wide (L10cm W1.8cm H1cm).
- Drill two 5mm holes 8cm apart and insert an M5 bolt of length 4cm into each.
- Place the 3xAA battery pack inside the two holes on the arm and clamp down using the block with bolts. Make sure you leave enough space for the battery switch if there is one.
- Twist together the laser wires to the battery pack wires (red to red and black to black). Or solder for a better connection.
- Switch on the laser and align the lens/laser so that it's focused on the subject and perpendicular to the arm.
- Drill a 4mm hole in the very centre of the arm.
- Insert the stepper motor shaft into the hole. Make sure it's a tight fit, you may need to adjust the hole size by wiggling the drill bit or using a bigger bit.
- Place the pulley on the motor shaft and tighten the grub screw into place so there is no slip.
- Drill two 3mm holes to the camera side of the arm for the L-bracket so that it sits about 2.5cm away from the motor shaft's centre.
- Fasten down the bracket using two M3 bolts.
- Get another small block of wood about 4cm in length (L4.2cm W1.5cm H1cm)
- Drill another two 3mm holes in the block so that it can be fastened to the inside of the L-bracket.
- Loosely attach the block to the bracket using M3 bolts.
- Take the length of timing belt. Wrap it around the stepper pulley, place the ends of the belt together on the tooth side and slide them between the bracket and the wooden block.
- Fasten down the wooden block making sure that the belt is tight around the pulley.
Put that all to one side for a minute, now we'll make a clamp for the stepper motor to sit in to attach it to the tripod.
- Cut two short lengths of wood (L7.5cm W2.5cm H2cm).
- On the wider side, drill two 4mm holes about 5.5cm apart (in both pieces).
- Get two M4 bolts 9cm long and push them through the holes connecting the two blocks of wood.
- Place nuts on the ends and keep the blocks far apart.
- Take a short length of the baton used to make the arm (L8.5cm W3.5cm H1cm).
- Place the blocks/bolts on the baton so that one block sits at either baton end.
- Insert two screws through the underside of the baton into ONE of the two blocks to fasten it down. Leave the other block free to slide.
- Drill an M4/M5 hole (depending on the tripod mount) in the centre of the baton.
- Screw the assembly onto a tripod mount.
- Go back to the arm/webcam/laser assembly and insert the stepper motor in the newly made clamp and tighten the bolts to fix it into place.
It should look something like this [full build], if not then.... Start again.
Step 2: Electrical Build
Again, pretty straight forward. You're connecting an Arduino to a stepper motor driver, an LCD screen and a couple of potentiometers. This is also explained in PART 1 of the video series.
I'm using an I2C LCD screen, just because it uses fewer wires. Connect the LCD's VDD to 5v, GND to GND, and match the SDA and SCL ports to the Arduino's SDA and SCL ports (written on the underside).
The potentiometers are just connected at 5v and ground at either side with the central pins in each going to analogue pins 0 and 1.
The stepper motor driver VDD and GND are connected to the Arduino's 5v and ground respectively. And VMOT and the other GND are connected to a 12v power supply with a 100uF cap in parallel to buffer the voltage spikes from the stepper. Connect RESET to SLEEP. And wire up STEP to pin 10 and DIR to pin 9.
For connecting the motor to the driver, find the matching coil wires by holding one pair together and turning the motor. If you feel resistance, then they're a pair. Connect one motor pair of wires to 1A and 1B and the other pair to 2A and 2B.
Step 3: Arduino Program
All we're doing here in Arduino is telling the stepper motor to move to a certain angle, then move back again. Whilst doing this, it sends its current angle to Processing through serial communication. For more detail you can watch the video or read through the comments in the code, but I'll describe the basics here.
So if we skip through all the setting up and get straight into the loop, first we read the first potentiometer and map its value to both the delay between each step and a speed value for the LCD display. The delay and 'spd' represent the same thing: the delay actually controls the motor speed, whereas 'spd' is just for showing that speed on the LCD screen. Then we map the second potentiometer to 'angleEnd', which is the furthest angle we want the stepper to go to.
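As a minimal sketch of that mapping in plain C++ (not the actual Arduino sketch; the 3000 to 500 microsecond delay range is an assumed example, not my real values):

```cpp
#include <cassert>

// Same linear interpolation Arduino's map() performs, in integer maths.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Example: a pot reading of 0..1023 mapped to a step delay of 3000..500 us.
// A smaller delay means a faster sweep, so turning the pot up speeds it up.
long potToDelay(long potReading) {
    return mapRange(potReading, 0, 1023, 3000, 500);
}
```

Note the output range is inverted (3000 down to 500) so a higher pot reading gives a shorter delay.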
'pass' is the number of times the scanner has rotated to its final angle or back to its starting point. So going from 0deg to angleEnd is one pass, and returning to 0deg is a second pass. We check whether the number of passes done is smaller than the number we want; if so, we set the stepper's direction. We can do this with the modulo operator '%', which lets us check whether the number of passes is EVEN or ODD. If 'pass%2' is equal to 0, then 'pass' is EVEN and we need to rotate forward. If it's equal to 1, then it's ODD and we need to rotate backwards.
In the branch that returns it to its starting point (the one for an ODD number of passes), we can put in a message to send to Processing telling it that we've done a pass and want it to stop what it's doing; we're done scanning! So we send the angle 500, which obviously isn't a coordinate because it's more than 360.
So we have set the motor's direction but not moved it anywhere. To do that, we send a short 300 microsecond pulse to the step pin on the stepper driver.
Next we can increment the 'pass' variable IF the angle has gone beyond either 'angleEnd' OR its starting position (zero degrees).
Then we actually send the current angle to Processing.
And then we write a load of stuff for the LCD display. The reason I've put a bunch of blank spaces before each value is so that a large number like '150' is cleared before a smaller number such as '5' needs to be shown. If I didn't do this, it would show up as '550' because the '150' is still lingering in the background. Go away, big number.
Finally we can put in 'del' as the delay which determines how long it has to wait before doing all of that again. This determines the speed of the motor.
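The whole loop above (direction from pass % 2, stepping, incrementing 'pass' at either end, and the 500 sentinel) can be simulated away from the hardware. Here's a hedged C++ sketch of just the control flow; the step size, sweep angle and pass count are illustrative values, not my exact settings:

```cpp
#include <cassert>

// One simulated step of the sweep. step() returns the angle that would be
// sent to Processing over serial, or 500.0f as the "scan finished" sentinel.
struct Sweep {
    float angle = 0.0f;      // current motor angle in degrees
    int pass = 0;            // completed half-sweeps
    float angleEnd = 90.0f;  // would come from the second potentiometer
    float stepDeg = 1.0f;    // degrees per step (assumed)
    int passesWanted = 2;    // out and back = one full scan

    float step() {
        if (pass >= passesWanted) return 500.0f; // sentinel: tell Processing to stop
        if (pass % 2 == 0) angle += stepDeg;     // EVEN pass: sweep forward
        else               angle -= stepDeg;     // ODD pass: sweep back
        if (angle >= angleEnd || angle <= 0.0f) pass++;
        return angle;
    }
};
```

Running it to completion takes 90 steps out and 90 back before the sentinel appears, which matches the two-pass scan described above.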
Step 4: 1st Processing Program
In the first Processing sketch we are going to look at what the webcam is seeing and extract all of the red pixels above a threshold and save them to another image as white pixels. Then we'll tidy it up a bit and end up with a thin white dotted profile of the scanned subject. We then save those white pixel X-Y coordinates to a text file along with the motor angle at each frame.
Again, you can just watch PART 3 of the video series or look through the comments in the sketch but I'll outline what's going on here.
Again, I'll skip to the important bit (draw loop). So first off, we need to create two images that are the same size as the webcam feed, 640 by 480. Set the background to black.
Now we need to print the motor angle to the text file, except we haven't read it from the Arduino yet? It's fine, because we know it's zero to start with, and by the next time round we will have read it! It needs to be done this way to avoid writing a load of garbled characters to the text file. BUT before we write this angle, we need to precede it with ';'. This is so the next Processing program knows it's dealing with a new frame from the webcam when it sees a ';'.
Next we need to check whether a new frame from the webcam is available; if there is, we read it! This saves a webcam frame to the video object and we can get individual pixel values using video.pixels[i], where 'i' is the pixel number running from the top left along to the right side of the screen and down. So it starts at 0 and goes up to 640*480 - 1 = 307199.
Here we read the serial port to get the motor angle from the Arduino. First we check whether there are four or more bytes in the serial buffer; a float is four bytes, so we need to wait until the whole float is ready to read. We then read the incoming data until we hit a newline and save it as a string. After that, we convert it to a float and save it to 'motangle', but just before that, check that 'myString' actually saved anything.
Now the fun bit. Create a for loop that repeats once for every pixel on the webcam. Check whether the current pixel has a RED value that is more than a certain threshold value. Here you determine the sensitivity of the scanner. The lower the threshold (as low as 1), the more sensitive the scan; use this for very dark scenes where the laser light is very dim (turn out all the lights for this). A higher threshold can be used where there is other light in the scene (keep it minimal though) and you only want the bright line picked out. SO! If the red is more than this value, place a white pixel at the same pixel coordinate in image 1. This gives a black-and-white image where the red bits are white and everything else is black.
If we used this image in the 3D program, it would almost certainly crash. It's far too many data points. It would also give a very thick surface finish to everything because the line is fuzzy and wide.
To fix this, make another for loop the same as the previous one, then put a while loop inside it. The while loop only runs when it comes across a white pixel in image 1 AND it's still within the image AND it's on a row that's divisible by 5. This last condition gives us the dotted effect; you need the modulo operator for it, and the number you use determines the vertical resolution of the scan. Watch the video for a better explanation of this.
So in this while loop, all we do is count up using 'k' (and 'i' as well, because the for loop isn't getting a chance to do it). This loop repeats as long as it's in a row of white pixels in image 1, so 'k' counts how many are in a row. Once it hits a black pixel again, it jumps out of the loop and plops a white pixel into image 2 in the middle of the row it just ran through.
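That run-and-midpoint logic can be sketched on its own, away from the Processing pixel array. A minimal C++ version working on a single row of pixels (the helper name is mine):

```cpp
#include <vector>
#include <cstddef>

// For one image row, return the column index of the midpoint of each
// horizontal run of white pixels: one dot per run, as in image 2.
std::vector<size_t> runMidpoints(const std::vector<bool>& row) {
    std::vector<size_t> mids;
    size_t i = 0;
    while (i < row.size()) {
        if (row[i]) {
            size_t k = 0;                          // run length, like 'k' in the sketch
            while (i + k < row.size() && row[i + k]) k++;
            mids.push_back(i + k / 2);             // plop a dot mid-run
            i += k;                                // jump past the run
        } else {
            i++;
        }
    }
    return mids;
}
```

A thick fuzzy laser line collapses to one pixel per row this way, which is what keeps the 3D sketch from drowning in data points.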
Then you need to print the X and Y-coordinates of this new white pixel to the text file! Remember to separate the coordinates with a ',' so the next sketch knows what's what. To get the X-coordinate, we use the modulo operator again; it gives us the column that the pixel is on (i%640). For the Y-coordinate, we need the row it's on, which we get just by dividing the pixel number by the width (i/640). Integer division rounds the answer down, which is perfect because it gives us the exact row it's on!
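That index-to-coordinate trick is just integer arithmetic on the pixel number; a tiny C++ illustration:

```cpp
#include <cassert>

const int W = 640; // webcam frame width in pixels

int pixelX(int i) { return i % W; } // column within the row
int pixelY(int i) { return i / W; } // row (integer division rounds down)
```

So pixel 0 is the top-left corner and pixel 307199 (the last of 640*480) lands at column 639, row 479.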
Then we can print the images to our render window to see what's going on. I put the webcam feed on top, and image 2 below so I can see the final result. For placing the images, the first one starts at 0,0 (top left of the window) and the next image at 0,480 (half way down on the left).
We then need to check the motor angle to see if it's sending the secret code telling us to stop the sketch. Remember, we send an angle of 500 when it wants to stop, so we just check whether it's more than 450. If it is, we flush the text file buffer, close the file and exit the program.
Step 5: 2nd Processing Program
Remember all that 3D vector maths from school you thought you'd never use? Well, you still don't have to use it, because I've done it for you.
In this final part we will write another Processing sketch that takes those white pixel coordinates and projects them onto the plane created by the line laser. Each white pixel becomes an imaginary line coming out of the camera in 3D space. Where each line intersects the imaginary laser plane, a data point is plotted. This is repeated for every white pixel in every webcam frame, eventually building up a 3D point cloud. Just watch the video...
The only part of the setup I'll mention here is the 'rotcam' parameter. This is where you need to insert the angle of the webcam, where 0deg is facing straight forward, and 90deg is looking at the laser. It should sit between 10 and 40 degrees depending on what you're scanning. Small angles for far away objects and larger angles for close ones.
To start with in the draw loop, we need to expand the clipping planes using 'perspective'; this stops very far away and very close things from disappearing. Then we create 3 lines to represent the X, Y and Z-axes and make them red, green and blue.
Next we can set up our camera that we use to look around the 3D space. We can use sin and cos along with the mouse's X-position on the screen to create a turntable effect to view the subject more easily. Play around with this accordingly to get something comfortable to use.
So now we're jumping into the loop that does all the maths. However we only want it to run IF any key is pressed. This means it only has to do the calculations once instead of repeating over and over.
Here we load the text file with the white pixel coordinates and save it to a string vector 'myString'. Honestly, I don't know why it needs to be a vector; it only uses the first element, but it refuses to work if it's just a string. OKAY, we then need to make massive empty float vectors for the X, Y and Z coordinates, where each will be filled with the corresponding component of every single data point in the point cloud.
'myString' needs to be divided up into individual webcam frames using splitTokens with ';'. This rewrites 'myString' as a vector with the coordinates from one camera frame in each element.
We can now jump into a for loop that repeats once for every camera frame saved in myString.
Next we make another string vector called 'stringPart', which splits 'myString' up even further, into the individual X and Y white pixel coordinates, one in each element. This only holds one frame at a time.
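For reference, the text file format as described (frames separated by ';', values separated by ',', the motor angle first in each frame) can be parsed without Processing at all. This is a hedged C++ equivalent of those splitTokens steps; the exact separators are my reading of the description and the helper names are mine:

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <sstream>

// Split on a single-character delimiter, dropping empty pieces
// (Processing's splitTokens also drops empties).
std::vector<std::string> splitTokens(const std::string& s, char delim) {
    std::vector<std::string> out;
    std::stringstream ss(s);
    std::string piece;
    while (std::getline(ss, piece, delim))
        if (!piece.empty()) out.push_back(piece);
    return out;
}

struct Frame {
    float motAngle;            // first value in each frame
    std::vector<float> coords; // x,y pairs of white pixels
};

// Parse "...;angle,x,y,x,y;angle,x,y;..." into one Frame per webcam frame.
std::vector<Frame> parseScan(const std::string& text) {
    std::vector<Frame> frames;
    for (const auto& frameStr : splitTokens(text, ';')) {
        auto parts = splitTokens(frameStr, ',');
        if (parts.empty()) continue;
        Frame f;
        f.motAngle = std::stof(parts[0]);
        for (size_t i = 1; i < parts.size(); i++)
            f.coords.push_back(std::stof(parts[i]));
        frames.push_back(f);
    }
    return frames;
}
```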
The motor's current angle comes from the first element of the 'stringPart' vector, converted to a float and saved to 'rotmot'.
For this next bit, our 3D space starting point (0,0,0) is the centre of the motor's shaft on the surface of the arm.
Here we use a function called pushMatrix. This is used to change our local position in 3D space; it basically moves where the centre of our 3D universe is. We can then use the modelX/Y/Z functions to find the REAL global coordinates of a point after pushMatrix translations and rotations. Very useful stuff! So within this first pushMatrix, we rotate in Y by the motor's current angle.
We then go into another pushMatrix (one within the other) and translate from the motor shaft's centre along the arm to the laser. Here we can create the equation for the line laser's plane. When I say laser plane, I mean the plane containing every point the laser can shine on, so it passes through the laser and through anything it illuminates.
To make the plane equation, we need two points that make a line perpendicular to the laser plane. This is the plane's NORMAL VECTOR. The equation is derived from this and one point on the plane.
We then use popMatrix to undo what the pushMatrix did. We only use it once, so we go back to the motor's centre but still with its rotation transformation.
In this next part, we work within another for loop that runs once for every white pixel, so make it the length of stringPart BUT increment it in twos, because we want to use the X AND Y coordinates together in each iteration.
So now we're working with a single white pixel, which will become a single data point. In this loop we first need to turn the white pixel's X and Y coordinates into angles that define the imaginary projection line out from the camera's centre of view... check the images to see what I'm talking about.
Now we use pushMatrix again and move from the motor's shaft along the arm to the camera's centre. Then we rotate in Y by the camera's angle, and finally we rotate in X and Y again using the angles described in the last paragraph. So now our local coordinate frame sits in the camera, aiming straight out along a single white pixel's projection line.
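The pixel-to-angle step assumes the field of view is spread evenly over the pixels (the same assumption I flag in Step 7 as a source of edge distortion). A small C++ illustration, assuming a 640x480 frame and the 50 degree horizontal FOV quoted in the webcam's product description:

```cpp
#include <cassert>

const float FOV_X = 50.0f;                    // assumed horizontal FOV, degrees
const float FOV_Y = FOV_X * 480.0f / 640.0f;  // same degrees-per-pixel vertically

// Degrees off the camera's centreline for a white pixel at (px, py).
// The centre of the frame (320, 240) maps to (0, 0).
float pixelAngleX(int px) { return (px - 320) * FOV_X / 640.0f; }
float pixelAngleY(int py) { return (py - 240) * FOV_Y / 480.0f; }
```

So a pixel on the far right edge sits 25 degrees off-centre, under this constant-angle-per-pixel assumption.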
Now we make the projection line in the same way we made the normal vector for the plane. Make a point at the camera's centre and another point out some distance along the projection line. From this we make the equation for the line.
Here we pop the matrix again to return to the motor's shaft with the motor's rotation.
Now we use a rearranged version of the plane equation with the equation of the line inserted into it, and solve for 't'. See the rearranged equation in the images. Then stuff the result for 't' into the line equations for X, Y and Z. So now we have the XYZ coordinates of a single data point! Shove it into the 'drawpoint' vectors which hold every point.
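The underlying algebra is a standard line-plane intersection: with a plane normal n through point p0, and a projection line c + t*d, solving n . (c + t*d - p0) = 0 gives t = n . (p0 - c) / n . d. A self-contained C++ sketch of that calculation (the vector values are illustrative, not my calibration):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Intersect the ray c + t*d with the plane through p0 having normal n.
// Solving n . (c + t*d - p0) = 0 for t, then substituting back into the ray.
// Assumes the ray is not parallel to the plane (n . d != 0).
Vec3 intersect(Vec3 c, Vec3 d, Vec3 p0, Vec3 n) {
    float t = dot(n, sub(p0, c)) / dot(n, d);
    return add(c, scale(d, t));
}
```

For instance a ray from the origin along (1,1,0) hits the plane x = 2 at the point (2,2,0), which is exactly the "stuff t back into the line equations" step.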
Use popMatrix again to return to the very start of the global coordinate system without any translations or rotation transformations.
So now we have 3 huge vectors for the X, Y and Z components of every data point, but haven't actually drawn the point cloud. So we can do that now. Use a for loop to go through every data point, making it as long as 'drawpointX', and in the loop use 'point(drawpointX[i], drawpointY[i], drawpointZ[i])'. This plots a point at every intersection between a white pixel's projection line and the laser plane.
That's it, well done! Just hit run and press any key to render the scene. Make sure the file you're trying to read from is saved in THIS sketch's folder.
Step 6: Top Tips
There are a few things you can do to improve your scans...
You can change the threshold of the red pixel value to alter the sensitivity. However, this means that if the webcam sees anything with red in it, then orange, pink, yellow and white may be considered red, so you might scan something you don't want to. A good example of this is the moon (white) in the background of a scan messing things up. One way to help prevent this is to include in the statement (line 50 of the first Processing sketch) another condition that green and blue must be BELOW a certain threshold as well. BUT! The very bright centre of the laser line sometimes appears white and you don't want to lose that... It takes some experimenting.
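As a sketch, the combined condition might look like this in C++ (the threshold numbers are examples to experiment with, not my final settings):

```cpp
#include <cassert>

// True if a pixel is likely part of the laser line: strongly red, and not
// too much green or blue (so white-ish things like the moon are rejected).
// redMin and gbMax are the two thresholds to tune for your lighting.
bool isLaserPixel(int r, int g, int b, int redMin = 150, int gbMax = 120) {
    return r > redMin && g < gbMax && b < gbMax;
}
```

Lower gbMax to be stricter about white, but remember the bright centre of the laser line can read as near-white and may be lost if you go too far.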
Another top tip. Ideally you want the white line in the filtered images to be as central as possible. So try setting the camera's angle to something that keeps the subject's laser-line profile close to the middle of the screen. This is because there is a slight distortion effect when the laser scans close to the screen edge, probably because I haven't properly calibrated the pixel angles.
Rotating from left to right is pretty good for scanning a room, but it gives quite a narrow field of view. What if I want to take a selfie or scan a tall, slender object? Just tilt the scanner 90 degrees and then scan. You will need to alter the 3D sketch accordingly by rotating the scene by 90deg in the Z axis (radians) just before you put in the motor angle rotation.
If your render looks stretched out or very flat, it's probably because you have put in the wrong camera angle. Try adjusting it between 8deg and 45deg and see what looks right.
Step 7: Future Improvements
There are quite a few things I'd like to do to improve this project.
First of all I need to fix the bending distortion that appears at the edges of the images. This is most likely because I never measured the actual webcam viewing angle; I just read it from a product description as 50deg. I will measure it properly and investigate whether the angle between each pixel is the same for all pixels. There may be some parabolic relation between them, whereas I'm currently assuming it's constant.
I am currently making a project box to stuff all the electronics in. I'm having to use a custom-made Arduino to fit in the tiny space. It also incorporates a multi USB adapter so that only one USB plug goes from the box to the laptop. If you are doing this, I recommend using a Terry tool clip for clamping it to a tripod leg (my brother's idea)!
The stepper motor works well for this project, but it's a bit too jumpy for getting really good scans, and the microsteps can be unreliable. I'm currently waiting on receiving a geared DC motor with a rotary encoder. This will need a slight change in the Arduino code, but should run a lot more smoothly and provide more torque.
Another interesting thing to do might be to make a colour gradient for the rendered data points. This could make the image stand out a lot better and give a good perception of depth.
I have a few other ideas that I'm pretty excited about, but I'm reluctant to share them with you just now because I don't want to cloud your creativity... Or something like that.
Anyhoo! Thanks very much for taking the time to check out this tutorial. I really hope you get the chance to make your own. Please let me know if you do!!
Please don't hesitate to ask questions! If you don't understand something, it's because I haven't explained it well enough.
And if you liked this project, please vote for me in the Full Spectrum Laser Contest! Thanks!!
Runner Up in the Full Spectrum Laser Contest 2016
Question 2 years ago
Hi, I'm new to programming and electronics, and I'm trying to take this up as a hobby. Is anyone able to assist with the electronics setup?
3 years ago
Awesome. Simple and efficient. Here's a question.
Instead of rotating. I want to scan antique furniture. Any suggestion on making this portable? Does it need the Arduino to capture the scan? I imagine it does, in addition to making the stepper motor move precisely.
If it were handheld, would the images (cloud points) be screwed up from slightly different distances or angles? I want to reproduce items in museums or historic homes.
Second question, Can this be designed to go up and down like an elevator instead of round and round?
This is really well thought through. I can't afford $6000.00 USD for a handheld scanner. That's way beyond a retired guys budget. Any help would be gratefully accepted. Thanks for a great instructable. You really rocked it with this.
Question 4 years ago on Step 2
If I use a NEMA 17 stepper motor in place of the NEMA 16 (I think the holding torque is the same), will the program written for the NEMA 16 work the same for the NEMA 17? Or will some modification be needed? Let me know.
Question 4 years ago
Hey man! Can you tell me a bit more about what this Processing 3.05b is?
5 years ago
Do you think using a bright white line (relatively dark env) would enable colour identification as well?
5 years ago
What is the purpose of the LCD screen and potentiometers.
Reply 5 years ago
They're just to have a bit more control over the scan so you can set the sweep angle and speed. The screen is just to see what what these are set to. Not required at all if you don't mind setting all that stuff in the Arduino sketch.
5 years ago
Hey Callan, thanks for the tutorial. Unfortunately I'm having a bit of trouble running the first program "FX8JPFIIL1W&CVN" (I haven't run the second one yet). I have everything built, plugged in, and the arduino serial monitor open. But whenever I go to run the first program it opens a new window and then crashes my computer with an error message: "APC_INDEX_MISMATCH." I'm using Processing 3.3.3, and I've installed the hardware and video libraries. I haven't used processing before and it's a bit difficult to search for help online since all the results are for other things. Would you please assist me with this issue?
Reply 5 years ago
Fixed it, my logitech camera just didn't like my usb hub... lol
5 years ago
I get a point cloud in Processing, but how can I merge all of the points? MeshLab and CloudCompare don't work with my project. Could someone help me?
5 years ago
Thanks for the awesome tutorial! I was wondering, could we use another microcontroller for this project? Do you have any suggestions? Thank you in advance.
Reply 5 years ago
Yeah sure, as long as it can communicate through the serial port to a computer so it can talk to Processing. I'm afraid I'm only familiar with Arduino so can't suggest any others.
5 years ago
Thanks for this awesome tutorial! I'm in 9th grade and doing this project for my coding class and I wanted to know if I could use a Nema 17 stepper motor instead of the 16. Thanks!
5 years ago
Hi, thank you for this great tutorial
First I want to ask if I can use this scanner as a handheld scanner so I can pick up the scanner and turn around the object to scan it.
Second, I want to know the references that you have used to make this scanner, and if you know any advanced research in this field please let me know.
Thank you again :))
Reply 5 years ago
1. I suppose you could; you would need to somehow tell the program exactly where the scanner is in space and its orientation. This would be quite hard to do accurately. You could write a separate camera tracking program that follows the position of the scanner and passes that information to the original program, as well as using a gyroscope with the Arduino to track the orientation. It sounds a bit involved and I'm not sure it would be very accurate, but I am sure it would be possible.
2. Erm, not a lot to be honest. Just the basic stuff from the Arduino and Processing websites, with a couple of additional things from Youtube for using the webcam and the 3D environment in Processing. Daniel Shiffman has some very useful videos, but I think I used this one too >> https://www.youtube.com/watch?v=aDHh2OJMnsU&t=0s
6 years ago
Hi there. I'm thinking about making this to get a point cloud off my yacht hull. Do you think the points would be accurate enough?
Reply 6 years ago
Because of the lens distortion (which I still need to fix), any pixels away from the centre of the camera's field of view get increasingly less accurate, so until that's fixed it won't be accurate at all, I'm afraid. You would need to do it at night time, and it will only work well if the boat is light coloured.
Is it for measurement or just to make some cool art?
7 years ago
As the line heads away from center vertical in your webcam, the error will increase because you have no lens calibration. Check out OpenCV, the python wrapper, and camera calibration tutorials to get this data extracted as a matrix you can apply to the webcam data.
Reply 7 years ago
Thank you! Yes, that's exactly what I need. I searched for a while about lens distortion but didn't get very far with it. I was going to attempt taking an image of paper with lines and plotting what pixels they were on in excel etc. etc... It would have been a nightmare.