Introduction: Repair Using 3D Printing: 2a 3D Scanning

This guide is part of a series: Repair using 3D printing. This series of guides describes the process of reproducing a broken part by 3D printing a viable substitute. Please refer to the series’ main guide to follow the complete process. This guide includes a step-by-step explanation of one particular sub-process within Repair using 3D printing. First-time readers are advised to read the whole guide; experienced readers can use the quicklists in each step to guide and speed up their next attempts. Skip to step 2 to get going!

Important

  • Is the part intact? If large pieces are missing, consider ‘restoring’ them before scanning, e.g. using clay. Missing geometry is very hard to repair digitally.
  • Make sure the part is not shiny, reflective, translucent/transparent, black or white. If it is, spray it with matte paint or chalk spray.
  • Are there any critical, sharp edges or corners? 3D scanning tends to round off sharp shapes. Alterations to the resulting scan model can be made later on.

Step 1: Background Info

If you followed the decomposition guide and found that the part features mostly curved and organic shapes, you probably ended up here: 3D scanning the part.

3D scanning is basically capturing an object’s geometry by finding and measuring a great number of points on its surface. These points are represented as three-dimensional coordinates called ‘vertices’. When these vertices are connected, triangular facets between them make up a mesh surface, which is always an approximation of the original surface. The higher the density of vertices (resolution) in the mesh, the more accurate the model will be. Conversely, more computing power and time are needed to generate the model. Several methods for 3D scanning exist.
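To make the vertices-and-facets idea concrete, here is a minimal sketch in Python of a purely hypothetical four-vertex mesh. A real scan is stored the same way, just with hundreds of thousands of entries:

    # A toy illustration of how a scanned surface is stored: a list of 3D
    # vertices and a list of triangular facets that index into it.
    # (Hypothetical data: four vertices forming two triangles.)
    vertices = [
        (0.0, 0.0, 0.0),   # vertex 0
        (1.0, 0.0, 0.0),   # vertex 1
        (1.0, 1.0, 0.1),   # vertex 2
        (0.0, 1.0, 0.0),   # vertex 3
    ]
    faces = [
        (0, 1, 2),  # triangle between vertices 0, 1 and 2
        (0, 2, 3),  # triangle between vertices 0, 2 and 3
    ]

    print(f"{len(vertices)} vertices, {len(faces)} triangular facets")
    # More vertices = a finer approximation of the real surface,
    # but also a heavier model to process.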

For this guide, we use a photogrammetry-based approach, as it requires no special equipment whatsoever! In my graduation project, I explored this approach and found a workflow that produces very usable, consistent results in a relatively short time. Photogrammetry-based 3D scanning, as the name suggests, uses photographs to recreate a 3D model of a physical object. Photographs of the object are taken from various angles, and software works out the geometry by matching thousands of reference points across multiple photos. By doing so, the software can work out the relative position of every camera (photo) to the object and then perform a second point-matching pass to generate the mesh model.
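To give an impression of what this point matching looks like, the sketch below uses the open-source OpenCV library to detect and match keypoints between two photos. This is not Remake’s internal algorithm, just an illustration of the same principle, and the filenames are placeholders for your own shots:

    import cv2

    # Load two photos of the object taken from neighbouring angles
    # (filenames are placeholders for your own shots).
    img1 = cv2.imread("scan_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scan_02.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect distinctive keypoints and describe the image patch around each one.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors between the two photos: every good match is a
    # 'reference point' seen in both images, which a photogrammetry solver
    # uses to work out the relative camera positions.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} shared reference points found")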

This guide describes my approach to Photogrammetry-based 3D scanning; using a single, stationary camera (or smartphone!) to scan the broken part and prepare it for 3D printing. I will explain how to take good photos for scanning, how to set up your own (temporary) scanning scene and how to process the resulting 3D model.


Good luck!

Step 2: What You Need

The following tools are used in this Instructable:

  • A digital camera, DSLR or smartphone
  • A tripod, mount or any stationary support for the camera. [3D printable smartphone clamp]
  • A rotating platform: my own 3D printable rotating platform on Thingiverse, or e.g. an IKEA SNUDDA
  • A reference for scaling and camera alignment: Printable reference A4
  • A FEATURELESS background: a smooth wall, paper or fabric, preferably white
  • Multiple light sources: matte (diffuse) LED bulbs or panels, studio lights or similar. I used the brightest IKEA RYET 1000lm LED bulbs, in simple metal desk lamps (like IKEA TERTIAL)
  • A decent computer (Windows x64; Mac support has unfortunately been discontinued). Autodesk recommends at least a multi-core processor, 4 GB of RAM and a discrete graphics card.
  • Autodesk Remake; a license for Students, Hobbyists and Start-ups is free! Download Remake and upgrade to a free one-year license.
  • Matte spray paint or primer, fine powder spray or any other matting substance, in case your part is shiny, reflective, transparent, black or white.

First, you obviously need a camera. In theory, this can be any kind of digital camera, including a smartphone, which in my experience gives pretty good results. In general, however, a better camera only yields better-quality images IF it is used with manual settings. Apart from image resolution, the benefit of a better camera lies in the extra control over exposure and sharpness, which requires you to be able to set the camera correctly by hand. More on that later. That said, a DSLR or mirrorless camera with full manual settings is naturally preferred, as it gives you full control over the resulting image, possibly with RAW post-processing afterwards.

Other than that, any camera set to automatic settings, including a smartphone, works well when the scanning scene is set up correctly. Pick the camera that you are used to; if you have experience in photography, set your camera manually.

Lastly, a fairly decent computer for processing the scan is needed. In this guide, we will be using Autodesk Remake as the processing software, which is a free download for Students, Hobbyists and Start-up companies. You will need to create an account and install the software on your computer (unfortunately, Autodesk has discontinued Mac support). We will be using the cloud processing feature of Remake, as the initial processing of the photos into a 3D model requires significant processing power and would otherwise take very long. Thanks to Remake’s processing servers, you will be able to automatically process the scan and receive a detailed model in about 15-30 minutes!

Step 3: Setup a Scanning Scene

Usually, in professional photogrammetry-based scanning settings, an array of many cameras fires at once, creating a full set of photos in a fraction of a second. Since most of us don’t have access to dozens of cameras, we will mimic this setup using a single camera. The professional setup usually consists of multiple rings of cameras around the object in the center, equally spaced and at different altitude angles. This ‘sphere’ of cameras can be recreated by taking one picture at a time, moving either the camera or the object for each photograph. The latter is our approach: a stationary camera, with the object moved between shots.

To do so, a rotating platform is introduced. The object is placed in the center and can be rotated in equal increments to shoot a similar 'ring of photos' as would result from the professional setup. Make sure the object does not move on the platform! For instance, fix it with a small piece of clay, a nail or steel wire.
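If it helps to picture the ‘virtual camera ring’ that equal platform rotations create, here is a small sketch. The 10-degree step and resulting 36 photos match the shooting routine described in Step 4; the camera-to-object distance is just an example value:

    import math

    # With the camera fixed, rotating the platform by a constant step is
    # equivalent to moving the camera around the object.
    step_deg = 10
    angles = list(range(0, 360, step_deg))
    print(f"{len(angles)} photos per ring at {step_deg} degree increments")

    # Equivalent camera positions on a circle of radius r around the object:
    r = 0.5  # camera-to-object distance in metres (example value)
    positions = [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
                 for a in angles]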

Download my 3D printable rotating platform design from Thingiverse!

In your scene, preferably on your rotating platform, you will need some kind of reference that helps the software find reference points to align the different camera positions to. As you will be simulating different camera angles by rotating the object, the reference has to rotate along with the object. A simple page from a newspaper works perfectly, as it has plenty of fine visual details to use as reference points! But I’ve created an alignment reference tool that also includes scaling references, so you can scale the resulting scan to true size directly!

Choose the object’s orientation wisely: try to make as much of its geometry visible from a horizontal (camera) perspective. This means that looking at it from the side should reveal as much of its surfaces, cavities etc. as possible, while it makes as little contact with the platform as possible. Also, try to align the longest side of the object vertically; I will explain why later on.

Furthermore, anything in the photos that does not rotate along must be featureless to avoid camera misalignment: a flat white background works best. Position your setup in front of a white wall, or use a large sheet of paper or fabric to create a smooth, matte background behind the rotating platform. Next, position the camera in front of the scanning scene, on a tripod, mount or any other stationary support. Try to fill the picture frame with as much of the object as possible (from every angle!), and make sure it is in the center of the frame as well.

Lighting

The next important aspect of the setup is to provide good lighting conditions. Proper lighting exposes all surfaces of the object in the pictures that you are going to take. This is very important to the quality of the scan, as the geometry of the object has to be ‘readable’ by the software!

The trick is to create diffuse lighting, which means light hitting the object comes from many angles and scatters similarly in many directions. This kills any hard shadows, dark areas and highlights, which would heavily degrade the resulting 3D model!

One option you have is to use daylight! On a cloudy day, overcast daylight outside is a perfect diffuse light source, scattered by the clouds. Just make sure your scene is outside, as the light has to come from many angles rather than from a window only.

Another option is to set up a makeshift light studio. By using multiple light sources with diffusing materials in front of them to scatter the light as much as possible, a diffuse setting can be created. Use multiple sources of the same kind, preferably matte LED bulbs or panels, as they are consistent in colour temperature, intensity and colour spectrum. In my experience, a set of IKEA RYET 1000 lumen LED bulbs in simple desk lamps works great!

Aim multiple sources at the object, but always away from the camera! Two lights on either side of the camera, plus an optional one from above, already provide a decent near-diffuse setting, as long as each source is a ‘soft’ light to begin with (such as a matte bulb or LED panel). Use the white background to bounce the light as well.

You can try to diffuse the light even further by placing translucent materials such as ‘frosted’ plastic or tracing paper in front of the sources. Using ‘bounce cards’ (white surfaces) to bounce the light in many directions also works. Many tutorials on DIY light tents and studios can be found online. I’ve used an IKEA JÄLL laundry basket made of translucent EVA plastic as a light tent, with the opening towards the camera and a sheet of paper on the bottom. It is great as a small product photography studio as well!

Experiment with the lighting until you’ve found a setting that works for you. It is, again, very important for the quality of the model and is worth getting right upfront. (If you want to go quick and dirty, I suggest the outdoor setup.) Take a look at the included pictures of various scanning setups that worked for me.

Step 4: Taking Good Quality Photos for Scanning

Next, it is time to take the set of photos that will be processed into a 3D model. The quality of these photos greatly affects the quality of the resulting model, as they are the only reference the software has to build it from. Throughout my project attempts, I found many factors that influence photo quality and ways to shape them for an optimal setting. I’ll briefly discuss my findings to help you create consistent, usable photos as well!

Three aspects of the photo determine the image quality for scanning: exposure, sharpness and resolution. The latter is the only one that cannot be influenced, as it is limited by the camera you are using. Luckily, most modern cameras, including many smartphones, take photos of well over 10 megapixels, which is more than enough for 3D scanning. The higher the better, of course, but keep longer processing times in mind. I recommend you stick to 10 MP or higher; however, my iPhone 5 (8 MP) produced very usable images as well.

Exposure

The exposure aspect is probably the most important: all surfaces and features of the object have to be clearly visible in the photographs in order for the software to have usable data to reproduce the 3D model. This means that the lighting conditions, in combination with the camera settings, have to result in an image without black (underexposed) or white (overexposed) areas, and with a proper overall exposure. Luckily, almost all cameras since the 1960s are capable of setting themselves automatically for a correct exposure. But if you know how, manual settings are preferred here to have more control over the exposure. Additionally, post-processing the photos can help correct the exposure afterwards. If you are familiar with photography, manual settings help you take better photos. However, if you are taking the quick route, a smartphone or automatic camera works just fine.
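If you prefer an objective check over eyeballing, the sketch below uses OpenCV to measure how much of a test shot is clipped to black or white. The filename and the few-percent rule of thumb are my own assumptions, not Remake requirements:

    import cv2
    import numpy as np

    # Quick exposure sanity check on one of your photos (filename is a
    # placeholder). Large clipped areas carry no surface detail for the
    # scanner to work with.
    img = cv2.imread("scan_01.jpg", cv2.IMREAD_GRAYSCALE)

    under = np.mean(img <= 5) * 100    # % of pixels that are essentially black
    over = np.mean(img >= 250) * 100   # % of pixels that are essentially white
    print(f"Underexposed: {under:.1f}%  Overexposed: {over:.1f}%")

    # Rough rule of thumb (my assumption, not an official threshold): if more
    # than a few percent of the object area is clipped, adjust the lighting
    # or exposure and reshoot.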

I will not go into detail about manual settings here, as that is a complete photography tutorial in itself. Use what you are familiar with; a smartphone worked fine for me on many attempts. I used a Canon EOS 100D DSLR camera with a 50mm prime lens, manually set to f/10, ISO 400 and about 1/40 shutter speed. I set the focus and white balance manually, once and upfront.


Sharpness

Next, the sharpness of the object in the scene is important for the outcome. Besides the usual focusing on the subject, the ‘depth of field’ effect might affect the overall sharpness of the object in the photo, as we are photographing relatively small objects. This effect is the result of the combination of lens (diameter) and image sensor (area) used, which creates a certain distance range relative to the camera within which the subject is ‘in focus’. Anything closer to or further from the camera than this range will be out of focus and therefore blurred or ‘unsharp’. Luckily, smartphones have a small sensor and a small lens, often with a very short focal length, and are thus much less affected by this effect than a larger-format camera. On a larger camera, the effect is reduced by narrowing the aperture, i.e. manually setting a higher aperture value. I set the aperture to f/11 in most cases.
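To get a feel for how shallow the sharp zone can be at these distances, here is a rough estimate based on the standard thin-lens depth-of-field approximation. The 50 mm lens, f/10 and 0.5 m focus distance are example values matching my DSLR setup, and the 0.019 mm circle of confusion is a typical assumption for an APS-C sensor:

    # Rough depth-of-field estimate (thin-lens approximation) to see how much
    # of the object will actually be sharp. Adjust the values to your setup.

    def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.019):
        """Return the (near, far) limits of acceptable sharpness, in mm."""
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = (focus_dist_mm * (hyperfocal - focal_mm)
                / (hyperfocal + focus_dist_mm - 2 * focal_mm))
        far = (focus_dist_mm * (hyperfocal - focal_mm)
               / (hyperfocal - focus_dist_mm))
        return near, far

    near, far = depth_of_field(focal_mm=50, f_number=10, focus_dist_mm=500)
    print(f"Sharp zone: {near:.0f} mm to {far:.0f} mm "
          f"({far - near:.0f} mm deep)")
    # Roughly a 3-4 cm sharp zone: narrowing the aperture really matters
    # for small objects shot up close.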

Take a few pictures and judge the overall sharpness of the object. Look for blur in the edges and features that are further away from the point you have focused on.
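If you want a number to compare shots by, the variance of the Laplacian is a commonly used sharpness metric. The sketch below uses OpenCV and a placeholder filename; any threshold you pick should be calibrated against a photo you judge sharp by eye:

    import cv2

    # Quick blur check: higher Laplacian variance = sharper image.
    img = cv2.imread("scan_01.jpg", cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    print(f"Sharpness score: {sharpness:.0f}")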

Taking the pictures

Once you are set up to shoot good-quality images, go ahead and start with the first one. Set up the camera horizontally, fixed in place, focused on the subject and with the right settings. Take the picture and rotate the platform 10 degrees. Remember to use a remote shutter or a timed shutter delay to avoid any movement of the camera! If you are working with a narrow aperture, chances are the shutter speed will be long and the risk of camera movement blurring the image is real.

For my DSLR setup, I usually shot with the 2 second delay option set in the camera, or connected to Adobe Lightroom via USB to ‘tethered capture’ directly to my laptop. For my iPhone, I used a smartphone clamp and tripod, and the included earbuds’ volume control to take photos without touching the phone.

Count the photos you take, up to 36 for a full circle. If the geometry of the object requires it, take multiple full circles of photos from different altitude angles (after completing the previous full circle!), e.g. from above or below. Make sure you reset the camera for the next circle of shots: refocus, check settings and fill the frame after adjusting the camera position.
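A quick way to verify that every ring is complete before uploading is a small script like the one below. The folder layout (one subfolder per ring) is purely my own assumption for this sketch:

    from pathlib import Path

    # Sanity check before uploading: make sure each ring of photos is complete.
    # Assumed layout: scan_session/ring_0/, scan_session/ring_30/, etc.,
    # each holding 36 JPEGs.
    photos_per_ring = 36
    for ring in sorted(Path("scan_session").iterdir()):
        if ring.is_dir():
            count = len(list(ring.glob("*.jpg")))
            status = "OK" if count == photos_per_ring else "INCOMPLETE"
            print(f"{ring.name}: {count}/{photos_per_ring} photos ({status})")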

Step 5: Processing the Scan

Now that you have successfully created a good set of photos to be processed into a 3D model, it is time to let the software do the magic. Import the photos from your camera to a new project folder on your hard drive. Check the set and discard any mistakes, such as unsharp or inconsistent photos, and post-process them if you are able to.

Next, import the photoset into Autodesk Remake by creating a new 3D scan. Remake has few settings to adjust under the free license: just choose the Cloud processing and Local images options and leave the rest at the defaults (High Quality). This way, the cloud processing is free and the results are generally good and workable.

The images will be uploaded to Remake's servers. Depending on your internet connection and image size and count, this can take some time. Processing will start automatically afterwards, and you will receive an email once the model is completed. Leave Remake to do the magic and go grab a cup of coffee!

Post-processing: removing model errors

Once your model is ready, you can download it straight into Remake. Take a look at my unprocessed scan, embedded here. Great! But don’t let the image texture deceive you; it is the 3D model that we’re after. Turn off textures in the view options and review the model’s integrity. If all is good, little to no holes, false geometry or misalignment errors should be visible. If there are such errors, they are probably due to surfaces that were invisible in the photos, or to the highlights and dark areas I warned you about. A failed scan of the Senseo part is included as well.

Another problem might be that some of the cameras are misaligned. Press ‘K’ and zoom out until you see the blue pyramids representing the cameras from your scene. If the cameras aligned properly, you should see neat circles of cameras matching the way you shot the pictures. If cameras are misaligned, hover over them to see which images failed to align and try reprocessing without those images. If the problem persists, consider reshooting the scene with additional references, such as a newspaper page on the platform, to help the software find reference points.

Next, it is very important to scale the scan to true size. Use measurable points on the original part, such as sharp corners or edges, or the scaling references provided on the rotating platform, which should be included in the scan. Choose ‘Set scale’ and select two points that can be measured and pointed out accurately in both the model and the original scene. Preferably, use the scaling references integrated in the rotating platform. The scale is set by providing the linear distance between the two points, as measured in the original scene.
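For reference, the maths behind ‘Set scale’ is just a ratio applied to every vertex; Remake handles this for you, and the numbers below are made up purely for illustration:

    import numpy as np

    # Scaling a scan to true size: pick two points whose real-world distance
    # you know, and scale every vertex by the ratio.
    vertices = np.random.rand(1000, 3) * 50    # stand-in for the scanned mesh vertices
    p1 = np.array([12.4, 3.1, 0.8])            # first reference point picked in the scan
    p2 = np.array([54.0, 3.3, 1.1])            # second reference point picked in the scan
    measured_mm = 80.0                         # calliper measurement on the real part

    scale = measured_mm / np.linalg.norm(p2 - p1)
    vertices_true_size = vertices * scale      # model now in real millimetres
    print(f"Scale factor: {scale:.3f}")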

Additional model editing tools

With the built-in tools in Remake, the model can be cleaned and fixed quite easily. Use the lasso selection tool to delete faces of the mesh, fill holes with the hole filling tool, or automatically find defects in the model under ‘Diagnostics’. Take your time to get used to these tools; cleaning up the model saves a lot of time afterwards! The selection tools, for instance, select through the whole model, so be careful not to accidentally delete faces on the back of the model. Instead, use the Isolation option that appears at the bottom.

Cut away the scan platform using the Slicing tool. Cut off any supports, such as the clay at the bottom, using the selection tools and try to fill the hole you just created. The ‘Smooth’ and ‘Flat’ fill options suit different situations. Experiment with the ‘Bridge gap’ feature to create a bridge between two or more facets on opposing sides of the hole if both hole filling options give undesired results.

Finally, look for holes in the model that are not actual holes in the mesh (with boundaries), but where the mesh continues through the inside. Such holes result from highlights or dark areas, and can be removed by deleting a selection just around them, essentially creating two bounded holes in the mesh on either side. These can then be filled with the hole filling tool, and they are detected by the Diagnostics tool. Run a final Analysis to check whether the model is completely defined and ready for the final step!
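If you ever want to run similar checks outside Remake, the open-source trimesh library offers comparable watertightness diagnostics and simple hole filling. This is an optional alternative, not part of the Remake workflow, and the filename is a placeholder for your exported scan:

    import trimesh

    # Load the exported scan and check whether it is watertight
    # (a printable model should be).
    mesh = trimesh.load("senseo_scan.obj")
    print(f"Watertight: {mesh.is_watertight}")

    if not mesh.is_watertight:
        trimesh.repair.fill_holes(mesh)      # closes simple boundary holes
        print(f"After filling: watertight = {mesh.is_watertight}")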

Senseo part final scan

Senseo part with errors

Step 6: Altering Your Scan in Fusion 360

At this point, you have a theoretically printable 3D model. But in most cases, it needs some extra alterations to become a fully functional model. Exact hole diameters, accurate parting-line cut-offs and other critical features found in the Decomposition step of this process might need extra attention in order to feed a viable substitute model to the printer.

In this case, it is useful to import the model into a CAD modeling program. In particular, Autodesk Fusion 360 is a very nice partner to Remake, as it is from the same company; it is no coincidence that Remake has dedicated export settings for Fusion 360. After cleaning up as much as possible (and after error checking and scaling!), export your model for Fusion 360 as .OBJ (Quads). This will take some time, as the model is re-processed into a quad-based faceted mesh. Importing this model into Fusion 360 and converting it to a T-spline solid there results in an editable, solid shape onto which you can make dimensioned alterations, similar to modeling from scratch.
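Before importing, you can sanity-check that the export really is quad-based with a few lines of plain Python; the filename below is a placeholder for your own export:

    from collections import Counter

    # Count how many faces in the exported OBJ are quads vs. triangles.
    face_sizes = Counter()
    with open("senseo_scan_quads.obj") as f:
        for line in f:
            if line.startswith("f "):
                face_sizes[len(line.split()) - 1] += 1

    print(f"Quads: {face_sizes[4]}, triangles: {face_sizes[3]}")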

Open a new project in Fusion 360 and import the newly saved quad mesh under Insert > Mesh. Under the Sculpt options, find Convert and convert the imported mesh to a T-spline. Now you can orient the scan conveniently within the modeling environment and start modeling your alterations.

CAD modeling is described further in another guide of this series, see the next step.

Step 7: Continue!

So far, we have created a 3D model from our original part using 3D scanning. This model is now theoretically printable and might already be useful for your repair case. As the printed outcome is a plastic part, you can always fix some imperfections after printing, so it might be worth trying to print it directly.

Proceed to the next guide, Reproduction, to read more about the printing steps within this process. If the model needs additional love and care before printing, for instance extra details, dimensional accuracy in some features, or simply a whole lot of remodeling, consider reading the CAD modeling guide of this series, as it includes much useful information specifically for repair-purpose 3D models.

3 Reproduction guide