Introduction: [Computational Fabrication] Photogrammetry to 3D Printing & Scanning a Rhino Jewelry Box

OVERVIEW

The goal of this project is to experiment with object scanning and reconstruction.

This is the seventh assignment for a course I’m taking this quarter (Spring 2020) on computational fabrication taught by Prof. Jennifer Jacobs in the Media Arts and Technology program at the University of California, Santa Barbara. The idea behind the course is that digital fabrication can be combined with computational design to create complex and functional physical forms.

Tools I used for this work include

  • Qlone, a photogrammetry app to scan objects on a printed mat (AR markers) and export .obj and .stl files (~$10)
  • Rhinoceros (Rhino), a commercial NURBS-based computer-aided design (CAD) tool
  • Grasshopper, a visual programming language based on a data flow paradigm that lets you parametrize the geometry you create in Rhino
  • Fusion360, CAD/CAM software by Autodesk
  • MeshLab, mesh editing software
  • MeshMixer, mesh editing / sculpting software
  • Ultimaker Cura, an open-source 3D printer slicing application
  • Creality Ender 3 Pro, a commercially available 3D printer supplied by UCSB

STEP 0 [Exploring At-Home Depth Sensing & Photogrammetry]

So I have an iPhone X, which has a structured-light sensor called the TrueDepth camera (used for Face ID) that projects roughly 30,000 infrared dots (src: https://support.apple.com/en-us/HT208108). I’m going to try out iOS apps that use this feature to create a 3D scan: ScandyPro, Capture by Standard Cyborg, and Display.land. In addition to depth sensing, I’m also trying out photogrammetry using Trnio and Qlone.

My initial idea was to scan multiple models to create a space-themed desk caddy. The hope was that this would give me the opportunity to experiment with scanning multiple objects and combining different meshes! I didn’t get great results from a morning of exploration with the depth-sensing apps. The assorted images showcase some of the issues I ran into, including incorrect scale and difficulty capturing the full model in a single scan. I liked Capture best, but it exports .usdz files, and the best mesh I got was likely not complete enough to use for this project.

Switching gears to photogrammetry: Trnio takes a long time to upload and process scans (see the images with blue dots), so I’m not sure what the scan quality will be like. Qlone, on the other hand, had a great interface and was very fast. The app requires a printed AR mat with markers that guide alignment.

These are all of the apps I tried out: ScandyPro, Capture, Display.land, Trnio, and Qlone.

STEP 1 [Scanning]

Now that I’ve decided to use Qlone, I looked around my house for objects that would fit within the dome provided by Qlone to guide the scanning process, and found a shell and a rhino figurine. These are the results of my scans. Each one took a few minutes to capture. I captured two views of the shell -- hopefully I can combine them into a single object.

I also found a giraffe sculpted from wire and beads. I thought this idea was intriguing -- going from one set of craft materials to a very different representation -- but the scan failed to produce a mesh, since the sculpture was interpreted as having no volume. In general, I think scanning could be interesting for exploring design intent in a different (and perhaps more malleable) form, or for capturing soft, deformable structures (like food or flowers).

Some other ideas for future projects: scanning a hand to create votive holders, or scanning long wildflower stems (or something like lavender) to create building primitives for a 3D-designed object.

STEP 2 [Mesh Post-Processing]

Using MeshLab, I loaded the two shell .stl files to try to combine them. I used the Alignment feature and the Surface Reconstruction: VCG filter. Wow, an error bound of 0.0010! That said, the resulting mesh loses a lot of detail. I spent a couple of hours experimenting with different filters and found that smoothing and repairing the aligned edges before running the reconstruction improved the result. The parameters of the reconstruction filter can also be adjusted to change the level of detail in the output.
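For reference, the same MeshLab steps can be scripted with pymeshlab. This is a minimal sketch, assuming the two views were already aligned in the GUI (pymeshlab doesn’t expose the interactive alignment tool); the filenames are hypothetical, the filter names follow recent pymeshlab releases, and the smoothing step count is a guess to tune per model.

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('shell_view1_aligned.stl')  # hypothetical filenames
    ms.load_new_mesh('shell_view2_aligned.stl')

    # Lightly smooth each layer first, mirroring the manual
    # "smooth the aligned edges before reconstructing" step.
    for i in range(ms.mesh_number()):
        ms.set_current_mesh(i)
        ms.apply_coord_laplacian_smoothing(stepsmoothnum=3)

    # Merge the layers and rebuild a single surface with the same
    # Surface Reconstruction: VCG filter used in the GUI.
    ms.generate_by_merging_visible_meshes()
    ms.generate_surface_reconstruction_vcg()

    ms.save_current_mesh('shell_combined.stl')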

Next I had to remove some extra disjoint mesh elements. I did this in Rhino by loading the disjoint elements as separate objects and deleting the extras.
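The same cleanup could also be done non-interactively in pymeshlab by deleting small connected components. A minimal sketch, assuming the stray pieces are much smaller than the shell itself; the 1000-face threshold is a made-up number to tune, and the filename is hypothetical.

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('shell_combined.stl')  # hypothetical filename

    # Drop every connected component with fewer than 1000 faces,
    # keeping only the main shell surface.
    ms.meshing_remove_connected_component_by_face_number(mincomponentsize=1000)

    ms.save_current_mesh('shell_clean.stl')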

Then I wanted to make sure the mesh was watertight. In Fusion360, I experimented with editing the mesh: the bottom had some odd elevations that I wanted to remove, and the tools worked well for that. Lastly, I decreased the complexity of the mesh before modifying it to make it watertight.
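As a sanity check alongside the Fusion360 repair, watertightness is also easy to test programmatically with trimesh. This is a sketch under the assumption that only small holes remain (larger artifacts like the bottom elevations still need manual editing); the filenames and target face count are hypothetical, and the decimation call needs one of trimesh’s optional simplification backends installed.

    import trimesh

    mesh = trimesh.load('shell_clean.stl')  # hypothetical filename
    print('watertight?', mesh.is_watertight)

    if not mesh.is_watertight:
        # Fill simple holes and fix face orientation in place.
        trimesh.repair.fill_holes(mesh)
        trimesh.repair.fix_normals(mesh)
        print('after repair:', mesh.is_watertight)

    # Reduce mesh complexity, mirroring the manual decimation step.
    simplified = mesh.simplify_quadric_decimation(face_count=20000)
    simplified.export('shell_watertight.stl')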

I then opened my model in MeshMixer because I wanted to experiment with sculpting. I brought the original shell to my desk and hadn’t realized how much detail had been lost until I compared the two side by side. MeshMixer has great sculpting tools, and I’m sure with enough time it would be possible to recapture some of that detail. Instead, I moved on to the rhinoceros mesh.

I only had one scan of the rhino, since its feet were flat surfaces anyway, so I skipped MeshLab and went straight to Fusion360. I really like using Fusion360 for mesh editing; the preview tools are a great visual aid.

Then I decided I’d try to add eyes to the rhino using MeshMixer’s combine and mesh-add features. The result was a bit creepy. I had originally wanted to create a soap dish, desk caddy, or dip plate, so I mirrored the rhino and arranged the pair into a semicircle to sit on the rim of a dip container in the middle of a plate.
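The mirror-and-pair step can also be scripted with trimesh rather than done in MeshMixer. A minimal sketch, assuming a mirror across the YZ plane and hypothetical filenames; note this simply merges the geometry into one file, and a true boolean union would need a backend like Blender or OpenSCAD via trimesh.boolean.union.

    import numpy as np
    import trimesh

    rhino = trimesh.load('rhino.stl')  # hypothetical filename

    # Mirror by negating x. A negative-determinant transform flips
    # face winding, so repair the normals afterwards.
    mirrored = rhino.copy()
    mirrored.apply_transform(np.diag([-1.0, 1.0, 1.0, 1.0]))
    trimesh.repair.fix_normals(mirrored)

    # Concatenate the two rhinos into a single mesh for export.
    pair = trimesh.util.concatenate([rhino, mirrored])
    pair.export('rhino_pair.stl')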

STEP 3 [3D Printing]

Since the meshes don’t seem to have combined properly for the rhino platter, I’m doing a test print of the rhinos using a new wood filament that I haven’t tried before. I’m wrapping up this week’s project here; I want to experiment with more models and design something around them using more computational elements at a later date. Overall this was a very fun and exciting workflow, and it was great to get familiar with so many tools and with moving data between them.