Introduction: Week 7 Scans
My goal for this assignment was to try multiple scanning tools and let the results guide my project.
Step 1: Trying QLONE
I started with QLONE. I appreciate the user experience of knocking out the tiles in order to complete a scan. The one problem with this UX is that the work I invested in knocking out every tile led me to have high expectations for scan quality.
The scan quality itself was underwhelming. While I understand the issues with the wine glass, the issues with the other scans were frustrating. The textures do a lot to mask the issues with the scanned geometry. Still, it was valuable to learn why mesh repair is necessary.
I was planning to move forward with the monkey scan, but was unable to pay for the premium upgrade due to app-store payment profile issues.
Step 2: Using Display.land
I moved on to Display.land.
This app has a modern UI and features a slick capture experience that renders AR dots to indicate points captured in the scene. Quite the opposite of QLONE, the visual feedback in this app set my expectations low for scan quality.
After ~2 hours, the scene was available for review. There were some frustrations trying to download the mesh files, but eventually I was able to import the mesh into MeshLab.
Step 3: MeshLab Processing
In MeshLab, I used a straightforward workflow to crop the scene:
- Target the object, orient to top-down vantage point
- Select ALL vertices
- Subtract the vertices around the sculpture
- Delete vertices
- Orient to front view
- Select all vertices below the sculpture
- Delete vertices
- Return to top-down view
- Select the remaining bits of the table vertices
- Accidentally delete the sculpture
- Search for the "undo" button...
- Discover that there is no "undo" in MeshLab
I was able to replay my steps to crop the sculpture relatively quickly.
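The crop steps above boil down to selecting vertices by region and deleting the rest. A minimal sketch of that idea in plain Python (a toy vertex list and a made-up bounding box, not MeshLab's actual API):

```python
# Sketch of the crop-by-region idea from the MeshLab steps above.
# Vertices are (x, y, z) tuples; the sculpture is assumed to sit
# inside a known axis-aligned box -- the scene and box are invented
# for illustration.

def crop_to_box(vertices, box_min, box_max):
    """Keep only vertices inside the axis-aligned box [box_min, box_max]."""
    def inside(v):
        return all(lo <= c <= hi for c, lo, hi in zip(v, box_min, box_max))
    return [v for v in vertices if inside(v)]

# Toy scene: sculpture points near the origin, table points off to the side.
scene = [(0.1, 0.2, 0.5), (0.0, 0.1, 1.0),    # sculpture
         (2.0, 2.0, -0.1), (-3.0, 1.0, -0.2)]  # table

sculpture = crop_to_box(scene, box_min=(-1, -1, 0), box_max=(1, 1, 2))
print(len(sculpture))  # 2: only the sculpture vertices survive
```

Unlike my MeshLab session, a script like this is trivially repeatable, which matters when there is no undo.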
Step 4: Rhino NURBS Preparation
I imported the cropped mesh into Rhino to prepare it for CAM. It was my first time working with a mesh in Rhino, so I did some digging online to learn about the MeshToNURB command. After scaling my polysurface and adding a base, I was able to export to STL and start my print.
There were some challenges with the boolean union operation, as usual, but I was able to solve them by scaling up the "base" geometry to introduce a bit of overlap between the two objects, which let the boolean union succeed.
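The fix works because booleans tend to fail when two solids merely touch along coincident faces; scaling the base gives the solver real shared volume to work with. A rough illustration with axis-aligned boxes standing in for the actual Rhino geometry (the 5% factor is just an example):

```python
# Two solids whose faces only touch (zero overlap) often fail to union,
# so the base is scaled up slightly to guarantee real overlap.

def boxes_overlap(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes share interior volume (not just a face)."""
    return all(amin < bmax and bmin < amax
               for amin, amax, bmin, bmax in zip(a_min, a_max, b_min, b_max))

def scale_box(box_min, box_max, factor):
    """Scale a box about its own center by `factor`."""
    center = [(lo + hi) / 2 for lo, hi in zip(box_min, box_max)]
    half = [(hi - lo) / 2 * factor for lo, hi in zip(box_min, box_max)]
    return ([c - h for c, h in zip(center, half)],
            [c + h for c, h in zip(center, half)])

# Sculpture sits exactly on top of the base: touching, but no overlap.
sculpture = ((-1, -1, 0), (1, 1, 3))
base      = ((-2, -2, -1), (2, 2, 0))
print(boxes_overlap(*sculpture, *base))          # False: union likely fails

grown_min, grown_max = scale_box(*base, 1.05)    # 5% oversize, as in Step 4
print(boxes_overlap(*sculpture, grown_min, grown_max))  # True: overlap exists
```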
One of the images shows an experiment using OffsetMesh to progressively distort the sculpture mesh over a series.
Step 5: Resulting Print
I am satisfied with the resulting print.
Display.land is designed to scan spaces rather than objects; the developers suggest targeting objects roughly larger than 1 m. Because of this, the mesh that I scanned is a low-resolution capture. At first I thought this would be a blocker, but then I embraced it.
Low-poly models remind me of old video games. This model in particular reminds me of Star Fox 64... sitting in Paulo Estrada's mom's apartment on a hot summer day in Gaeta with the electric fan on high, trying to do a barrel roll.