In this Instructable, I am going to show you how to create 3D models from digital imagery. The process is called photogrammetry, also known as image-based modeling (IBM), and it can be used to recreate nearly any object or space in three dimensions, from artifacts and works of art to geological landforms and ruins. As an example, I will create a 3D portraiture animation and demonstrate the workflow needed to accomplish this kind of creative project.
Step 1: Software
First, get the software needed to create 3D models from images. This includes:
Visual SFM - http://www.ccwu.me/vsfm/
Next, software to reconstruct the 3D model:
Meshlab - http://www.meshlab.sourceforge.net/
Finally, a program for final touches, such as:
Maya (student version or free trial), Blender, or any 3D modeling program.
Step 2: Images to VisualSFM
Now that you have your software, go out and capture raw footage of the object, space, or environment you want to model. There are two ways to capture footage for 3D models:
One way is to walk around the object or space in a slow, convergent orbit, taking a picture at each step so that consecutive shots overlap.
The second way is to record a video while revolving around the object or space, then use Adobe Media Encoder to slice the video into individual frames. The more frames per second your camera shoots, the more material you can acquire, and thus the more detail in your 3D capture.
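If you go the video route, you usually do not want every single frame: hundreds of near-identical stills slow down matching without adding detail. Below is a minimal sketch (the function name and all the numbers are illustrative, not part of any tool) of how you might pick an even sampling of frame indices to export from your frame-slicing tool:

```python
# Sketch: pick an even sampling of frames from a video clip so that a
# 3D capture ends up with a manageable number of stills.
# All numbers here (fps, clip length, target count) are illustrative.

def frame_indices(total_frames, target_stills):
    """Return evenly spaced frame indices covering the whole clip."""
    if target_stills >= total_frames:
        return list(range(total_frames))
    step = total_frames / target_stills
    return [int(i * step) for i in range(target_stills)]

# A 20-second clip at 30 fps has 600 frames; keep ~60 stills (one every 10 frames).
indices = frame_indices(20 * 30, 60)
print(len(indices), indices[:3])  # 60 [0, 10, 20]
```

You can then export just those frames from Adobe Media Encoder (or any other frame-extraction tool) instead of the full frame dump.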
1. File - Open+ Multi Images (this is where you import your frames or stills into VisualSFM)
2. Now that all of your images are loaded, click the Compute Missing Matches button. It's the button with four arrows pointing outward. This step can take a long time depending on how many images you have loaded: the software compares every image against every other image, matching shared feature points so it can begin reconstructing the 3D model. So please be patient.
3. Once that process is finished, click the Compute 3D Reconstruction button. It looks like a fast-forward button without the plus, and it sits right next to the Compute Missing Matches button. Here VisualSFM takes the stills that share features and starts to build the 3D model of the space, object, or person. It uses the raw image data along with the estimated distance and depth of the subject in each photo, which is how it can recreate the subject as a 3D model. The name comes from Structure from Motion (SfM): the SfM process compares two-dimensional image sequences and estimates three-dimensional structure from them.
4. After that is done, click CMVS to run the dense reconstruction. This will finalize your 3D model; save the .cmvs and .nvm files as well as the .ply file. You will need the .nvm file for Meshlab, and you will need the .ply file to acquire the 3D mesh of your object, which will also happen in Meshlab.
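To see why the matching step above scales so badly with large image sets, note that exhaustive matching compares every pair of photos, so the work grows quadratically. A quick back-of-the-envelope sketch:

```python
# Sketch: exhaustive image matching compares every pair of photos, so the
# work grows quadratically with the number of images you load.
from math import comb

for n in (50, 100, 200, 400):
    print(n, "images ->", comb(n, 2), "pairwise comparisons")
```

Doubling the number of input images roughly quadruples the matching work (400 images already means 79,800 pair comparisons), which is why a well-chosen subset of stills often beats a full frame dump.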
Step 3: Meshlab
Open up Meshlab and import the 3D content you have just created. Meshlab is an open-source program that allows users to create and process 3D models, similar to Blender. In Meshlab, open the .nvm file you saved from VisualSFM; you should see the dense point cloud that was transferred from VisualSFM. Click on the layers tab if it's not already open: it's the button right next to the camera button on the toolbar, labeled Show Layer Dialog. From there, go to File and import the mesh from VisualSFM. The structure-from-motion mesh should be in the .ply file format.
Once the mesh is on the display, you might need to delete some stray vertices or faces. On the toolbar at the top, use the Select Vertices or Select Faces button, select the unwanted geometry, and hit the delete vertices or delete faces button (the ones with the 'X' over them).
Now that you have your desired mesh, go to Filters at the top, scroll down to Point Set, and click Surface Reconstruction: Poisson. Hover over each field to see which values are appropriate for your project. This filter uses the vertices and normals from the raw point-cloud data in the .ply file to create a 3D mesh that you can manipulate. To see your newly created 3D model with the photographic texture, go to Filters, click Texture, and click Parameterization + Texturing from Registered Rasters. This uses the camera projections computed in VisualSFM (the structure-from-motion part) to project the corresponding images (the registered rasters) onto the 3D surface. The texture defaults to 1024 pixels, but you can raise it to 2048 if you want. The texture is saved as a .png file in the same folder as the .nvm file.
Finally, save your project as a .mlp (Meshlab project) file and export the model as a .obj file.
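If you want a quick sanity check on a .ply export before and after cleanup (did deleting those stray faces actually shrink the mesh?), the counts are sitting right in the file's header. This is a minimal sketch using only the standard library; the function name and the tiny sample header are made up for illustration:

```python
# Sketch: read the header of an ASCII .ply export to check how many
# vertices and faces the mesh has (handy before and after cleanup in
# Meshlab). The sample data below is a made-up two-triangle mesh.

def ply_counts(lines):
    """Return (vertex_count, face_count) from PLY header lines."""
    counts = {"vertex": 0, "face": 0}
    for line in lines:
        parts = line.split()
        if parts[:1] == ["element"] and parts[1] in counts:
            counts[parts[1]] = int(parts[2])
        elif parts[:1] == ["end_header"]:
            break
    return counts["vertex"], counts["face"]

sample_header = [
    "ply", "format ascii 1.0",
    "element vertex 4",
    "property float x", "property float y", "property float z",
    "element face 2",
    "property list uchar int vertex_indices",
    "end_header",
]
print(ply_counts(sample_header))  # (4, 2)
```

In practice you would read the first few lines of your exported .ply file instead of the hard-coded sample.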
Step 4: Final Touches/Purpose (Maya, Blender, Adobe After Effects/Cinema 4D)
Now that you have a .obj of your model and its texture, import that .obj into your preferred 3D modeling program. I made my final iteration in the student version of Autodesk Maya and did post-processing in Adobe After Effects. I use Maya so I can make this 3D portrait more dynamic: with access to MEL/Python scripting in Maya, I can build an animation directly inside the program, then export the rendered frames to After Effects for more post-production. I am in the process of making 3D portraitures/animations using this workflow. Hopefully it inspires your own creative process.
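As one small taste of what scripting an animation looks like, here is a hedged sketch of the math behind a classic turntable shot: evenly spaced Y-rotation keyframes for one full revolution of the model. The function is illustrative, not part of Maya; inside Maya you would pass each (frame, angle) pair to a keyframing command, while here we only compute the values:

```python
# Sketch: per-frame Y-rotation values for a 360-degree turntable
# animation of the imported .obj model. In Maya you would key each
# (frame, angle) pair on the model; here we just compute them.

def turntable_keys(n_frames, total_degrees=360.0):
    """Return (frame, angle) pairs for one full revolution."""
    step = total_degrees / n_frames
    return [(f + 1, f * step) for f in range(n_frames)]

keys = turntable_keys(120)            # a 120-frame spin (5 s at 24 fps)
print(keys[0], keys[59], keys[-1])    # (1, 0.0) (60, 177.0) (120, 357.0)
```

Note the last key is 357 degrees rather than 360: frame 121 of a looping animation would land back on 0, so stopping one step short keeps the loop seamless.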
Here is a useful link!
Remember, there are a ton of resources online on this topic because of the open-source community.
Also, I uploaded a few simulation scripts for Maya, so give them a shot! One is a MEL script and the other is a Python script.
'Till next time!
Participated in the Maker Olympics Contest 2016