Introduction: Run Photogrammetry in the Cloud

If you don't have a high-end gaming PC with 16 GB of RAM and your video card is old, the photogrammetry part will most likely take a very long time (up to 8 hours or so), and you will need to tweak the settings to get it to work. Also, if you don't have a CUDA-capable NVIDIA video card, the DepthMap and DepthMapFilter steps can't be processed on your PC at all.

In these cases, you might want to process your images in the cloud. On Amazon Web Services, you can "rent" virtual machines (called cloud computing instances, or instances) that have the necessary power to process your photogrammetry in a relatively short time (e.g., processing 300 UHD pictures into a 3D model took me about 2 hours; 80 UHD images took about 40 minutes).

To do this, you have to register to Amazon Web Services with your credit card. If you are a newly registered user on AWS, you can use the "AWS Free Tier" offers. In the first 12 months of membership (among other offers), you can run a t2.micro instance for 750 hours a month (which means you can run it practically 24/7) and use 30 GB-months of EBS volume storage every month (which means you can keep a fixed 30 GB of storage for your virtual machine instances at all times).

You'll need a (free-tier eligible) t2.micro instance, which will be used to prepare the photogrammetry input data (upload images and Meshroom itself), before starting a more expensive g3.4xlarge GPU-accelerated instance, which will do the actual processing. See the Amazon EC2 Instance Types page to learn what these instance types mean, and the Amazon EC2 pricing page for the pricing.

Also, if you want to run these g3.4xlarge instances cheaper than their on-demand price, you should check out spot instances: on the pricing page, you can compare the current spot instance prices with the on-demand prices.

To run your Meshroom photogrammetry pipeline, I recommend having one permanent, on-demand (free-tier eligible) t2.micro instance, which can be used to prepare a project. In addition to its default EBS volume, a second EBS volume can be used to store the input data, Meshroom itself, and the temporary processing data. You should only have a g3.4xlarge instance when you actually want to run your photogrammetry pipeline, and terminate it after the job is done. This way you can also save on the EBS cost of this instance's root volume, which must be at least 50 GB. If you only have this volume while the instance is running, its cost is negligible compared to the instance price. For example, in EU (Ireland) it costs $0.11 per GB-month, so running my instance with a 50 GB volume for 2.4 hours (which is 0.1 day, taking a month as 30 days) costs:

50 GB × $0.11/GB-month × (0.1 day / 30 days) ≈ $0.018

See the EBS pricing page for more details.

If you have checked the instance prices and decided to start using Amazon Web Services, the following steps describe how to do that.

Supplies

You'll need PuTTY and WinSCP (both are free) on Windows to connect to EC2 instances.

Step 1: What Is Photogrammetry

Photogrammetry is a procedure to reconstruct a 3D model from 2D pictures of a subject.

In order to do it, you have to take photos from many different angles of a subject standing still, without changing anything in the scene.

If the subject is an item or a sculpture, you cannot move it between shots, because its pose and environment play a big part in the reconstruction. If you change the conditions, the photos won't describe the same 3D "snapshot" of the world anymore, and the software won't be able to reconstruct it as well.

Here are some articles and videos about photogrammetry and Meshroom, the photogrammetry tool we will use in this Instructable:

SketchFab Meshroom photogrammetry tutorial for beginners

Step 2: Increasing Instance Limits, Creating Data Volume

If you haven't registered yet, you have to register on aws.amazon.com
in order to use its services. After registration, go to console.aws.amazon.com, and select your region in the top-right corner of the page.

After that, select EC2 from the Services on the page.

Before you can start a g3.4xlarge instance, you have to check your instance limits. If your g3.4xlarge instance limit is zero, you have to "Request limit increase". On that page, describe in a ticket that you would like to run photogrammetry on such an instance and therefore need a limit increase. If you want to run spot instances, you may need to increase your spot instance request limit too, which works the same way.

AWS responded to my ticket pretty fast and increased my limit within 2 business days. But before requesting your limit increase, you should check the prices in the regions near you to decide in which region you want the increase.

To find the best (and cheapest) availability zone for the job, you can also check the Spot instance price history on the Spot requests > Price history page.
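
If you have the AWS CLI installed and configured, you can query the same price history from a terminal (the region below is an example; adjust it to the regions you are comparing):

aws ec2 describe-spot-price-history \
    --instance-types g3.4xlarge \
    --product-descriptions "Linux/UNIX" \
    --max-items 20 \
    --region eu-west-1

The output lists recent spot prices per availability zone, which helps you pick the cheapest zone before requesting your limit increase there.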

You need to create a new EBS (Elastic Block Store) volume for your photogrammetry data. If you are free-tier eligible (newly registered), you get 30 GB-months of free EBS storage every month. That's why I use a 20 GB volume for my project data and an 8 GB root volume for my t2.micro instance. You only have to create the Data volume yourself. Make sure it is in the availability zone your instances will be created in.

Creating a new Volume

To do so, go to the EC2 Management console, and click Volumes under the Elastic Block Store container in the left panel.

Then click on the "Create volume" button.

In the popup window, you need to set the volume size to 20 GB, and the availability zone to your preferred one. The volume type should be "General Purpose SSD" (gp2).

Click the "Create volume" button if you've set your settings.

Step 3: Launching the Base Instance

While waiting for your limits to be increased, start your t2.micro instance in the same availability zone where your computing instance will be started later.

Navigate to the Instances page in your EC2 Management Console, under the "Instances" container on the left panel.

Click on the "Launch Instance" button.

On the next page, select Amazon Linux 2 AMI.

Select t2.micro for the instance type, then click Next to configure the instance settings.

Make sure the instance count is set to 1, select the correct subnet (in the right availability zone!), and set the shutdown behavior to "Stop". Click the Next button.

Add storage: the default settings are OK (8 GB SSD volume). Click the Next button.

Add tags: we will not use this. Click the Next button.

Security groups: create a new one. SSH, TCP, port 22, and the source field should be your IP. To achieve this, select "My IP" from the dropdown, and the IP field auto-fills with your IP address. Click Review and launch.
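
If you prefer the AWS CLI, an equivalent security group can be created from a terminal (the group name is a placeholder of my choosing, and YOUR_PUBLIC_IP stands for your own address; this assumes the CLI is installed and configured):

aws ec2 create-security-group \
    --group-name photogrammetry-ssh \
    --description "SSH access from my IP only"
aws ec2 authorize-security-group-ingress \
    --group-name photogrammetry-ssh \
    --protocol tcp --port 22 \
    --cidr YOUR_PUBLIC_IP/32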

Review and launch: double-check the security group and the instance details, especially the availability zone.

Click the Launch button. You will be asked to select a key pair to work with.

If you don't have a key pair yet, you should create one and name it as you like, "AWS" for example.

Save the PEM file to your computer in a well-known location; you will need that file to access your AWS instance.

After saving, click Launch Instance.

You will be redirected to the Instances page.

Step 4: Attaching Data Volume

Now, attach the created volume to your newly created t2.micro instance:

Select Elastic Block Store > Volumes from the left panel.

Select your Data volume from the list, click Actions, select Attach volume.

Select your instance to attach this volume to and click Attach.

A volume can only be attached to one EC2 instance at a time. Later, we'll detach this volume and attach it to the g3.4xlarge instance for the processing, then reattach it to this one.
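
The same attachment can also be done with the AWS CLI (the volume and instance IDs below are placeholders). Note that a device name such as /dev/sdf specified at attach time typically shows up as /dev/xvdf inside the instance:

aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf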

Step 5: Connect to the Base Instance

It's time to connect to our micro instance.

Go to the Instances page on the EC2 Management Console and select your instance. At the bottom of the view, there will be various information about the instance and its current state. What we need to check here is the Public DNS of the instance and that it is in the "running" state.

You can connect to this instance from the machine you used during the launch of the instance, because you selected "My IP" in the security settings. That means this instance is accessible only from your IP address, and only with the downloaded PEM file.

To connect to your instance from Linux or macOS:

Open a terminal.

First, you need to set restricted permissions on the AWS.pem file you downloaded earlier with the following command:

chmod 400 AWS.pem

This grants read permission only to the owner of the file (sudo isn't needed, since you own the file).

Now you can connect to the instance using the following command and the instance URL shown on the Instances page earlier:

ssh -i "AWS.pem" ec2-user@instance_url

When connecting for the first time, the terminal will ask you to confirm the identity of the remote server and store its fingerprint. Type in "yes" and press Enter:

"The authenticity of host 'ec2-XX-XXX-XXX-XXX.eu-west-1.compute.amazonaws.com (XX.XXX.XXX.XXX)' can't be established.
ECDSA key fingerprint is SHA256:+abadfjlsnfjasnlfdsfdkadfasdasdasd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-XX-XXX-XXX-XXX.eu-west-1.compute.amazonaws.com,XX.XXX.XXX.XXX' (ECDSA) to the list of known hosts.
__| __|_ )
_| ( / Amazon Linux 2 AMI
___|\___|___|
https://aws.amazon.com/amazon-linux-2/ 16 package(s) needed for security, out of 24 available Run "sudo yum update" to apply all updates."

To connect to your instance from Windows:

You can connect to it using PuTTY:

See the following tutorial from Amazon

Step 6: Prepare the Project

You have connected to your t2.micro instance using PuTTY (from Windows) or SSH (from Linux or macOS).

First, list the available volumes to check if the EBS volume is attached correctly with this command:

lsblk

It returns something like this:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 88.4M 1 loop /snap/core/7169
loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/1335
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 20G 0 disk

The last entry (xvdf) is the attached Data volume.

Before the first use, create a directory to mount the volume to, format the volume (formatting erases its contents, so only do this on first use), mount it, and change its owner to ec2-user:

sudo mkdir /data
sudo mkfs -t xfs /dev/xvdf
sudo mount /dev/xvdf /data
sudo chown ec2-user:ec2-user /data
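
If you aren't sure whether the volume has already been formatted, you can check before running mkfs. A brand-new, unformatted volume is reported as plain "data", while a formatted one reports its filesystem:

sudo file -s /dev/xvdf

Skip the mkfs command on later runs, otherwise you will wipe your project data.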

Now we can install Meshroom by downloading it to the /data folder and extracting it:

cd /data
wget -N https://github.com/alicevision/meshroom/releases/download/v2019.1.0/Meshroom-2019.1.0-linux.tar.gz
tar xzfv Meshroom-2019.1.0-linux.tar.gz

After that, create the project folder where the photogrammetry data will go:

mkdir /data/project

Now unzip the attached processing script and copy it, along with your first project's input data, to the /data/project folder of the instance. I recommend compressing your input data into one file to make the file transfer faster.

You can compress it into a zip or tar.gz file. You can use scp from a Linux or macOS terminal, or WinSCP from Windows, to copy files back and forth between your PC and your EC2 instance.
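
For example, on a Linux or macOS machine, assuming your photos are already collected in a folder named input (the folder name the script expects, as described below), you can create the archive like this:

tar czf input.tar.gz input/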

From Linux, run the following commands with your PEM file and instance URL from a terminal on your local PC:

scp -i "AWS.pem" script.sh ec2-user@instance_url:/data/project/
scp -i "AWS.pem" input.tar.gz ec2-user@instance_url:/data/project/

From Windows:

See WinSCP tutorial from Amazon

Let's say we call our project "project_name".

To decompress the uploaded input data, go to the project folder and untar/unzip it from the terminal/PuTTY:

cd /data/project
mkdir project_name
cd project_name
tar xzfv /data/project/input.tar.gz

or, if you uploaded a zip file (run this from the /data/project folder):

unzip input.zip -d project_name/

For the script to work correctly, the input images must be put into an input subfolder, directly under the project_name folder.
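
The expected layout therefore looks like this (the image names are just examples):

/data/project/project_name/
└── input/
    ├── IMG_0001.jpg
    ├── IMG_0002.jpg
    └── ...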

Leave the Data volume's folder and unmount it:

cd ~
sudo umount /dev/xvdf

You can exit the SSH session by typing exit and pressing Enter, or just pressing Ctrl+D, but you can also leave it open if you like. You can connect to your other instance from a different terminal/PuTTY window.

After you have finished preparing the project, detach the Data volume from the EC2 Management Console:

Open the Elastic Block Store > Volumes page, select the 20 GiB Data volume, select Actions > Detach volume.

Step 7: Start the Processing Instance, Attach Data Volume

Now the Data volume is ready, so we can start a g3.4xlarge instance to process the uploaded images with Meshroom.
Open the Instances page on the EC2 Management Console and click Launch Instance.

Select "Deep learning base AMI (Amazon linux 2) V19". To easily find it, search with: ami-0725d2c605f0ce9a5

Select g3.4xlarge for instance type.

On the instance Details page:

  • Select the same subnet as the EBS volume and the t2.micro instance.
  • Check the "spot request" checkbox to start a spot request.
  • You can optionally set a price limit for the spot instance.
    Spot instance prices can change after every hour of processing. If you set a limit and the price rises above it, the instance is stopped automatically.
  • Set the spot request to be valid for a short time window, like 2-3 hours or so.
    This way you limit the processing time in case some issue causes the process to run indefinitely.

Default storage settings can be used. Click Next.

No Tags need to be added. Click Next.

On the Security groups page, use the previously created security group by choosing "Select an existing security group" and selecting it from the list.

Review the instance details, then request the spot instance with the same key pair as the other instance.

On the last page, you will be notified that the Spot instance request is being created. Click on the "View Spot Requests" button to navigate to the Spot Requests page on the EC2 Management Console.

When the status of your request changes to "fulfilled", you can click the instance-id in the "capacity" column to open the Instances page with your g3.4xlarge instance selected, and follow its state, until it is "running".

While it is starting up, or after it is started, go to the Volumes page, and attach the Data volume to this instance.

Step 8: Do the Processing

Now connect to the g3.4xlarge instance and start the processing.

Open the instances page from the EC2 Management Console, select your g3.4xlarge instance, and copy its instance_url to the clipboard.

SSH to the new instance from Linux:

ssh -i "AWS.pem" ec2-user@instance_url

Or use PuTTY from Windows:

See Amazon's tutorial

If you haven't done so yet, open Volumes on the Management Console and attach the Data volume to the g3.4xlarge instance.

List the available volumes:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 88.4M 1 loop /snap/core/7169
loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/1335
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 20G 0 disk

Create a directory to mount to:

sudo mkdir /data

Mount the xvdf volume to the /data folder, then enter your project directory:

sudo mount /dev/xvdf /data
cd /data/project/your_project_directory

Start the process:

sudo ../script.sh
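
Note that scp does not preserve the executable bit by default, so if the shell complains about permissions, make the script executable first:

chmod +x ../script.sh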

You have to keep the terminal window open until the script finishes its work. You can follow the progress in the terminal.

The script will unmount the volume and shut down the instance after the process is done, to save time and money.
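
For reference, here is a minimal sketch of what such a processing script might look like. This is an illustration under assumptions, not the attached script itself: it assumes Meshroom 2019.1.0 was extracted to /data/Meshroom-2019.1.0 as in Step 6, that the script is started from inside the project folder, and that the images sit in its input subfolder.

#!/bin/bash
# Minimal sketch of the processing script (assumptions: Meshroom extracted
# to /data/Meshroom-2019.1.0, run from inside the project folder via
# "sudo ../script.sh", input images in ./input)
PROJECT_DIR="$(pwd)"
mkdir -p "$PROJECT_DIR/output"
# meshroom_photogrammetry is the command-line pipeline runner shipped
# with the Meshroom 2019.1.0 Linux release
/data/Meshroom-2019.1.0/meshroom_photogrammetry \
    --input "$PROJECT_DIR/input" \
    --output "$PROJECT_DIR/output" \
    --cache "$PROJECT_DIR/cache"
# Release the Data volume and shut the instance down so it stops costing money
cd /
umount /dev/xvdf
shutdown -h now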

Step 9: Download the Results

After the script has finished and the g3.4xlarge instance is terminated, you can reattach the Data volume to the t2.micro instance and download the end result.

Start the micro instance from the EC2 Management Console if it was stopped (note that if it was stopped, the instance URL has changed! Copy the new URL from the Instances page).

Attach the Data volume to the micro instance from the Volumes screen.

Connect to the t2.micro instance from a Linux terminal or from PuTTY, and mount the Data volume:

sudo mount /dev/xvdf /data

Copy the output of the photogrammetry to your local computer:

From your local Linux machine (where local_output_folder is a path on your computer to copy the results to):

scp -i "AWS.pem" ec2-user@instance_url:/data/project/your_project_folder/output/* local_output_folder

From Windows:

See WinSCP tutorial from Amazon

Step 10: Enjoy the Results

Open the resulting texturedMesh.obj file in a 3D modeling tool and enjoy your textured 3D model.