Drone videography has really exploded in recent years, and there are a huge number of talented pilots who make the most acrobatic videos using their quadcopters and first-person-view headsets. My brother Johnny FPV is one of these pilots, so I wanted to see if it was possible to reconstruct the environments he flies in from his freestyle footage.
Step 1: Preprocess Your Video
Once you've got some aerial footage, some preprocessing is required. I'm using Adobe Media Encoder, but almost any video editing software should be able to take care of this.
I selected a short clip (~11 seconds), changed the framerate from 29.97 to 30fps, and saved the new video to my desired folder.
Next, I'm using FFMPEG to export every other frame of the video as a jpg. If you are unfamiliar, FFMPEG is a set of command line tools that allows processing and conversion of almost ANY kind of audio or video you can think of. There is paid software that will allow you to do many of the same things, but if you're willing to work with the command line a little, it can be an incredibly powerful tool.
A good guide to installing FFMPEG is available here.
You'll want to change your directory to the location of your video file (cd), and then use the following command:
ffmpeg -i (name of your video file) -vf fps=15 exp%03d.jpg
Changing the fps will naturally change the number of images exported per second of video. This goes back to why I changed the fps of the video from 29.97 to 30- grabbing 15 images per second will now simply grab every other frame from the video. If you wanted every sixth frame you'd set it to 5 fps... etc.
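To sanity-check that arithmetic, here's a quick back-of-the-envelope calculation in Python (the clip length and framerates are the ones from above; exact counts can vary by a frame or two depending on how ffmpeg rounds):

```python
# Approximate number of frames the fps filter will export:
# extraction fps x clip duration.
source_fps = 30      # after re-encoding from 29.97
clip_seconds = 11    # length of the example clip
extract_fps = 15     # the value passed to -vf fps=15

frames_out = extract_fps * clip_seconds
step = source_fps // extract_fps  # source frames per exported frame

print(frames_out)  # 165 images for the ~11 second clip
print(step)        # 2 -> every other frame
```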
"exp%03d.jpg" will result in the images being saved as exp000.jpg, where the images are numbered sequentially with three digits- if you have a longer video clip and want to export more than 999 images, writing %04d would number all the images with four digits; you could export up to 9999.
(note: "ffmpeg -i (name of your video file) -r (framerate) -f image2 exp%03d.jpg" also works for extracting frames from video, but for whatever reason, I get better point clouds from frames extracted with the former method)
Step 2: Import Photos and Build a Point Cloud
Once you've got a set of images, you can begin a normal photogrammetry workflow. I'm using Agisoft PhotoScan Pro, but other programs such as Autodesk ReMake would probably be equally (if not more) successful.
After importing my photos, I also set the camera calibration type to Fisheye, since this footage originally came from a GoPro. A few minutes of processing later, and some 3D information begins to emerge! The sparse point cloud might not look like much- it consists of only a few thousand points, which isn't quite enough to compute a mesh yet. From this I built a dense point cloud, and now have about 200k points to work with.
Step 3: Build a Mesh
Now that we have all these points to work with, a mesh can be computed. I'm using a relatively high face count, and enabling interpolation- this will make the mesh a little "fuzzier" but there will be far fewer missing surfaces and holes in the mesh in the end. After a few minutes of processing, we begin to have a result resembling the architecture the drone was flying around!
Step 4: Texture and Final Results
Photoscan also allows you to build a texture for your mesh from the input images, which gives the final touch of detail for the model. I have mixed feelings about the results of this process (there are way better ways of producing an accurate model), but overall I think it's amazing that any model at all can come out of such nutty footage!
Further directions I might take this project could be to thicken the surfaces into watertight models for 3D printing, or they could become part of a surrealist VR landscape.