UAVs as a Digital Tool for Preservation

Agisoft PhotoScan Professional is software for converting any set of overlapping pictures into a variety of data, including 3D point clouds, 3D meshes, aerial orthomosaics, digital elevation models (DEMs), and more.

This tutorial will take you through a basic example of turning a set of photos captured with an unmanned aerial vehicle (UAV) into a point cloud, a DEM, and an orthomosaic. The software offers many more operations and features for creating the highest quality data possible. The program is fairly simple to use, yet powerful and able to process large amounts of data, though a powerful computer may be necessary! PhotoScan also has batch processing capabilities as well as a Python API that can be used to write processing scripts.

These example photos have the coordinates (longitude and latitude) stored in the camera's EXIF data. Surveyed ground control points are highly recommended for very accurate georeferencing, but they will not be used here in the interest of time.
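For context, the snippet below is a minimal, hedged way to confirm (outside PhotoScan) that a photo carries GPS tags in its EXIF data. It assumes the Pillow library is installed, and the file name used is hypothetical.

    # Quick check that a photo carries GPS EXIF tags (sketch; requires Pillow).
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open(r"D:\HanoverHouse\photos\DJI_0001.JPG")  # hypothetical file name
    exif = img._getexif() or {}
    gps_raw = exif.get(34853, {})  # 34853 is the EXIF tag ID for GPSInfo
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    print(gps.get("GPSLatitude"), gps.get("GPSLongitude"), gps.get("GPSAltitude"))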

Getting Started with PhotoScan

1. Open Agisoft PhotoScan Professional from the Start menu (Start > All Programs > Agisoft). Click File > Save and save the PhotoScan project as HanoverHouse.

This will open the software window, which has a few main parts.

    • Reference pane lists the cameras, ground control points (called markers), and scale bars.

    • Workspace pane lists the different parts of the model, such as sparse cloud, dense point cloud, DEM, etc. Double clicking an item will open it in the Model pane.

    • Model pane shows the active portion of the model in a 3D or ortho view (depending on the selection).

    • Photos pane displays all photos added to the project.

    • Console displays the command line output from all processing and can be used to enter Python commands from the API directly.
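As a small illustration of the last point, a few lines like the following could be typed into the Console. This is a sketch only; the module name PhotoScan and the attributes used here follow the PhotoScan 1.x Python API reference, so check Help > Python API Reference for your version.

    import PhotoScan

    doc = PhotoScan.app.document                # the currently open project
    for chunk in doc.chunks:                    # e.g. "Chunk 1"
        print(chunk.label, len(chunk.cameras))  # chunk name and number of photos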

Adding and Aligning Photos

This step will load photos (referred to as cameras in the software) into the project, then estimate the camera locations and generate a sparse point cloud.

2. Click on Workflow > Add Photos...

3. Navigate to D:\HanoverHouse\photos. Click anywhere in the file list, press Ctrl+A to select all photos, and click Open.

The photos will be added to Chunk 1 and appear in the Photos pane. Because they have coordinates in the EXIF data, they will also show up as blue squares in the Model pane.
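For reference, steps 2-3 could also be scripted from the Console. Treat this as a hedged sketch: the calls follow the PhotoScan 1.x Python API, and the photo folder is simply the one used above.

    import glob
    import PhotoScan

    doc = PhotoScan.app.document
    chunk = doc.chunk or doc.addChunk()                  # active chunk, or create one
    photos = glob.glob(r"D:\HanoverHouse\photos\*.JPG")  # same folder as step 3
    chunk.addPhotos(photos)                              # cameras appear in the Photos pane
    print("Loaded", len(chunk.cameras), "photos")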

4. Click on Chunk 1 to expand it. Then click the Reference button at the bottom and inspect the information about each camera (photo).

The coordinates and altitude are shown for each photo. The additional columns relate to the accuracy of the model and will remain empty for now. Let's align the photos, which will prompt for a number of parameters:

    • Accuracy refers to the accuracy of the estimated camera locations. At High accuracy, the photos are processed at full resolution; each step lower downscales the photos by a factor of 4 (2 times per side). This reduces processing time, which may be advantageous, but decreases accuracy.

    • Pair preselection controls how the software selects pairs of photos to compare. Reference preselection uses the GPS position information, so photos taken far from one another are not compared. Generic preselection performs an initial match at a lower accuracy setting to identify overlapping pairs before the final matching. For large datasets, either option can reduce processing time significantly.

    • Key point limit is the upper limit on the number of feature points detected on each photo. A higher number is generally better, but the default is a good value for most cases.

    • Tie point limit is the upper limit on the number of matching (tie) points per photo. Again, the default is a good value in most cases.

    • Constrain features by mask excludes masked portions of photos from matching. This is useful for removing unimportant parts of photos, such as sky or background, from processing.

5. Click on the Workspace pane again. Next, click Workflow > Align Photos...

6. Set accuracy to Medium. Check the boxes for both Generic preselection and Reference preselection. Leave other settings at default. Click OK to run.

Once the processing finishes, a sparse point cloud will appear in the Model pane.

7. Click and drag in the Model pane to rotate the model. Use the mouse wheel to zoom in and out. Inspect the sparse cloud for potential errors in alignment.
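The alignment settings chosen in steps 5-6 can also be applied from the Console. Treat this as a sketch: the keyword names follow the PhotoScan 1.3 Python API (older 1.x releases use a single preselection argument), and the key/tie point limits shown are simply the defaults.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.matchPhotos(accuracy=PhotoScan.MediumAccuracy,  # step 6: Medium accuracy
                      generic_preselection=True,          # both preselection modes enabled
                      reference_preselection=True,
                      keypoint_limit=40000,               # default key point limit
                      tiepoint_limit=4000)                # default tie point limit
    chunk.alignCameras()                                  # estimates cameras, builds the sparse cloud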

Building the Dense Point Cloud

Now that we have a good-looking sparse cloud we can generate our dense point cloud! Notice the blue bounding box around the sparse cloud in the Model pane. This determines the extent of any further processing.

8. Click the Resize Region button. Click and drag the corners of the bounding box to the extent of the sparse point cloud. Then click the Navigation tool to switch back.

Now that the bounding box has been set, we will generate the dense cloud. Again, there are many parameters to be set:

    • Quality sets the quality of the dense point cloud reconstruction. Lower quality uses downscaled photos and produces fewer points, but processes much faster.

    • Depth filtering removes outlier points during processing when enabled. Aggressive, Moderate, and Mild settings are available, or it can be disabled to keep all points. If small details are important, do not use Moderate or Aggressive depth filtering.

9. Click on Workflow > Build Dense Cloud...

10. Set the quality to Low and the depth filtering to Moderate. Click OK to start.

11. Once it finishes, notice that a Dense Cloud has been added in the Workspace panel. Double click on it to show it in the Model pane.

12. Explore the resulting point cloud!
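Steps 9-10 have a scripted equivalent as well. A hedged sketch: in PhotoScan 1.3+ the depth maps are built first, while older 1.x releases accept the quality and filter arguments directly on buildDenseCloud.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildDepthMaps(quality=PhotoScan.LowQuality,        # step 10: Low quality
                         filter=PhotoScan.ModerateFiltering)  # Moderate depth filtering
    chunk.buildDenseCloud()                                   # adds the Dense Cloud to the chunk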

Generating a Textured 3D Mesh

The next step will be to create a textured 3D model, stored as a mesh of triangular faces. This dataset is useful for visualization of the results and can be brought into 3D modeling environments, virtual reality, and others.

13. Click on Workflow > Build Mesh...

Let's review the settings.

    • Surface Type describes the type of environment to be modeled.

      • Height field is for modeling surfaces that are approximately flat or "planar," such as a landscape or topography; this method is adequate for such data and processes faster.

      • Arbitrary can model any type of object but is more computationally intensive.

    • Source Data will generally be set to Dense cloud to provide a good quality model. Use Sparse cloud if a model is needed quickly and quality is not important.

    • Face count (high, medium, low, custom) sets the number of triangular faces used in the model. More faces = better quality but longer processing time and larger file size.

    • Interpolation controls whether gaps in the model are filled. The Extrapolated setting fills all holes, while the default setting may leave gaps.

    • Point Classes will default to using All points. If points are classified, a mesh can be built from a particular class (e.g. ground only, vegetation only).

14. Accept all the defaults and click OK to run. View the results.
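A scripted version of the mesh step might look like the sketch below. The enum names follow the PhotoScan 1.x Python API, and the values shown mirror typical defaults rather than anything required by this dataset.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildModel(surface=PhotoScan.Arbitrary,           # arbitrary surface type
                     source=PhotoScan.DenseCloudData,       # build from the dense cloud
                     face_count=PhotoScan.MediumFaceCount,  # medium number of faces
                     interpolation=PhotoScan.EnabledInterpolation)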


15. Let's add texture to the model. Click on Workflow > Build Texture... which has the following settings:

    • Texture mapping modes alter the visual quality of the output model.

      • Generic is the default and tries to create a uniform texture. This is appropriate if none of the other options fit your needs.

      • Adaptive orthophoto produces good texture quality for vertical surfaces, such as building walls or cliffs.

      • Orthophoto produces smaller file sizes and is adequate for "planar" surfaces, such as topography, but does not render vertical faces well.

      • Spherical is best only for features with a ball-like form.

      • Single photo uses texture from a single photo; this is rarely useful for UAV surveys.

      • Keep uv reuses the existing texture parameterization; this is an advanced option and is not recommended here.

    • Blending mode determines how pixels of the same location in different photos will be averaged/merged.

      • Average takes a mean of all values.

      • Mosaic will attempt to reduce visual seams to produce a good quality output.

      • Max Intensity and Min Intensity select the photo with the highest or lowest intensity value for each pixel of the output.

      • Disabled will select a single photo for that feature and use it to build texture.

    • Texture size/count sets the pixel dimensions of the resulting texture and the number of texture files; a higher size means higher resolution.

      • If visualization is critically important, set high (4096 x 8)

      • To reduce file size, leave default or decrease

16. Set the mapping mode to Adaptive orthophoto and leave the other defaults. Click OK to run. Check out the results.
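The texture settings from steps 15-16 can be scripted too. Again a sketch: buildUV sets the mapping mode and buildTexture does the blending, with names as given in the PhotoScan 1.x Python API.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildUV(mapping=PhotoScan.AdaptiveOrthophotoMapping)  # step 16: Adaptive orthophoto
    chunk.buildTexture(blending=PhotoScan.MosaicBlending,       # default blending mode
                       size=4096)                               # texture size in pixels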

Creating a Digital Elevation Model and Orthomosaic

Once the dense cloud has been generated, a number of other products can be derived. A DEM can be built by interpolating the point cloud elevations onto a grid, and the photos can be orthorectified and mosaicked to generate an orthomosaic of the area. We will create both of these products.

Note: building a DEM is only possible if there is georeferencing information present, such as in the photo EXIF data or in the form of ground control points.

17. Click on Workflow > Build DEM...

The coordinate system of the DEM needs to match the coordinate system of the georeferencing data. In this case, it is the WGS 1984 datum used to define the coordinates in the EXIF data of the photos. The data can be exported into a different coordinate system.

18. Make sure to select Geographic for the Type and select WGS 84 (EPSG::4326) in the dropdown, if not already selected.

19. Set the Source Data to Dense cloud and Interpolation to Enabled.

20. Leave the default resolution, but take note of the pixel size (about 0.25 m per pixel). Click OK to process. When finished, double click the DEM to view it in the Model pane.
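Steps 17-20 can be reproduced from the Console along the lines of the sketch below; the coordinate system string and enum names follow the PhotoScan 1.x Python API.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.crs = PhotoScan.CoordinateSystem("EPSG::4326")  # WGS 84, as in step 18
    chunk.buildDem(source=PhotoScan.DenseCloudData,       # step 19: dense cloud source
                   interpolation=PhotoScan.EnabledInterpolation)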

Creating the orthomosaic will also prompt for a number of settings to be defined:

    • Surface is the source surface used for orthorectification and texture overlay. DEM is best for aerial survey data; mesh is better for non-georeferenced data.

    • Blending mode is the way in which pixels from various photos will be combined in the output image. The Mosaic mode blends pixels from multiple images to reduce seams in the texture. Average mode will average all pixels of the object. Disabled will not average pixels in any way; it will only use the "best" photo for each feature.

    • Color correction will reduce the variation in brightness among the data.

    • Pixel size is determined from the ground sampling resolution. Generally you can increase it for a smaller file, but do not decrease it below the initial value.
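The orthomosaic settings above map onto the Python API in a similar way; the sketch below uses PhotoScan 1.3-style arguments (the hole-filling argument in particular may differ by version).

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildOrthomosaic(surface=PhotoScan.ElevationData,    # orthorectify against the DEM
                           blending=PhotoScan.MosaicBlending,  # Mosaic blending mode
                           fill_holes=True)                    # "Enable hole filling" (if supported)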

21. Click on Workflow > Build Orthomosaic...

22. Set the surface to DEM and the blending mode to Mosaic if not already specified.

23. Click the Metres... button to see the resolution of the orthomosaic in metres (about 3.5 cm per pixel).

24. Check the box to Enable hole filling and then click OK to run.

25. When processing finishes, double click on the Orthomosaic in the Workspace pane and inspect the results.

26. Save your project.

Note the detail and resolution in the DEM. Because the point cloud includes all surface objects (such as buildings, trees and vegetation, and the ground), this is a Digital Surface Model (DSM). Also note the areas of "smearing" due to gaps in the dense point cloud.

Viewing the Processing Report

PhotoScan has the ability to automatically generate a report on the project processing, with information on:

    • Survey data: number of photos, number of aligned photos, flying altitude, tie points, ground resolution, coverage area, overlap

    • Camera calibration: results of the fitted camera model parameters

    • Camera location error: lists errors in the XYZ positions

    • Ground control error: positioning uncertainty in the ground control

    • DEM output

    • Processing parameters: all parameters used to generate the models, time required to process

To create the report:

27. Click on File > Generate report...

28. Enter a title and description for the report and click OK.

29. Enter a name for the PDF and specify the output location (e.g. D:\UAV_LiDAR_Workshop) and click Save.

30. View the report and inspect the contents.
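Report generation is also exposed through the Python API. A hedged sketch: the title and description keywords follow the PhotoScan 1.x API reference, and the output path is just an example.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.exportReport(r"D:\UAV_LiDAR_Workshop\HanoverHouse_report.pdf",  # example output path
                       title="Hanover House UAV Survey",
                       description="Processing report generated by script")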

Exporting Data for Use in Other Programs

Now that we have a variety of data products generated, we can export these for use in GIS, mapping, and analysis. Some general notes about exporting are:

    • Point clouds can be exported to a variety of formats: OBJ, PLY, XYZ, ASPRS LAS and LAZ, E57, U3D, potree, DXF, OC3, and as a 3D PDF.

    • DEMs can be exported to GeoTIFF, BIL, and XYZ; as a KMZ; and as tiles. Boundaries can be specified and/or the data split into blocks.

    • Orthomosaics can be exported as a TIFF, JPEG, or PNG; as tiles; and as a KMZ. Boundaries can be specified and/or the data split into blocks.

    • Coordinate systems can be specified for most georeferenced datasets and do not necessarily have to match the coordinate system of the ground control data.

To export data, right-click on one of the data sets in the Workspace panel and follow the dialogs. For example:

31. Right click on the Orthomosaic and select Export Orthomosaic > Export JPEG/TIFF/PNG.

32. Specify the output coordinate system and the desired Pixel size (the default is about 5 cm).

33. Check the box to Write World File to save the georeferencing information along with the image file. Click Export... and give an appropriate name and location to finish.

34. Navigate to the output directory to view the result!
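The same exports can be scripted. The sketch below uses only the path argument for each call and lets the file extension choose the format; the many optional keyword arguments (pixel size, coordinate system, image format, and so on) are documented in the Python API reference and vary by version.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.exportPoints(r"D:\UAV_LiDAR_Workshop\HanoverHouse_dense.las")       # dense point cloud
    chunk.exportDem(r"D:\UAV_LiDAR_Workshop\HanoverHouse_dem.tif")            # digital elevation model
    chunk.exportOrthomosaic(r"D:\UAV_LiDAR_Workshop\HanoverHouse_ortho.tif")  # orthomosaic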

Additional Topics

PhotoScan offers many, many more features which we do not have time to explore today but which will become the topic of future workshops and workflow documents. A brief list of topics to further your expertise:

    • Incorporating targets to improve scaling and accuracy

    • Using Ground Control Points to georeference the model

    • Importing camera GPS positions (not already in EXIF)

    • Editing the Point Cloud - deleting, classifying, modifying points

    • Building textured meshes

    • Batch processing to automate steps

    • Scripting using the API

    • Using Clemson's HPC cluster, Palmetto, for processing

    • Calculating and exporting NDVI from multispectral imagery

And many, many more!

Sources:

Materials and information are modified from the Analyzing High Resolution Topography with TLS and SFM workshop materials provided by UNAVCO and used under a Creative Commons license.

  • Douglas, B., et al. (2018). Analyzing High Resolution Topography with TLS and SFM. InTeGrate. Retrieved April 19, 2018, from https://serc.carleton.edu/dev/getsi/teaching_materials/high-rez-topo/index.html