Agisoft Metashape Pro Software


We are going to be using software specifically written to produce three-dimensional models from overlapping photos taken with any type of imaging equipment: Metashape Pro, made by the Russian company Agisoft. Over the years Scott has had several projects that make use of Metashape Pro, ranging from creating 3D models of human remains, to time series orthomosaics and elevation models, to UAV image processing. In today's lab, we are going to work with UAV data flown this past year as part of a Caribou Habitat Monitoring Project.

This software is not installed on the computers in the lab, so start off by using Remote Desktop to connect to osmotar.gis.unbc.ca.

We will be working out of the research drive, otherwise known as the R: drive. There is, however, a catch: the software does not handle network drive letters well, so we will access the drive using the UNC path. For example, R:\GEOG457 is equivalent to \\gis-smb.gis.unbc.ca\Research\GEOG457. It is very important that we open our files with the UNC path from the beginning. You can access this path by opening Windows File Explorer and typing over the file path.

Outline of Structure From Motion Processing

Structure from motion is the process of calculating the differences between images taken of the same area from different perspectives and relating those differences to 3D space. Additionally, we will be completing a radiometric calibration along the way that will give this data similar properties to imagery from Landsat or Sentinel.

Dual Camera System compared with Satellites
https://micasense.com/dual-camera-system/

The steps we will follow today are as follows (* denotes an optional but recommended step):

  1. Import data into Metashape Pro combining multiple files into a single camera.
  2. Perform Radiometric Calibration
  3. Align Photos
  4. * Build Dense Point Cloud
  5. * Classify Point Cloud
  6. Generate Digital Elevation Model (Will be DSM if point cloud unclassified)
  7. Generate Orthomosaic
  8. Export Products

Prep Your Workspace

Navigate to the folder \\gis-smb.gis.unbc.ca\Research\GEOG457 and in this folder make a subfolder for your lab today (Right Click > New > Folder).


*Please note that everyone in the class can access the contents of this GEOG457 folder, and as we are going to be generating very large files today, we will be deleting this folder in a few weeks.

Next, open Agisoft Metashape Professional.

When it opens we will configure the software for network processing. To do this, go into Tools > Preferences > Network and ensure that the Hostname is osmotar.gis.unbc.ca and the Root is \\gis-smb.gis.unbc.ca. What we are doing here is telling Metashape that whenever we process, we want to send the job to Osmotar; Osmotar then takes the job, breaks it into pieces, and distributes it across the network. All of the processing agents also have their file system root set to \\gis-smb.gis.unbc.ca, so this will help them find the files. This is actually very similar to when you access Osmotar for Remote Desktop: Osmotar is the gateway and it assigns you to one of four servers based upon which one has the fewest users.

*One of the most common sources of job failure is a typo in the root path.

Import the Data

The easiest way to add the multispectral photos is to go into the folder \\gis-smb.gis.unbc.ca\Research\GEOG457\Dataset\200rd_fsr using the Windows file browser, and then drag the entire MicasenseRedEdgeDual folder onto “Chunk 1”.

It will take a little time for the photos to load, then you will be presented with an option to choose the data layout; this is a Multi-camera system (each band is a separate file inside the folder). If everything worked properly you should see a message about calibration images.
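
If you prefer scripting, the same import can be done from Metashape's built-in Python console. The sketch below is a minimal example, not the official lab procedure: the folder path is the dataset path from above, and the MultiplaneLayout constant and addPhotos signature should be checked against the API reference for the version installed on Osmotar.

    import os
    import Metashape  # module bundled with Metashape Pro's Python console

    doc = Metashape.app.document   # the currently open project
    chunk = doc.addChunk()         # equivalent to "Chunk 1"

    # Dataset folder from the lab; each band is a separate TIFF inside it.
    folder = r"\\gis-smb.gis.unbc.ca\Research\GEOG457\Dataset\200rd_fsr\MicasenseRedEdgeDual"
    images = [os.path.join(root, name)
              for root, _, files in os.walk(folder)
              for name in files if name.lower().endswith(".tif")]

    # Load all bands as one multi-camera (multiplane) system so the bands
    # of each exposure share a single camera position.
    chunk.addPhotos(images, layout=Metashape.MultiplaneLayout)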

Radiometric Calibration

To calibrate our photos we will be using two pieces of equipment:

Calibration Panel

This is a small piece of material with known reflectance. To see what this looks like, use Windows Photo Viewer to take a look at the various bands of IMG_0000 or IMG_0001. What do you notice about the calibration card? How about the drone case it is sitting on?

Downwelling Light Sensor

This is a sensor on top of the RPAS that measures how much sunlight there is to help account for differences in light levels over the duration of the flight.

In the Tools menu you will find the Calibrate Reflectance option.

A window will open that has 4 images listed and a blank column for Panel; press the Locate Panels button. The software will search for the barcode on the panel and add some basic information. Once this is done it may have populated the Reflectance column as well; whether it did or not, we will be adding a file regardless to get more precision. Select Panel…, then in the window that opens, go to the \\gis-smb.gis.unbc.ca\Research\GEOG457\Dataset folder, where you will find a .csv file with the same name as the panel that was detected.

This file contains the relative reflectance, as tested in a laboratory, for each wavelength from 250nm to 950nm.

Coming back to the Calibrate Reflectance menu, check Use Sun Sensor and press OK. When it asks if you want to process over the network, select No.

When it is done you will notice all the thumbnails along the bottom got darker.
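
For reference, the same calibration can be scripted. The calls below exist in recent 1.x versions of the Metashape Python API, but the panel CSV name is a placeholder and the keyword arguments are worth verifying against your version's reference; treat this as a sketch rather than the lab procedure.

    import Metashape

    chunk = Metashape.app.document.chunk

    # Find the calibration panel images by their barcode, then load the
    # laboratory reflectance values from the panel's CSV file.
    chunk.locateReflectancePanels()
    chunk.loadReflectancePanelCalibration(
        r"\\gis-smb.gis.unbc.ca\Research\GEOG457\Dataset\panel_calibration.csv")  # placeholder file name

    # Calibrate using both the panel and the downwelling light sensor.
    chunk.calibrateReflectance(use_reflectance_panels=True, use_sun_sensor=True)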

Align Photos

The next part of the process is Photo Alignment, the truly magical stage of processing where we take a set of individual photos and figure out how they fit together to form a 3D surface.

  • Accuracy: High processes at native resolution; each step down reduces image resolution to 1/4 for faster processing.
  • Preselection: This is used to help reduce the amount of computation we need to do. Generic checks photos that were taken in sequence for being stereo pairs, and Reference uses the GPS data stored with the photos as a starting point. This is in comparison to checking every photo against every other photo.
  • Exclude stationary tie points: If the same object is in the same place in many photos, ignore it. This is really helpful if you have dust on the lens or a faulty pixel.
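
As a sketch of how these settings map onto the Python API (1.x naming, where downscale=1 corresponds to High accuracy; the filter_stationary_points argument may not exist in older builds):

    import Metashape

    chunk = Metashape.app.document.chunk

    # High accuracy, generic + reference preselection, and ignore tie points
    # that never move between photos (dust on the lens, faulty pixels).
    chunk.matchPhotos(downscale=1,
                      generic_preselection=True,
                      reference_preselection=True,
                      filter_stationary_points=True)
    chunk.alignCameras()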

Now, because we all want to go home tonight, instead of pressing OK and waiting for 30 minutes multiplied by the number of us there are, let’s close Metashape and do some cooking-show magic: on the R drive, COPY the contents of the Phase2 subfolder into your subfolder and open the .psx file.

Wow, amazing how fast our photos aligned!

Build Dense Cloud

Again, going into the Workflow menu, you will find Build Dense Cloud.

For the options (a scripted equivalent follows this list):
  • Quality: This determines how many points the output cloud will have. The primary place where we will notice this is in the DEM resolution; High will match the orthomosaic, and every step down reduces resolution by 1/4.
  • Depth Filtering: This setting is used to remove noise from the output cloud; however, when working with vegetation, which moves over the time of capture, setting this too high may result in filtering out the vegetation. Conversely, if you were modeling smooth, solid surfaces you may want this high to get a cleaner result.
  • Calculate point colors: This has no impact on us today other than whether the preview is black and white or colour, though colour is needed for advanced feature classification.
  • Calculate point confidence: How sure the software is that a point belongs; this could be used for further filtering of the exports.
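
Scripted, the dense cloud step is two calls: depth maps first, then the cloud itself. This is a sketch using the 1.x API, where downscale=2 is believed to correspond to High quality (the method is renamed buildPointCloud in 2.x):

    import Metashape

    chunk = Metashape.app.document.chunk

    # High-quality depth maps with mild filtering so moving vegetation survives.
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)

    # Build the dense cloud, keeping per-point colour and confidence.
    chunk.buildDenseCloud(point_colors=True, point_confidence=True)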

Classify Ground Points

Because we like to be specific about our DEMs, we need to know the difference between bare earth and the surface. This is easy to do by classifying ground points. A note here: this may be ineffective on thick vegetation, as we have no concept of multiple returns. The algorithm works by looking for sharp changes in elevation that are within a size range; anywhere that it cannot see ground there will be no classified points, and we will need to interpolate later.

Tools > Dense Cloud > Classify Ground Points

Keep the default settings; you would only change these if the classification was not working, and doing so would require some trial and error.

To look at the results, expand your chunk and double-click on Dense Cloud. On the top menu bar, turn off cameras.

Then, in the Point Cloud view options (button with 9 dots, see image below), select Dense Cloud Classes.
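
The equivalent API call, with parameter values that are assumed (not confirmed) to match the dialog defaults, would look roughly like this:

    import Metashape

    chunk = Metashape.app.document.chunk

    # Separate ground from everything else on the dense cloud. The numbers are
    # assumed defaults (max angle 15 degrees, max distance 1 m, cell size 50 m);
    # only adjust them if the classification clearly fails.
    chunk.dense_cloud.classifyGroundPoints(max_angle=15.0,
                                           max_distance=1.0,
                                           cell_size=50.0)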

Digital Elevation Models

To build the DEM, you will find the options in the Workflow menu; a scripted sketch follows the option list below.

  • Projection:
    • Type: We will always want Geographic; Planar and Cylindrical are used in other forms of 3D modeling.
    • CRS: For this there are 3 primary choices we could make:
      • Select the UTM zone where the data was captured, in this case 10N (EPSG::32610).
      • If we want to share on the web, Pseudo-Mercator (EPSG::3857) may be a good option as this is what most webmaps use.
      • EPSG::4326; though this is the default (it matches the photo GPS tags), it should only be used for making 3D maps (Google Earth, ArcGIS Scene).
  • Source Data: Dense Cloud. For a fast, low-quality product you could use the sparse cloud (no ability to classify), or a Mesh.
    • One way to get a potentially sharper orthomosaic on rough terrain is to build a Mesh using only ground points. Under Tools > Mesh > Smoothing, apply smoothing until the mesh shows elevation changes but is not rough; this will require less warping when building the orthomosaic.
  • Point classes: Here you can select points based upon standard LAS classifications; for today's lab we will use the ground points only. Here is where some thought could be placed: using only ground points will produce a sharper image, though the photos were taken of the surface, not just the terrain.
  • Region: You could force specific pixel sizes; if you will be resampling regardless, this is likely a better place to do it. Otherwise stick with the defaults.
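
A scripted version of the choices above (UTM 10N, dense cloud as source, ground points only) might look like the sketch below; the OrthoProjection wrapper and the classes argument follow the 1.6+ API and should be verified against your version:

    import Metashape

    chunk = Metashape.app.document.chunk

    # Geographic projection in UTM zone 10N (EPSG::32610).
    proj = Metashape.OrthoProjection()
    proj.crs = Metashape.CoordinateSystem("EPSG::32610")

    # DEM from the dense cloud using only points classified as ground,
    # interpolating across gaps where no ground was visible.
    chunk.buildDem(source_data=Metashape.DenseCloudData,
                   interpolation=Metashape.EnabledInterpolation,
                   classes=[Metashape.PointClass.Ground],
                   projection=proj)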

Orthomosaic Generation

Almost finished. Build Orthomosaic is again in the Workflow menu (have you noticed how the menu is ordered yet?).

Sticking with type Geographic, you will notice that the CRS is not changeable; this is because it must match the DEM from the previous step when the DEM is used as the surface. A scripted sketch follows the option list below.
  • Surface: This could be built on our DEM or, if we made one, on a Mesh.
  • Blending Modes: Mosaic, Average, or Disabled.
    • Mosaic: The default and what we will be using; this attempts to find the optimal photo for any place and selects it for display in the ortho.
    • Average: Get all cameras that can see a point and average their DNs. This could potentially provide smoother output; however, as trees tend to move in the breeze, it makes the results blurry in our case.
    • Disabled: Just pick the closest camera.
  • Refine Seamlines: This option tries to pick optimal seamlines based on image content and camera coverage. As our camera has multiple lenses that do not quite overlap, it is possible to get a situation where, say, 8 bands come from one image and 2 bands from another. This we want enabled.
  • Enable hole filling: If there are gaps in the surface, build the ortho over the hole rather than providing no data.
  • Enable ghosting filter: This can help, especially if you have objects appearing differently in different photos but still not fully part of the output. You will not likely need this for RPAS data, but this post with a GIF does a much better job explaining: https://www.agisoft.com/forum/index.php?topic=7961.msg38271#msg38271
  • Pixel Size: Again, you likely want the defaults; however, if you know you are going to resample, set the resolution here. Note that X and Y are in map units. What would these be if you for some reason chose EPSG::4326?
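
Scripted with the settings discussed above, the orthomosaic step would be roughly (1.x API names, worth double checking):

    import Metashape

    chunk = Metashape.app.document.chunk

    # Drape the imagery over the DEM, pick the best photo per location (Mosaic),
    # refine seamlines so bands from one exposure stay together, and fill holes.
    chunk.buildOrthomosaic(surface_data=Metashape.ElevationData,
                           blending_mode=Metashape.MosaicBlending,
                           refine_seamlines=True,
                           fill_holes=True,
                           ghosting_filter=False)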

Export Results

Finally, to save the image, right-click on your orthomosaic and choose Export Orthomosaic (note that Export Orthomosaic and Export Orthophotos are not the same thing).

For some of the options that we have not seen yet (a scripted export sketch follows this list):

  • Split in blocks: For very large projects we may want to export a series of tiles rather than one image. Many computers struggle with large files, so keeping blocks around 8192 x 8192 pixels seems to be compatible with most desktops. Multiple files can also be a pain to manage, so it is a balance. Our project is small, so we do not need to worry about this.
  • Write World file: This is an extra sidecar file containing the location of the image, for software unable to read the GeoTIFF tags. You will likely not need this unless you have specific software that is having issues.
  • Compression:
    • TIFF compression: LZW is usually a safe bet as it is lossless compression; most people have a CPU that will have no issues decompressing, and it saves on hard drive space.
    • JPEG: This is lossy but makes the smallest files. I would consider this for RGB data, though I would avoid it for data that we want to perform further analysis on.
  • Write tiled TIFF: Affects the internal layout of the file; potentially provides faster loading times.
  • Generate TIFF overviews: Also sometimes referred to as pyramids, this makes the file bigger by adding various low-resolution copies that can be displayed while zoomed out. (Overviews and tiles work well together, as the computer never needs to load the entire file; it will either be displaying an overview or, when zoomed in, only some of the tiles.)
  • Alpha Channel: This is an extra band that represents transparency. In this case it is essentially a mask to prevent the edges displaying a solid colour.
  • BigTIFF: A format extension that allows saving TIFFs larger than 4GB; some programs without specialized support will not be able to open these (QGIS, Catalyst, ArcGIS, and Photoshop all work with them). When compatible, this is likely more convenient than splitting into blocks.
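
The export dialog maps onto exportRaster and an ImageCompression object. The sketch below uses the options discussed above (LZW, tiled, overviews, BigTIFF); the output path is a placeholder, and the attribute names follow the 1.x API:

    import Metashape

    chunk = Metashape.app.document.chunk

    compression = Metashape.ImageCompression()
    compression.tiff_compression = Metashape.ImageCompression.TiffCompressionLZW
    compression.tiff_tiled = True       # internal tiles for faster panning
    compression.tiff_overviews = True   # pyramids for faster zoomed-out display
    compression.tiff_big = True         # BigTIFF in case the mosaic exceeds 4 GB

    # Placeholder output path; point this at the lab folder you created earlier.
    chunk.exportRaster(r"\\gis-smb.gis.unbc.ca\Research\GEOG457\your_folder\ortho_200rd.tif",
                       source_data=Metashape.OrthomosaicData,
                       image_compression=compression,
                       save_alpha=True)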

Assignment

  1. (1 point) Build the orthomosaic for the same site (200rd) using the images from the AutelEvoIIPro. It will process significantly faster than the lab today due to the smaller size of the data; additionally, there is no need to perform radiometric calibration.
  2. (1 Point) Complete the process one more time with a dataset from https://www.opendronemap.org/odm/datasets/. Pick a dataset with coordinates in EXIF and under 1GB (processing time should be under 15 minutes). Again, no radiometric calibration.
    To download, on the dataset page choose Code then Download as ZIP.
  3. (3 Points) NDVI Comparison
    1. Download both Landsat and Sentinel images for the 200rd site, as close to the date of the RPAS data (August 30th, 2021) as possible, while still being cloud free.
    2. Compute NDVI using the software of your choice for the Micasense data, Landsat, and Sentinel (one possible Python approach is sketched after this list).
    3. How does the data compare? What are the average NDVI values between platforms? How does pixel area compare between the datasets? (1 Paragraph)
    4. Write a paragraph describing the advantages and disadvantages of RPAS data compared to satellite data.
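
If you choose Python for the NDVI step, a minimal calculation over the exported Micasense orthomosaic could look like the sketch below (using numpy and rasterio). The band indices assume the 10-band RedEdge-MX Dual export order, where red (668 nm) is band 6 and NIR (842 nm) is band 10; check your own export's band order before trusting them, and note that the file names are placeholders.

    import numpy as np
    import rasterio

    # Placeholder path: point this at your exported orthomosaic.
    with rasterio.open("ortho_200rd.tif") as src:
        red = src.read(6).astype("float32")   # assumed red (668 nm) band
        nir = src.read(10).astype("float32")  # assumed NIR (842 nm) band
        profile = src.profile

    # NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
    denom = nir + red
    ndvi = np.where(denom > 0, (nir - red) / denom, np.nan)

    # Write a single-band float32 GeoTIFF alongside the orthomosaic.
    profile.update(count=1, dtype="float32", nodata=np.nan)
    with rasterio.open("ndvi_200rd.tif", "w", **profile) as dst:
        dst.write(ndvi.astype("float32"), 1)

    print("Mean NDVI:", float(np.nanmean(ndvi)))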

Copy your exported orthomosaic and NDVI layers to your K: drive. Include the file path in your Word document, and submit the document only.
