In this lab we will investigate a landslide near Williams Lake. To do this we will:

  • Process a digital surface model and an orthomosaic from drone imagery,
  • Compare these datasets to lidar

1. Make a project folder in a location of your choosing

To keep your data organized, make the following folders (a scripted alternative is sketched after the list):

  • RPAS > Survey_Images
  • RPAS > Exports
  • Lidar > Geotiff
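
If you prefer to script this step, here is a minimal Python sketch (the folder names match the list above; run it from inside your new project folder):

```python
import os

# Folder layout from the lab instructions.
folders = [
    os.path.join("RPAS", "Survey_Images"),
    os.path.join("RPAS", "Exports"),
    os.path.join("Lidar", "Geotiff"),
]

for folder in folders:
    os.makedirs(folder, exist_ok=True)  # creates parent dirs, skips existing ones
```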

2. Make a Spexi account

There are many tools that you can use to generate point clouds, surface models, and ortho imagery from drone photos, for example Agisoft, PIX4D, and OpenDroneMap. Unfortunately, I didn’t install any of them on the lab computers, so we’ll use the free version of Spexi. This website generates the data in the cloud, with no local installation required.

Go to https://spexigeo.com/ and make a free account.

3. Load imagery into Spexi

  • Create a New Project
  • Create a New Process
  • Add images to the Process; the imagery is located on the L: Drive

4. Process DSM and Ortho

Processing costs 1 credit per 100 images, and you have 5 free credits… process wisely!

  • Select Product Type = “Map and 3D Model”
  • Product Name “Soda Creek Landslide”
  • Look at the Parameters options (do not change)
  • Look at the Ground Control options (do not change)
  • Click Save and Process
  • Download DSM

5. While you wait

  • Use the Map Preview in Spexi to locate the site

6. Find and Download a Lidar DEM (GeoTIFF) for this location

7. Load into QGIS

  • Add the drone ortho and DSM to the map (if the Spexi data is not ready, use the data from the Exports folder)
  • Add the lidar GeoTIFF to the map (a Python console alternative for loading these layers is sketched after this list)
  • Use the QuickMapServices plugin (install it if necessary) to add satellite imagery basemaps:
    • Add Google
    • Add ESRI
    • Add Bing
    • Add HERE WeGo
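
If you prefer the QGIS Python console, here is a minimal loading sketch (all file paths are placeholders; substitute your own):

```python
# Run from the QGIS Python console, where `iface` is predefined.
# All paths below are placeholders -- point them at your own files.
ortho = iface.addRasterLayer("C:/your_project/RPAS/Exports/ortho.tif", "RPAS Ortho")
dsm = iface.addRasterLayer("C:/your_project/RPAS/Exports/dsm.tif", "RPAS DSM")
lidar = iface.addRasterLayer("C:/your_project/Lidar/Geotiff/lidar_dem.tif", "Lidar DEM")

for layer in (ortho, dsm, lidar):
    if not layer.isValid():
        print("Failed to load:", layer.name())
```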

8. Compare hillshades

  • First, re-project both datasets to NAD 83 UTM Zone 10N (EPSG:26910) using Raster > Projections > Warp (Reproject). Also set your map CRS to the same NAD 83 UTM Zone 10N. Save the reprojected files in your folder as lidar_UTM.tif and rpas_UTM.tif. (A GDAL scripting alternative is sketched after this list.)
  • Style the reprojected DEMs as hillshades. Flicker the layers (toggle them on and off) – are any shifts visible between the datasets?
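
The same reprojection (and optional hillshading) can be scripted with GDAL’s Python bindings; a minimal sketch, where the input file names are placeholders, the outputs follow the lab’s naming, and EPSG:26910 is NAD 83 / UTM Zone 10N:

```python
from osgeo import gdal

gdal.UseExceptions()

# Reproject both DEMs to NAD 83 / UTM Zone 10N (EPSG:26910).
gdal.Warp("lidar_UTM.tif", "lidar_dem.tif", dstSRS="EPSG:26910", resampleAlg="bilinear")
gdal.Warp("rpas_UTM.tif", "rpas_dsm.tif", dstSRS="EPSG:26910", resampleAlg="bilinear")

# Optional: pre-compute hillshade rasters instead of styling in QGIS.
gdal.DEMProcessing("lidar_hs.tif", "lidar_UTM.tif", "hillshade")
gdal.DEMProcessing("rpas_hs.tif", "rpas_UTM.tif", "hillshade")
```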

9. Georeferencing

  • Now we need to georeference these datasets to each other.
  • Open the Layer > Georeferencer and load the rpas_UTM.tif file. Keep the lidar hillshade visible in the main window.
  • In the Transformation Settings menu, set the Transformation type to Polynomial 1 and the Target SRS to NAD 83 UTM Zone 10N. For the Resampling method, select Linear.
  • Style the RPAS DEM as a hillshade in the Georeferencer window using Settings > Source Properties > Symbology > Hillshade
  • Add 10 Ground Control Points (GCPs) by clicking on a location in the Georeferencer window and then clicking on the same feature in the main window. Pick easily visible features, like roads and edges, and distribute the points evenly across the scene.
  • Keep an eye on the residuals of your points! (For reference, a GDAL scripting equivalent of this workflow is sketched below.)
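
For reference, the same first-order polynomial warp can be expressed with GDAL’s Python bindings. This is only a sketch: the GCP values below are made-up placeholders, and in the lab you should collect real points interactively in the Georeferencer:

```python
from osgeo import gdal

gdal.UseExceptions()

# Placeholder GCPs: gdal.GCP(easting, northing, elevation, pixel, line).
# Replace with ~10 real points spread evenly across the scene.
gcps = [
    gdal.GCP(552100.0, 5795200.0, 0, 120.5, 340.2),
    gdal.GCP(552900.0, 5795900.0, 0, 980.1, 150.7),
    gdal.GCP(552400.0, 5794600.0, 0, 410.8, 910.3),
    # ... more points
]

# Attach the GCPs, then warp with a 1st-order polynomial.
gdal.Translate("rpas_gcp.tif", "rpas_UTM.tif", outputSRS="EPSG:26910", GCPs=gcps)
gdal.Warp("rpas_georef.tif", "rpas_gcp.tif", dstSRS="EPSG:26910",
          polynomialOrder=1, resampleAlg="bilinear")
```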

10. Compare Lidar and Drone

  • Again, compare the hillshades of the new adjusted RPAS DEM and the Lidar.
  • Using the Profile Tool, plot a cross section of the two DEMs.
  • Is there an elevation bias between the two datasets? If so, try to correct it. We’ll assume the Lidar data is “correct” and adjust our drone-derived DEM to match it.
  • Correct the elevation bias using the Raster Calculator. Make a new profile figure.
  • Using the Raster Calculator, difference the two DEMs. Pay attention to the Symbology of the output… (a scripted sketch of the bias correction and differencing follows this list)
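
Both Raster Calculator steps can also be sketched with GDAL. This assumes the two DEMs are already on the same grid (same extent and pixel size), and the bias value is a placeholder you would estimate from your profiles:

```python
from osgeo import gdal

gdal.UseExceptions()

def read_band(path):
    ds = gdal.Open(path)
    return ds, ds.GetRasterBand(1).ReadAsArray().astype("float32")

def write_like(path, template_ds, array):
    """Write a float32 GeoTIFF on the same grid as template_ds."""
    out = gdal.GetDriverByName("GTiff").Create(
        path, template_ds.RasterXSize, template_ds.RasterYSize, 1, gdal.GDT_Float32)
    out.SetGeoTransform(template_ds.GetGeoTransform())
    out.SetProjection(template_ds.GetProjection())
    out.GetRasterBand(1).WriteArray(array)
    out = None  # close to flush to disk

lidar_ds, lidar = read_band("lidar_UTM.tif")
rpas_ds, rpas = read_band("rpas_georef.tif")  # placeholder: your georeferenced DSM

bias = 2.0  # placeholder: estimate the vertical offset from your profiles
write_like("rpas_corrected.tif", rpas_ds, rpas - bias)

# Difference the two DEMs (lidar treated as the "correct" reference).
write_like("dem_difference.tif", rpas_ds, (rpas - bias) - lidar)
```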

Questions

These questions are due by email at bevington@unbc.ca before the next lab on Feb 6.

  1. Make a map with the georeferenced hillshades from the RPAS (left) and Lidar (right) DEMs and identify at least 2 major geomorphic changes between the two maps. Make sure to have the major cartographic elements (as per previous labs). Also, write a simple figure caption describing the data sources.
  2. Make a figure that shows the pre/post elevation profile over an area where there has been change, and write a few lines about what you see.
  3. What are some sources of error that may still be present in this analysis? What assumptions are we making about the Lidar data?
  4. Using the Profile Tool, make a cross section over the RGB ortho from the drone. Color the channels as 1 = Red, 2 = Green, 3 = Blue. Draw one cross section over the truck and another over the impounded lake. What happens to the band values over the truck? How about the lake? What is the vertical scale of the plot? What does this tell you about the spectral information, i.e., what is the data type (bit depth) of this image? (A short script for checking this is sketched after the questions.)
  5. Warp (reproject) the RGB image to 5 m spatial resolution using the Bilinear algorithm. Then run the 5 m image through the GRASS i.segment algorithm (find the tool in Processing > Toolbox) to break the image into similar segments. Use all of the defaults. This is an unsupervised classification… Provide a map of the output (you can skip the legend if it is too cluttered) and describe what the tool tried to do. Is this map useful?
  6. Using Sentinel-2 at EO Browser, is there still an impounded lake? If not, approximately when did the lake drain?
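
For question 4, the band count and data type (bit depth) of the ortho can be confirmed programmatically; a minimal GDAL sketch, with the file name as a placeholder:

```python
from osgeo import gdal

gdal.UseExceptions()

ds = gdal.Open("ortho.tif")  # placeholder: your drone orthomosaic
for i in range(1, ds.RasterCount + 1):
    band = ds.GetRasterBand(i)
    print("Band", i, "type:", gdal.GetDataTypeName(band.DataType),
          "min/max:", band.ComputeRasterMinMax())
```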