Remote Sensing
Hurricane Matthew struck the coast of South Carolina on October 8, 2016. Flooding impacted evacuation routes as residents made their way out of the storm's path. SAR data (Fig.1) can be used to create a flood map that highlights flooded sections of the highways leading out of the city, so that alternate evacuation routes can be put in place.
The accuracy on the training data was 97.5% and the accuracy on the validation data was 89.5% (Fig.3). Testing the trained classifier on the validation set gives a more realistic estimate of accuracy: a foundational principle of machine learning is that a classifier must be evaluated on data it has not seen in order to assess its true accuracy.
A training sample was created from the Sentinel-1 C-band SAR imagery. The following categories were sampled: open water, open flood waters, flood channels, flooded vegetation, urban areas, and low vegetation. Google Earth Engine was chosen because it integrates the programming side of machine learning with GIS. The training sample was split into 70% training data and 30% validation data. A Random Forest classifier was trained on the training data and then used to classify the validation data. The trained classifier was then applied to the coast of South Carolina, with a highway layer overlaid to find the flooded sections of highway (Fig.2). Finally, error matrices and accuracies were calculated for the training and validation sets (Fig.3).
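The split-train-validate workflow above can be sketched in a few lines. This is a minimal illustration using scikit-learn as a stand-in for the Earth Engine Random Forest classifier; the synthetic two-band "backscatter" values and class means below are made up for demonstration, not real SAR data.

```python
# Sketch of the 70/30 split and Random Forest workflow, using
# scikit-learn in place of the Earth Engine classifier. The pixel
# values are synthetic stand-ins for SAR backscatter samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_bands = 100, 2   # e.g. VV and VH backscatter
classes = ["open water", "open flood", "flood channel",
           "flooded vegetation", "urban", "low vegetation"]

# Synthetic samples clustered around a different mean per class
X = np.vstack([rng.normal(loc=i * 3.0, scale=1.0,
                          size=(n_per_class, n_bands))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

# 70% training / 30% validation split, as in the study
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=0.7, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

train_acc = accuracy_score(y_train, clf.predict(X_train))
val_acc = accuracy_score(y_val, clf.predict(X_val))
print(f"training accuracy:   {train_acc:.3f}")
print(f"validation accuracy: {val_acc:.3f}")
```

As in the study, the training accuracy typically comes out higher than the validation accuracy, since the classifier has already seen the training pixels.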
SAR has advantages over other sensing modalities due to its ability to penetrate cloud cover, as in the case of storms. It is also largely unaffected by the atmosphere, so no atmospheric correction is required. For these reasons alone it is a game changer for emergency responders. Google Earth Engine is equally valuable here, both for its ability to split the training sample into training and validation data sets and for the ease with which it integrates GIS functionality with programming capability.
Hyperspectral data is very useful in that it allows spectral matching against a library of spectral profiles to identify crop types in Khas Urozgan, a town southwest of Kabul, Afghanistan. A high value in the near-infrared (NIR) band of a Landsat image indicates the presence of vegetation, but to identify the type of vegetation we need the many additional narrow bands that hyperspectral data provides.
The resulting spectral profiles plotted in the lower right corner (Fig.2) would need to be refined with spectral matching techniques and lookup tables before they can be used for identification. But this first pass shows the distinct spectral profiles exhibited by different types of vegetation, where Landsat 8 would display only a single NIR band value.
EO-1 Hyperion data was used (Fig.1); it provides 242 bands spanning the visible, NIR, and SWIR (shortwave infrared) wavelengths. Google Earth Engine was used to load the bands and apply atmospheric correction, and three data points, X, Y, and Z, were collected and plotted to show their spectral profiles.
Hyperspectral data contains more information in a single pixel than multispectral data. With multispectral data we can only distinguish broad classes, such as trees versus other vegetation; with hyperspectral data we can differentiate what kind of vegetation is present.
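One common way to match a pixel's spectral profile against a library, mentioned above, is the Spectral Angle Mapper (SAM), which treats each profile as a vector and picks the library entry with the smallest angle to the pixel. The five-band reference spectra and crop names below are invented for illustration; they are not from a real spectral library.

```python
# Minimal Spectral Angle Mapper (SAM) sketch: match an unknown
# pixel spectrum against a small library of reference profiles.
# All reflectance values and crop names below are hypothetical.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller = better match."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 5-band reflectance profiles (fractions of unity)
library = {
    "wheat":   np.array([0.05, 0.08, 0.45, 0.50, 0.30]),
    "orchard": np.array([0.04, 0.06, 0.55, 0.60, 0.25]),
    "soil":    np.array([0.15, 0.20, 0.25, 0.30, 0.35]),
}

pixel = np.array([0.06, 0.08, 0.44, 0.51, 0.29])  # unknown profile

best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print("best match:", best)   # closest library profile by angle
```

Because SAM compares the shape of the spectrum rather than its magnitude, it is relatively insensitive to illumination differences, which is why it is a common first pass for hyperspectral matching.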
Newer technology now allows Unmanned Aerial Vehicles (UAVs), or drones, to carry High Definition (HD) cameras and Light Detection and Ranging (LiDAR) sensors. Drones have been deployed to areas inaccessible to ground transport and where takeoff space is restricted. North Carolina State University has agricultural land to the south on Lake Wheeler Road that requires land and crop monitoring using a UAV.
The DSM was created, as shown in Fig.2. Constructing a 3D surface from overlapping images in this way is known as Structure from Motion (SfM).
A flight plan was set up for the area of interest and a UAV was flown to collect data. Eight Ground Control Points (GCPs) were set up within the area, with GPS used to establish their coordinates. 632 images were collected and downsampled to 78 images. Agisoft Metashape was used to create a dense point cloud (Fig.1), which was aligned using the coordinates of the GCPs. A mesh, or triangulated irregular network, was then created to provide the 3D structure for building the Digital Surface Model (DSM).
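The last step, going from a point cloud to a DSM raster, can be illustrated simply: each grid cell takes the elevation of the highest point falling inside it, so canopy and rooftops (not bare ground) define the surface. The toy points, coordinates, and 1-unit cell size below are made up; they stand in for the Metashape dense cloud.

```python
# Simplified DSM rasterization from a point cloud: each grid cell
# keeps the highest return that falls inside it. The four points
# below are illustrative stand-ins for a real dense point cloud.
import numpy as np

def rasterize_dsm(points, cell_size):
    """points: (N, 3) array of x, y, z. Returns a 2-D elevation grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, elev in zip(row, col, z):
        if np.isnan(dsm[r, c]) or elev > dsm[r, c]:
            dsm[r, c] = elev      # keep the highest point per cell
    return dsm

points = np.array([[0.2, 0.3, 10.0],   # ground return
                   [0.6, 0.4, 14.5],   # canopy return, same cell
                   [1.4, 0.2, 10.2],
                   [0.3, 1.7, 11.0]])
dsm = rasterize_dsm(points, cell_size=1.0)
print(dsm)   # the (0, 0) cell takes the 14.5 canopy elevation
```

A Digital Terrain Model (DTM) would instead keep the lowest (ground) return per cell; the DSM/DTM difference is what reveals crop and canopy height.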
With the introduction of SfM, data collection and processing with a UAV is becoming more common as technology miniaturizes the equipment a drone can carry. A key advantage of UAVs over manned flights is that their high-resolution images can reveal more about the landscape than LiDAR alone. Especially during disaster or emergency events, UAVs need very little space for takeoff and landing and little time to set up, operate, and pilot, making them a popular choice for time-sensitive emergency response management.
The goal of this study is to quantify coastline changes by comparing Digital Elevation Models (DEMs) of Mexico Beach from before and after Hurricane Michael. To ensure that the measured impact is from Hurricane Michael and not earlier storms, the pre-storm year must be as close as possible to the date of landfall. Given the availability of data, 2017 is the closest pre-Michael year without a prior hurricane event at Mexico Beach, and it is used as the benchmark against which the 2018 coastline changes are measured.
Using the volumetric measurement, the volume of sand displaced was calculated to be 111,380.0861 cubic yards (Fig.2). By comparison, the volume of sand displaced according to field data collected by the Florida Department of Environmental Protection in April 2019 was 252,997 cubic yards.
LiDAR point clouds for Mexico Beach in 2017 and 2018 were downloaded from NOAA and uncompressed into LAS files. Using ArcGIS Pro, DEMs were derived from the LAS point clouds. The difference in elevation between the 2017 and 2018 DEMs was then calculated and displayed on the map (Fig.1).
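The core of the DEM-differencing calculation can be sketched directly: subtract the post-storm DEM from the pre-storm DEM and sum the elevation loss over the cell area. The 3x3 grids and 1-square-yard cell size below are illustrative only, not the actual Mexico Beach DEMs.

```python
# Minimal sketch of the volumetric measurement: per-cell elevation
# loss (2017 minus 2018) summed over the cell area. The grids and
# cell size are illustrative, not the actual Mexico Beach data.
import numpy as np

cell_area = 1.0  # square yards per DEM cell (illustrative)

dem_2017 = np.array([[5.0, 5.2, 5.1],
                     [4.8, 5.0, 4.9],
                     [4.5, 4.6, 4.4]])
dem_2018 = np.array([[4.0, 4.1, 4.3],
                     [4.0, 4.2, 4.1],
                     [4.4, 4.5, 4.6]])

diff = dem_2017 - dem_2018            # positive where sand was lost
loss = np.where(diff > 0, diff, 0.0)  # ignore cells that gained sand
volume_lost = loss.sum() * cell_area  # cubic yards
print(f"sand displaced: {volume_lost:.2f} cubic yards")  # 5.50
```

In ArcGIS Pro this corresponds to a raster difference followed by a zonal volume calculation inside the hand-drawn coastal outline, which is where the human error discussed below can enter.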
The LiDAR-derived estimate is not close to the field data. There may be human error in drawing the outline along the coast for the volumetric measurement. Further research will be needed to find a better method of estimating coastal loss with LiDAR.
The goal of this project was to classify land cover for north Sacramento, CA, with Folsom Lake to the north. This was done using a supervised classification method called the Support Vector Machine. The original data was in GeoTIFF format, with an acquisition date of 7/11/21.
The .tif file was classified into the six land cover categories (Fig.2), and a confusion matrix was generated (Fig.3). The Kappa statistic of 0.55 suggests that the Support Vector Machine's agreement with the ground truth is 55% better than would be expected from a randomly generated classification.
A training sample was created from the true color composite (Fig.1), consisting of 510 data points across six land cover categories: Water, Urban/Developed Land, Agriculture/Herbaceous, Evergreen/Coniferous Vegetation, Deciduous Vegetation, and Barren Land. The Support Vector Machine used the training sample to learn the mapping from pixel values to the six categories. The data was then inspected to establish the ground truth, in order to generate a confusion matrix and assess the accuracy of the classification.
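The confusion matrix and Kappa statistic reported in Fig.3 can be computed as follows. This is a small sketch with three made-up classes and ten made-up labels, not the actual 510-point Sacramento sample; it only shows the arithmetic behind the accuracy assessment.

```python
# Sketch of the accuracy assessment: build a confusion matrix from
# ground-truth vs. predicted labels, then compute Cohen's kappa.
# The labels below are invented for illustration.
import numpy as np

def cohens_kappa(confusion):
    """Kappa = (observed - chance agreement) / (1 - chance agreement)."""
    n = confusion.sum()
    observed = np.trace(confusion) / n
    expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2
    return (observed - expected) / (1 - expected)

truth     = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
predicted = np.array([0, 0, 1, 2, 2, 2, 2, 1, 1, 2])

k = truth.max() + 1
confusion = np.zeros((k, k), dtype=int)
for t, p in zip(truth, predicted):
    confusion[t, p] += 1          # rows: truth, columns: predicted

print(confusion)
print(f"kappa: {cohens_kappa(confusion):.3f}")
```

Kappa corrects the raw (overall) accuracy for the agreement that would occur by chance given the class proportions, which is why it is reported alongside the confusion matrix rather than accuracy alone.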
For a supervised classification to be effective, the user has to be familiar with the area of interest, because both the training sample and the ground truth are determined by the user. I am not familiar with the Sacramento area, so mistakes were likely made, especially in the agricultural areas.