In both chapters we learned about interpreting images in the Google Earth Engine API: manipulating images band by band, performing arithmetic (i.e., raster math operations), applying thresholds, and using mask functions.
The first part of the lab discussed spectral indices, which are used to highlight different features of land cover. Most often these take the form of vegetation indices such as the Normalized Difference Vegetation Index (NDVI) or the Normalized Burn Ratio (NBR). We learned how, across the electromagnetic spectrum, different bands capture different intensities of reflected energy, which allows surface cover to be depicted in different ways. NDVI is calculated by taking the difference between the near-infrared (NIR) band and the red band and dividing it by their sum: NDVI = (NIR − Red) / (NIR + Red). The returned value, normalized between -1 and 1, ranges from an absence of vegetation to an abundance of it, with high scores typifying forests and other green vegetated areas.
Using the ee.ImageCollection function, we called in the Sentinel-2 dataset for the San Francisco area. Using a Geometry.Point, we assigned the location to which the image collection would be filtered. Other filters applied to the dataset included a date range from February to April, and we then grabbed the first image in the filtered collection with .first().
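A minimal sketch of that filtering step is below; the point coordinates, the 'COPERNICUS/S2' collection ID, and the specific year are assumptions standing in for the lab's own values, not the exact code used.

```javascript
// Assumed point near San Francisco and an assumed year for the Feb-April window.
var sfoPoint = ee.Geometry.Point(-122.3774, 37.6194);

// Filter the Sentinel-2 collection to the point and date range, then take the first image.
var sfoImage = ee.Image(
  ee.ImageCollection('COPERNICUS/S2')
    .filterBounds(sfoPoint)
    .filterDate('2020-02-01', '2020-04-01')
    .first()
);

// Display a true-colour composite (red, green, blue bands); stretch values are illustrative.
Map.centerObject(sfoPoint, 11);
Map.addLayer(sfoImage, {bands: ['B4', 'B3', 'B2'], min: 0, max: 2000}, 'True colour');
```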
Figure 1: RGB colour composite of the San Francisco area from a Sentinel-2 image.
We then assigned variable objects to the NIR (band 8) and red (band 4) bands so that we could apply subtract and add operators to create the numerator and denominator. The index layer was calculated as another object, the result of dividing the numerator by the denominator.
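A sketch of that band-math approach, assuming the Sentinel-2 band names described above (B8 for NIR, B4 for red):

```javascript
// Select the NIR and red bands from the image.
var nir = sfoImage.select('B8');
var red = sfoImage.select('B4');

// Build the numerator and denominator, then divide to get NDVI.
var numerator = nir.subtract(red);        // NIR - red
var denominator = nir.add(red);           // NIR + red
var ndvi = numerator.divide(denominator); // (NIR - red) / (NIR + red)
```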
We created a palette with a colour range from red to green corresponding to the range of values from -1 to 1, and included it as an embedded variable in the Map.addLayer function. It can also be defined as a separate dictionary and called in (as displayed here).
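The visualization can be written inline in Map.addLayer or stored first in a dictionary, as described; the exact colour names below are illustrative.

```javascript
// A red-to-green palette spanning the NDVI range, stored as a reusable dictionary.
var vegPalette = ['red', 'white', 'green'];
var ndviVis = {min: -1, max: 1, palette: vegPalette};

Map.addLayer(ndvi, ndviVis, 'NDVI');
```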
Figure 2: NDVI Score for a Sentinel-2 Image for San Francisco.
We then used the .normalizedDifference() function, which only requires us to specify which bands of the source imagery are NIR and red; the result can then be added to the map. The output is identical in both cases: one approach uses the arithmetic operators in Google Earth Engine and the other uses a purpose-built API function to accomplish the same task.
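A sketch of the built-in approach; because the band order is (NIR, red), the result matches the hand-built version above.

```javascript
// Compute (B8 - B4) / (B8 + B4) with the built-in helper.
var ndviND = sfoImage.normalizedDifference(['B8', 'B4']);
Map.addLayer(ndviND, {min: -1, max: 1, palette: vegPalette}, 'NDVI (normalizedDifference)');
```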
Figure 3: NDVI score output for San Francisco using the .normalizedDifference(); function in GEE.
We then computed the Normalized Difference Water Index (NDWI), which swaps the red band for a shortwave-infrared (SWIR) band. We also changed the colour ramp to run from white to blue and set the range of values between -0.5 and 1.
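A sketch of that calculation, assuming B8 and B11 as the Sentinel-2 NIR and SWIR bands; the band choice and palette colours are assumptions consistent with the description above.

```javascript
// NDWI as described: (NIR - SWIR) / (NIR + SWIR).
var ndwi = sfoImage.normalizedDifference(['B8', 'B11']);

// White-to-blue ramp over the -0.5 to 1 range.
var waterPalette = ['white', 'blue'];
Map.addLayer(ndwi, {min: -0.5, max: 1, palette: waterPalette}, 'NDWI');
```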
Figure 4: NDWI raw output score for San Francisco on a Sentinel-2 image
Again, the visualization was reworked to use a dictionary for the min and max parameters instead of embedding them directly in the Map.addLayer call.
In this section we learned how to use logical expressions to change existing band values, as opposed to the arithmetic operators used earlier (add, subtract, divide, etc.).
We created a threshold to separate vegetated from non-vegetated values in the NDVI index, so that scores are not expressed across the -1 to 1 range but are instead defined in binary terms as either vegetated or non-vegetated. Here we commented out the Map.setCenter call used in the previous exercise; otherwise, every time we computed values it would bring us back to San Francisco, when we are now looking at Seattle, Washington. The resulting NDVI output for Seattle is as follows.
Figure 5: Raw NDVI output for Seattle, Washington.
We then implemented a threshold using the .gt function to select values greater than the specified value of 0.5, which in this example is meant to delineate dense vegetation. This layer is then mapped so that values above the threshold appear in one colour and all others (those equal to or below the threshold) appear in the other.
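A minimal sketch of the thresholding step; 'seattleNDVI' is a hypothetical name standing in for the NDVI layer computed from the Seattle Landsat 8 image described above.

```javascript
// 1 where NDVI > 0.5, 0 everywhere else.
var threshold = seattleNDVI.gt(0.5);

// Two-colour display: below/equal to the threshold in white, above it in green.
Map.addLayer(threshold, {min: 0, max: 1, palette: ['white', 'green']}, 'NDVI > 0.5');
```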
Figure 6: Binary threshold result for forest and Non-forest area in Seattle, Washington; derived from NDVI scores for the Landsat 8 image.
In addition to this filter, we then learned about less than, less than or equal to, equal to, not equal to, and greater than or equal to. Using these Boolean operators, we can use the .where function to build conditional logic without writing a JavaScript “if” statement. The major difference from the previous iteration is that water is also categorized: all pixels with NDVI less than -0.1 are shown as water. Effectively, the range of NDVI values is broken into three classes. The .where function requires a test operator (a “does this value meet this condition” check applied to the entire band) and a value to assign to each cell that meets the criterion, as sketched below.
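A sketch of the three-class reclassification: start from a constant image and overwrite it with .where using the Boolean tests described above. The class codes and colours are illustrative, and 'seattleNDVI' is the same hypothetical layer name as before.

```javascript
// Start with everything as "non-forest" (1), then reassign water and forest.
var threeClass = ee.Image(1)
    .where(seattleNDVI.lt(-0.1), 0)   // NDVI < -0.1 becomes water (0)
    .where(seattleNDVI.gt(0.5), 2);   // NDVI > 0.5 becomes forest (2)

Map.addLayer(threeClass,
    {min: 0, max: 2, palette: ['blue', 'white', 'green']},
    'Water / non-forest / forest');
```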
Figure 7: Thresholded values for Water, Forest, and Non-forest area using boolean operators and .where functions, derived from NDVI scores
We then learned how to create a mask, both for the entire extent of the image and for only the forested areas within the Seattle, Washington image tile. The forest mask first used the threshold that classified all values above a certain score as “forest”; then, using an equality test within the mask function, we isolated the forest values from all others in the extent.
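A sketch of isolating the forest pixels, using the 'threeClass' layer and the illustrative forest code (2) from the sketch above; everything that fails the equality test becomes transparent.

```javascript
// Build a Boolean image that is 1 for forest pixels and 0 otherwise.
var forestMask = threeClass.eq(2);

// Apply the mask so only forest pixels remain visible.
var forestOnly = threeClass.updateMask(forestMask);
Map.addLayer(forestOnly, {palette: ['green']}, 'Forest only');
```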
Figure 8: Masked Forest Layer for Seattle, Washington.
Because the mask retains only the pixels classified as forest, only the forest areas are displayed when the masked layer is added to the Map interface.
Here we learned that remapping is the process of assigning new values to existing values using the .remap function, which operates much like a find-and-replace.
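A short sketch of .remap, substituting a new value for each listed input value; the input and output codes below are illustrative, not the lab's own.

```javascript
// Replace class codes 0, 1, 2 with 9, 11, 10 respectively.
var remapped = threeClass.remap([0, 1, 2],    // existing class values
                                [9, 11, 10]); // replacement values
```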
Overall, in this chapter we learned how to select bands and calculate indices using band differences, and how to perform Boolean and arithmetic operations on various imaging products to alter the values and digital numbers (DNs) of raster files.
In this lab we learned how to run a classification in Google Earth Engine, perform supervised and unsupervised classifications, and collect sample points for image classification in Earth Engine.
In this example we started by importing a Landsat 8 collection, filtering it, and making a true-colour composite for our geometry of Turin, Italy. To create sample data, we created a geometry layer for each of the land cover classes used in the classification. In effect, the points we place sample the spectral profiles of the image bands and represent what that class of land cover looks like; greater diversity and abundance of samples improves the classification accuracy.
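A sketch of the composite step; the collection ID, point coordinates, date range, and use of a median composite are all assumptions standing in for the lab's own choices.

```javascript
// Assumed point near Turin, Italy, and an assumed one-year window.
var torinoPoint = ee.Geometry.Point(7.69, 45.07);

// Filter the Landsat 8 TOA collection and reduce it to a median composite.
var landsat8 = ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
    .filterBounds(torinoPoint)
    .filterDate('2022-01-01', '2022-12-31')
    .median();

// Display a true-colour composite; the reflectance stretch is illustrative.
Map.centerObject(torinoPoint, 10);
Map.addLayer(landsat8, {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3}, 'True colour composite');
```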
Figure 9: Bounds for Turin, Italy and the mosaicked Landsat 8 image.
Using the add-geometry tool, we changed the field parameters so that each layer was imported as a FeatureCollection rather than a geometry, then added points for each of the locations. The result was training classes for forest, developed areas, water, herbaceous landscapes, and snow cover, for a total of five land cover categories.
Figure 10: Training points for the supervised/unsupervised classification, including FeatureCollections for water, forest, developed area, snow, and herbaceous land covers in Turin, Italy.
We then used the .merge function to merge the different FeatureCollections into a single training collection. Using the full set of points and their corresponding values, we assigned a variable listing all the available bands in the Landsat image so that the spectral profiles would be as refined as possible, and then used .sampleRegions, which overlays the labelled points on the prediction bands to create a new variable containing the spectral profile for each class, which is used later by the classifier.
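A sketch of building that training table; the FeatureCollection names, the 'class' label property, and the band list are assumptions standing in for the lab's own.

```javascript
// Merge the five labelled FeatureCollections into one training set.
var trainingFeatures = forest.merge(developed).merge(water)
    .merge(herbaceous).merge(snow);

// Bands used as predictors for the spectral profiles.
var predictionBands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7'];

// Sample the composite at each labelled point to build the classifier input.
var classifierTraining = landsat8.select(predictionBands).sampleRegions({
  collection: trainingFeatures,   // merged labelled points
  properties: ['class'],          // the land cover label carried by each point
  scale: 30                       // Landsat pixel size in metres
});
```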
Using the classification and regression tree (CART) function, we specified the training data for the model and the input spectral profiles, and computed a new classified layer that we added to the Map.
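A sketch of the CART step, assuming the class codes run 0–4 and using illustrative palette colours.

```javascript
// Train a CART classifier on the sampled spectral profiles.
var cartClassifier = ee.Classifier.smileCart().train({
  features: classifierTraining,
  classProperty: 'class',
  inputProperties: predictionBands
});

// Classify the composite and add the result to the map.
var cartClassified = landsat8.select(predictionBands).classify(cartClassifier);
Map.addLayer(cartClassified,
    {min: 0, max: 4, palette: ['green', 'grey', 'blue', 'yellow', 'white']},
    'CART classification');
```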
Figure 11: Result of CART supervised classification for Turin, Italy
In my instance, there were too many overlapping samples between classes, which interfered with the accuracy of the CART classification. The result is displayed in the figure above.
Using the random forest algorithm, a different supervised classification was computed for the study extent. After adjusting the code to skip the overlapping samples, the clarity of the resulting image improved.
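A sketch of the random forest version; the number of trees (50) is an assumption, and the training table and band list are the same hypothetical names used above.

```javascript
// Train a random forest with an assumed 50 trees on the same training table.
var rfClassifier = ee.Classifier.smileRandomForest(50).train({
  features: classifierTraining,
  classProperty: 'class',
  inputProperties: predictionBands
});

// Classify the composite and display the result.
var rfClassified = landsat8.select(predictionBands).classify(rfClassifier);
Map.addLayer(rfClassified,
    {min: 0, max: 4, palette: ['green', 'grey', 'blue', 'yellow', 'white']},
    'Random forest classification');
```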
Figure 12: Random Forest Classification for Turin, Italy
In this example, there is clearly far less noise within the classified layer.
We then tried computing the classification with an unsupervised approach, in which the algorithm takes the centres of clusters of spectral values and classifies the landscape according to how those values group together, rather than according to labelled classes.
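A sketch of the unsupervised step using k-means; the number of clusters (5) and the use of the sampled training table as the clusterer input are assumptions.

```javascript
// Train a k-means clusterer on the sampled spectral profiles, using only the band values.
var clusterer = ee.Clusterer.wekaKMeans(5).train({
  features: classifierTraining,
  inputProperties: predictionBands
});

// Assign every pixel in the composite to a cluster and display with random colours.
var clustered = landsat8.select(predictionBands).cluster(clusterer);
Map.addLayer(clustered.randomVisualizer(), {}, 'K-means clusters');
```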
The results of the unsupervised classification were likely unusable, as there was not enough separation between the values to form legitimate clusters and a proper classification of the image as pictured in the textbook.
Figure 13: Result of unsupervised (k-means) classification for Turin, Italy, using the training data set.
Overall, in this section we learned how to assemble training data and use the existing regression and predictive functions in the Earth Engine API to classify images and create supervised and unsupervised land cover classifications. We also learned about spectral differencing and creating images that represent index values of band differences, such as NDVI, both by hard-coding the math into the interface and by using functions native to GEE.