Lab 1: Introduction to Earth Engine
from Earth Engine Fundamentals and Applications - EEFA (2021)
Eidan W. Willis
Introduction
Chapter F1.0: JavaScript and the Earth Engine API
The Code Editor
Getting Started
Save, save, save!
JavaScript Basics
Earth Engine API Basics
Chapter F1.1: Exploring Images
Adding an image to the map
True-color Composites
False-color Composites
Additive Color System
Attributes of Location
Abstract RGB Composites
Conclusion
Lab 1 of ENVB 530 serves as an introductory look into Google Earth Engine (GEE): a powerful browser-accessible software that doubles as both a database and a programming interface for geospatial analysis and remote sensing. It provides access to a deep repository of satellite imagery and Earth Observation (EO) data to anyone with a personal computer and access to the internet. Unique to GEE is its functionality as an Application Programming Interface (API) that primarily runs a JavaScript syntax (scripts can also be written in Python and there's even a compatible R package, rgee), which allows for a wide array of manipulations and analyses which can be used to cultivate meaningful results and robust research.
This lab will be broken into two sections. In Chapter F1.0, we will learn how to write and save a simple script in GEE's Code Editor, while also familiarizing ourselves with the JavaScript syntax. In Chapter F1.1, we will apply our new knowledge of the Code Editor to explore, render, and manipulate satellite imagery. Both chapters in this lab were based on corresponding chapters in the Earth Engine Fundamentals and Applications (EEFA) book.
Figure 1. Google Earth Engine Code Editor
The Code Editor
Taking a look at the Code Editor (Figure 1), we can divide it into several key features:
a. Scripts, Docs, and Assets
This upper-left window functions as 1) a directory for all saved code scripts (the Scripts tab), 2) a resource for documentation on the myriad GEE-specific JavaScript functions (the Docs tab), and 3) a space to upload images, in the form of GeoTIFF (.tif, .tiff) or TFRecord (.tfrecord + .json) files, and tables, including shapefiles (.shp, .shx, .dbf, .prj, .zip) and CSV files (.csv) (the Assets tab).
b. The Earth Observation (EO) Search Bar
At the top of the screen a search bar provides access to an extensive catalog of satellite imagery from a variety of different satellites, resolutions, and spectral bands. These images have been collated over a vast temporal and spatial range and are often broken into collections distinguished, primarily, by data quality.
c. The Code Editor
In the center of the screen is the programming interface where the ~magic~ happens! Here, users can type, edit, save, and run code that manipulates imported assets (i.e., image collections, feature collections, geometries, etc.). Results, error messages, and print statements from the Code Editor are generally displayed in the Console tab, described below.
d. Inspector, Console, and Tasks
The top-right window functions as follows: The inspector tab is where in-depth point-by-point analysis is carried out on assets that have been visualized on the map. This tab is empty unless the map is clicked, whereupon specific commands will be carried out and results will be displayed. Next, the console tab displays print statements and error messages directly from the Code Editor. Results will be displayed in this tab upon hitting the "Run" button at the top of the Code Editor. Lastly, the Tasks tab allows for users to manage upload/download queries for the importing and exporting of assets.
e. The Map
The entire bottom half of the screen is taken up by the map window, displaying a zoomable map where the imagery products of the Code Editor are visualized. The base map can be displayed either as a simplified Open Street Map-style layer or as a realistic, high-resolution Google Earth-like satellite layer. Assets imported to the map are represented as layers that stack on one another in the order in which they are added to the map. This means that Map.addLayer statements (i.e., an essential EE command that adds a given asset as a layer to the map) that appear later in the script will be drawn on top of those that appear earlier. Layer opacity is adjustable and layers can be toggled on or off.
f. The Geometries Interface
Geometries of varying type (point, line, shape, rectangle) can be added, edited, and organized in the top-left of the map using a small interface.
Getting Started
We begin by writing our first print statement in the Code Editor:
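The statement itself is a single line. (In the sketch below, the first line is only a stand-in so the snippet also runs outside the Code Editor, where print() is not built in; inside the Code Editor you would write only the print statement.)

```javascript
// Stand-in for environments outside the Code Editor, where print() is not built in:
var print = console.log;

// Our first print statement -- the Console displays: Hello World
print('Hello World');
```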
Figure 2. Our first print statement
In most programming languages, print statements are instrumental in debugging; this is no different in JavaScript. Note the format, where the function print() wraps the element that is being printed. In this case we're printing a String that reads "Hello World". Unlike some stricter programming languages, JavaScript isn't too picky about syntax. Strings can be wrapped in single quotations (' ') or double (" ") and, although a semi-colon (;) is the convention for ending a line of code, the code isn't broken in its absence. However, semi-colons should still be used to ensure that the code remains readable and line breaks are clearly designated.
Figure 3. Our first look at the Console in action
As mentioned above, print statements are shown in the Console upon execution of the script. What is printed to the Console will differ depending on the type of element being printed (e.g., a String will be displayed as a line of text, while an Image Collection will be displayed as a List of all the images in that collection)!
Save, save, save!
Now that we've started on our first script, we should be sure to save our progress. Saved scripts can be found in the Scripts tab in the top-left window of the API. Before we can save our script, however, we need to create a repository to contain our scripts. If you're just getting started with GEE and have not added anything to your Scripts tab, you will be prompted to create a repository when you attempt to save your first script. Otherwise, you can navigate to the "New" drop down menu in the Scripts tab and create a new repository to house your work.
JavaScript Basics
In JavaScript, we declare variables using the keyword var. Variables can contain a variety of data types, including Strings, Characters, Integers, Doubles, Floats, and so on. They can also represent a wide range of classical data structures, such as Lists, Dictionaries, and Arrays, as well as GEE-specific Object classes, like Images, Image Collections, Features, Feature Collections, Geometries, Reducers, Joins, and so on. For instance, as seen below, a variable 'city' can contain a String that represents the name of a real-life city called "San Francisco". Note that comments (in green) are distinguished using two forward slashes (//).
//Create a new variable called city
var city = 'San Francisco';
print(city);
//Create a new list called cities
var cities = ['San Francisco', 'Los Angeles', 'New York', 'Atlanta'];
print(cities);
//Create a new dictionary called cityData
var cityData = {
'city' : 'San Francisco',
'population' : 873965,
'coordinates' : [-122.4194, 37.7749]
};
print(cityData);
Variables can also be used to denote Lists, a data type that uses one container variable to store multiple values (e.g., the variable 'cities' contains four Strings that represent the names of four real-life cities). Variables can also be used to denote a wide range of Objects, such as a Dictionary that contains groupings of corresponding key-value pairs (e.g., the variable 'cityData' is a Dictionary that contains key-value pairs, where each value is referred to by its key). We can see how each of these variables differs when we look at each of their print statements in the Console (Fig. 4).
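Plain JavaScript also lets us pull individual values back out of these containers. A quick sketch (the declarations are repeated so the snippet runs on its own):

```javascript
var city = 'San Francisco';
var cities = ['San Francisco', 'Los Angeles', 'New York', 'Atlanta'];
var cityData = {
  'city': 'San Francisco',
  'population': 873965,
  'coordinates': [-122.4194, 37.7749]
};

// Lists are indexed from zero
var secondCity = cities[1]; // 'Los Angeles'

// Dictionary values are looked up by their key...
var population = cityData['population']; // 873965

// ...or with dot notation; nested containers chain naturally
var longitude = cityData.coordinates[0]; // -122.4194
```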
Figure 4. The Console after we execute three consecutive print statements for the 'city', 'cities', and 'cityData' variables, in that order.
Earth Engine API Basics
The Earth Engine Application Programming Interface (API), like other APIs, is designed to make code easier to access and understand for its users. When confronted with any new programming language, it isn't immediately evident how to proceed. For instance, you may want to add the values of two variables, a and b. A quick search through EE's Docs tab will show us that the .add() function can be used:
//Declare both variables a and b
var a = 1;
var b = 2;
//Declare a variable called result as the addition of a and b
var result = ee.Number(a).add(b);
print(result);
//print(result) returns the number 3
This is where EE's Docs tab shines as a helpful resource, not only for implementing code, but also for learning JavaScript syntax and structure.
Here's an example of how to create a List containing a sequence of year values:
//Create a variable called yearList that is a sequenced list of years from 1980 to 2020 separated by 5 years at each step
var yearList = ee.List.sequence(1980, 2020, 5);
print(yearList);
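Client-side, the list that ee.List.sequence(1980, 2020, 5) builds on the server can be sketched with an ordinary loop (the sequence() helper below is hypothetical, not part of the EE API):

```javascript
// A client-side sketch of sequence generation (not the EE API itself)
function sequence(start, end, step) {
  var out = [];
  for (var value = start; value <= end; value += step) {
    out.push(value);
  }
  return out;
}

var yearList = sequence(1980, 2020, 5);
// yearList: [1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, 2020]
```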
And an example of how to concatenate two strings:
//Concatenate two strings
var mission = ee.String('Sentinel');
var satellite = ee.String('2A');
var combined = mission.cat(satellite);
print(combined);
The Console returns the concatenated String: 'Sentinel2A'.
Now we will take a look at some of the image manipulation capabilities of Earth Engine, starting with an image from June 2000 taken by the Landsat 5 satellite. As a browser-based repository, EE holds more than 50 petabytes of satellite imagery and EO data. To access EE Objects in this repository, one can use the search bar or simply declare a variable in the Code Editor and set it equal to an ee.Image Object containing a String that indicates the repository path of the imagery. To begin, we will declare and print() a variable called 'first_image' as follows:
var first_image = ee.Image('LANDSAT/LT05/C02/T1_L2/LT05_118038_20000606');
//print the image to take a look at its metadata
print(first_image);
In the console, we see the following:
We can see that an Object of type ee.Image corresponding to the image we searched for has been printed to the console. Of particular interest are the last two drop-down categories: bands and properties. The 'bands' category is a List containing references to several spectral bands, each of which holds unique information on the solar radiation captured by the satellite. Each of these bands represents a different range of wavelengths in the electromagnetic spectrum, capable of identifying and visualizing a variety of different objects on the ground. The 'properties' category contains a variety of other observation data and metadata pertaining to the imagery itself.
Adding an image to the map
We can visualize our image on the map using the Map.addLayer() function as follows:
//Visualizing an Image
Map.addLayer(
first_image,
{ //Dataset to Display
bands: ['SR_B1'], //Band to Display
min: 8000, //Display Range
max: 17000
},
"Layer 1" //Name to show in Layer Manager
);
When you run the script, it initially looks as though nothing appears on the map. Zooming in over Shanghai on the eastern coast of China, however, you can see that an image has been pasted on top of the base map (Figure 5). In this code, the Map.addLayer() function takes an image, several parameters (including reference to a spectral band and its display range), and a display name for the image layer, and displays the image on the map according to this input information. The band currently being displayed is 'SR_B1', or Surface Reflectance Band 1. We can compare several bands simply by adding a new layer for each, specifying a different band between the curly brackets containing the parameter values ( {...} ). For instance, to compare our first image to Surface Reflectance Bands 2 ('SR_B2') and 3 ('SR_B3'), we would write:
//Adding SR_B2 band to the map
Map.addLayer(
first_image,
{
bands: ['SR_B2'],
min: 8000,
max: 17000,
},
'Layer 2',
0, // shown
1 // opacity
);
//Adding SR_B3 band to the map
Map.addLayer(
first_image,
{
bands: ['SR_B3'],
min: 8000,
max: 17000,
},
'Layer 3',
1, // shown
0 // opacity
);
It is important to note that we've added two more parameters after the layer name: the first controls whether the layer is shown when the script runs, and the second sets its opacity. Layer 2 is added with shown = 0, so it is hidden until toggled on in the Layer Manager, while Layer 3 is shown but rendered with an opacity of 0, making it invisible until its opacity is increased. Each image is shown below, Figure 6 corresponding to SR_B2 and Figure 7 to SR_B3.
Figure 5. Adding our first image using the SR_B1 band
Figure 6. first_image using the SR_B2 band
Figure 7. first_image using the SR_B3 band
Comparing the images, you can see that they vary in their brightness and contrast. Each of these bands represents a different component of the RGB color model: SR_B1 is the blue band (0.45-0.52 μm), SR_B2 the green band (0.52-0.60 μm), and SR_B3 the red band (0.63-0.69 μm). Although each band appears in grayscale on its own, what would happen if we were to combine them into one image?
True-Color Composites
Pixel values in each of these single-band images can be combined and compared using an RGB (red-green-blue) composite that takes all three bands as input parameters in the Map.addLayer() function. This can be visualized in the following code snippet:
//Make a True-Color Composite
Map.addLayer(
first_image,
{
bands: ['SR_B3', 'SR_B2', 'SR_B1'],
min: 8000,
max: 17000
},
"Natural Color"
);
Figure 8. A true-color composite of the SR_B1, SR_B2, SR_B3 bands.
The resulting image can be seen in Figure 8. Notice that the bands are written in reverse order (SR_B3, SR_B2, SR_B1): the order in which bands are listed determines whether each band fills the red, green, or blue slot of the RGB color model. As mentioned above, SR_B3 corresponds to the red spectrum, SR_B2 to the green, and SR_B1 to the blue. The color composite of this image is not dissimilar from what we would see with our own eyes (assuming you aren't color blind, that is!).
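The slot-filling rule can be sketched as a simple mapping (assignChannels() is a hypothetical helper for illustration, not an EE function):

```javascript
// The position of a band in the list decides which RGB channel it fills
function assignChannels(bands) {
  return { red: bands[0], green: bands[1], blue: bands[2] };
}

var channels = assignChannels(['SR_B3', 'SR_B2', 'SR_B1']);
// channels.red is 'SR_B3', channels.green is 'SR_B2', channels.blue is 'SR_B1'
```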
False-Color Composites
In the RGB color model, bands can be combined in ways that create false-color composite images, allowing us to visualize information about the Earth's surface outside of the human-visible spectrum. In other words, objects, information, physical/biological processes, and so on, that may be difficult or impossible to see in true color can be easily visualized using false-color composites. If we look at some of the other bands available in USGS Landsat 5 Level 2, Collection 2, Tier 1 (i.e., the collection from which we got the image of Shanghai we're looking at), we can see that there are several other Surface Reflectance bands available to us (Figure 9).
Figure 9. A snippet of some of the bands available in the USGS Landsat 5 Level 2, Collection 2, Tier 1 satellite imagery collection
(Source: https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LT05_C02_T1_L2#bands)
Let's take a look at another composite, this time using the near infrared 'SR_B4' band. The code for visualizing this image on the map is as follows, with the resulting image in Figure 10. This multi-band image can be interpreted and analyzed using our understanding of where each band fits in the RGB color model. In this case, we have the near infrared SR_B4 band in the first slot, SR_B3 (visible red) in the second, and SR_B2 (visible green) in the third. We can infer in which band a pixel's surface reflectance is greatest from the color of that pixel. For instance, most pixels on land appear red, indicating that the value of the near infrared band is higher there than that of the other two bands. This can be seen if you click on a random pixel on land and look at the resulting chart comparing band values in the Inspector window (Figure 11). Brighter pixels with high surface reflectance values in all three bands will be shown in white, and those with lower surface reflectance (note: lower SR usually means higher absorption of solar radiation) in all three bands are shown in black.
//Make a False-Color Composite
Map.addLayer(
first_image,
{
bands: ['SR_B4', 'SR_B3', 'SR_B2'],
min: 8000,
max: 17000
},
"False Color"
);
Figure 10. False-color composite image using the near infrared SR_B4 band.
Figure 11. A graph indicating different surface reflectance values of the SR_B4, SR_B3, and SR_B2 bands in a pixel on land. The key thing here is that SR_B4 is greater than both SR_B3 and SR_B2 (the two bands to the immediate left of SR_B4).
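The "which band dominates?" reasoning above can be sketched as a client-side helper (dominantChannel() is hypothetical, for illustration only; it assumes three channel values for a single pixel):

```javascript
// Return the RGB channel carrying the largest value in a pixel
function dominantChannel(red, green, blue) {
  if (red >= green && red >= blue) { return 'red'; }
  if (green >= blue) { return 'green'; }
  return 'blue';
}

// A typical land pixel in the SR_B4/SR_B3/SR_B2 composite: near infrared
// (in the red slot) dominates, so the pixel appears reddish
var hue = dominantChannel(200, 90, 70); // 'red'
```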
Let's make another false-color composite, this time using the shortwave infrared SR_B5 band, SR_B4 (near infrared), and SR_B2 (visible green), in that order. The code is shown below and the resulting image can be seen in Figure 12. Similar to the last image, regions with high surface reflectance in all three bands appear brighter, while those with low surface reflectance values appear darker. We can see several regions along the peninsula that have high surface reflectance of shortwave infrared, near infrared, and visible green. Brighter objects tend to have higher albedo, meaning more radiation is reflected. In this case, we could posit that these brighter regions are buildings with lighter colored roofs, concrete sidewalks, or other light-colored objects that reflect a lot of solar radiation.
//Make another False-Color Composite
Map.addLayer(
first_image,
{
bands: ['SR_B5', 'SR_B4', 'SR_B2'],
min: 8000,
max: 17000
},
"Short wave false color"
);
Figure 12. A shortwave infrared false color multi-band image over Shanghai.
Additive Color System
As we have seen in our exploration of true-color and false-color composites, the order of the bands determines the output of colors in the layer, as well as any interpretations of what those colors might be telling us about what's happening on the ground. Deriving meaning from a satellite image often requires changing the original, unaltered value of a pixel, otherwise known as its Digital Number (DN). It is common to use an additive color system to arrange the color channels (i.e., in the RGB model, these channels are red, green, and blue) used to display the DN of each pixel in a way that would allow us to display physically meaningful, quantifiable units in each pixel. Quantifying a pixel's value is often possible by separating the pixel's color into its original channels using the additive color system to see whether one of these channels outweighs the others. For instance, in the case of our near infrared false-color composite, reddish pixels indicated that the first band value (i.e., the red channel) of that pixel was greater than the other two bands. For a true-color RGB composite, when the pixel value of the two first bands is greater than the third, the pixel color will appear as a composite color of those first two bands (i.e., red + green = yellow). Taking a look at a visualization of the additive color system in Figure 13, we can also see why pixels that appear white are a composite of all three bands and, alternatively, why a pixel that appears black would be relatively absent of all three. Using this system, we can derive greater meaning from a given pixel's value than would otherwise be possible.
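The display stretch used throughout this lab (min: 8000, max: 17000) feeds directly into this additive system: each band's DN is linearly rescaled into a 0-255 channel before the channels are combined. A client-side sketch (stretch() is a hypothetical helper, not the actual EE rendering code):

```javascript
// Linearly rescale a DN into a 0-255 display channel value
function stretch(dn, min, max) {
  var scaled = ((dn - min) / (max - min)) * 255;
  return Math.max(0, Math.min(255, Math.round(scaled)));
}

// A pixel whose red and green channels dominate appears yellowish
var rgb = [
  stretch(16000, 8000, 17000), // red channel, e.g., SR_B3
  stretch(15000, 8000, 17000), // green channel, e.g., SR_B2
  stretch(9000, 8000, 17000)   // blue channel, e.g., SR_B1
];
// rgb: [227, 198, 28]
```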
Figure 13. An example of the additive color system, retrieved from Chapter F1.1 of Earth Engine Fundamentals and Applications (EEFA), 2021
Attributes of Location
We've looked at bands that store discrete ranges of wavelengths in the electromagnetic spectrum. However, there also exist bands that store more abstract attributes of the Earth's surface. For instance, let's take a look at an image whose only band is called 'stable_lights'. To begin, however, we have to import a 1993 image from NOAA's DMSP OLS: Nighttime Lights Time Series Version 4 (Defense Meteorological Satellite Program Operational Linescan System) image collection:
var lights93 = ee.Image('NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F101993');
//let's print it to see what bands we can use
print('Nighttime lights', lights93);
From our print statement to the Console, we can see that several bands are available to use. As we said above, let's add an image to the map displaying the 'stable_lights' band:
//Add the stable_lights image to the map
Map.addLayer(
lights93,
{
bands: ['stable_lights'],
min: 0,
max: 63
},
"Lights"
);
Figure 14. An image of nighttime lights zoomed-in over East Asia.
This is an image of the global average nighttime brightness of each pixel; it allows us to depict light sources emitting radiation from the Earth's surface that are visible to the satellite. Keep in mind, however, that this image is an abstraction of what is actually happening on the ground. It is a compilation of only stable sources of light that remain over a given amount of time, meaning that ephemeral activities that produce light (e.g., lightning strikes, wildfires) and absorb light (e.g., clouds) are not visualized.
Abstract RGB Composites
Using the additive color system described above, we can create an RGB composite image that allows us to compare nighttime lights at multiple intervals in time. The code below describes how we add new stable_light images at a decade-long time step (i.e., images from 2003 and 2013) and combine them using the .addBands() function to create a three-band change composite:
//import a new nighttime image from the same collection, this time from 2003
var lights03 = ee.Image('NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F152003')
.select('stable_lights').rename('2003'); //select only the stable_lights band and rename the image
//import another image from 2013
var lights13 = ee.Image('NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F182013')
.select('stable_lights').rename('2013'); //select only the stable_lights band and rename the image
//create a new variable that combines the lights03 and lights13 into a change composite
var changeImage = lights13.addBands(lights03) //use the .addBands() function to combine
.addBands(lights93.select('stable_lights').rename('1993')); //add the 1993 image selected for stable light and rename
//print to Console
print("change image", changeImage);
//add the change composite to the map
Map.addLayer(
changeImage,
{
min: 0,
max: 63
},
"Change composite"
);
Figure 15. A three-band change composite image containing three images at a 10-year time step depicting nighttime lights in 1993, 2003, and 2013
In this image, the tint of the lights indicates how nighttime lights on Earth have changed from 1993 to 2013. Brighter pixels indicate that nighttime lights have high brightness in all three time steps. Using the Inspector panel to take a closer look at one of these bright pixels in Tokyo, Japan (Figure 16), we can see that nighttime brightness there has remained constant over the 20-year period. Alternatively, we can see how nighttime lights have changed in Shanghai over the same 20 years, increasing from roughly a quarter of Tokyo's brightness in 1993 to roughly equivalent to the nighttime lights of the Japanese capital in 2013 (Figure 17).
Figure 16. A graph of the 'stable_lights' change composite image at a pixel in downtown Tokyo, Japan, comparing nighttime brightness values at each of the three time steps indicated (1993, 2003, and 2013).
Figure 17. A graph of the 'stable_lights' change composite image at a pixel in downtown Shanghai, China, comparing nighttime brightness values at each of the three time steps indicated (1993, 2003, and 2013).
The 'stable_lights' band is also capable of differentiating the sources of stable nighttime light emissions via differences in color. Some examples include halogen lights used to attract squid and other creatures to the surface during nighttime fishing activities in the Korean Strait (Figure 18); changing light emissions at sites of resource extraction in North Dakota, USA (Figure 19); and what appears to be a holiday light show in Western Russia but actually depicts the boom and bust of oil and gas extraction activities (Figure 20). Differences in color indicate when these activities began or ended: red indicates a beginning closer to the end of the 20-year period (i.e., close to 2013), green suggests a stronger period of activity in 2003 that declines by 2013, and blue suggests activity in 1993 that tapers off by 2013. Remember that the additive color system mentioned above can be used to better understand what the different hues of the RGB color model mean for interpreting what's happening on the ground.
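This tint interpretation can be sketched as a client-side helper (interpretChange() is hypothetical; it takes a pixel's three stable_lights values in the same order as the composite's bands: 2013 in the red slot, 2003 in green, 1993 in blue):

```javascript
// Classify a change-composite pixel by which time step is brightest
function interpretChange(v2013, v2003, v1993) {
  if (v2013 > v2003 && v2013 > v1993) {
    return 'reddish: lights brightest near 2013';
  }
  if (v2003 > v1993) {
    return 'greenish: lights brightest near 2003';
  }
  return 'bluish: lights brightest near 1993';
}

// A rapidly brightening pixel (e.g., in Shanghai) reads as reddish
var tint = interpretChange(60, 35, 15);
```

A pixel that is bright in all three steps appears white and needs all three values inspected, as in Figures 16 and 17; the helper only captures the dominant-tint cases.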
Figure 18. Emission of halogen lights from nighttime fishing activities in the Korean Strait between South Korea and Japan
Figure 19. Light emissions from resource extraction activities in North Dakota, USA, appearing in red
Figure 20. A holiday light show indicates the boom and bust of oil and gas extraction activities in Western Russia from 1993 to 2013
In summary, Lab 1 of ENVB 530 serves as an introductory look into Google Earth Engine (GEE): a powerful browser-accessible software that serves as both a repository for EO data and an interface for analysis and remote sensing applications. Thanks to its functionality as an API that primarily runs a JavaScript syntax, Earth Engine supports a wide array of manipulations and analyses that can be used to produce meaningful results and robust research. In this lab we learned:
how to write and save a simple script in GEE's Code Editor, while also familiarizing ourselves with the JavaScript syntax.
how to apply our new knowledge of the Code Editor to explore, render, and manipulate satellite imagery.