By Eidan Willis
Adapting the NBR-informed BULC-D algorithm to utilize the Burned Area Index (BAI) in an effort to more accurately detect and visualize wildfire burn scars in regions where the Normalized Burn Ratio (NBR) index may be incompatible.
Presentation slide deck detailing the proposed project.
Link: https://docs.google.com/presentation/d/1NszTuqR5sPYUEsAQKil-MR8HztHvQyaULNpcreIbg0g/edit?usp=sharing
In recent years, wildfires of unprecedented scale and intensity around the world have made their way into the minds of most, often accompanied by concerns regarding climate change. With anthropogenic activities driving warmer temperatures around the globe, the likelihood, severity, frequency, and longevity of wildfires are expected to rise. According to the United Nations Environment Programme, "[e]ven with urgent action, the number of wildfires globally is expected to increase 50 per cent by the end of the century" (1). In our modern technological era, comprehensive, accurate, and reliable methodologies for conducting geospatial analyses of regions impacted by wildfires are increasingly being employed to inform and guide communities, governments, and international organizations in wildfire mitigation, firefighting, and rescue and evacuation efforts.
This past summer, I helped develop a new version of the Bayesian Updating of Land Cover (BULC) algorithm – an algorithm capable of accurately visualizing wildfire burn scars from satellite imagery – developed by Dr. Jeffrey Cardille of McGill University's Department of Natural Resource Sciences (2). The algorithm uses the Normalized Burn Ratio (NBR) spectral index – the normalized difference between the NIR and SWIR bands – to identify "burned" pixels in an image, where a pixel is considered burned when its NBR value – ranging from -1 (darker) to 1 (brighter) – drops below a certain threshold (3). BULC then compares NBR values in pixels from images taken in a year in which the region experienced a wildfire to the same pixels in images taken in a recent previous year in which it did not. The difference is taken between the target-year (i.e., saw fire) and expectation-year (i.e., did not see fire) NBR values, and drastic drops in the same pixel over multiple temporally successive images are flagged as burned. The most recent version of the algorithm – BULC Version D (i.e., BULC-D) – can accurately detect changes in NBR while accounting for natural variations in the index due to yearly changes in land cover, such as seasonality. By employing a harmonic regression to adjust the mean index value depending on the time of year, Version D can account for "false burns": pixels where natural phenomena – such as changing foliage color in deciduous forests in autumn, changes in vegetation cover due to drought, and deforestation – have historically been incorrectly flagged as burned.
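The NBR differencing that BULC-D performs can be sketched in a few lines of plain Python. This is a toy illustration only – the reflectance values and the drop threshold are invented for the example and are not the algorithm's actual parameters:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR), in [-1, 1]."""
    return (nir - swir) / (nir + swir)

# Same pixel, two years: healthy vegetation in the expectation year,
# a fresh burn in the target year (NIR drops and SWIR rises after fire).
nbr_expectation = nbr(nir=0.40, swir=0.20)   # ~0.33
nbr_target = nbr(nir=0.10, swir=0.30)        # -0.50

# BULC-D flags pixels whose NBR drops sharply between the two years.
drop = nbr_expectation - nbr_target
burned = drop > 0.25  # illustrative threshold, not BULC-D's actual rule
```

In the real algorithm this comparison is repeated over multiple temporally successive images, with the expectation adjusted by the harmonic regression described above.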
Over the 2021/2022 academic year, I've been testing BULC-D in several regions and have determined that it can accurately delineate wildfire burn scars in many regions and ecologies, but evidently not all of them. Comparative analysis between the BULC-D output and official fire progression maps in test regions has revealed an unsatisfactory degree of dissimilarity, suggesting that the algorithm's performance is still lacking in certain regions. This was first noticed when testing the algorithm on the Fall 2020 Cameron Peak and East Troublesome Fires, which took place just west of Fort Collins and Boulder, CO, respectively. One possible explanation is an incompatibility of the NBR index with certain regions or particular land cover types.
In theory, it is possible to re-task the BULC-D algorithm to use another spatial index, given that some modifications to the underlying script are made. As such, I propose a project aimed at adapting the BULC-D algorithm to work with the Burned Area Index (BAI) – an index that detects burn scars using the spectral signature of charcoal in post-fire images (4).
This project is aimed at repurposing the BULC-D algorithm to utilize the Burned Area Index (BAI). The following objectives will be pursued with the ultimate goal of delivering a new "spectral lens" through which wildfire burn scars can be analyzed, hopefully increasing the overall accuracy, flexibility, and usability of the algorithm:
Objectives
Find 4 test fires that burned sometime in the past 10 years – with corresponding official fire progression maps – that were not accurately visualized by NBR-informed BULC-D.
Rewrite the NBR-informed BULC-D script in GEE to instead rely on BAI for its computations. This will involve a variety of changes to the algorithm.
Compare BULC-D's NBR output to its BAI output for each of the 4 test fires. Both outputs will then be cross-compared with the official fire progression maps to see if one seems to perform better than the other overall.
Questions
How well does a BAI-informed BULC-D visualize burn scars as opposed to official fire progression or extent maps of the same fire?
How does BAI-informed BULC-D perform compared to its NBR-informed counterpart in:
regions where NBR-based burn scar visualization has performed well?
regions where NBR-based burn scar visualization has not performed well?
The most important data required for this project include:
the software used;
the four test fires on which the accuracy of the new index will be tested;
Version D of the burn-scar-detecting Bayesian Updating of Land Cover (BULC-D) algorithm, developed in Summer 2021 by Dr. Cardille and myself. This algorithm is described in detail in Introduction & Background.
Software
I will be using Google Earth Engine (GEE) to conduct my analyses for this project, as the Bayesian Updating of Land Cover, Version D (BULC-D) algorithm is written in JavaScript and runs as scripts developed within this platform.
Candidate Test Fires
1) 2020 Cameron Peak Fire, Colorado
Cameron Peak Fire Extent Map as of 11/6/2020. Produced by US Forest Service. Source: https://inciweb.nwcg.gov/incident/6964/
Time Range:
8/13/2020 – 12/2/2020 (6)
Location:
Larimer and Jackson Counties and Rocky Mountain National Park, Colorado (5)
Damage Estimate:
208,663 acres (5)
Supplementary Information:
8-14 inches of snow fell in the region on the evening of 9/8/2020, temporarily halting the fire (5)
8-18 inches of snow fell in the region between 10/24/2020 and 10/25/2020 (5)
Cameron Peak fire was the largest in Colorado history (5)
2) 2020 East Troublesome Fire, Colorado
East Troublesome Fire Extent Map as of 11/9/2020. Produced by US Forest Service. Source: https://inciweb.nwcg.gov/incident/maps/7242/
Time Range:
10/14/2020 – 11/30/2020 (6)
Location:
Grand County and Rocky Mountain National Park, Colorado (6)
Damage Estimate:
193,812 acres (6)
Supplementary Information:
Snow fell in the region between 10/24/2020 and 10/25/2020, dramatically affecting fire behavior (6)
E. Troublesome fire was the second largest in Colorado history (6)
3) 2020 Chernobyl Exclusion Zone Wildfire, Ukraine
Chernobyl Exclusion Zone Fire Extent Map. Produced by Copernicus Emergency Management Service. Retrieved from: https://phys.org/news/2020-04-image-chernobyl-space.html.
Time Range:
4/4/2020 – 4/14/2020 (7)
Location:
Chernobyl Exclusion Zone, Chernobyl, Ukraine
Damage Estimate (Ukrainian Government):
28,417 acres burned (7)
Supplementary Information:
~30% of tourist attractions were destroyed by the fire (8)
Kyiv, Ukraine had the worst air pollution in the world at one point on 4/16/2020 due to the fires (7)
4) CONTROL: 2019-2020 Kangaroo Island Bushfire, Australia
Kangaroo Island Ravine Fire Extent Map, January 12, 2020. Produced by South Australian Country Fire Service. Retrieved from: https://wildfiretoday.com/2020/01/12/bushfire-has-burned-almost-half-of-kangaroo-island/.
Time Range:
12/20/2019 – 1/21/2020 (9)
Location:
Flinders Chase National Park, Kangaroo Island, Australia (9)
Damage Estimate:
211,474 hectares (9)
Supplementary Information:
Fires burned approximately 49% of the island (9)
The initial Duncan and Menzies fires were started by lightning strikes on 12/20/2019 (9)
Also started by lightning strikes, the Ravine fire began on 12/30/2019 (9)
Four candidate test fires (detailed in Data section, Candidate Test Fires) were identified based on project objectives and questions. These fires were chosen because:
all fires occurred within the past 10 years
all fire accounts are accompanied by one or more official fire progression or extent maps
NOTE: a map is considered official if it was developed by a known government or organizational entity, ensuring at minimum that the contents of the outsourced map have a higher chance of having been cross-validated and peer-reviewed before public dissemination. This necessary step ensures that comparisons, and any determinations based on those comparisons, are performed using reliably sourced data.
The first step was to identify fires meeting the criteria specified in the above section (i.e., Data, Candidate Test Fires). The test fires outlined in this proposal are candidates for the project and may be swapped if one or more appropriate alternatives are found. These fires were found by searching the web for information on location, start time, end time, season (i.e., Spring, Summer, Fall, Winter), and burn extent. Other information regarding environmental conditions (i.e., land cover type, topographical similarities, meteorological similarities, seasonal similarities, etc.) was kept in mind as a set of important qualifying characteristics for candidate fires. The 2020 Cameron Peak and East Troublesome fires in Colorado – fires originally identified as having an NBR-informed output that disagreed with the accompanying official fire extent maps – were chosen as two of the test fires for this project (i.e., candidate fires 1 and 2). Next, a 2020 fire in Ukraine's Chernobyl Exclusion Zone – a region where environmental conditions were relatively similar to those of the Colorado fires – was chosen (i.e., candidate fire 3) in an attempt to identify common conditions that could be affecting the NBR-informed output. Lastly, the 2019-2020 Kangaroo Island bushfire – a fire with conditions similar to regions where the NBR-informed output has historically performed relatively well (i.e., arid/semi-arid regions, such as Southern California) – was chosen (i.e., candidate fire 4).
Each of these fires was validated by cross-comparing the NBR-informed BULC-D output with the fire's official progression or extent map. Evaluations of whether the BULC-D output agreed or disagreed with the outsourced map were performed manually through visual confirmation. It is important to note that this project does not currently rely on formal statistical analyses to determine whether an NBR-informed output and a corresponding map agree or disagree. At least at this moment, judgements are made by observation alone: incorporating statistical determinations into the results would be too time-consuming and beyond the scope and scale of this project, and validation via visual confirmation has yielded satisfactory results since the inception of my Honours Project.
Potential Deliverable: visualization(s) comparing outsourced official fire progression maps to the NBR-informed BULC-D output. The NBR-informed BULC-D outputs for these fires will be verified as relatively inaccurate, with most being too "noisy" or too dissimilar from the official maps for our liking.
Once we have chosen our four test fires, the next step is to modify the BULC-D algorithm to take BAI as its input index instead of NBR. Re-tasking the algorithm to use BAI is not as straightforward as it may seem, for several reasons. First, we have to understand how to calculate BAI, which can be done according to the following equation:
BAI = 1 / ((0.1 - RED)^2 + (0.06 - NIR)^2) (10)
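As a minimal sketch (plain Python, with invented reflectance values), the equation rewards pixels whose RED and NIR reflectances sit close to the reference charcoal signature (RED = 0.1, NIR = 0.06):

```python
def bai(red, nir):
    """Burned Area Index: inverse squared distance in RED/NIR space
    from the reference charcoal signature (RED = 0.1, NIR = 0.06)."""
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2)

# Pixels near the charcoal signature yield very large BAI values;
# healthy vegetation (low RED, high NIR) yields small ones.
charcoal = bai(red=0.09, nir=0.07)     # 1 / 0.0002 = 5000
vegetation = bai(red=0.05, nir=0.40)   # ~8.5
```

In GEE, the same per-pixel arithmetic would be mapped over every image in the target-year and expectation-year collections.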
Once we've done this, we have to write code to incorporate the result of this equation into each pixel of each image in both the target-year and expectation-year image collections. As daunting as this sounds, we can simply replicate the way in which NBR is currently computed and fed to the algorithm.
We then have to take into consideration the fact that NBR and BAI span very different value ranges (covered in Introduction & Background). This matters because of the way the algorithm computes the Z-Score – the position of a raw index value in terms of its distance from the mean index value (11) – of each given pixel in each given image. This Z-Score will have to be recalculated for BAI; if it is not adjusted, the algorithm will draw the BAI-informed output according to the distribution of the NBR index, making determinations about the burn state of each pixel based on the wrong index.
In order to correctly adjust the Z-Score, we have to obtain information on the range of BAI values possible in a fire image. This will be done by randomly sampling BAI values in one or more of the test fires. It may turn out that a similar sampling methodology needs to be carried out over non-fire images (i.e., those representative of an expectation-year image), but I do not believe this will be needed, as there should be plenty of unburned pixels in the fire images. Data will be tabulated in Excel and categorized as either "burned" or "unburned" (i.e., n = 30 data points per category). These data will then be organized into histograms to visualize the distribution of BAI in both categories. The histograms will be analyzed manually to determine the suitable BAI value ranges we should use to adjust the Z-Score when informing BULC-D with BAI. This will allow us to accurately capture the range of possible BAI values in a fire image and inform the algorithm of which BAI values to classify as burned.
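The sampling-and-rescaling step above can be sketched as follows. The BAI samples here are invented placeholders; real ones would be tabulated from the test-fire imagery (n = 30 per category):

```python
from statistics import mean, stdev

# Invented placeholder samples standing in for BAI values drawn from
# burned and unburned pixels of a test-fire image.
unburned = [5, 8, 12, 15, 20, 25, 30, 35, 40, 45]
burned = [300, 450, 600, 800, 1000, 1500, 2000, 3000, 5000, 8000]

pooled = unburned + burned
mu, sigma = mean(pooled), stdev(pooled)

def z_score(value):
    """Distance of a raw BAI value from the sampled mean, in standard
    deviations -- the quantity the algorithm would threshold on."""
    return (value - mu) / sigma
```

With samples like these in hand, the histogram of each category indicates where the burned/unburned cut-off sits on the BAI scale, which is what the adjusted Z-Score needs to encode.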
After adjusting the Z-Score, the algorithm should correctly calculate and visualize the BAI value within each pixel of each image in both the target-year and expectation-year image collections. At this point, we can continue to Step 3.
Potential Deliverable: screenshot(s) and accompanying explanation of script, particularly parts of the code that were challenging/especially important to implement.
Once we've made the changes to the code underlying the BULC-D algorithm outlined in Step 2, we can compare the NBR-informed and BAI-informed BULC-D outputs to one another, as well as compare both to the official fire progression or extent maps for each of our four test fires. As in Step 1, each fire will be evaluated on whether the two BULC-D outputs agree or disagree with the outsourced map. These evaluations will be performed manually through visual confirmation, so the same stipulations regarding the lack of formal statistical analyses hold here as well. As mentioned previously, evaluation via visual confirmation has yielded satisfactory results throughout my Honours Project thus far, leading me to believe that we can apply this method here without issue. This may not be the case, however, and the consequences are briefly covered in the next section on potential problems.
Potential Deliverable: visualization(s) comparing the new BAI-informed BULC-D output to its NBR-informed counterpart, as well as to outsourced fire progression maps of the 4 test fires.
There is a chance that the candidate test fires I have chosen will not be suitable for this project, or that conclusions or relationships drawn between the fires – particularly those between the Colorado and Ukraine fires – will be less meaningful than I anticipate. If I come to this realization early on in the project, I may have time to swap the Ukraine fire for a more suitable alternative. Otherwise, I will simply note any lack of meaning between the two as a limitation to the project.
Modifying BULC-D may prove more complex and difficult to understand than it has been thus far. If this is the case, I intend to embark on further web research to inform myself on how to fulfill the objectives I've set forth for this project.
There could be issues if the sample data collected on BAI distributions in one or more fire images do not correctly represent the actual distribution of BAI, which would in turn affect the Z-Scores and the algorithm's ability to accurately and reliably visualize a given fire. Though I do not believe a sample size larger than n = 30 will be required, it may be informative to sample a wider swath of fires using the index if this ends up happening.
Time is a major practical limitation of this project, as I am essentially attempting to finalize the results of my Honours Project in a few weeks. An upside, however, is that because this work is an integral part of my Honours Project, there is a much lower likelihood that I will have nothing to deliver for the final project.
Computational limitations of GEE may come into play. This is one of the main reasons I decided against using the ee.Histogram() function (or an alternative using Random Forest) within the API to sample the fires. With the ee.Histogram() function, I am limited to sampling a region of only 1 million pixels at a time. This may sound like a lot, but it is not a large enough region to accurately and randomly sample even one of my test fires. It therefore makes much more sense to manually sample data points using the methodology described above.
The Kangaroo Island fire in Australia burned in both 2019 and 2020, which may complicate my attempts to accurately represent the burn scar. The BULC-D algorithm is limited in that only one target year can be chosen, though multiple expectation years can be. For example, for a fire that burned in 2020, I could choose 2017, 2018, and 2019 to represent my expectation image collection (while still paying mind to the computational limits of GEE), but I would have to choose 2020 as my only target year. This means that, for the 2019-2020 Kangaroo Island fire, I can use only 2019 or 2020 as the target-year input for BULC-D. As a result, I will get an incomplete sample to represent this fire, which may be reflected in the final output. I anticipate dealing with this by choosing the year with the wider range of NBR/BAI values. Doing so should circumvent the issue so long as the chosen target year is still adequately representative of what the fire would look like over a complete year.
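The "wider range" tie-break could be implemented as simply as the following sketch; the per-year index samples are invented for illustration, and real ones would come from the candidate target-year imagery:

```python
# Invented NBR/BAI samples from the two candidate target years of a
# fire that straddles a calendar-year boundary.
samples_by_year = {
    2019: [0.10, 0.22, 0.30, 0.35],    # fire only just started
    2020: [-0.45, -0.10, 0.20, 0.40],  # bulk of the burn visible
}

def value_range(samples):
    return max(samples) - min(samples)

# Choose the candidate year whose index values span the wider range.
target_year = max(samples_by_year, key=lambda y: value_range(samples_by_year[y]))
```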
Snow fell on and covered the regions burned by the two Colorado fires. In the NBR index, snow shows up as a very high value, while burn scars remain very low. This may be what is affecting NBR-informed BULC-D's ability to visualize these fires, and it may be resolved by developing a BAI-informed alternative. This is important to mention, though it isn't necessarily a problem with the project itself, as it would aid our understanding of the algorithm's ability – or lack thereof – to detect and visualize wildfire burn scars late in the season or in snowy regions.
This project aims to repurpose the BULC-D algorithm to utilize the Burned Area Index (BAI), with the ultimate goal of developing a new "spectral lens" through which wildfire burn scars can be analyzed. In turn, the result of this project will hopefully increase the overall accuracy, flexibility, and usability of the BULC-D algorithm with respect to post-fire burn scar detection and visualization.
Two candidate test fires – the 2020 Cameron Peak and East Troublesome fires in Colorado – were chosen based on the disagreement between the fire visualizations created by NBR-informed BULC-D and the corresponding fire progression/extent maps. A third candidate fire – the 2020 Chernobyl Exclusion Zone fire in Ukraine – was chosen for the relative similarity of its environmental conditions to those of the Colorado fires. Lastly, a fourth control fire – the 2019-2020 Kangaroo Island bushfire in Australia – was chosen for the relative similarity of the region's environmental conditions to the arid/semi-arid regions where NBR-informed BULC-D has historically agreed with or improved upon official fire progression maps. It is from these four fires that the project will answer the following questions:
How well does a BAI-informed BULC-D visualize burn scars as opposed to official fire progression or extent maps of the same fire?
How does BAI-informed BULC-D perform compared to its NBR-informed counterpart in:
regions where NBR-based burn scar visualization has performed well?
regions where NBR-based burn scar visualization has not performed well?
Methodologies that will be carried out in the project were outlined, as were potential conceptual and practical problems that could be encountered while striving to complete the project. Deliverables in the form of comparative maps and script excerpts are expected.
(1) United Nations Environment Programme. (2022, March 9). As climate changes, world grapples with a wildfire crisis. UNEP. Retrieved March 11, 2022, from https://www.unep.org/news-and-stories/story/climate-changes-world-grapples-wildfire-crisis
(2) Cardille, J. A., & Fortin, J. A. (2016). Bayesian updating of land-cover estimates in a data-rich environment. Remote Sensing of Environment, 186, 234–249. https://doi.org/10.1016/j.rse.2016.08.021
(3) Key, C., & Benson, N. (2005). Landscape Assessment: Remote Sensing of Severity, the Normalized Burn Ratio; and Ground Measure of Severity, the Composite Burn Index. In FIREMON: Fire Effects Monitoring and Inventory System, RMRS-GTR. Ogden, UT: USDA Forest Service, Rocky Mountain Research Station.
(4) Chuvieco, E., Martín, M. P., & Palacios, A. (2002). Assessment of different spectral indices in the red-near-infrared spectral domain for burned land discrimination. International Journal of Remote Sensing, 23(23), 5103–5110. https://doi.org/10.1080/01431160210153129
(5) USDA Forest Service, Fire and Aviation Management. (2021, June 21). Cameron Peak Fire Information. InciWeb – The Incident Information Service. Retrieved March 22, 2022, from https://inciweb.nwcg.gov/incident/6964/
(6) USDA Forest Service, Fire and Aviation Management. (2021, June 21). East Troublesome Fire. InciWeb – The Incident Information Service. Retrieved March 22, 2022, from https://inciweb.nwcg.gov/incident/7242/
(7) Wikipedia contributors. (2021, December 15). 2020 Chernobyl Exclusion Zone wildfires. Wikipedia. Retrieved March 22, 2022, from https://en.wikipedia.org/wiki/2020_Chernobyl_Exclusion_Zone_wildfires
(8) Gorchinskaya, K. (2021, March 29). Fire Destroys A Third Of Tourist Attractions In Chernobyl. Forbes. Retrieved March 22, 2022, from https://www.forbes.com/sites/katyagorchinskaya/2020/04/15/fire-destroys-a-third-of-tourist-attractions-in-chernobyl/?sh=38b302a02467
(9) Local Recovery Team, Kangaroo Island Local Recovery Committee (2020, November). Kangaroo Island Community Recovery Plan 2020 – 2022. Government of South Australia. https://www.recovery.sa.gov.au/2019-20-bushfires/kangaroo-island/kangaroo-island-icon-panel/our-community/our-community/KI-CommunityRecovery-Plan-Final-LoRes.pdf
(10) ESRI. (2022). Indices gallery—ArcGIS Pro | Documentation. ArcGIS Pro. Retrieved March 22, 2022, from https://pro.arcgis.com/en/pro-app/latest/help/data/imagery/indices-gallery.htm
(11) McLeod, S. A. (2019, May 17). Z-score: definition, calculation and interpretation. Simply Psychology. www.simplypsychology.org/z-score.html