Mapping canopy height consistently at the global scale is key to understanding terrestrial ecosystem functions, which are strongly shaped by vegetation height and structure7. Canopy-top height is an important indicator of biomass and the associated global aboveground carbon stock8. At high spatial resolution, canopy height models (CHMs) directly characterize habitat heterogeneity9, which is why canopy height has been ranked as a high-priority biodiversity variable to be observed from space5. Furthermore, forests buffer microclimate temperatures under the canopy10. While it has been shown that, in the tropics, higher canopies more strongly dampen microclimate extremes11, targeted studies are needed to establish whether such relationships also hold at the global scale10. Thus, a homogeneous high-resolution CHM has the potential to advance the modelling of climate impacts on terrestrial ecosystems and may assist forest management in bolstering microclimate buffering as a mitigation service to protect biodiversity under a warmer climate10.

In this work, we describe a deep learning approach to map canopy-top height globally with high resolution, using publicly available optical satellite images as input. We deploy that approach to compute a global canopy-top height product with 10-m ground sampling distance (GSD), based on Sentinel-2 optical images for the year 2020. That global map and the underlying source code and trained models are made publicly available to support conservation efforts and science in disciplines such as climate, carbon and biodiversity modelling. The map can be explored interactively in this browser application: nlang.users.earthengine.app/view/global-canopy-height-2020.


During inference, we process images from ten different dates (satellite overpasses) within a year at every location to obtain full coverage and exploit redundancy for pixels with multiple cloud-free observations. Each image is processed with a randomly selected CNN within the ensemble, which reduces computational overhead and can be interpreted as natural test-time augmentation, known to improve the calibration of uncertainty estimates with deep ensembles37.
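The random-member scheme can be sketched as follows. This is a minimal NumPy toy, not the deployed pipeline: the "models" and image shapes are hypothetical stand-ins for the actual CNN ensemble; the point is only that each image is routed to one randomly drawn member instead of all of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_inference(images, models):
    """Predict each image with one randomly chosen ensemble member,
    rather than running every member on every image."""
    preds = []
    for img in images:
        model = models[rng.integers(len(models))]  # random member per image
        preds.append(model(img))
    return preds

# Toy "models": constant-offset regressors standing in for trained CNNs.
models = [lambda x, b=b: x + b for b in (0.0, 0.5, -0.5)]
images = [np.full((2, 2), 10.0) for _ in range(10)]  # ten acquisition dates
preds = ensemble_inference(images, models)
```

Because each image sees a different random member, the per-date predictions vary slightly, which is the source of the implicit test-time augmentation mentioned above.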

Finally, we use the estimated aleatoric uncertainties to merge redundant predictions from different imaging dates by weighted averaging, with weights proportional to the inverse variance. While inverse-variance weighting is known to yield the lowest expected error38, we observe a deterioration of the uncertainty calibration for low values.
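The merge step follows the standard inverse-variance weighting formulas, sketched below in NumPy (function name hypothetical): each per-date estimate contributes with weight 1/σ², and the variance of the merged estimate is the reciprocal of the summed weights.

```python
import numpy as np

def merge_inverse_variance(means, variances):
    """Merge redundant per-pixel height estimates from different
    acquisition dates by inverse-variance weighted averaging."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # weights proportional to 1/sigma^2
    merged_mean = (w * means).sum(axis=0) / w.sum(axis=0)
    merged_var = 1.0 / w.sum(axis=0)             # variance of the combined estimate
    return merged_mean, merged_var

# Two observations of the same pixel, the first more confident.
mu, var = merge_inverse_variance([20.0, 26.0], [1.0, 4.0])
```

Note that the merged variance is always smaller than the smallest input variance, which is consistent with the calibration deterioration at low values noted above: the merge can become overconfident.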

At large scale, too, the predictive uncertainty is positively correlated with the estimated canopy height (Fig. 2b). Still, some regions such as Alaska, Yukon (northwestern Canada) and Tibet exhibit high predictive uncertainty that cannot be explained by canopy height alone. The former two lie outside the GEDI coverage, so the higher uncertainty is probably due to local characteristics that the model did not encounter during training. The latter indicates that, even within the GEDI range, some environments are more challenging for the model, for example, owing to globally rare ecosystem characteristics not encountered elsewhere. Ultimately, all three regions might be affected by frequent cloud cover (and snow cover), limiting the number of repeated observations. Qualitative examples for Alaska with high uncertainty but reasonable canopy-top height estimates are presented in Extended Data Fig. 8e,f.

The evaluation with hold-out GEDI reference data and the comparison with independent airborne LiDAR data show that the presented approach yields a new, carefully quality-assessed state-of-the-art global map product that includes quantitative uncertainty estimates. But the retrieval of canopy height from optical satellite images remains a challenging task and has its limitations.

Regarding map quality, besides minor artefacts in regions with persistent cloud cover, we observe tiling artefacts at high latitudes in the northern hemisphere. The systematic inconsistencies at tile borders point to a degradation of the absolute height estimates, possibly caused by a lack of training data for particular, locally unique vegetation structures. Interestingly, a notable part of these errors appears to be constant offsets between the tiles.

We further demonstrate that our model can be deployed annually to map canopy height change over time, for example, to derive changes in carbon stock and to estimate carbon emissions caused by global land-use change, at present mainly deforestation48. Annual canopy height maps are computed for a region in northern California where wildfires destroyed large areas in 2020 (Fig. 4b). Our automated change map corresponds well with the fire extent mapped by the California Department of Forestry and Fire Protection (www.fire.ca.gov), while the annual maps remain consistent in areas not affected by the fires, where no changes are expected. While the sensitivity to subtle changes such as annual growth might be limited by the model accuracy and remains to be evaluated (for example, with repeated GEDI shots or LiDAR campaigns), such high-resolution change data may help to reduce the high uncertainty of the emissions estimates reported in the annual Global Carbon Budget48. Notably, the presented approach yields reliable estimates from single cloud-free Sentinel-2 images. Its potential for monitoring canopy height change is therefore not limited to annual maps but only by the availability of cloud-free images, which are acquired at least every five days globally. This high update frequency makes the approach relevant for, for example, real-time deforestation alert systems, even in regions with frequent cloud cover.

Technically, our dense high-resolution map makes it much easier for scientists to intersect sparse sample data, for example, field plots, with canopy height. To make full use of scarce field data in biomass or biodiversity research, dense complementary maps are far more useful: when pairing sparse field samples with other sparsely sampled data, the chances of finding sufficient overlap are low, whereas pairing them with low-resolution maps risks biases due to the large difference in scale and the associated spatial averaging.

As a possible future direction, our model could be extended to map other vegetation characteristics17 at the global scale. In particular, it appears feasible to densely map biomass by retraining with GEDI L4A biomass data8 or by incorporating additional data from planned future space missions52.

Whereas deep learning technology for remote sensing is continuously being refined by focusing on improved performance at regional scale, its operational utility has been limited by the fact that it often could not be scaled up to global coverage. Our work demonstrates that if one has a way of collecting globally well-distributed reference data, modern deep learning can be scaled up and employed for global vegetation analysis from satellite images. We hope that our example may serve as a foundation on which new, scalable approaches can be built that harness the potential of deep learning for global environmental monitoring and that it inspires machine learning researchers to contribute to environmental and conservation challenges.

Formally, canopy height retrieval is a pixel-wise regression task. We train the regression model end to end in a supervised fashion, which means that the model learns to transform raw image data into spectral and textural features predictive of canopy height, with no need to manually design feature extractors (Supplementary Fig. 3). We train the convolutional neural network with sparse supervision, that is, by minimizing the loss (equation (1)) only at pixel locations for which a GEDI reference value exists. Before feeding the 15-channel data cube to the CNN, each channel is normalized to be standard normal using the channel statistics of the training set. The reference canopy heights are normalized in the same way, a common pre-processing step to improve the numerical stability of training. Each neural network is trained for 5,000,000 iterations with a batch size of 64, using the Adam optimizer56. The base learning rate is initially set to 0.0001 and then reduced by a factor of 10 after 2,000,000 iterations and again after 3,500,000 iterations. This schedule was found to stabilize the uncertainty estimation.
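The normalization and step schedule can be sketched as follows. This is a simplified stand-in, not the training code: the channel statistics are hypothetical, and the framework-specific optimizer plumbing is omitted.

```python
import numpy as np

def normalize(x, mean, std):
    """Standardize a channel with training-set statistics so that
    inputs (and reference heights) are roughly standard normal."""
    return (x - mean) / std

def learning_rate(iteration, base_lr=1e-4):
    """Step schedule described above: start at 1e-4, multiply by 0.1
    after 2.0M iterations and again after 3.5M of the 5.0M total."""
    lr = base_lr
    if iteration >= 2_000_000:
        lr *= 0.1
    if iteration >= 3_500_000:
        lr *= 0.1
    return lr

z = normalize(np.array([3.0]), mean=1.0, std=2.0)  # -> standardized value 1.0
```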

Modelling uncertainty in deep neural networks is challenging due to their strong nonlinearity but crucial for building trustworthy models. The approach followed in this work accounts for two sources of uncertainty, namely the data (aleatoric) and the model (epistemic) uncertainty35. The uncertainty in the data, resulting from noise in the input and reference data, is modelled by minimizing the Gaussian negative log likelihood (equation (1)) as a loss function35. This corresponds to independently representing the model output at every pixel i as a conditional Gaussian probability distribution over possible canopy heights, given the input data, and estimating the mean \(\hat{\mu }\) and variance \({\hat{\sigma }}^{2}\) of that distribution.
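Up to an additive constant, the per-pixel Gaussian negative log likelihood takes the standard form below. This NumPy sketch is illustrative only: the log-variance parameterization is a common stabilization choice and an assumption here, not necessarily the exact form of equation (1).

```python
import numpy as np

def gaussian_nll(mu_hat, log_var_hat, y):
    """Per-pixel Gaussian negative log likelihood (up to a constant),
    evaluated only at pixels with a GEDI reference height y.
    Predicting log-variance keeps the variance strictly positive."""
    var = np.exp(log_var_hat)
    return 0.5 * (log_var_hat + (y - mu_hat) ** 2 / var)

# Perfect prediction with unit variance -> loss contribution 0.
loss_exact = gaussian_nll(np.array([10.0]), np.array([0.0]), np.array([10.0]))
# A 2-m residual with the same predicted variance is penalized more.
loss_off = gaussian_nll(np.array([10.0]), np.array([0.0]), np.array([12.0]))
```

The variance term lets the network down-weight noisy pixels by predicting a larger σ², at the cost of the log-variance penalty; this trade-off is what yields the aleatoric uncertainty estimate.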

Several metrics are employed to measure prediction performance: the RMSE (equation (6)) of the predicted heights, their MAE (equation (7)) and their mean error (ME, equation (8)). The latter quantifies systematic height bias, where a negative ME indicates underestimation, that is, predictions that are systematically lower than the reference values.

We also report balanced versions of these metrics, where the respective error is computed separately in each 5-m height interval and then averaged across all intervals. They are abbreviated as aRMSE, aMAE and aME (Fig. 1b).
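The plain and balanced metrics can be sketched as follows (NumPy; the 5-m binning of reference heights follows the description above, the rest is a simplification of equations (6)-(8)):

```python
import numpy as np

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def mae(pred, ref):
    return float(np.mean(np.abs(pred - ref)))

def me(pred, ref):
    # Mean error: negative values indicate systematic underestimation.
    return float(np.mean(pred - ref))

def balanced(metric, pred, ref, bin_width=5.0):
    """Average a metric over 5-m reference-height intervals so that
    tall, rare canopies count as much as abundant low vegetation."""
    bins = np.floor(ref / bin_width)
    vals = [metric(pred[bins == b], ref[bins == b]) for b in np.unique(bins)]
    return float(np.mean(vals))

# Three low-vegetation pixels predicted perfectly, one tall canopy
# underestimated by 2 m: the balanced ME exposes the tall-canopy bias.
ref = np.array([2.0, 3.0, 4.0, 22.0])
pred = np.array([2.0, 3.0, 4.0, 20.0])
```

In this toy example the overall ME is diluted by the many well-predicted low pixels, while the balanced version weights the underestimated tall bin equally, which is exactly why the balanced metrics are reported.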

For the estimated predictive uncertainties, there are, by definition, no reference values. A common scheme to evaluate their calibration is to produce calibration plots34,57 that show how well the uncertainties correlate with the empirical error. As this correlation holds only in expectation, both the uncertainties and the empirical errors at the test samples must be binned into K equally sized intervals. In each bin Bk, the average of the predicted uncertainties is then compared against the actual average deviation between the predicted height and the reference data. On the basis of the calibration plots, it is further possible to derive a scalar error metric for the uncertainty calibration, the uncertainty calibration error (UCE) (equation (10))57. Again, we additionally report a balanced version, the average uncertainty calibration error (AUCE) (equation (11)), where each bin has the same weight independent of the number Nk of samples in it.
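A sketch of UCE and AUCE under one common binning convention (equal-width bins over the predicted uncertainties): equations (10) and (11) define the exact form, so the details below, such as the bin-edge handling, are assumptions for illustration.

```python
import numpy as np

def _bin_indices(sigma_hat, K):
    """Assign each sample to one of K equal-width uncertainty bins."""
    edges = np.linspace(sigma_hat.min(), sigma_hat.max(), K + 1)
    return np.clip(np.digitize(sigma_hat, edges) - 1, 0, K - 1)

def uce(sigma_hat, abs_err, K=10):
    """Uncertainty calibration error: population-weighted gap between
    mean predicted uncertainty and mean empirical error per bin."""
    idx = _bin_indices(sigma_hat, K)
    n = len(sigma_hat)
    total = 0.0
    for k in range(K):
        mask = idx == k
        if mask.any():
            gap = abs(sigma_hat[mask].mean() - abs_err[mask].mean())
            total += mask.sum() / n * gap
    return float(total)

def auce(sigma_hat, abs_err, K=10):
    """Balanced variant: every non-empty bin gets equal weight,
    independent of the number of samples it contains."""
    idx = _bin_indices(sigma_hat, K)
    gaps = [abs(sigma_hat[idx == k].mean() - abs_err[idx == k].mean())
            for k in range(K) if (idx == k).any()]
    return float(np.mean(gaps))

sigma = np.linspace(0.1, 2.0, 50)
perfect = uce(sigma, sigma)            # perfectly calibrated -> 0
biased = uce(sigma, sigma + 1.0)       # uniform 1-m overconfidence
```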
