Speakers and Abstracts 

 

 

Prof. Dr. Rüdiger Haas, Baarda Lecture 2024

Rüdiger Haas is professor of space geodesy in the research unit Space Geodesy and Geodynamics. He is responsible for the geodetic VLBI (Very Long Baseline Interferometry) activities at the Onsala Space Observatory. Rüdiger's research work deals with space geodesy and global geophysical phenomena, such as Earth rotation, global reference frames, changes in atmospheric water vapour, and sea level measurements. Rüdiger is the scientific leader of the Onsala twin telescope project and chair of the European VLBI Group for Geodesy and Astrometry (EVGA).

The title of Prof. Haas' presentation is


 

 

Konstantinos Pantelios, Giorgio Agugiaro, TU Delft, A quick overview of the 3DCityDB-Tools plug-in for QGIS 

The 3DCityDB-Tools plug-in for QGIS, already available as free and open-source software, allows users to connect to local or remote instances of the free and open-source CityGML 3D City Database for PostgreSQL/PostGIS and to load data as "classical" layers into QGIS. Once the data layers are available in QGIS, the user can interact with them as usual, i.e. perform analyses, work with associated attributes, explore and visualise the data in 2D and 3D, etc. Additionally, data in the database can be deleted, either using the standard QGIS editing tools or in bulk.

As semantic 3D city models tend to be huge datasets and are generally best managed in spatial databases, the main idea behind the development of this plug-in is to facilitate access and use of CityGML/CityJSON data for those practitioners who lack a deep knowledge of the international OGC CityGML standard data model, and/or have limited experience with SQL/spatial RDBMSs in general. The plug-in consists of a server-side part (written in PL/pgSQL) and a client-side part (written in Python).

The presentation will provide an overview of the rationale for the plug-in, its overall structure and main functionalities, as well as some hints at the planned further development.

Amir Hossein Owlia, IHE, Open-Source MPySEBAL: Fine-Tuning Actual ET Estimates from Remote Sensing Data 

This presentation unveils MPySEBAL, a novel open-source model (available on GitHub, AHOwlia/MPySEBAL) for deriving accurate actual evapotranspiration (ETa) from remote sensing data. Implemented in Python, MPySEBAL combats uncertainties plaguing traditional SEBAL methods, including those caused by pixel selection, sensor resolution, and image extent.
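
SEBAL-style models derive ETa by interpolating between "cold" (well-watered) and "hot" (dry) anchor pixels in a land-surface temperature image. The toy sketch below illustrates only that general idea, using percentile-based anchor selection; it is not MPySEBAL's actual implementation, and the percentile thresholds and the 5 mm/day reference ET are invented for the example.

```python
import numpy as np

def evaporative_fraction(lst, cold_pct=1, hot_pct=99):
    """Simplified SEBAL-style evaporative fraction from a land-surface
    temperature (LST) raster: 'cold' and 'hot' anchor pixels are chosen
    by percentile instead of manual selection."""
    t_cold = np.percentile(lst, cold_pct)   # well-watered anchor
    t_hot = np.percentile(lst, hot_pct)     # dry anchor
    ef = (t_hot - lst) / (t_hot - t_cold)   # ~1 at cold pixel, ~0 at hot pixel
    return np.clip(ef, 0.0, 1.0)

# Toy 2x3 LST field in kelvin (hypothetical values)
lst = np.array([[295.0, 300.0, 305.0],
                [310.0, 315.0, 320.0]])
ef = evaporative_fraction(lst)
eta = ef * 5.0  # scale by a hypothetical 5 mm/day reference ET
```

The clipping step keeps the fraction physical when pixels fall outside the anchor range, one of the pixel-selection sensitivities the abstract alludes to.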

Key Strengths of MPySEBAL:

Following the MPySEBAL introduction, a practical case study will be presented, referencing the provided research paper (https://doi.org/10.22059/ijswr.2021.316455.668862). This analysis will highlight the inherent uncertainties in RS-based ETa estimation. 

Adriaan van Natijne, Lars Keuris, Keuris Earth Observation, Hypothesis Testing on a Continental Scale: GPU Based Time Series Classification 

Without accurate prior knowledge of the nature of the behavior, it is difficult to set up an appropriate model. Therefore, many models have to be tested iteratively, at great computational cost. Parallelized GPU processing allows us to impose many models on multiple time series simultaneously, and therefore to extract model parameters at unprecedented scales.

We demonstrate this methodology with a consumer laptop at ~20 million models per second. Subsequently, a best model for each time series is selected using hypothesis testing. The proposed parallelized methodology is applied to all ~9.4 billion time series within the most recent European Ground Motion Service (Crosetto et al., 2021) dataset. 
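
The core idea, fitting several candidate models to many time series in one vectorized pass and selecting the best per series, can be sketched on a CPU with NumPy. The two candidate models, the synthetic data, and the BIC-based selection below are illustrative assumptions, not the authors' GPU implementation or the EGMS processing chain.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 5, 60)          # epochs in years (synthetic)
n_series = 1000

# Synthetic displacement series: first half linear, second half with a step
step = np.where(t > 2.5, 5.0, 0.0)
series = 2.0 * t + rng.normal(0, 1.0, (n_series, t.size))
series[n_series // 2:] += step

# Candidate design matrices (stand-ins for the 3724 EGMS model variants)
models = {
    "linear": np.column_stack([np.ones_like(t), t]),
    "step":   np.column_stack([np.ones_like(t), t, (t > 2.5).astype(float)]),
}

# Fit every model to all series at once and score each fit with BIC
bic = {}
for name, A in models.items():
    coef, *_ = np.linalg.lstsq(A, series.T, rcond=None)  # all series in one call
    rss = np.sum((series.T - A @ coef) ** 2, axis=0)
    bic[name] = t.size * np.log(rss / t.size) + A.shape[1] * np.log(t.size)

best_is_step = bic["step"] < bic["linear"]   # per-series model selection
```

A formal hypothesis test on the residuals, as in the abstract, would replace the BIC comparison; the vectorization pattern is the same.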

We extended the single EGMS model, which only includes offset, velocity, acceleration and periodicity, to 3724 model variations, including step functions. Step functions in particular are critical because they point to erratic processing (e.g. phase unwrapping errors) or hazardous physical behavior (e.g. sinkhole precursors). Our results support a wide range of end users in stability assessment at various scales. 

Sebastian Ciuban, TU Delft, Parameter Estimation, Statistical Hypothesis Testing, and Rare Events in Positioning Safety Analyses 

Parameter estimation and statistical hypothesis testing for model misspecifications are central tools in the design of positioning algorithms for safety-critical applications (e.g., automated driving, shipping, flying). A key safety indicator in this context is the probability of positioning failure, which is defined as the probability that the position estimator falls outside an application-specific safety region. 

Positioning failure is considered a rare event due to the strict requirements that its probability must meet (e.g., to be lower than 10^(-7)). 

Therefore, two main challenges arise when carrying out positioning safety analyses: (i) dealing with the (generally) non-Gaussian probability density function of the position estimator that accounts for the dependence between parameter estimation and statistical hypothesis testing, and (ii) having to compute 'small' probabilities of positioning failure. 

Using an example based on GNSS positioning of an automated vehicle, we tackle these challenges by capturing the aforementioned dependence in the PDF of the position estimator according to the distributional theory for the Detection, Identification, and Adaptation (DIA) method (i), and using techniques from rare event simulation (ii). Additionally, we quantify the ‘over-optimism’ in the results when the dependence between parameter estimation and statistical hypothesis testing is ignored. 
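
Rare-event simulation can be illustrated with a minimal importance-sampling example: estimating a Gaussian tail probability of order 10^(-7), which crude Monte Carlo virtually never observes at practical sample sizes. The shifted proposal distribution is a standard textbook choice for illustration, not necessarily the technique used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 5.0        # "failure" boundary; true tail probability ~2.9e-7

# Crude Monte Carlo: the event is almost never observed
x = rng.standard_normal(n)
p_crude = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1) and reweight each sample
# by the likelihood ratio phi(y) / q(y) of target to proposal density
y = rng.normal(threshold, 1.0, n)
weights = np.exp(-0.5 * y**2 + 0.5 * (y - threshold)**2)
p_is = np.mean((y > threshold) * weights)
```

Shifting the sampling density onto the failure region makes almost every draw informative, which is why such techniques can resolve probabilities far below 1/n.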

Chengyu Yin; PJG Teunissen; CCJM Tiberius, TU Delft, Ambiguity-resolved detector for GNSS mixed-integer model 

The detection, identification, and adaptation procedure (DIA) has been widely used in GNSS model validation. Detection is the first step in this procedure, where an overall model test is performed to diagnose whether an unspecified model misspecification occurs. 


The currently used detectors for the GNSS mixed-integer model either employ the float ambiguity where the integer property of the ambiguity is ignored (ambiguity-float detector, AFD) or assume the fixed integer ambiguity is fully known (ambiguity-known detector, AKD). 

The ambiguity-resolved detector (ARD) has been recently developed, which is superior to the AFD by using the integer property of the ambiguity. It also solves the weakness of the AKD by treating the ambiguity as an unknown integer vector. With the ARD, ambiguity resolution can contribute to the model validation even if the success rate of the ambiguity resolution is not close to one. 


In this presentation, the theory of the ARD will be introduced. The distribution of the ARD test statistic will be shown and the procedure to obtain the ARD critical value will be described. 
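
The detection step that the ARD builds on, an overall model test on the misclosure vector, can be sketched generically. The snippet below shows only the classical chi-square-type overall model test, with the critical value approximated by Monte Carlo under H0; the ARD test statistic and its critical-value procedure are more involved, and the redundancy, significance level, and bias used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
r = 4            # redundancy (dimension of the misclosure vector), assumed
alpha = 0.01     # false-alarm probability, assumed

# Empirical critical value: simulate the test statistic under H0
t0 = np.sum(rng.standard_normal((200_000, r)) ** 2, axis=1)
k_alpha = np.quantile(t0, 1 - alpha)   # approximates chi2.ppf(1 - alpha, r)

def overall_model_test(misclosure, Q):
    """Detection step: reject H0 when the weighted squared norm of the
    misclosure vector exceeds the critical value."""
    T = misclosure @ np.linalg.solve(Q, misclosure)
    return T, T > k_alpha

Q = np.eye(r)
T_ok, rejected_ok = overall_model_test(rng.standard_normal(r), Q)
T_bias, rejected_bias = overall_model_test(rng.standard_normal(r) + 6.0, Q)
```

In the mixed-integer setting, the AFD, AKD, and ARD differ in how the ambiguity enters the misclosure; this scaffold only conveys where the critical value fits in.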

Andreas Krietemeyer, Elske de Zeeuw-van Dalfsen, KNMI, Deployment and first results of using cost-effective GNSS units to monitor the active but quiescent volcano Mt. Scenery on Saba, Caribbean Netherlands 

The Royal Netherlands Meteorological Institute (KNMI) operates a multi-sensor network to monitor the volcanoes on the islands of Saba and St. Eustatius in the Caribbean Netherlands. In February 2022, the KNMI deployed four cost-effective GNSS units on the island of Saba to complement the existing monitoring network of four permanently operating GNSS stations. 

The material price of the cost-effective units is below €1,000, while a permanent GNSS station costs around €10,000. The installation time of one cost-effective unit is also significantly lower (about 2 hours), compared to that for a permanent station (multiple days). 

The cost-effective unit’s design includes all necessities for independent operation: a solar charger, a microcontroller and a 4G modem connection used for data transmission and remote connection. 

We present our system’s design and first positioning results. The results are validated by analysing data of one cost-effective unit which was co-located with a permanent GNSS station. We show that the cost-effective GNSS units are well suited to extend existing volcano monitoring networks. We encourage the application of such units also in areas where the installation of conventional GNSS is deemed too costly, or in risk-prone areas where rapid installations are necessary. 

Bob van Noort, PJG Teunissen, CCJM Tiberius, TU Delft, Stochastic parameter and uncertainty estimation of GNSS satellite orbit and clock offset errors 

In safety of life applications, such as civil aviation and autonomous driving, a reliable position estimator is needed, generally obtained with GNSS positioning. 

Reliability is expressed as the integrity risk: the probability that the position estimator provides a solution outside of an alert region of interest around the user-receiver's true, yet unknown, position. 

To quantify the integrity risk, information about the types and frequency of faulty measurements is needed. When GNSS systems are used for positioning, different types of faults may occur. Two of these fault types are a single satellite failure and a constellation failure. These failures occur when the broadcast satellite orbit and clock offset contain faults, thereby introducing a bias in the ranging measurements larger than a certain threshold. 

At the Galileo Reference Center, data are stored on the broadcast and precise satellite orbit and clock offset. A stochastic analysis will be carried out to estimate the variance of the Galileo broadcast orbit and clock errors, the variance of the ranging observable as a result of these components, and corresponding uncertainties. Consequently, the satellite and constellation failure probability can be estimated, which allows computing the integrity risk, providing a reliability measure for the position estimator.  
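
As a toy illustration of such a stochastic analysis (not the Galileo Reference Center processing; the 0.3 m clock-error spread and 1.5 m threshold are invented numbers), one can estimate the error variance from broadcast-minus-precise differences and derive a Gaussian exceedance probability for the induced ranging bias:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)

# Hypothetical broadcast-minus-precise clock offset differences, in meters
clock_err = rng.normal(0.0, 0.3, 5000)

sigma = np.std(clock_err, ddof=1)                      # estimated std. dev.
se_sigma = sigma / np.sqrt(2 * (clock_err.size - 1))   # approx. uncertainty of sigma

# Probability that the induced ranging bias exceeds a threshold,
# under a zero-mean Gaussian assumption (two-sided exceedance)
threshold = 1.5  # meters, assumed fault threshold
p_exceed = erfc(threshold / (sigma * sqrt(2)))
```

A real analysis would separate radial orbit and clock contributions and validate the Gaussian assumption in the tails, which is precisely where failure probabilities are decided.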

Lucas Alvarez Navarro, TU Delft, SuperGPS-2: Accurate positioning and time-transfer using virtual ultra-wideband radio signals 

Global navigation satellite systems (GNSS) struggle to maintain position accuracy and availability in dense built-up and urban environments due to signal blockage and multipath. 

This presentation introduces an independent terrestrial networked positioning system (TNPS) based on a constellation of radio transmitters time-synchronized at the subnanosecond level. This terrestrial network can serve as a back-up or complement to GNSS, offering centimeter-to-decimeter-level uncertainty. 

Targeting ranging accuracy at the decimeter level by means of radio signals requires a large bandwidth, conventionally in the order of 1 GHz, i.e. an ultra-wideband (UWB) signal. Using such a large portion of the radio spectrum would be expensive and most likely unavailable. 

The SuperGPS-2 project aims to overcome this limitation while maintaining decimeter-level accuracy by employing a sparse multiband signal, i.e. using multiple smaller bands (a few MHz wide) spread across a large bandwidth. The presentation introduces the benefits of using a sparse multiband signal for time delay estimation, the positioning performance obtained in the previous SuperGPS project, and the research topics that will be covered in the SuperGPS-2 project. 
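
Time delay estimation from a ranging signal is conventionally done by locating the peak of the cross-correlation between transmitted and received signals. The sketch below shows that generic full-band baseline, not the sparse multiband estimator developed in SuperGPS-2; the sample rate, delay, and noise level are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1_000_000.0                 # sample rate in Hz, hypothetical
n = 4096
true_delay = 25                  # delay in samples, hypothetical

tx = rng.standard_normal(n)                             # transmitted signal
rx = np.roll(tx, true_delay) + 0.1 * rng.standard_normal(n)  # delayed + noise

# Coarse time-delay estimate via circular cross-correlation (computed by FFT)
xcorr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx))).real
delay_hat = int(np.argmax(xcorr))

range_hat = delay_hat / fs * 3e8  # meters, assuming free-space propagation
```

With a sparse multiband signal, the same correlation peak must be reconstructed from a few narrow bands, which is the estimation problem the project addresses.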

Lotfi Massarweh, TU Delft, Enhancing Future Satellite Navigation with Low Earth Orbiting Constellations 

Back in 2017, the only satellite system in Low Earth Orbit (LEO) with global coverage was the Iridium constellation, primarily used for communications. In recent years, the LEO region has become densely populated with thousands of satellites deployed for broadband services (e.g., SpaceX Starlink, Eutelsat OneWeb) and remote sensing (e.g., Spire Global, Planet Labs). Future mega-constellations are planned to enhance Position, Navigation, and Time (PNT) capabilities, currently enabled by Global Navigation Satellite Systems.  

In this presentation, we explore the emerging role of future LEO-PNT systems in supporting Navigation, Geoscience and Remote Sensing applications. We start with an overview of the latest developments for currently planned constellations such as Xona’s PULSAR, Future Navigation’s Centispace, and ESA’s LEO-PNT demonstrator. Using numerical simulations, we then evaluate the expected performance of user positioning in terms of convergence time, while considering various system architectures. 

This analysis highlights the main benefits, e.g. large geometry variability, alongside the key challenges, e.g. limited coverage. Finally, we discuss a few potential applications in geodesy and remote sensing, including ionosphere monitoring, and water vapor tomography. We emphasize the pivotal role of Machine Learning (ML) in these applications, thus presenting new research opportunities enabled by the integration of LEO-PNT systems. 

Serge Kaplev, TU Delft, Hardly separable hypotheses and their influence on navigational integrity risk 

In hypothesis testing theory the Detection, Identification and Adaptation (DIA) method combines estimation and testing in one solution based on the misclosure vector and its partitioning. 

Even for simple examples, such as single-constellation GNSS positioning with an observation consistency check and outlier detection, interesting cases with complicated geometry can be found. 

In one such case, where almost all satellites lie in one vertical plane and two symmetrical satellites pull the geometry away from singularity, two hypotheses are close to each other, barely separable, and cause a large integrity risk under wrong identification. 

This simple example clearly illustrates the importance of considering all combinations of testing decisions and actual hypotheses when assessing the final uncertainty of the estimation. 

Philip Conroy, Yustisi A. Lumban-Gaol, Simon A.N. van Diepen, Freek J. van Leijen, Ramon F. Hanssen, TU Delft, Monitoring Dutch Peatland Subsidence Using InSAR 

Actively monitoring ground motion is of the highest importance in the Netherlands, a country in which many regions lie below sea level. Water tables have been managed for centuries using a system of dams, dikes and canals through which excess water can be pumped away, to prevent flooding and to reclaim submerged land. However, centuries of active water management have resulted in significant land subsidence, which is becoming a significant threat to the future of the country as sea levels continue to rise. 

This has created the need to monitor land surface motion at large spatial scales with frequent temporal sampling. While InSAR is a promising candidate for such a task, the technique often suffers from drastic losses of signal quality in the spring and summer months when used to produce time series observations of peatlands. This significantly limits the effectiveness of InSAR as a tool to monitor peatland surface dynamics. 

We present the preliminary results of a peatland surface motion analysis using a novel InSAR processing method designed to overcome the issues that have prevented its application over northern peatlands in the past. This work is the first large-scale InSAR analysis of the Dutch Green Heart region, providing land surface motion time series at the parcel scale for a 2000 km² region with sub-weekly sampling over the period January 2015 to October 2023. Our presentation will briefly outline the results, validation efforts and the various challenges involved. 

Jiawei Dun, Ling Chang, Wenkai Feng, Xiaoyu Yi, University of Twente, Updating landslide inventory for pre- and inter-impoundment periods in the Baihetan Reservoir area with multi-temporal InSAR 

The Baihetan Hydropower Station in China is currently impounded, which may induce numerous landslides. Therefore, continuously identifying landslides for pre- and inter-impoundment periods and updating landslide inventory is necessary. 

In this research, we employed a multi-temporal InSAR approach (SBAS), eliminated the atmospheric phase delay using GACOS, identified landslides with 18 ALOS SAR images (2007-2010) and 216 Sentinel-1 SAR images (2014-2022), and finally updated the landslide inventory for the Hulukou to Xiangbiling section within the Baihetan reservoir area.

The results show that 52 landslides were detected before and during impoundment, of which 31 were newly added during impoundment and 22 were in direct contact with water. Most landslides occur on slopes between 30-40°, oriented toward the northeast and northwest, at elevations between 800 and 1200 metres, and are situated in non-hard strata and modulated by fault zones. Landslide occurrence diminishes with increasing distance from the water's edge. 

In addition, the deformation of some landslides, such as the Wulipo and Shikanmen landslides, correlates with reservoir water levels and precipitation trends. All this valuable information enriches the landslide inventory and can be used for landslide prevention and mitigation in the Baihetan reservoir area. 

Graphical Abstract - Jiawei Dun.pdf

Riccardo Riva, TU Delft, Reconciled evidence of a sea-level acceleration along the Dutch coast 

Based on a breakpoint analysis of tide gauge data along the Dutch coast, Steffelbauer et al. (2022, hereon S22) provided the first instrumental evidence of a sea level acceleration from the early 1990s, while Keizer et al. (2023, hereon K23) showed that the acceleration had started already in the 1960s. 

We update the breakpoint study by S22 and reconcile their result with the K23 findings. By using slightly longer timeseries, an additional tide gauge station in the Wadden Sea, and the K23 model for the nodal tide, we find that stations along the west coast and the Wadden Sea behave differently. In particular, the analysis of the western stations reveals a clear breakpoint in the 1970s, consistent with the K23 results, while the inclusion of stations in the Wadden Sea leads to the emergence of a breakpoint in the 1990s, as in S22.

We also perform a systematic analysis of all possible combinations of start and end years for the breakpoint detection, which allows us to estimate that the average pre-1990s sea level acceleration along the west coast has been about 0.04 ± 0.03 mm/yr², which is again consistent with the K23 estimates. 
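
Breakpoint detection on a tide gauge record can be illustrated by a grid search over candidate breakpoint years, fitting a continuous two-piece linear trend to each candidate and keeping the one with the smallest residual sum of squares. This is a generic sketch with synthetic data, not the S22/K23 methodology; the trend rates and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1930, 2021).astype(float)

# Synthetic annual mean sea level (mm): trend change around 1975
sl = 1.5 * (years - years[0]) + np.where(years > 1975,
                                         3.0 * (years - 1975), 0.0)
sl = sl + rng.normal(0, 15.0, years.size)

def fit_breakpoint(t, y, candidates):
    """Grid search: for each candidate year fit a continuous two-piece
    linear trend (offset, base rate, hinge term) and keep the breakpoint
    with the smallest residual sum of squares."""
    best = None
    for tb in candidates:
        A = np.column_stack([np.ones_like(t), t - t[0],
                             np.clip(t - tb, 0, None)])   # hinge at tb
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tb, coef)
    return best[1], best[2]

tb_hat, coef = fit_breakpoint(years, sl, range(1950, 2005))
```

Varying the start and end years of the series, as in the systematic analysis above, amounts to rerunning this search on truncated records and inspecting how the detected breakpoint shifts.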

Lukas Beuster, Clara García-Sánchez, Hugo Ledoux, TU Delft, Throwing Shade: Scalable Solutions for Urban Climate Change Adaptation 

With record-breaking warm temperatures, cooling down cities and reducing heat stress is critical worldwide. To improve outdoor thermal comfort, providing shade in areas used by people is the most important measure. 

We present a scalable method for building and tree shade analysis at the urban scale. Using a high-resolution DSM and aerial imagery to map shadows throughout the year, we compare the role of buildings and trees in providing grey and green shading in public space — from sidewalks to parks and playgrounds. 

We show that buildings, not trees, provide the most shade in Amsterdam, cooling the city and reducing heat stress. Nonetheless, trees provide the most "meaningful" shade. 

With the evidence provided, we hope to add to the knowledge base on climate adaptation and to provide decision-makers at the city level with actionable information to tangibly reduce heat stress in public space across diverse geographies and climates. 

Geethanjali Anjanappa, S.J. Oude Elberink, M.G. Vosselman, University of Twente, Learning From Old Maps: Deep Learning-based Semantic Change Detection for The Netherlands 

Topographic maps, such as the Basisregistratie Grootschalige Topografie (BGT) maps of the Netherlands, provide spatial information about the physical environment. Currently, many Dutch organizations utilize map data for activities like urban planning and water management. 

However, these maps can quickly become outdated due to continuous environmental changes. At present, different agencies manually update BGT maps, resulting in extended processing times.

Recent advancements in deep learning (DL) have led to automated change detection methods that use multi-temporal geospatial data to identify "from-to" changes. These methods, however, require extensive labeled samples, which are not always readily available. While existing digital maps could provide valuable ground-truth data, their potential as reference data remains underexplored. Furthermore, most existing studies using maps focus only on binary change detection in buildings.

The current research focuses on detecting semantic changes using old BGT maps and newly acquired 2D-3D airborne data. We use convolutional neural networks (CNNs) and transformer-based models for multi-modal semantic segmentation of airborne data. These semantic maps are then compared with the old maps to identify and quantify changes. Further, we use a labeled training dataset derived from existing BGT map objects. Therefore, our method leverages the rich information from BGT maps for both semantic segmentation and change detection.

Claudiu Forgaci, Francesco Nattino, TU Delft, City River Spaces, a tool for automated and scalable delineation of urban river spaces 

Accelerated urbanization, climate change and the increasing need for resilience to environmental shocks and stresses have brought urban river spaces, as vital green-blue corridors and central public spaces, to the forefront of urban transformations worldwide. 

Yet, as urban design and planning research tackles the spatial implications of this trend, it faces the challenge of capturing the specificities and complexities of riverside urban areas. 

An essential part of that challenge is how boundaries are drawn in the analysis of urban areas surrounding rivers, as the resulting spatial units of analysis and decision-making can have a considerable impact on the sustainability of urban riverspace transformations. 

To overcome this challenge, we are developing the City River Spaces (CRiSp) open-source software to support a growing interdisciplinary research community concerned with understanding and transforming urban river spaces, and thereby enable new research avenues, such as integrated local spatial analyses and global cross-case analyses. 

We will present CRiSp in its current development stage and discuss use cases with the geospatial research community. CRiSp will undergo beta testing in a cross-disciplinary research community next spring and it is planned to be released in late 2025. 

Carlos Fortuny-Lombraña, TU Delft, Comparative Analysis of Upperbound and Direct Integrity Risk in the DIA Method 

The Detection, Identification, and Adaptation (DIA) estimator framework, crucial for safe positioning in safety-critical applications like autonomous vehicles and civil aviation, combines parameter estimation and statistical hypothesis testing. 

The computation of integrity risk, which is the probability that the position estimator is outside a safety region, is essential for positioning safety analyses.

We explore computationally efficient upperbounds for the DIA integrity risk. These upperbounds significantly reduce computational complexity compared to the direct method, but they can potentially overestimate the actual integrity risk, leading to conservative safety measures.


 We compare this approach to the direct method and analyze its feasibility using, as an example, a specific reference station geometry with poor identifiability of possible faults.

While upperbounds significantly reduce computational complexity, they can overestimate integrity risk under normal conditions (hypothesis $\mathcal{H}_0$) by a factor of 1-10. This offers a conservative integrity risk but necessitates caution in safety-critical scenarios due to potential overestimation. 

Additionally, as the precision of the observables improves, the overestimation may become more pronounced. Conversely, under fault conditions (hypothesis $\mathcal{H}_a$), the difference between upperbounds and the direct method is negligible in most cases. 

Tishya Duggal, Hossein Aghababaei, Ling Chang, University of Twente, Polarimetric SAR for Snow Depth Estimation: A Comparative Analysis of Diverse Radar Modalities 

Estimating snow depth is vital for various purposes such as hydrology, climate modelling, avalanche risk assessment, and winter sports. While remote sensing techniques, particularly Synthetic Aperture Radar (SAR), have made significant advances in estimating snow depth, there remains a lack of comparative assessments of various SAR modalities, including different polarization channels, frequencies, and platforms, for accurate snow depth estimation over complex terrains and diverse earth surfaces.

This study thus employs data from various platforms, including Sentinel-1, SAOCOM, and UAVSAR, to investigate the effects of different polarization configurations (dual and quad polarimetric), different frequencies (C- and L- bands), and different SAR sensor platforms (airborne and spaceborne) on snow depth estimation. 

The study focuses on five sites situated in the mountainous terrains of the USA: Fraser, Cameron Pass, Little Cottonwood Canyon, Basin Summit, and Mores Creek, chosen for their significant snowfall. 

By utilizing NASA’s SnowEx snow depth data as reference and employing a Random Forest regression model, this study predicts snow depth from the SAR images and reports a comparative assessment of SAR-based snow depth estimation performance across polarizations, frequencies, and platforms. Subsequently, the RMSE of these predictions was compared. 
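
The RMSE used for this comparison is straightforward to compute; the sketch below applies the metric to two hypothetical prediction sets against invented reference depths (not SnowEx values).

```python
import numpy as np

def rmse(predicted, reference):
    """Root-mean-square error between predicted and reference snow depths."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Toy comparison: two hypothetical sensors against reference depths (m)
reference = np.array([1.2, 0.8, 1.5, 2.0])
rmse_a = rmse([1.1, 0.9, 1.4, 2.1], reference)   # closer predictions
rmse_b = rmse([0.8, 1.2, 1.0, 2.6], reference)   # coarser predictions
```

Ranking datasets by RMSE in this way, per site and per modality, yields exactly the kind of comparison table summarized in the results below.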

The results indicate varying degrees of accuracy among the different SAR datasets, with UAVSAR, operating as an airborne system at L-band, consistently achieving the best performance across most study areas and exhibiting the lowest RMSE values. It is followed by SAOCOM quad-polarimetric (QP) spaceborne data, which performed notably well. Next in line, dual-polarimetric (DP) spaceborne data from Sentinel-1 (C-band, ∼20 m spatial resolution) and SAOCOM (L-band, ∼50 m spatial resolution) often showed comparable error levels, with similar RMSE values across the research areas. 

Youdong Chen, Ling Chang, Keren Dai, University of Twente, A processing chain for monitoring landslide displacement over low coherence areas 

In this study, we propose an innovative processing chain designed for low coherence areas to monitor landslide displacement. 

This processing chain includes a pre-processing feasibility assessment with the integration of the maximum detectable deformation gradients, vegetation coverage and R-index, and landslide displacement monitoring using distributed scatterer interferometry (DS-InSAR). 

Using DS-InSAR, the coherence of the 85 Sentinel-1 SAR images used (2021-2023) can be dramatically improved. We tested our method on the small-sized Tianxi landslide located in Guangxi province, China, surrounded by vegetation with NDVI values higher than 0.5. The results show that the pre-processing InSAR feasibility assessment can predict the potential distribution pattern of the InSAR measurement points. 

This helps determine whether the Tianxi landslide labeled as low feasibility can be monitored by DS-InSAR, as opposed to small baseline subset interferometry (SBAS-InSAR). The results also show that DS-InSAR provides more comprehensive and detailed monitoring of landslide displacement and identifies five times as many InSAR measurement points compared with SBAS-InSAR. 

Time series analysis of the displacement, combined with rainfall data, indicates that DS-InSAR can detect precursory movements. It is likely that the combined effects of human activity and rainfall contributed to the failure of the Tianxi landslide. 

Sander Oude Elberink, George Vosselman, University of Twente, Quality of ridgelines from laser scanner and Dense Image Matching point clouds 

Ridgelines from gable roofs can be accurately extracted from laser scanner point clouds. For many years, these ridges have been used for point cloud quality assessment by assessing the differences between corresponding ridges in overlapping strips. 

For the Integral Height Model (IHN) of the Netherlands, we investigate the quality of ridgelines from Dense Image Matching (DIM) and AHN laser scanner point clouds. A crucial step in the processing is the planar segmentation, which groups the points on the roof that are used to fit a plane. Next, we analyse the influence of roof changes over time, e.g. new solar panels on the roof. 

We show ridge extraction results and ridge line differences from DIM point clouds and several editions of AHN, and explain what these mean for the quality of ridgeline extraction. Differences between the sets of ridgelines are divided into systematic offsets in the positions of the datasets, changes of individual roofs over time, and the influence of different point cloud characteristics on the location of the extracted ridges. 

Longxiang Xu, Camilo León-Sánchez, TU Delft, Solar irradiance computation in urban areas by means of semantic 3D city models 

To perform solar potential analysis, it is essential to consider the geographical location and its surroundings, i.e. near and far topography, nearby constructions, and vegetation (de Sá et al., 2022). These city objects can be represented by means of semantic 3D city models (3DCM) (Agugiaro et al., 2020), which are datasets that allow for a coherent geometrical and semantic representation of urban features in a well-defined data structure.

Our research focuses on developing and implementing a method that uses 3DCM to calculate the solar irradiance on urban objects retrieved from a CityGML-compliant dataset. The outcomes of this calculation are vital as they are required to determine the solar gains of buildings—a key factor in assessing their energy performance. 

Our analysis involves comparing the computed solar irradiance values against the statistical values set by the Dutch standard NTA8800:2023 (NEN, 2022). This standard specifies the method for assessing the energy performance of buildings in the Netherlands. The statistical values available in the standard are grouped by the orientation and inclination of the boundary surfaces that envelop a building. Since we compute those values in our method, the results from our research and the official values are directly comparable. 
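
The grouping by orientation and inclination reflects the standard incidence-angle geometry for irradiance on a tilted surface. Below is a minimal sketch of that generic geometry, not the implementation of the method described here; the 800 W/m² direct normal irradiance is an arbitrary example value.

```python
import numpy as np

def incidence_cosine(zenith, azimuth_sun, tilt, azimuth_surf):
    """Cosine of the angle between the sun direction and the normal of a
    tilted surface (all angles in degrees); direct irradiance on the
    surface is DNI * max(cos_theta, 0)."""
    z, gs, b, gp = np.radians([zenith, azimuth_sun, tilt, azimuth_surf])
    cos_theta = (np.cos(z) * np.cos(b)
                 + np.sin(z) * np.sin(b) * np.cos(gs - gp))
    return max(float(cos_theta), 0.0)

# South-facing facade (tilt 90°, azimuth 180°), sun at 40° zenith due south
direct = 800.0 * incidence_cosine(40.0, 180.0, 90.0, 180.0)  # W/m2
```

A full 3DCM-based computation additionally handles shadow casting by surrounding objects and the diffuse and reflected irradiance components.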

Yuqing Wang, Wietske S. Brouwer, Freek J. van Leijen, and Ramon F. Hanssen, TU Delft, Constrained recursive parameter estimation for InSAR point scatterers 

The growing availability of SAR data presents a valuable opportunity for real-time deformation monitoring. However, efficient data utilization remains challenging. 

Our study introduces a mathematical framework for the time series analysis of InSAR arcs between point scatterers. This approach is grounded in the concept of recursive least squares and employs the amplitude data as a quality proxy for parameter estimation. 

The method updates the existing dataset when a new observation is available without the need to store previous observations, utilizing the wrapped phase. Additionally, we incorporate a constraint based on correlated acceleration into the recursive estimation to constrain the smoothness of the displacement signal.
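
The recursive least-squares update at the heart of such a scheme can be sketched as follows. This minimal version omits the wrapped-phase handling, amplitude-based quality weighting, and correlated-acceleration constraint of the actual method; the offset-plus-velocity example data are synthetic.

```python
import numpy as np

def rls_update(x, P, a, y, sigma2=1.0):
    """One recursive least-squares step: update the estimate x and its
    covariance P with a new scalar observation y = a @ x + noise, without
    storing earlier observations."""
    Pa = P @ a
    k = Pa / (a @ Pa + sigma2)        # gain vector
    x_new = x + k * (y - a @ x)       # innovation-weighted update
    P_new = P - np.outer(k, Pa)       # covariance downdate
    return x_new, P_new

rng = np.random.default_rng(2)
t = np.arange(20, dtype=float)
truth = np.array([3.0, 0.5])                    # offset, velocity
y = truth[0] + truth[1] * t + rng.normal(0, 0.1, t.size)

x, P = np.zeros(2), np.eye(2) * 1e6             # diffuse prior
for ti, yi in zip(t, y):
    x, P = rls_update(x, P, np.array([1.0, ti]), yi)

# Batch least-squares solution for comparison
A = np.column_stack([np.ones_like(t), t])
x_batch, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With a diffuse prior the recursive and batch estimates coincide up to numerical precision, which is the equivalence the abstract's comparison builds on; a smoothness constraint would enter as additional pseudo-observations in the same update loop.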

 Figure 1 demonstrates both batch and recursive solutions for a specific arc, highlighting that a linear model in the batch solution fails to detect the anomalous signal, whereas the recursive solution effectively captures non-linear displacements. 

Our results indicate that combining recursive least squares with smoothness constraints holds significant potential for parameter estimation of InSAR point scatterers. Furthermore, the recursive estimation produces results comparable to the batch solution, offering advantages in modelling non-linear displacements and computational efficiency. 

Ou Ku, Wietske Brouwer, Freek van Leijen, Netherlands eScience Center, Scale-up of Python-based InSAR processing: challenges and best practices 

Recent advancements in open-source Python libraries like Xarray and Dask have opened new possibilities for handling larger-than-memory datasets. 

In this talk, we will share an example case of scaling up a Python function designed for processing InSAR data, based on our recent experiences. Initially, this function was created to analyze small subsets of raster interferogram stacks, and we aimed to extend its capabilities to handle much larger datasets. 

Throughout this process, we faced and overcame challenges when leveraging the functionalities provided by Xarray and Dask. We distilled our findings into a set of best practices, which we will discuss and demonstrate using our function as a practical example. We believe our insights will inspire other researchers facing similar challenges. 

Simon van Diepen, P. Conroy, F.J. van Leijen, R.F. Hanssen, TU Delft, A Dynamic Digital Elevation Model of the Dutch Peatlands 


Subsidence is a widespread issue throughout the peatlands of the Dutch Green Heart. As the area is situated mostly below sea level, both elevation and elevation change are extremely important parameters. Nevertheless, few attempts have been made at joint estimation of these parameters. 

Here we present the first Dynamic Digital Elevation Model (D-DEM) of the Green Heart, where elevation per parcel is provided as a function of time rather than a static elevation. 

The D-DEM integrates InSAR observations since 2015 with five national Digital Elevation Models (DEMs) acquired since the 1960s: four using airborne laser scanning, the fifth using surface leveling. 

However, datum connection between the products is necessary before we can estimate our elevation model. 

We employ buildings assumed to be rigid, together with benchmarks from the NAP reference frame, to correct for biases between the DEMs. Integrated Geodetic Reference Stations enable us to connect InSAR and GNSS with the combined DEM product. 
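As a toy illustration of such a datum connection, a single vertical offset between two DEMs can be estimated from heights over surfaces assumed stable. This is a deliberate simplification of the benchmark-based correction described above; the stable mask and the single-offset assumption are ours, not the actual processing chain.

```python
import numpy as np

def datum_bias(dem_a, dem_b, stable_mask):
    """Median height difference over stable surfaces, e.g. rigid buildings.

    dem_a, dem_b : 2D height grids on the same raster
    stable_mask  : boolean grid marking cells assumed not to subside
    """
    diff = dem_a[stable_mask] - dem_b[stable_mask]
    return float(np.nanmedian(diff))

# Toy usage: dem_b sits 0.25 m below dem_a's datum; subsiding cells are masked out.
rng = np.random.default_rng(0)
dem_a = rng.normal(0.0, 2.0, (50, 50))
subsidence = np.zeros((50, 50))
subsidence[25:, :] = 0.10                 # lower half subsided by 10 cm
dem_b = dem_a - 0.25 - subsidence         # later epoch, different datum
stable = np.zeros((50, 50), dtype=bool)
stable[:25, :] = True                     # stable benchmarks in the upper half
bias = datum_bias(dem_a, dem_b, stable)
```

Using the median rather than the mean makes the offset estimate robust against a few benchmarks that are not actually stable.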

We estimate a median irreversible subsidence rate of 3.1 mm/y for the entire Green Heart, with local maxima exceeding 10 mm/y. The D-DEM thus allows us to quantify irreversible subsidence in the entire Green Heart for the first time. 

Jakob Ahl, Freek J. Van Leijen, Ramon F. Hanssen, TU Denmark & TU Delft, Network formation strategies in PSInSAR 

Fig. 1 A) Seed point (orange) and partition of the image into quadrants. B) the four lowest cost arcs are selected, black lines. C) The algorithm continues until a sparse network covers the image. D) The network is densified with low-cost arcs. E) Linear velocity estimates. 


PSInSAR (Persistent Scatterer Interferometric SAR) is a powerful tool to estimate surface deformation with high spatial resolution. Since its inception two decades ago, many variants have appeared following different approaches; however, most are structured similarly, relying on four main steps: initial pixel selection, network formation, arc estimation, and network integration.

Choices made in the first two steps are critical deciding factors in the success rate of the later steps. We will show that different choices made in these first two steps result in different deformation estimates, using TU Delft's in-house PSInSAR software, DePSI. 

As an alternative approach, we propose the use of an empirical cost function, based on the correlation between noise sources and known parameters, to produce networks more likely to result in correct arc estimation than conventional network formation methods.

The approach will be demonstrated and compared with conventional techniques based on a TerraSAR-X dataset of Copenhagen.
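The quadrant-based sparse network formation of Fig. 1 can be sketched as follows. This is a simplified illustration: the arc costs would come from the empirical cost function, which we omit here, and the function handles only a single seed point.

```python
import numpy as np

def quadrant_arcs(seed_xy, cand_xy, cost):
    """From one seed point, keep the lowest-cost arc in each of the four quadrants.

    seed_xy : (2,) coordinates of the seed point
    cand_xy : (N, 2) coordinates of candidate points
    cost    : (N,) cost of the arc seed -> candidate (lower is better)
    Returns indices into cand_xy of the selected arcs.
    """
    dx = cand_xy[:, 0] - seed_xy[0]
    dy = cand_xy[:, 1] - seed_xy[1]
    quadrant = (dx >= 0).astype(int) * 2 + (dy >= 0).astype(int)  # labels 0..3
    selected = []
    for q in range(4):
        members = np.flatnonzero(quadrant == q)
        if members.size:                          # a quadrant may be empty
            selected.append(int(members[cost[members].argmin()]))
    return selected

# Toy usage: two candidates per quadrant; the cheaper arc is kept in each.
seed = np.array([0.0, 0.0])
cands = np.array([[1, 1], [2, 2], [-1, 1], [-2, 2],
                  [1, -1], [2, -2], [-1, -1], [-2, -2]], dtype=float)
costs = np.array([0.3, 0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.4])
arcs = quadrant_arcs(seed, cands, costs)
```

Repeating this for each newly connected point, as in panels B and C of Fig. 1, grows a sparse network that covers the image before densification.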

Kwasi Appiah-Gyimah Ofori-Karikari, Michael Marshall, Andrew Nelson, Mariana Belgiu, Monica Pepe, University of Twente, Integrating Environmental Data with Satellite Imagery for Large-Scale Crop Grain Nutrient Mapping 

Food Composition Tables (FCTs), which are widely used in food security analysis, offer country-level snapshots of food nutritional quality. 

FCTs are highly uncertain because crop nutrient concentrations vary over space and time. Little progress has been made to exploit satellite data for assessing the nutritional status of crop yield. 

Sentinel-1 and Sentinel-2 have emerged as invaluable data sources for agricultural monitoring from space, because they track crop growth and development at high (10-20m) spatial resolution through time. 


This paper integrates Sentinel-1, Sentinel-2, and other geospatial information across Ethiopia in 2018 into a Random Forest to predict nutrient concentrations in grain yield for important global staples. The model was trained and tested with data from field measurements collected over the same period. 

Results were promising: 

Our findings underscore the effectiveness of Sentinel-2 narrowband vegetation indices and soil properties for nutrient analysis and improving the reliability of FCTs. 

Adibah Nurul Yunisya, Peter van Oosterom, Edward Verbree, TU Delft, Wayfinding in multi-environment: Exploring Spatial Cognition Through Point Cloud VR Simulation 

Wayfinding is an intricate human behavior that requires spatial cognition to perform optimally. 

Wayfinding involves a variety of variables, ranging from a person's abilities and spatial interpretation to the spatial conditions themselves. 

Up until now, studies of wayfinding have mainly concerned variables in a single environment, without acknowledging possible correlations of variables between environments. Furthermore, wayfinding simulation studies mainly use digital modeling for the simulation environment, causing a lack of real-life ambiance. 

Therefore, this ongoing study aims to address the potential correlations of variables between environments. The results will be gathered through a virtual reality simulation with a point-cloud-scanned building model. The progress of this study shows the promising potential of combining VR and point cloud technology as a wayfinding study method, which is expected to provide a more accurate environment representation for the study of wayfinding. 

Amira Zaki, Ling Chang, Irene Manzella, Mark van der Meijde, Serkan Girgin, Hakan Tanyas, Islam Fadel, University of Twente, Automated Python workflow for generating Sentinel-1 PSI and SBAS interferometric stacks using SNAP on Geospatial Computing Platform 

Detecting and monitoring surface deformation using radar satellite data is vital in geohazard assessment. Among the available radar satellites, Sentinel-1 has provided the scientific community with unprecedented spatial and temporal resolution. 

However, Sentinel-1 data processing is complicated and poses computational challenges, limiting its use. Software tools have been developed to help, each with its own limitations. 

SNAP-ESA stands out for its user-friendly interface and stable performance in Interferometric Synthetic Aperture Radar (InSAR) data processing. However, SNAP-ESA struggles with generating flexible InSAR time series interferometric stacks for Persistent Scatterer Interferometry (PSI) and Small Baseline Subset (SBAS) techniques and faces computational challenges over large areas. 

To fill this gap, we introduce an automated Python workflow, SNAPWF, built on SNAP-ESA, that enables efficient generation of PSI and SBAS InSAR interferometric time series stacks using flexible network graphs. The new workflow has been implemented on a dedicated geospatial computing platform, enabling efficient performance over large areas. 

Felix Dahle, TU Delft, Historical Structure-from-Motion of the Antarctic Peninsula 

This presentation explores the potential of the TMA archive, a vast collection of historical aerial images of Antarctica from 1950 to 2000, to construct 3D models using Structure from Motion. 

SfM is a photogrammetric technique that allows for creating three-dimensional structures from two-dimensional image sequences. 

While this approach has been successfully applied to individual glaciers in our archive, our project aims to automate this process on a significantly larger scale, encompassing the Antarctic Peninsula. This will enable us to compare these historical 3D models with current elevation data, offering a unique perspective on the long-term effects of climate change on the glaciers of Antarctica. 

However, utilizing these historical images presents several challenges. Key image parameters, both external and internal, are often unavailable. Additionally, the images are black and white and often capture monotonous terrain, resulting in low contrast and difficult scene interpretation. 

To overcome these issues, we are adapting existing SfM methods and making use of modern machine-learning-based algorithms to accommodate these specific constraints. 

Gina Stavropoulou, TU Delft, Detection and filtering of glass roofs for a more efficient 3DBAG reconstruction pipeline 

The 3DBAG is an open dataset containing 3D building models from across the Netherlands, which are reconstructed based on two open datasets: the BAG (building footprints) and the AHN (aerial lidar). 

The occurrence of glass surfaces in the lidar data, most commonly observed in greenhouses, poses a problem for the pipeline. Due to the transparency of the glass, the point clouds of such structures present both ground and roof points within the footprint. The reconstruction of such cases can generate non-existing, overcomplicated structures and takes significantly longer than for typical buildings. 

To tackle these cases, we have developed a simple, effective solution to identify glass roofs: we divide the points into “ground” and “non-ground” and then study their distribution in 2D. Since different regions of the footprint might present different distributions of points, we divide the space into a grid and classify each grid cell as “normal”, where most of the points are either “ground” or “non-ground”, or as “mixed”, where the cell under examination presents a mixture of both classes. 

The cell classes together with the footprint of the building can then be used to calculate various metrics which help us identify glass roofs. 
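A minimal sketch of the cell classification step described above; the cell size and the minority-fraction threshold below are illustrative choices of ours, not the values used in the 3DBAG pipeline.

```python
import numpy as np

def classify_cells(xy, is_ground, cell_size=2.0, mixed_frac=0.3):
    """Label each occupied grid cell 'normal' or 'mixed'.

    xy        : (N, 2) point coordinates inside one building footprint
    is_ground : (N,) booleans, True for points classified as "ground"
    Returns {(col, row): 'normal' | 'mixed'}.
    """
    cells = np.floor((xy - xy.min(axis=0)) / cell_size).astype(int)
    out = {}
    for cell in {tuple(c) for c in cells}:
        in_cell = np.all(cells == cell, axis=1)
        ground_frac = is_ground[in_cell].mean()
        minority = min(ground_frac, 1.0 - ground_frac)   # share of the rarer class
        out[cell] = "mixed" if minority > mixed_frac else "normal"
    return out

# Toy usage: left half of the footprint is solid roof, right half is glass
# (roughly half of its returns reach the ground through the glass).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 8, (400, 2))
is_ground = (xy[:, 0] > 4) & (rng.random(400) < 0.5)
labels = classify_cells(xy, is_ground)
```

Cells under solid roof come out “normal” (almost all points in one class), while cells under glass come out “mixed”, flagging the footprint for special handling.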

Jeroen Kappé, Geodelta, Deep Learning-based Segmentation of Cracks within a Photogrammetry Solution 

The city of Amsterdam faces the challenge of monitoring and assessing 200 kilometers of historic quay walls, much of which is deemed to be in poor condition. A key monitoring technique used is photogrammetry, which supports deformation analysis. The fundamental data source forming the basis of this deformation analysis is a collection of overlapping images acquired of the masonry quay walls. Solely focusing on deformations overlooks a potential wealth of information that could be retrieved from this imagery, such as the existence of cracks in the quay walls, a key sign of potential deformation of the structure.

As manual visual inspection of this imagery is very time-consuming, this work proposes a methodology based on fully-supervised deep learning-based segmentation techniques with the goal of detecting and localizing cracks in the masonry quay walls. For this purpose, two neural networks are trained, one for the segmentation of quay walls in images, and one for the segmentation of cracks.

The neural network architectures considered in this work are DeepLabV3+, FPN, MANet and LinkNet, together with different encoders and loss functions. For quay wall segmentation, we adopt transfer learning on a network trained on masonry walls and fine-tune it for quay walls specifically. Here, DeepLabV3+ with ResNeXt-50 was found to be most effective, achieving an F1-score of 96.3 % on the test set. For crack segmentation, FPN with ResNeSt-50 performed best, resulting in a test set F1-score of 78.8 %.

The inference of the crack network is done with a multi-level scheme to detect cracks at different image scales and increase output confidence.

The inherent photogrammetric properties of the imagery have proven to be vital for further post-processing steps, like aggregating overlapping predictions, resulting in more prediction confidence.

Photogrammetry also enables converting pixel-wise predictions to crack length and crack width in the units of meters and millimeters respectively. The methodology additionally proposes photogrammetric image processing methods to transform neural network predictions to a 3D representation and a true-to-scale orthographic 2D image.
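The pixel-to-metric conversion can be illustrated in a much-simplified form. Assuming a fronto-parallel view of the wall, the scale follows from the camera distance and the focal length in pixels; the full method uses proper photogrammetric geometry, and the function below is only our hypothetical illustration.

```python
def crack_size(pixel_length, pixel_width, distance_m, focal_px):
    """Convert pixel measurements to metres / millimetres.

    For a wall parallel to the image plane, one pixel covers
    distance_m / focal_px metres on the wall (the ground sampling distance).
    """
    gsd_m = distance_m / focal_px           # metres per pixel
    length_m = pixel_length * gsd_m         # crack length in metres
    width_mm = pixel_width * gsd_m * 1000   # crack width in millimetres
    return length_m, width_mm

# Toy usage: camera 5 m from the quay wall, focal length 5000 px
# -> 1 mm per pixel; a 1500 px long, 2 px wide crack is 1.5 m by 2 mm.
length_m, width_mm = crack_size(1500, 2, distance_m=5.0, focal_px=5000)
```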

https://repository.tudelft.nl/islandora/object/uuid%3A1c25e888-9d60-4833-9aae-1e843061b92a 

Iván Puente-Luna, Joaquín Martínez-Sánchez, Xavier Núñez-Nieto, Spanish Naval Academy, Application of Underwater Photogrammetry in Shallow Waters Using Unmanned Surface Vehicles (USVs) 

The importance of surveillance and reconnaissance (SR) tasks in any military operation has been confirmed over time. In the field of amphibious operations, landing operations from afloat units are common and can provide crucial support to operations that the Army is currently developing or planning to develop in the near future. 

In this context, the use of seabed mapping systems in coastal areas would not only allow for faster action but also greater economy of resources and security.

In this work, a low-cost Unmanned Surface Vehicle (USV) is used, integrating a GoPro camera and a positioning system. The goal is to create scaled underwater photogrammetric models that provide accurate three-dimensional information and orthophotos. The methodology will be tested in two different case studies. Based on the results obtained, the possibilities and limitations of this technique will be studied.


Guang Zhai, Thibault Candela, Kay Koster, Jan Diederik van Wees, TNO, InSAR space geodesy for land subsidence 

InSAR is an advanced satellite-based geodetic technique that has become one of the most important tools for measuring Earth surface deformation, with a wide range of scientific and engineering applications. High-quality deformation information plays a crucial role in energy transition and climate adaptation activities, ranging from energy storage, CO2 sequestration, mining, and groundwater resources management to land subsidence and hazard mitigation. 

Here, we present: 

Robin Claesen, Robert Voute, Roderik Lindenbergh and Felix Dahle, TU Delft, Global Digital Elevation Models from Satellite Images 

This study investigates the application of advanced photogrammetry techniques to improve the accuracy of Digital Elevation Models (DEMs) derived from satellite imagery, focusing on the integration and optimization of the photogrammetry pipeline for diverse satellite sources. 

Recent advancements in photogrammetry and the increasing availability of satellite imagery have broadened the scope for accurate earth surface modeling. By adapting a traditional photogrammetry pipeline to incorporate cutting-edge computer vision algorithms, this research aims to refine the generation of DEMs, which are crucial for environmental monitoring, urban planning, and disaster management.

The methodology centers on modifying existing pipelines to handle satellite images by incorporating feature detection and matching innovations like the DISK algorithm and LightGlue matching, alongside adjustments to camera models and projection matrices suited to satellite characteristics. The thesis evaluates the performance of these adaptations through extensive analysis using satellite imagery of varying resolutions and environmental conditions.

The results demonstrate significant improvements in the fidelity and accuracy of the generated DEMs, validated against ground truth data from Lidar measurements. The adapted pipeline proves capable of handling multi-source satellite images and delivering high-quality terrain models essential for geospatial analysis.

This work not only bridges a gap between remote sensing and computer vision but also proposes a framework for future research in the optimization of DEM generation from satellite imagery, potentially transforming practices in geospatial analysis and supporting the advancement of environmental and urban studies.

Wen Zhou, Claudio Persello, Alfred Stein, University of Twente, Hierarchical building use classification from multiple modalities with a multi-label multimodal transformer network 

Building use information is important for urban planning, city digital twins, and informed policy formulation. Prior research has predominantly focused on mapping building use in broad categories, offering general insight into their actual use. 

Our study investigates the extraction of hierarchical building categories, encompassing both broad and detailed classifications while accounting for mixed-use. To achieve this, we explore the fusion of building function information from satellite images, digital surface models (DSM), street view images, and point of interest (POI) data. 

We propose a novel multi-label multimodal transformer-based feature fusion network, which is capable of simultaneously predicting four broad categories and 13 detailed categories. 

Experimental results demonstrate the efficacy of our method, as it maps most of the building use categories, with weighted average F1 scores of 91% for the four broad categories and 77% for the 13 detailed categories. 

Our experiments underscore the critical role of satellite images in building use classification, with the inclusion of DSM data and POI significantly enhancing the classification accuracy. By considering detailed use categories and accounting for mixed-use, our method provides more detailed insights into land use patterns, thereby contributing to urban planning and management. 

Bingquan Li; Ling Chang, China University of Geosciences, Wuhan; University of Twente, Rapidly and Automatically Detecting Landslides by Integrating GPU-Assisted InSAR, InSAR Phase-gradient Stacking and an Improved YOLOv8 Network 

Satellite-based interferometric synthetic aperture radar (InSAR) has shown its potential in landslide detection. 

Yet, to detect slow-moving landslides in a rapid and automatic manner, on a large scale, is challenging, especially over mountainous regions. 

To take up this challenge, this study designs and demonstrates a workflow that integrates GPU (Graphics Processing Unit)-assisted computation, InSAR phase-gradient stacking, and an improved YOLOv8 (iYOLOv8), namely GIINWF, to facilitate a rapid and automatic detection of slow-moving landslides. 

GIINWF first employs GPU-assisted interferometry and the phase-gradient stacking method to rapidly obtain phase gradients in the range and azimuth directions, and then utilizes an iYOLOv8 network to automatically and rapidly detect landslides. 

We tested GIINWF using Sentinel-1 SAR images between 2021 and 2022 in both ascending and descending orbital directions, over a landslide-prone area close to the city of Chengdu, China. 

The results show that we detected 298 slow-moving landslides that were validated by ground truth data. 

We conclude that our GIINWF is suited for rapid and automatic landslide detection, which opens a new perspective for its application in large-scale regions. 

Valentina Maoret, Thibault Candela, Ylona van Dinther, Kay Koster, Pietro Teatini, Jan Diederik van Wees, Claudia Zoccarato, Frans Aben, and Guang Zhai, Utrecht University, TNO, Land subsidence induced by urbanization: towards building damage predictions 

Land subsidence induced by human activities is a well-known issue. In urban areas this inflicts damage to buildings and infrastructure, leading to high costs and hazardous situations. 

A complicating factor in urban areas is that the built environment itself also influences the subsidence processes. 

To address these challenges, we intend to fill two major knowledge gaps. The final objective is to disentangle the relative subsidence contribution of the presence of buildings in one selected urban area in the Netherlands.

The first challenge in predicting subsidence in urban areas is the relatively small spatial-scale of the subsidence processes. This requires the spatial downscaling of existing modelling approaches, since damages to buildings are driven by small-scale spatial subsidence fluctuations. 

The second one consists of assessing the effect of urbanization itself on land subsidence. Our current subsidence models disregard the presence of the built environment and thus its potential additional effect. 

We plan to combine multiple data sources (building locations/weights/years of construction, InSAR, LiDAR, and Cone Penetration Tests) with physics-based and ML-based models. 

Lina Hagenah, TU Delft, Urban Tree Classification in Delft, Netherlands 

Clustering urban trees based on their characteristics provides additional information on their ecological, aesthetic, and environmental impact. 

This research explores an automatic tree detection method from airborne LiDAR (AHN4) data using Random Forest classification, achieving an accuracy of 85% to 90%. 

Individual trees were identified using Jinhu Wang's point-based individual tree delineation method for 3D LiDAR point cloud data. 

In Delft, a manually obtained municipal tree inventory was used for validation; the method detected 70% of recorded trees with a mean location difference of 0.71 m. 

An additional 460 trees were identified, creating an inventory that includes private trees. Geometric and reflectance features were extracted, and high-resolution spectral images from SuperView Neo-1 across three seasons were used to gather spectral features, resulting in a total of 50 features, including tree height, shape, and color. 

This expanded inventory enables clustering based on geometric, reflectance, and spectral similarities using a K-means algorithm. 
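The clustering step can be sketched with a plain k-means (Lloyd's algorithm) on standardized feature rows; the two synthetic "species" below are stand-ins for the real 50-feature tree inventory.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Cluster rows of X into k groups (plain Lloyd's algorithm).

    Features are standardized first so that e.g. tree height in metres and
    reflectance values contribute on a comparable scale.
    """
    rng = np.random.default_rng(seed)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    centers = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)            # nearest center per tree
        new_centers = np.array([Z[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):    # converged
            break
        centers = new_centers
    return labels

# Toy usage: two synthetic tree groups differing in height and crown width.
rng = np.random.default_rng(1)
tall = rng.normal([20.0, 0.8], 0.5, (30, 2))     # tall, narrow-crowned trees
small = rng.normal([6.0, 2.5], 0.5, (30, 2))     # small, broad-crowned trees
labels = kmeans(np.vstack([tall, small]), k=2)
```

Standardizing before clustering matters: without it, a feature measured in large units (tree height) would dominate the Euclidean distances.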

While the results demonstrate successful tree detection and grouping based on similar features, limitations include error propagation and fitting parameters to heterogeneous tree data. 

Grouping trees by these features can aid future research on environmental impact assessment and monitoring temporal changes. 

Konstantin Maslov, Thomas Schellenberger, Claudio Persello, Alfred Stein, University of Twente, Towards global glacier mapping with deep learning and open Earth observation data 

Accurate global glacier mapping is important for understanding climate change impacts, yet automated methods at the global scale remain underdeveloped. 

This study addresses this gap by introducing GlaViTU, a hybrid convolutional-transformer deep learning model, and several strategies for multitemporal global-scale glacier mapping using open-access satellite imagery. We use a tile-based dataset covering 9% of glaciers worldwide and an independent acquisition dataset for separately testing generalisation across different sensors and imaging conditions. 

Our model achieves an IoU > 0.85 on previously unobserved images, decreasing to above 0.75 in debris-rich areas and exceeding 0.90 in clean ice regions. GlaViTU's performance matches human expert delineations in area and distance deviations as compared to the figures reported in the literature, making it an objective tool suitable for monitoring decadal glacier changes. 
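For reference, the IoU (intersection over union) quoted above is the standard overlap measure on binary masks, here sketched for glacier/non-glacier maps:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                   # both masks empty: perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy usage: two 2x2 masks sharing one of three active pixels
# -> intersection 1, union 3.
score = iou([[1, 1], [0, 0]], [[1, 0], [1, 0]])
```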

Incorporating SAR data, including backscatter and interferometric coherence, along with optical and elevation data consistently enhances accuracy, while adding thermal data, contrary to expectations, degrades the overall performance. 

We also derive and evaluate predictive confidence estimates, calibrating them to ensure reliable and more interpretable outputs. 

This work advances the automation of global glacier mapping, providing a foundation for consistent, long-term monitoring of glacier changes crucial for climate change studies. 

Weiqin Jiao, University of Twente, Deep-learning based object outline extraction from airborne sensor data 

Polygonal building outline extraction has been a research focus in recent years. Most existing methods have addressed this challenging task by decomposing it into several subtasks and employing carefully designed architectures. Despite their accuracy, such pipelines often introduce inefficiencies during training and inference. 

This paper presents an end-to-end framework, denoted as PolyR-CNN, which offers an efficient and fully integrated approach to predict vectorized building polygons and bounding boxes directly from remotely sensed images. Notably, PolyR-CNN leverages solely the features of the Region of Interest (RoI) for the prediction, thereby mitigating the necessity for complex designs. 

Furthermore, we propose a novel scheme with PolyR-CNN to extract detailed outline information from polygon vertex coordinates, termed vertex proposal feature, to guide the RoI features to predict more regular buildings. 

Comprehensive experiments conducted on the CrowdAI dataset show that PolyR-CNN achieves competitive accuracy compared to state-of-the-art methods while significantly improving computational efficiency: it achieves 79.2 Average Precision (AP), a 15.9 AP gain, while running 2.5 times faster and being four times lighter than the well-established end-to-end method PolyWorld. With the backbone replaced by a simple ResNet-50, PolyR-CNN maintains 71.1 AP while running four times faster than PolyWorld. 

Tingxuan Jiang, Harald van der Werff, Frank van Ruitenbeek, Mark van der Meijde, ITC, University of Twente, Effect of environmental factors on hyperspectral measurement and subsequent mineral classification 

This presentation explores the effects of environmental factors—specifically illumination zenith, moisture content, and shadow conditions—on the spectral variation and robustness of hyperspectral mineral classification. The research assesses how varying conditions influence the spectral measurements and classification processes used in geological remote sensing.


Key findings demonstrate that averaged endmembers enhance classification robustness across multi-temporal images more effectively than endmembers extracted from individual images. Additionally, while spectral brightness and noise levels are affected by changing illumination angles, the center position of spectral features remains relatively stable, aiding robust classification. The study also reveals that moisture significantly alters spectral absorption features and brightness, but its effect on the center position of spectral features is the smallest. Finally, shadows and moisture predominantly reduce reflectance albedo and weaken spectral features, with shadow conditions showing the most significant impact on classification robustness.

Overall, the thesis underlines the importance of considering environmental variability to improve the reliability of hyperspectral data interpretation for mineral mapping. 

Menno-Jan Kraak, Universiteit Twente, Challenges faced when developing the ITC atlas for both paper and online 

The atlas narrates the activities of ITC, University of Twente. It resulted in a hybrid atlas: an atlas published in multiple media, digitally as an online edition and in analog form as a printed edition. 

The challenges faced were data-, technical-, and design-related. The data wrangling process took more time than anticipated, mainly due to dependencies on different sources, incompleteness, and inconsistencies. Additionally, the EU General Data Protection Regulation (GDPR) had an influence on the workflow, since all student and staff data had to be anonymized. Technical issues were mainly related to the web mapping implementation. 

The components library offered us considerable flexibility, but creating components for all our creative ideas proved impossible within the project's time budget. Design issues were mainly related to the (im)possibilities of the print and web environments. For example, certain map types work well online but must be adapted for print to avoid, for instance, visual clutter. In some cases it might have been more effective to come up with new visualizations, instead of adapting visualizations originally designed for one environment to the other (e.g. making a map intended for the print atlas interactive). See https://atlas.itc.utwente.nl