Research

Machine Learning in Mapmaking

Many of the problems in making digital maps lie in converting or translating data from one form to another, sometimes starting from a big data source. For example, satellite imagery can be translated into a more schematic form such as a street view. In that case, a higher-dimensional or richer data source must be condensed into a simpler one. What if the problem is reversed: is it possible to move from a lower-dimensional or lower-quality source to a richer one?

Image processing provides many examples of adding resolution to images, and machine learning offers several techniques for translating lower-resolution images into higher-resolution ones. Recent work using Generative Adversarial Networks (GANs) is one example of using ML to add detail to low-resolution imagery.
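To make "adding resolution" concrete, the simplest classical upsampler just repeats pixels; learned approaches such as GAN super-resolution instead try to infer plausible detail. A minimal sketch in Python with NumPy (the function name and toy image are ours, purely for illustration):

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbor upsampling: repeat each pixel `factor` times along both axes."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 2x2 "image" upscaled by 2 becomes 4x4, with each pixel duplicated into a 2x2 block.
low = np.array([[0.0, 1.0],
                [1.0, 0.0]])
high = upscale_nearest(low, 2)
print(high.shape)  # (4, 4)
```

No new information is created here, which is exactly the gap a learned super-resolution model tries to fill.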

Many areas of machine learning are driven by the presence of huge amounts of data and the need to quantify and analyze that data for useful (to humans) information. With the explosion of satellite data, machine learning naturally comes into play as a useful analysis tool.

We are exploring some of the opportunities and questions that go along with the abundance of imagery being produced, including:

  1. Visual detection of objects from satellite data

  2. Automated translation and conversion of GIS and satellite data

  3. Augmentation of training data for machine learning algorithms

  4. Dimensionality reduction of high-dimensional data sets (e.g., k-means, PCA)
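As an illustration of the last item, PCA can be implemented in a few lines via the SVD of a centered data matrix; this sketch (our own, using NumPy and random toy data) projects a data set onto its top principal components:

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X onto the top n_components principal directions via SVD."""
    X_centered = X - X.mean(axis=0)
    # Right singular vectors (rows of Vt) are the principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 samples, 10 dimensions
Z = pca(X, 3)                    # reduced to 3 dimensions
print(Z.shape)  # (100, 3)
```

For satellite data the rows would be pixels or image patches rather than random samples, but the mechanics are the same.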


March 22, 2021


Here's an update on one of the machine learning projects we've been working on.

One of the tasks in producing our navigation apps is to create base maps or tiles for offline use. Presently, when the apps are distributed, the base offline map is simply embedded in the app's data. Creation of that geo-referenced base map currently uses GIS tools to create the hill shaded map with contours.

Building on that approach, and looking to improve the appearance of the base map, we've been exploring various blends of hill shade and satellite imagery. Starting with these two images, what blend produces the best base map?

Here's the upper right corner of the two images blended with a darken style:

The results are more informative than a straight hill shade but, well, it is a little dark (no surprise).
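The darken style is simply a per-pixel minimum of the two layers, which explains why the result skews dark. A sketch in Python with NumPy (the small sample arrays are made up for illustration, with pixel values normalized to [0, 1]):

```python
import numpy as np

def darken_blend(base, top):
    """Darken blend: per-pixel minimum of the two layers (values in [0, 1])."""
    return np.minimum(base, top)

# Toy 2x2 grayscale patches standing in for the hill shade and satellite layers.
hillshade = np.array([[0.2, 0.9],
                      [0.6, 0.4]])
satellite = np.array([[0.5, 0.3],
                      [0.7, 0.1]])
blended = darken_blend(hillshade, satellite)
print(blended)  # [[0.2 0.3]
                #  [0.6 0.1]]
```

Because each output pixel can only be as bright as the darker of its two inputs, the blend is never lighter than either source.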

Here's the result from blending the same two images using an overlay blend:

An improvement: the more subtle shading makes for a better base map without overwhelming any information layered on top of it.
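The standard overlay blend explains the subtler result: it darkens dark regions and lightens light ones instead of taking a one-sided minimum. A sketch with NumPy, again assuming values normalized to [0, 1]:

```python
import numpy as np

def overlay_blend(base, top):
    """Overlay blend: multiply where the base is dark, screen where it is light."""
    return np.where(base < 0.5,
                    2.0 * base * top,                      # darken the darks
                    1.0 - 2.0 * (1.0 - base) * (1.0 - top))  # lighten the lights

# A dark and a light base pixel, blended with a mid-gray top layer.
result = overlay_blend(np.array([0.25, 0.75]), np.array([0.5, 0.5]))
print(result)  # [0.25 0.75]
```

With a mid-gray top layer the base passes through unchanged, which is why overlay preserves the hill shade's contrast rather than crushing it the way darken does.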

Finally, let's unleash some machine learning to see what probability distributions can be inferred by training a model on pairs of hill shade and satellite images:

The results offer a nice middle ground between darken and overlay, and they show the power of generative models to discover higher-dimensional relationships between image properties. By asking the GAN to fuse or blend the two images such that the cycle from the original images to the blend and back to the originals is consistent, the model discovers a suitable translation.
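The cycle-consistency idea can be written down as a loss term: a generator G maps originals to the blend, a second generator F maps back, and the round trip should reconstruct the input. A minimal sketch (the L1 formulation and the toy stand-in generators are our own illustration, not the post's actual model):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

# Toy stand-ins for the two generators; perfect inverses give zero loss.
G = lambda a: a * 2.0
F = lambda a: a / 2.0
x = np.array([0.1, 0.4])
y = np.array([0.8, 0.2])
loss = cycle_consistency_loss(x, y, G, F)
print(loss)  # 0.0
```

In training, this term is added to the adversarial losses so the blend stays faithful to both source images rather than drifting toward whatever fools the discriminator.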

A few notes on the process used to generate the machine-learned images: the model was built using the (currently archived) Swift for TensorFlow open source project as an alternative to other machine learning languages such as Python. The language is refreshingly expressive; let's hope the project is resurrected in the future. Development and training were performed in part on Google's Compute Engine and Colab frameworks.